
An overview of various image fusion techniques for remotely sensed images.


With the development of different types of remote sensors, biosensors and chemical sensors on board satellites, more and more data have become available for scientific research. As the volume of data grows, so does the need to combine data gathered from different sources to extract the most valuable information. Different terms such as data interpretation, combined analysis and data integration have been used. Since the early 1990s, the term "data fusion" has been adopted and extensively used.

Image fusion is the branch of data fusion in which the data type is restricted to images. It is an effective way to make optimum use of large volumes of imagery from multiple sources. Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual observation or computer processing [8]. In 1998, Pohl and Van Genderen [19] gave an in-depth review of multi-sensor data fusion techniques, explaining the concepts, methods and applications of image fusion as a contribution to multi-sensor-oriented data processing. Since then, image fusion has received increasing attention, and further scientific papers on image fusion have been published with an emphasis on improving fusion quality and finding new application areas. The majority of the earth observing satellites launched in the last twenty years, such as QuickBird, Formosat, SPOT-5, Landsat-7, Landsat-8, Ikonos, GeoEye, Kompsat and more recently WorldView-2, simultaneously collect a panchromatic image with higher spatial resolution and several multispectral bands with higher spectral but lower spatial resolution. High spatial resolution is essential in order to map and detect small features and structures accurately, while high spectral resolution is necessary in order to classify and discriminate different land-use and land-cover types.

Most remote sensing applications require images with both high spatial and high spectral resolution, which the satellites cannot provide due to technical constraints such as limited data transfer rate, limited energy autonomy, and limited storage capacity. Under these restrictions, the most effective solution for providing high spectral and high spatial resolution remote sensing images is considered to be the development of effective image fusion techniques. Fusing panchromatic and multispectral images with complementary characteristics can improve the visualization of the study area and produce better results [20].

In the literature, many image fusion methods have been developed to tackle the image classification problem. The rest of this paper is organized as follows. Section 2 reviews the various image fusion levels, Section 3 presents a detailed literature review of various image fusion techniques, Section 4 describes fusion quality metrics for evaluating the effectiveness of fused images, and Section 5 draws the conclusion.

Stages of Image Fusion:

In the past decades, fusion has been applied in fields such as pattern recognition, visual enhancement, object detection and area surveillance. In 1997, Hall and Llinas [28] gave a wide-ranging introduction to multi-sensor data fusion methods. Another in-depth review of multi-sensor data fusion techniques was published in 1998 [19]; it explained the concepts, methods and applications of image fusion as a contribution to multi-sensor-oriented data processing. Since then, image fusion has received growing attention. A very wide field of applications and approaches in image fusion is summarized by the synergy [7], or synergism, of remote sensing data, which requires input data that provide complementary rather than redundant information.

Figure 1 shows the flow of the different levels of image fusion in remote sensing image processing. Image fusion can be performed at four different stages: signal level, pixel level, feature level and decision level. In signal-based fusion, signals from different sensors are combined to create a new signal with a better signal-to-noise ratio than the original sensor signals.

Pixel-based fusion is performed on a pixel-by-pixel basis. It is carried out at the lowest processing level and refers to the merging of measured physical parameters. It generates a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, in order to improve the performance of image processing tasks such as segmentation. The synthesis of an MS image at higher spatial resolution by exploiting a PAN image is usually called pansharpening. The main drawback is that the decision on whether a source image contributes to the fused image is made pixel by pixel, which may cause spatial distortion in the fused image.
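
As a minimal illustration of the pixel level, the simplest rule merges co-registered sources sample by sample, e.g. by weighted averaging (a numpy sketch with synthetic images; practical pansharpening rules are far more elaborate):

```python
import numpy as np

def pixel_fuse(img_a, img_b, w=0.5):
    """Pixel-level fusion of two co-registered images by weighted averaging."""
    return w * img_a + (1.0 - w) * img_b

a = np.full((4, 4), 100.0)   # synthetic source image A
b = np.full((4, 4), 50.0)    # synthetic source image B
fused = pixel_fuse(a, b)     # every output pixel becomes 75.0
```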

Feature-based fusion requires the extraction of objects recognizable in the various data sources, i.e. of salient features such as pixel intensities, edges or textures, depending on their environment. Similar features from the input images are fused and an identity declaration is made on the joint feature vector. It deals with data at higher processing levels than pixel-level methods.

Decision-level (or interpretation-level) fusion merges information at a higher level of generalization, combining the results from multiple algorithms to yield a final fused decision. Information is extracted from the input images and combined by applying decision rules that reinforce a common interpretation; the results from different local classifiers are combined to determine the final decision. Techniques involved in feature-level and decision-level fusion are used in a wide range of areas, including statistical estimation, artificial intelligence, information theory and pattern recognition.

Image Fusion Methods:

Fusion method based on NSST and PCNN:

Jin et al. [25] proposed an image fusion method based on the nonsubsampled shearlet transform (NSST) and a pulse coupled neural network (PCNN) to improve the performance and effectiveness of the fused image. Figure 2 presents the framework of the CIELab colour fusion method in detail. In the first step, the panchromatic (PAN) and multispectral (MS) images are transformed into CIELab color space to obtain the colour components. MS images are three-channel colour images, so they can be directly transformed into CIELab colour space [9], but PAN images are one-channel grey images; the one-channel PAN image is therefore first translated into a three-channel RGB image, and the translated three-channel PAN-RGB is then transformed into Lab colour space. In the second step, the PAN image and the L component of the MS image are decomposed by the NSST [12] to obtain the corresponding low-frequency and high-frequency coefficients. In the third step, the low-frequency coefficients of L are fused by an intersecting cortical model (ICM); according to the OFG of the PCNN [18] and the ICM, the new low-frequency and high-frequency coefficients are obtained. In the last stage, the fused L component is obtained by the inverse NSST, and the fused RGB colour image by the inverse CIELab transform.

Fusion method based on SIFT-FANN:

Rai et al. [11] proposed a new method based on Fast Approximate Nearest Neighbour (FANN) [17] for automatic registration of a low spatial resolution multispectral QuickBird satellite image (the sensed image) with a high spatial resolution panchromatic QuickBird satellite image (HSR-PAN) prior to fusion. In the registration step, the Scale Invariant Feature Transform (SIFT) of Lowe [14] is used to extract key points from both the PAN and MS images. The key points are then matched using the automatic tuning algorithm of FANN, which automatically selects the most suitable indexing for the dataset, reducing the matching time for large numbers of high-dimensional key points; the indexed features are matched using approximate nearest neighbours. After the images are registered, the next stage is to fuse the registered image with the reference image: an IHS-based fusion method is applied to fuse the automatically registered image with the HSR-PAN. The method achieves both speed and accuracy across its different steps.
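
The IHS fusion step at the end of this pipeline is often implemented in its simplified additive form, in which the spatial detail missing from the MS intensity is injected from the PAN band; a numpy sketch under that assumption (the registration stage itself would need a feature library such as OpenCV, and is omitted):

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Simplified additive IHS pansharpening.
    ms: (H, W, 3) upsampled, registered MS image; pan: (H, W) PAN band."""
    intensity = ms.mean(axis=2)        # I = (R + G + B) / 3
    detail = pan - intensity           # high-resolution detail absent from MS
    return ms + detail[:, :, None]     # add the same detail to every band

rng = np.random.default_rng(1)
ms = rng.uniform(0.0, 1.0, size=(8, 8, 3))
pan = rng.uniform(0.0, 1.0, size=(8, 8))
fused = ihs_fuse(ms, pan)
```

After fusion, the intensity of the fused image matches the PAN band exactly, which is the defining property of this substitution.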

Spatiotemporal Data Fusion:

Rao et al. [21] proposed a novel method to create fine spatial and high temporal resolution images in a ground-based data processing system. Images acquired at low spatial resolution with high temporal resolution (AWiFS) are called low spatial and high temporal (LSHT) images, while images acquired at high spatial resolution with low temporal resolution (LISS III) are called fine spatial and low temporal (FSLT) images. In temporal data composition, spurious spatial discontinuities are inevitable where land-cover types change; these discontinuities are identified with temporal edge primitives and smoothed with a spatial-profile-averaging method. In the temporal high-pass modulation stage, high-frequency details are injected into the LSHT image by deriving high-resolution spatial weights from a sub-pixel relationship between a single LSHT-FSLT image pair. In the temporal edge primitives stage, spurious spatial discontinuities are detected by extracting high-frequency details from the LSHT images using a 3x3 Laplacian high-pass filter. In the third stage, these spurious discontinuities are smoothed with the spatial-profile-averaging method.
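
The 3x3 Laplacian high-pass step can be sketched in pure numpy; the kernel below is one common sign convention, not necessarily the authors' exact filter:

```python
import numpy as np

# a common 3x3 Laplacian kernel used to extract high-frequency detail
LAPLACIAN = np.array([[0.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 0.0]])

def highpass3x3(img, kernel=LAPLACIAN):
    """Valid-mode 2-D convolution with a 3x3 kernel (pure-numpy sketch)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

flat = np.full((5, 5), 7.0)   # a constant image has no high-frequency content
```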

Fusion method based on Compressive Sensing:

Zhu et al. [6] proposed a new compressive sensing (CS) algorithm for multispectral image fusion, to further improve the fusion effect and shorten the fusion time. Compressed sensing consists of three key elements: sparse signal representation, design of the observation matrix, and the reconstruction algorithm. First, the high- and low-frequency components of the multispectral and panchromatic images are obtained by an orthogonal wavelet transform (WT). Then the high-frequency WT coefficients are observed using the CS algorithm [6], with a Gaussian random matrix as the measurement matrix, and the Orthogonal Matching Pursuit (OMP) algorithm of Tropp and Gilbert [23], a greedy iterative algorithm that is fast and easy to implement, is used to restore the high-frequency coefficients. Finally, the fused multispectral image is obtained by the inverse wavelet transform and inverse IHS transform. Simulation results show that the proposed CS fusion algorithm is superior to related methods in both subjective visual effect and objective evaluation parameters, and reduces the fusion time by about 60 seconds.
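
OMP itself is compact enough to sketch in numpy; this is a generic textbook version, not the authors' implementation, and the matrix sizes below are illustrative:

```python
import numpy as np

def omp(Phi, y, n_iter):
    """Orthogonal Matching Pursuit: greedily recover a sparse x with y ~ Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        # select the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - Phi @ x
    return x

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 100))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns (measurement matrix)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]    # 3-sparse signal
y = Phi @ x_true                          # compressed measurements
x_hat = omp(Phi, y, 3)
```

With enough Gaussian measurements relative to the sparsity, OMP typically recovers the support exactly, after which the least-squares step makes the residual vanish.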

Fusion method based on WPA:

Bao and Zhu [24] proposed an improved image fusion method based on the bi-orthogonal wavelet packet transform (WPA) to obtain good spectral quality and spatial information. The wavelet packet transform decomposes the low-frequency and high-frequency information into all wavelet packet sub-bands at the same time, whereas the ordinary wavelet transform [4] decomposes only the low-frequency band and thus loses some useful detail information from the signal. First, the original MS image is converted into HSV space; the Value component is extracted and decomposed into various frequency domains by a bi-orthogonal wavelet packet transform at the third scale, and the PAN image is decomposed in the same way. Second, according to the features of the different frequency bands, a region classification rule is applied to the low-frequency band, and the high-frequency coefficients are calculated by a modulus lifting rule. A fused Value component is then produced by the inverse wavelet packet transform. Finally, after transforming from HSV back to RGB, the merged image is produced. Comparison with other fusion methods shows that the proposed algorithm achieves the most acceptable spectral quality and spatial detail.
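
For intuition, one level of the plain (Haar) wavelet decomposition splits an image into an approximation band and three detail bands; a wavelet packet transform applies the same split recursively to every sub-band, not just the approximation. A numpy sketch of the single split, using an averaging-normalised Haar variant rather than the bi-orthogonal filters of [24]:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar split into approximation (LL) and detail (LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    img = np.zeros((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll - lh + hl - hh
    img[1::2, 0::2] = ll + lh - hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

img = np.arange(64.0).reshape(8, 8)
ll, lh, hl, hh = haar2d(img)
```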

Fusion method based on WT and Sparse Representation:

Cheng et al. [10] proposed a remote sensing image fusion method which combines the wavelet transform [13, 15] and sparse representation [1, 5] to obtain fused images with high spectral and high spatial resolution. In the first step, the intensity-hue-saturation (IHS) transform is applied to the multispectral (MS) image, and the wavelet transform (WT) is applied to the intensity component of the MS image and to the panchromatic (PAN) image to construct a multi-scale representation. Different fusion strategies are then applied to the low-frequency and high-frequency sub-images. For the low-frequency sub-images, sparse representation with a trained dictionary is introduced, and the spatial frequency maximum method defines the fusion rule for the sparse representation coefficients. For the high-frequency sub-images, which carry rich detail information, the fusion rule is established by an image information fusion measurement. In the last stage, the fused image is obtained through the inverse IHS and inverse wavelet transforms. The wavelet transform extracts the spectral information and global spatial details from the original image pair, while sparse representation effectively extracts local image structures; the proposed method therefore preserves the spectral and spatial information of the original images well. Experimental results on remote sensing images demonstrate that the method preserves the spectral characteristics of the fused images at high spatial resolution.
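
The spatial frequency maximum rule ranks image blocks by their activity level. A common definition of spatial frequency, sketched in numpy (assumed here for illustration; the paper may use a windowed variant):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of an image block: RMS of horizontal and vertical
    first differences, combined in quadrature."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

flat = np.full((8, 8), 3.0)                    # no detail at all
checker = np.indices((8, 8)).sum(axis=0) % 2   # maximal alternation
```

Under the maximum rule, the block with the larger spatial frequency is the one whose coefficients are kept.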

LSAT data fusion method:

Rao et al. [21] also proposed a computationally efficient technique to create an FSHT (fine spatial and high temporal) resolution image. Spatiotemporal data fusion methods had previously been defined and implemented only for MODIS and Landsat images, between which there is a large spatial resolution difference, so the pixel-to-pixel correspondence suffers from geometric and interpolation errors. The spatial resolution ratio of AWiFS to LISS III images is smaller than that of MODIS to Landsat images: one AWiFS pixel is equivalent to approximately 2.38 x 2.38 LISS III pixels. The proposed method creates a synthetic FSHT image with 5-day temporal and 23.5 m spatial resolution, and is referred to as 'LISS III spatial and AWiFS temporal' (LSAT) data fusion. The LSAT method is based mainly on a sub-pixel relationship between a single AWiFS-LISS III image pair acquired before or after the prediction time. A synthetic LISS III image at time t_k is synthesized using an AWiFS image at time t_k and a single AWiFS-LISS III image pair at time t_0. Temporal changes in the data are computed from the AWiFS data sets at times t_0 and t_k, and the sub-pixel relationship between the AWiFS image from time t_k and the AWiFS-LISS III pair from time t_0 is used to derive spatial weights for time t_k. The LSAT model was tested on real data sets acquired by the LISS III and AWiFS sensors and compared with recently developed spatiotemporal data fusion methods; the experimental results show consistent prediction accuracy and computational efficiency.

IVIFS Image Fusion method:

Ananthi and Balasubramaniam [2] proposed a novel way to fuse several images using interval-valued intuitionistic fuzzy sets (IVIFSs). Figure 3 presents the framework of the IVIFS fusion method. Several images obtained from various modalities are blended into a single composite, improved image. Most digital images carry uncertainties acquired during acquisition or transmission, and IVIFSs are well suited to fusing such uncertain images; they have been applied in various domains such as medicine, data processing and data mining. Bustince and Burillo [3] first constructed IVIFSs from IFSs from a theoretical point of view, but did not explain what type of uncertainty is involved or how it can be modelled for applications. In 2010, Xu and Wu [26] extended the c-means clustering algorithm to cluster IVIFSs. In the fusion method, interval-valued intuitionistic fuzzy images (IVIFIs) are first generated for the two source images; the spatial and textural information of the images is then extracted and blended using maximum and minimum operations on IVIFSs.

First, the source images are fuzzified, and membership and non-membership degrees are calculated at the peak entropy. Intuitionistic fuzzy sets (IFSs) are generated from these values, and IVIFIs are then constructed from the IFSs. The IVIFIs are decomposed into small blocks of equal size, and the texture and spatial information of all blocks of the two IVIFIs are extracted separately. The spatial information of corresponding blocks of the two IVIFIs is fused using a spatial fusion rule, and the textural information using texture fusion rules. The fused spatial and textural information of the two images is then amalgamated using maximum and minimum operations on IVIFSs. An intuitionistic fuzzy image is obtained by feeding the fused image, with its high contrast and luminance, into a type-reduction process, and finally a crisp fused image is obtained by defuzzification. Compared with various existing techniques, the proposed method renders a fused image with better luminance and contrast.
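
As a toy illustration of the fuzzification step, membership can be assigned by min-max normalisation with a fixed hesitation degree; the entropy-optimised membership functions of [2] are more elaborate, so the scheme below is only a stand-in:

```python
import numpy as np

def fuzzify(img, hesitation=0.1):
    """Illustrative intuitionistic fuzzification of a grey-level image:
    membership (mu) by min-max normalisation, non-membership (nu) as its
    complement reduced by a fixed hesitation degree (pi). Not the
    entropy-based scheme of [2]."""
    g_min, g_max = img.min(), img.max()
    mu = (img - g_min) / (g_max - g_min)   # membership degree in [0, 1]
    nu = (1.0 - mu) * (1.0 - hesitation)   # non-membership degree
    pi = 1.0 - mu - nu                     # hesitation degree
    return mu, nu, pi

mu, nu, pi = fuzzify(np.arange(16.0).reshape(4, 4))
# mu + nu + pi equals 1 at every pixel by construction
```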

Fusion Quality Metrics:

Many researchers have proposed different quality metrics for both qualitative and quantitative analysis of fused remote sensing images; evaluating the performance of fusion methods is itself a challenging topic. In this paper, these fusion quality metrics are divided into two major classes. In qualitative analysis, the performance of the fused image is determined by visual comparison. In quantitative analysis, performance is measured using two main approaches: with a reference image and without a reference image.

Measures which require the reference image:

Spectral Angle Mapper (SAM):

SAM is the angle between corresponding pixel vectors of the fused and reference images in the space defined by taking each spectral band as a coordinate axis. Let $I_i = [I_{i,1}, \ldots, I_{i,N}]$ be the pixel vector at position $i$ of the reference MS image $I$ with $N$ bands, and let $J_i$ be the corresponding pixel vector of the fused image $J$. SAM is defined as

$$\mathrm{SAM}(I_i, J_i) = \arccos\left(\frac{\langle I_i, J_i \rangle}{\|I_i\|\,\|J_i\|}\right)$$

The optimum value of SAM is 0, meaning absence of spectral distortion; radiometric distortion is still possible (the two pixel vectors may be parallel but have different lengths).
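
A minimal numpy sketch of SAM for a single pixel pair (the spectra here are hypothetical):

```python
import numpy as np

def sam(v1, v2):
    """Spectral angle (radians) between two pixel spectra."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against rounding slightly outside [-1, 1]
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.array([0.2, 0.5, 0.3])
scaled = 2.0 * ref   # parallel spectrum: zero angle, purely radiometric change
```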

Root Mean Square Error (RMSE):

RMSE is the root mean square difference between the reference and the fused image. A smaller RMSE indicates a better fusion result. It is the simplest and most widely used measure of image quality.

$$\mathrm{RMSE}(I_R, I_F) = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_R(i, j) - I_F(i, j)\right)^2}$$
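
In numpy the metric is a one-liner (illustrative arrays):

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between a reference and a fused image."""
    return np.sqrt(np.mean((ref - fused) ** 2))

ref = np.zeros((2, 2))
fused = np.full((2, 2), 3.0)   # constant error of 3
```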

Erreur Relative Globale Adimensionnelle de Synthese (ERGAS):

ERGAS, also called the relative dimensionless global error in synthesis, is composed of a sum of RMSE values. It is a widely adopted global index proposed for pansharpening:

$$\mathrm{ERGAS}(I, J) = \frac{100}{R}\sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\frac{\mathrm{RMSE}(I_k, J_k)}{\mu(I_k)}\right)^2}$$

where $R$ is the ratio of the MS to PAN pixel sizes, $N$ is the number of bands, RMSE is the root mean square error of band $k$, and $\mu(I_k)$ denotes the mean of band $k$ of the reference image. A smaller ERGAS indicates a better fusion result; its optimal value is 0.
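
A direct numpy transcription (band-stacked arrays assumed; conventions for $R$ vary in the literature):

```python
import numpy as np

def ergas(ref, fused, R):
    """ERGAS for band-stacked images of shape (N_bands, H, W);
    R is the resolution ratio as used in the formula above."""
    rmse_b = np.sqrt(np.mean((ref - fused) ** 2, axis=(1, 2)))  # per-band RMSE
    mu_b = np.mean(ref, axis=(1, 2))                            # per-band mean
    return (100.0 / R) * np.sqrt(np.mean((rmse_b / mu_b) ** 2))

ref = np.random.default_rng(2).uniform(1.0, 2.0, size=(3, 4, 4))
```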

Reconstruction Signal to Noise Ratio (RSNR):

RSNR is defined as

$$\mathrm{RSNR}(X, \hat{X}) = 10 \log_{10}\left(\frac{\|X\|_F^2}{\|X - \hat{X}\|_F^2}\right)$$

where $X$ and $\hat{X}$ denote the actual and fused images and $\|\cdot\|_F$ is the Frobenius norm. A larger RSNR indicates better fusion quality.
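
A numpy sketch (the images here are synthetic):

```python
import numpy as np

def rsnr(x, x_hat):
    """Reconstruction signal-to-noise ratio in dB (Frobenius norms)."""
    return 10.0 * np.log10(np.linalg.norm(x, 'fro') ** 2
                           / np.linalg.norm(x - x_hat, 'fro') ** 2)

x = np.ones((4, 4))
noisy = x + 0.01   # uniform error of 1e-2 gives roughly 40 dB
```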

Peak Signal to Noise Ratio (PSNR):

PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. The PSNR measure is given by

$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathrm{peak}^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(B'(i, j) - B(i, j)\right)^2}\right)$$

where $B$ is the reference image, $B'$ is the fused image being assessed, $i$ and $j$ are the pixel row and column indices, $M$ and $N$ are the numbers of rows and columns, and peak is the maximum possible pixel value of the images. A larger PSNR indicates less image distortion and reflects the quality of the reconstruction.
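
A numpy sketch with 8-bit peak value (illustrative images):

```python
import numpy as np

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((ref - fused) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
fused = np.full((4, 4), 25.5)   # error is peak/10 everywhere, i.e. 20 dB
```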

Measures which do not require the reference image:

Quality with no reference (QNR) index:

The QNR index quantifies the spectral and spatial distortions; it is composed of the product of two separate terms, weighted by the exponents $\alpha$ and $\beta$. It is defined as

$$\mathrm{QNR} = (1 - D_\lambda)^{\alpha}\,(1 - D_s)^{\beta}$$

The higher the QNR index, the better the quality of the fused product. The maximum theoretical value of this index is 1, attained when both $D_\lambda$ and $D_s$ are equal to 0.

The spectral distortion is estimated by

$$D_\lambda = \sqrt[p]{\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{\substack{j=1 \\ j \neq i}}^{N}\left|d_{i,j}(\mathbf{MS}, \widehat{\mathbf{MS}})\right|^{p}}$$

where $d_{i,j}(\mathbf{MS}, \widehat{\mathbf{MS}}) = Q(MS_i, MS_j) - Q(\widehat{MS}_i, \widehat{MS}_j)$ measures the change in the inter-band $Q$-index between the original and fused MS bands.

The spatial distortion is calculated by

$$D_s = \sqrt[q]{\frac{1}{N}\sum_{i=1}^{N}\left|Q(\widehat{MS}_i, P) - Q(MS_i, P_{LP})\right|^{q}}$$

where $P$ is the PAN image and $P_{LP}$ its low-pass degraded version at the resolution of the MS image.
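
The $Q$ appearing in both distortion terms is the universal image quality index of Wang and Bovik; a global (single-window) numpy sketch, whereas practical QNR implementations compute it over sliding windows:

```python
import numpy as np

def q_index(x, y):
    """Universal image quality index Q (Wang-Bovik), computed globally.
    Q = 1 exactly when the two images are identical."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

a = np.arange(16.0).reshape(4, 4) + 1.0   # synthetic test image
```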

Standard Deviation (STD):

It measures the contrast of the fused image and is defined as

$$\mathrm{STD} = \sqrt{\frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N}\left(F(u, v) - \bar{F}\right)^2}$$

where $F(u, v)$ is the pixel value of the fused image $F$ at $(u, v)$ and $\bar{F}$ is its mean value. An image with high contrast has a high standard deviation; STD reflects the spread of the image grey levels around the average brightness.


Entropy:

Entropy is an important indicator for measuring image information content. A higher entropy value means more information is contained in the image, indicating a better fusion result. Based on Shannon information theory, image entropy is defined as follows:

$$E = -\sum_{i=0}^{L-1} p_i \log_2 p_i$$

where $E$ is the entropy of the fused image, $L$ is the total number of grey levels, and $p_i$ is the probability of grey level $i$.
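
A numpy sketch via the grey-level histogram (8-bit range assumed):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                     # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

uniform = np.arange(256).reshape(16, 16)   # every grey level exactly once: 8 bits
```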

Fusion mutual information (MI):

Mutual information measures the amount of information transferred from the source images to the fused result: the larger the indicator, the richer the information in the fused image. In general, the MI between two random variables is given by

$$MI_{XY} = \sum_{x, y} P_{XY}(x, y)\,\log\frac{P_{XY}(x, y)}{P_X(x)\,P_Y(y)}$$

where $X$ and $Y$ are any two random variables with joint distribution $P_{XY}(x, y)$ and marginal distributions $P_X(x)$ and $P_Y(y)$.
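
For images, the distributions are usually estimated from a joint grey-level histogram; a numpy sketch (bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information (bits) between two images, estimated from a
    joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
img = rng.uniform(size=(16, 16))
noise = rng.uniform(size=(16, 16))
# an image shares far more information with itself than with unrelated noise
```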


Conclusion:

In this paper, the different image fusion levels, the current state of the art of image fusion methods in remote sensing, and image fusion evaluation metrics have been reviewed. Image fusion methods obtain more reliable and accurate image information by eliminating redundancy; this also reduces data storage and transmission and increases conditional and situational awareness. Analysis of the research literature shows that different image fusion methods suit different applications. Pixel-level fusion has been researched extensively across different approaches, since it gives comparatively better-quality fused results, but at the expense of more computation time. For the evaluation of image fusion algorithms there is no standardized reference for testing algorithms on large numbers of datasets, or for choosing the best fusion strategy for a given application; alongside objective evaluation, it is often reported that the fused result should also be evaluated subjectively based on visual characteristics. Together with this overview, the present study can be seen as a preparation for a comparative study of different image fusion methods in remote sensing.


References:

[1.] Aharon, M., M. Elad and A. Bruckstein, 2006. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process., 54: 4311-4322.

[2.] Ananthi, V.P. and P. Balasubramaniam, 2015. Image fusion using interval-valued intuitionistic fuzzy sets. International Journal of Image and Data Fusion.

[3.] Bustince, H. and P. Burillo, 1995. A theorem for constructing interval valued intuitionistic fuzzy sets from intuitionistic fuzzy sets. Notes on Intuitionistic Fuzzy Sets, 1: 5-16.

[4.] Chen, H., Y.Y. Liu and Y.J. Wang, 2008. A novel image fusion method based on wavelet packet transform. IEEE International Symposium on Knowledge Acquisition and Modeling Workshop, pp: 462-465.

[5.] Elad, M. and M. Aharon, 2006. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 15(12): 3736-3745.

[6.] Zhu, F., X. Wang, H. He and Q. Ding, 2015. A new multi-spectral image fusion algorithm based on compressive sensing. Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control.

[7.] Genderen, J.L. and C. Pohl, 1994. Image fusion: Issues, techniques and applications. Strasbourg, France, pp: 18-26.

[8.] Guest editorial, 2007. Image fusion: Advances in the state of the art. Information Fusion, 8: 114-118.

[9.] Hammond, D.L., 2007. Validation of LAB color mode as a nondestructive method to differentiate black ballpoint pen inks. Journal of Forensic Science, 52: 967-973.

[10.] Cheng, J., H. Liu, T. Liu, F. Wang and H. Li, 2015. Remote sensing image fusion via wavelet transform and sparse representation. ISPRS Journal of Photogrammetry and Remote Sensing, 104: 158-173.

[11.] Rai, K.K., A. Rai, K. Dhar, J. Senthilnath, S.N. Omkar and K.N. Ramesh, 2016. SIFT-FANN: An efficient framework for spatio-spectral fusion of satellite images. Journal of the Indian Society of Remote Sensing.

[12.] Labate, D., G. Easley and W.Q. Lim, 2008. Sparse directional image representations using the discrete shearlet transform. Applied Comput. Harmon. Anal., 25: 25-46.

[13.] Li, H., B.S. Manjunath and S.K. Mitra, 1995. Multisensor image fusion using the wavelet transform. Graphical Models Image Process., 57: 235-245.

[14.] Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60: 91-110.

[15.] Mallat, S.G., 1989. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Machine Intell., 11: 674-693.

[16.] Maria, G., L.S. Jose, G.C. Raquel and G. Rafael, 2004. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sensing, 42: 1291-1299.

[17.] Muja, M. and D.G. Lowe, 2009. Fast approximate nearest neighbors with automatic algorithm configuration. Proceedings of the International Conference on Computer Vision Theory and Applications.

[18.] Nie, R., 2016. Facial feature extraction using frequency map series in PCNN. Journal of Sensor.

[19.] Pohl, C. and J.L. van Genderen, 1998. Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing, 19: 823-854.

[20.] Ranchin, T., and L. Wald, 2000. Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation. Photogrammetric Engineering and Remote Sensing, 66: 49-61.

[21.] Rao, C.V., J. Malleswara Rao, A. Senthil Kumar, D.S. Jain and V.K. Dadhwal, 2015. High Spatial and Spectral Details Retention Fusion and Evaluation. Journal Indian Soc Remote Sensing.

[22.] Ranchin, T., B. Aiazzi, L. Alparone, S. Baronti and L. Wald, 2003. Image fusion: the ARSIS concept and some successful implementation schemes. ISPRS Journal of Photogrammetry and Remote Sensing, 58: 4-18.

[23.] Tropp, J.A and A.C. Gilbert, 2007. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. On Information Theory, 53: 4655-4666.

[24.] Bao, W. and X. Zhu, 2015. A novel remote sensing image fusion approach research based on HSV space and bi-orthogonal wavelet packet transform. Journal of the Indian Society of Remote Sensing.

[25.] Jin, X., D. Zhou, S. Yao, R. Nie, C. Yu and T. Ding, 2016. Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks. Journal of Applied Remote Sensing, 10.

[26.] Xu, Z and J. Wu, 2010. Intuitionistic fuzzy c-means clustering algorithms. Journal of Systems Engineering and Electronics, 21: 580-590.

[27.] Zhou, X., W. Wang and R.A. Liu, 2014. Image Fusion Based on Compressed Sensing. The Proceedings of the Second International Conference on Communications Signal Processing, and Systems, pp: 137-143.

[28.] Hall, D.L. and J. Llinas, 1997. An introduction to multisensor data fusion. Proceedings of the IEEE, 85(1): 6-23.

(1) K. Uma Maheswari and (2) S. Rajesh

(1) Assistant Professor, Department of Computer Science and Engineering, University College of Engineering Ramanathapuram, Ramanathapuram 623513, India.

(2) Professor, Department of Information Technology, Mepco Schlenk Engineering College, Sivakasi, India.

Received 28 January 2017; Accepted 22 March 2017; Available online 4 April 2017

Address For Correspondence:

K. Uma Maheswari, Assistant Professor, Department of Computer Science and Engineering, University College of Engineering Ramanathapuram, Ramanathapuram 623513, India.


Caption: Fig. 1: Flow of Image Fusion in Remote Sensing Image Processing

Caption: Fig. 2: Framework of the CIELab colour fusion method proposed by Jin et al. (2016)

Caption: Fig. 3: Framework of the IVIFS fusion method proposed by Ananthi and Balasubramaniam (2015)
Publication: Advances in Natural and Applied Sciences, April 2017.