
A Novel Image Dehazing Algorithm Based on Dual-tree Complex Wavelet Transform.

1. Introduction

The quality of outdoor images usually deteriorates in bad weather conditions such as haze, fog, and smoke, which are caused by atmospheric absorption and scattering. The degraded images lose contrast, visual vividness, and color fidelity, resulting in poor visibility. Image dehazing is therefore of great importance and is an essential task in vision-based surveillance. Dehazing is a challenging task. General contrast enhancement methods [1,2] can effectively improve the visual effect and the color saturation. However, these methods cannot effectively handle images with heavy haze and tend to distort the haze-free regions.

Image dehazing methods based on the atmospheric scattering model [3,4,5] rely heavily on prior knowledge or strong assumptions. These methods are able to handle far-field objects even in heavily hazed images. Fattal estimated the albedo of the scene and then inferred the medium transmission by assuming that the transmission and the surface shading are locally uncorrelated [3]. However, this assumption does not always hold in practice. He et al. proposed the dark channel prior (DCP), based on statistics of haze-free outdoor images, to estimate the medium transmission [4], and then refined the transmission with the soft matting interpolation method [6]. However, the DCP becomes ineffective when scene objects are as bright as the atmospheric light and no shadow is cast on them. In addition, the DCP transmission estimate introduces halo effects around occlusion boundaries, and the soft matting interpolation is time consuming. To avoid the halo effect of the DCP and produce a smoother transmission estimate, Gibson et al. proposed the median DCP (MDCP) method using a median operator [5]. However, the MDCP is also invalid in bright areas. To address this weakness of the DCP, Liu et al. estimated the bright areas (EBA) by computing the absolute difference between the atmospheric light and the dark channel, balancing the size of the structuring element in the gray-scale opening operation against the threshold [7]. However, the method requires manual adjustment of two parameters. Gao et al. found that the contrast of an image can be enlarged and its saturation increased by revising its negative image (RNI) [8]. Thus, instead of estimating the transmission map, Gao et al. estimated the correction factor of the negative image and used it to rectify the corresponding hazy image. Their method performs well on hazy images, but fails to achieve satisfactory results for fog and non-uniformly illuminated images, and it may produce halo artifacts. Berman et al. proposed a dehazing algorithm based on the assumption that a haze-free image can be faithfully represented with just a few hundred distinct colors [9]. In RGB space, this corresponds to a few hundred tight color clusters that become haze-lines (HAL) in the presence of haze. Berman et al. used these haze-lines to recover both the distance map and the haze-free image. However, the algorithm likewise fails under thick haze and causes color distortion.

To overcome the above-mentioned disadvantages, Schaul et al. proposed a dehazing method using near-infrared (NIR) images [10]. NIR images are inherently less affected by haze because of their longer wavelengths, and they are easy to acquire with ordinary digital cameras. Schaul et al. fused the NIR information with the color image under the Weighted Least Squares (WLS) framework [11], without requiring atmospheric light detection or depth information. This approach is simple and effective, because no prior knowledge or strong assumptions are required. However, it suffers from color-shifting artifacts in haze-free regions.

The majority of previous works suffer from either detail loss or color shifting. To address this problem, this paper recovers the color and transfers the details simultaneously. Instead of WLS filtering, the proposed method employs the dual-tree complex wavelet transform (DT-CWT) [12,13] to perform haze removal. The color of the fusion image is regulated according to the haze density in order to keep the haze-free regions free from color distortion. Finally, guided image filtering (GIF) [14] is used to eliminate the small halo artifacts caused by the DT-CWT while preserving the edge information of the image. In the proposed method, regions or images without haze remain unaltered. Therefore, the proposed method can be applied whether haze is actually present or not. The schematic diagram of the proposed algorithm framework is shown in Fig. 1.

The rest of the paper is organized as follows: Section 2 briefly introduces the background of the proposed method; Section 3 describes the proposed method; experimental results and comparisons are presented in Section 4, followed by the conclusion in Section 5.

2. Background

2.1 Near Infrared Image Acquisition

The observed intensity of the scattered light is determined by two variables: the wavelength $\lambda$ of the incident light and the size of the scattering particles. When the aerosol particles are smaller than $\lambda/10$, the scattering in fog follows [15]:

$E_s \propto \frac{E_0}{\lambda^{4}}$ (1)

Eq. (1) shows that the scattered intensity $E_s$ is proportional to the incident intensity $E_0$ and inversely proportional to the fourth power of the wavelength $\lambda$: the longer the wavelength, the weaker the scattering. Of the radiation captured by a camera, the 400-700 nm and 700-1100 nm wavelength ranges belong to the visible band and the near-infrared band, respectively. Near-infrared light therefore has stronger penetrating power, and a near-infrared image retains more scene detail of the objects of interest in hazy weather. Using a CCD camera with good spectral response from 400 to 1100 nm, a near-infrared image and a visible image can be obtained simply by adding or removing the filter [16].
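As a rough illustration of Eq. (1), the relative Rayleigh-scattered intensities at a visible wavelength and a typical NIR wavelength can be compared; the wavelengths 450 nm and 850 nm below are chosen purely for illustration and are not taken from the paper:

```latex
% Relative scattering at 450 nm (blue) vs. 850 nm (NIR), using E_s \propto E_0/\lambda^4 (Eq. (1)):
\frac{E_s(450\,\mathrm{nm})}{E_s(850\,\mathrm{nm})}
  = \left(\frac{850\,\mathrm{nm}}{450\,\mathrm{nm}}\right)^{4} \approx 12.7
```

That is, haze scatters roughly an order of magnitude more strongly in the blue part of the visible band than in the NIR band, which is why the NIR image preserves distant detail.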

2.2 Multi-resolution Image Fusion

In this paper, the visible and the near-infrared images of the same scene are fused to achieve haze removal. Image fusion is the process of merging two or more images by combining the complementary information of each image to obtain a more informative image. Since the NIR image has only intensity channel information, the visible image is converted into HSI space, and its intensity channel is fused with the NIR image.

Traditional image fusion algorithms, such as the Laplacian pyramid (LAP) [17], the ratio pyramid (RP) [18] and the curvelet transform (CVT) [19], suffer from correlation between decomposition layers, which introduces halo artifacts by blurring sharp edges. The reason is that the filter is applied iteratively, which at higher levels is equivalent to a large filter kernel.

Schaul et al. applied an edge-preserving multiresolution decomposition based on the Weighted Least Squares optimization framework [11] to both the visible and the NIR images, and used a pixel-level fusion criterion that maximizes local contrast [10]. Although the fused image contains no halo artifacts, the processing time increases sharply with the number of iterations. Shibata et al. proposed a local-saliency-based fusion algorithm for visible and NIR images, in which gradient information is fused and the output image is reconstructed by Poisson image editing [20]. The method preserves the gradient information of both images, but the fusion result may show serious color distortion. In this paper, an image fusion method based on the DT-CWT is adopted. The DT-CWT offers approximate shift invariance, good directional selectivity, limited data redundancy, and high computational efficiency. It can also capture image variations along multiple directions at different resolutions and effectively describe the texture features of the image.

In the proposed algorithm, two independent wavelet transforms are performed on the same data. At each scale, the DT-CWT decomposes the image into 2 low-frequency parts and 6 high-frequency detail parts corresponding to the ±15°, ±45° and ±75° directions, respectively, so that the six high-frequency sub-bands of each level capture information along six different directions in the image. Because the DT-CWT provides information from more than three directions, it effectively retains image detail and improves decomposition and synthesis accuracy. However, owing to the difference between the near-infrared and the visible-light intensities, color distortion appears in the haze-free regions of the DT-CWT fusion result. In this paper, the dark channel prior is used to ensure that the haze-free regions remain unaltered. The state-of-the-art dark channel prior effectively prevents color distortion in haze-free regions, but it also produces blocking artifacts. To eliminate the blocking artifacts, the guided image filter is employed.
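To make the decomposition-and-fusion step concrete, the sketch below uses the open-source Python `dtcwt` package as a stand-in (the package, its `Transform2d`/`Pyramid` API, the four-level decomposition, and the magnitude-based high-pass rule are all assumptions made here for illustration, not the authors' implementation; the actual fusion rules are given in Section 3):

```python
import numpy as np
import dtcwt  # open-source DT-CWT implementation, assumed as a stand-in


def dtcwt_fuse(vis_intensity, nir, nlevels=4):
    """Fuse the visible intensity channel with the NIR image via DT-CWT.

    Low-frequency sub-bands are averaged (cf. Eq. (5)); for each of the six
    directional high-frequency sub-bands per level, the coefficient with the
    larger magnitude is kept (a simple stand-in for the SF-based rule of
    Section 3).
    """
    t = dtcwt.Transform2d()
    pv = t.forward(vis_intensity.astype(float), nlevels=nlevels)
    pn = t.forward(nir.astype(float), nlevels=nlevels)

    low = 0.5 * pv.lowpass + 0.5 * pn.lowpass            # average the low-pass parts
    highs = []
    for hv, hn in zip(pv.highpasses, pn.highpasses):     # complex coeffs, 6 orientations
        take_vis = np.abs(hv) >= np.abs(hn)
        highs.append(np.where(take_vis, hv, hn))

    # Reconstruct the fused intensity image from the merged pyramid.
    return t.inverse(dtcwt.Pyramid(low, tuple(highs)))
```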

2.3 Guided Image Filtering

The guided filter computes its output by considering the contents of a guidance image, and it is used in a variety of computer vision and graphics applications. The guided filter is both effective and efficient for edge-aware smoothing and detail enhancement: the output image acquires the edge detail of the guidance image while preserving the features of the original input. The key assumption of the guided filter is a local linear model between the guidance image I and the filtering output q: q is a linear transform of I in a square window $\omega_k$ centered at pixel k:

$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$ (2)

where $(a_k, b_k)$ are linear coefficients assumed to be constant in $\omega_k$, and r is the radius of the square window. The local linear model ensures that q contains an edge wherever I has an edge, because $\nabla q = a \nabla I$. To determine the linear coefficients $(a_k, b_k)$, the difference between the input image p and the output image q is minimized. The cost function in window $\omega_k$ is:

$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]$ (3)

where $\epsilon$ is a regularization parameter that penalizes large $a_k$. The output image preserves the overall characteristics of the input image due to the constraint in Eq. (3), while at the same time retaining the fine detail of the guidance image through the linear model in Eq. (2). Fig. 2 shows the schematic diagram of guided image filtering.
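For reference, a gray-scale guided filter following Eqs. (2)-(3) can be sketched in plain NumPy as below; the box-filter formulation follows He et al. [14], while the function names and the edge-padding choice are assumptions made here for illustration:

```python
import numpy as np


def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, via an integral image with edge padding."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, r + 1, mode='edge')
    c = pad.cumsum(axis=0).cumsum(axis=1)
    win = 2 * r + 1
    s = (c[win:win + h, win:win + w] - c[:h, win:win + w]
         - c[win:win + h, :w] + c[:h, :w])
    return s / win ** 2


def guided_filter(I, p, r=60, eps=0.01):
    """Guided filter: q = a*I + b (Eq. (2)), with (a, b) minimizing Eq. (3)."""
    I, p = I.astype(float), p.astype(float)
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    corr_I, corr_Ip = box_filter(I * I, r), box_filter(I * p, r)
    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)           # per-window linear coefficient a_k
    b = mean_p - a * mean_I              # per-window offset b_k
    mean_a, mean_b = box_filter(a, r), box_filter(b, r)
    return mean_a * I + mean_b           # Eq. (2), averaged over overlapping windows
```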

3. Method

To fuse the visible and the NIR images, the visible image is first transformed into a luminance-chrominance color space. The intensity channel of the visible image and the NIR image are each decomposed by the DT-CWT into a low-frequency sub-band image and a series of high-frequency sub-band images:

$[I^L, I^H_{l,\theta}] = \mathrm{DTCWT}(I, l)$ (4)

where I is the visible or NIR image, $I^L$ is the low-frequency sub-band image, $I^H_{l,\theta}$ is the series of high-frequency sub-band images, l is the number of decomposition levels and $\theta$ is the decomposition direction. Fusion rules are designed according to the different characteristics of the low- and high-frequency components.

The low-frequency part of the image mainly contains the contour information of the original image, and the visible and NIR images have similar contours. To reduce the amount of computation, a simple weighted average is used to fuse the low-frequency parts of the visible and NIR images:

$F^L = 0.5 \times V^L + 0.5 \times N^L$ (5)

where $F^L$ is the low-frequency fusion image, V is the visible image and N is the NIR image. The high-frequency sub-bands reflect the features and edge information of the source images. The spatial frequency (SF) measures an image's capability of expressing subtle details and thus reflects the clarity of the fusion image:

$SF = \sqrt{RF^2 + CF^2}$ (6)

where RF and CF represent the row and column frequencies of the image, respectively. The local spatial frequency of the pixel (m, n) over the neighborhood [-d, d] is given by:

$RF(m,n) = \sqrt{\frac{1}{(2d+1)^2} \sum_{i=m-d}^{m+d} \sum_{j=n-d}^{n+d} \left[ F(i,j) - F(i,j-1) \right]^2}$ (7)

$CF(m,n) = \sqrt{\frac{1}{(2d+1)^2} \sum_{i=m-d}^{m+d} \sum_{j=n-d}^{n+d} \left[ F(i,j) - F(i-1,j) \right]^2}$ (8)

where F(i, j) is the value at pixel (i, j). The larger the local spatial frequency, the clearer the image region. Therefore, the fusion rule is established as:

$F^H_{l,\theta} = \beta \times V^H_{l,\theta} + (1-\beta) \times N^H_{l,\theta}$ (9)

where $F^H$ is the high-frequency fusion image, l is the decomposition level and $\theta$ is the decomposition direction. $\beta$ is the fusion weighting coefficient, and its values are computed as follows:

[mathematical expression not reproducible] (10)

where $SF_V$ and $SF_N$ are the local spatial frequencies of the visible and NIR images, respectively. The fusion image is then reconstructed to obtain the dehazed image. Unfortunately, texture details in the haze-free regions are also boosted in the intensity channel, resulting in color-shifting artifacts.
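Because Eqs. (7)-(10) are not reproduced above, the following sketch shows one common realization of an SF-guided high-frequency fusion rule (local spatial frequency over a (2d+1) x (2d+1) neighborhood and a winner-take-all weight with β in {0, 1}); it is an assumed stand-in for the authors' exact rule rather than a transcription of it:

```python
import numpy as np
from scipy.ndimage import uniform_filter  # local (2d+1)x(2d+1) averaging


def local_spatial_frequency(F, d=3):
    """Local SF per Eq. (6) on a 2-D array: sqrt(RF^2 + CF^2) of local differences."""
    F = F.astype(float)
    rf2 = np.zeros_like(F)
    cf2 = np.zeros_like(F)
    rf2[:, 1:] = (F[:, 1:] - F[:, :-1]) ** 2   # row-frequency term
    cf2[1:, :] = (F[1:, :] - F[:-1, :]) ** 2   # column-frequency term
    size = 2 * d + 1
    return np.sqrt(uniform_filter(rf2, size) + uniform_filter(cf2, size))


def fuse_highpass(vh, nh, d=3):
    """Fuse one 2-D high-frequency sub-band pair: keep the locally sharper source."""
    sf_v = local_spatial_frequency(np.abs(vh), d)
    sf_n = local_spatial_frequency(np.abs(nh), d)
    beta = (sf_v >= sf_n).astype(float)        # assumed winner-take-all weighting
    return beta * vh + (1.0 - beta) * nh       # Eq. (9) with this assumed beta
```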

The multiresolution image fusion method improves the image quality in the blurred regions at the expense of the clear regions in order to obtain a clear fusion result [21]. Although the visible image carries the same detail information as the near-infrared image in the haze-free regions, the large difference in intensity values causes color distortion in the fusion image, as shown in Fig. 3.

The ideal fusion rule is therefore as follows: the hazy regions of the visible image are replaced by the corresponding near-infrared regions, while the haze-free regions remain unchanged. To avoid color distortion, the color of the fusion image is regulated according to the haze concentration estimated by the dark channel prior. The DCP [4] is an effective approach to estimating the medium transmission and is defined as:

$J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^{c}(y) \right)$ (11)

where $J^c$ is a color channel of image J, $J^{dark}$ is its dark channel, and $\Omega(x)$ is a patch centered at x. The observation is that, except for the sky region, the intensity of $J^{dark}$ is low and tends to zero if J is a haze-free outdoor image. Owing to the additive atmospheric light, a hazy image is brighter than its haze-free counterpart where the transmission is low. Therefore, the dark channel of a hazy image has higher intensity in regions with denser haze [4]; that is, the brighter the dark channel, the denser the haze, as shown in Fig. 4.

Thus, the color can be adjusted according to the haze concentration:

$F_I = F_0 \times W + V_I \times (1 - W)$ (12)

where $F_0$ is the fusion image of the visible and NIR images, W is the normalized value of $J^{dark}$ ($0 \le W \le 1$), $V_I$ is the intensity channel of the visible image, and $F_I$ is the image after modification. The intensity values of the visible and NIR images are very close in large bright regions, so the value of W in Eq. (12) does not introduce differences in the fusion image there. In other words, although the DCP cannot correctly handle large white objects, it does not cause color distortion in the fusion image.
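A compact sketch of the haze-density estimation (Eq. (11)) and the color-regulation step (Eq. (12)) is given below; the use of OpenCV's gray-scale erosion for the patch minimum and the function names are illustrative assumptions, not the authors' code:

```python
import numpy as np
import cv2


def dark_channel(rgb, patch=15):
    """J^dark per Eq. (11): per-pixel minimum over channels, then a patch minimum."""
    min_channel = rgb.min(axis=2).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)


def regulate_color(F0, V_I, rgb_visible, patch=15):
    """Blend fused intensity F0 with visible intensity V_I by haze density (Eq. (12)).

    rgb_visible is assumed to be an 8-bit color image; its normalized dark channel
    serves as the haze-density weight W (0 <= W <= 1).
    """
    dark = dark_channel(rgb_visible.astype(np.float32) / 255.0, patch)
    W = np.clip(dark, 0.0, 1.0)
    return F0 * W + V_I * (1.0 - W)   # hazy regions take fused detail, clear regions keep V_I
```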

The blocking artifacts introduced by W and the small halos induced by the DT-CWT must be eliminated. Guided image filtering is applied to solve both problems, and Eqs. (2) and (3) become:

$F_I' = a_k V + b_k$ (13)

$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k V_i + b_k - F_{I,i})^2 + \epsilon a_k^2 \right]$ (14)

where $F_I'$ is the output of the guided image filtering and V, the guidance image, is the visible image.
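With the guided filter sketched in Section 2.3, this refinement step amounts to a single call; the variable names below are hypothetical, and r = 60 and ε = 0.01 follow the parameter choice reported in Section 4.1:

```python
# Usage example: V_I is the visible-image intensity channel, F_I the color-regulated
# fusion image from Eq. (12), and guided_filter the sketch from Section 2.3.
F_I_refined = guided_filter(I=V_I, p=F_I, r=60, eps=0.01)
```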

4. Results and Comparisons

4.1 Experimental results

This section presents experiments to evaluate the performance of the proposed dehazing method. The proposed algorithm is implemented in MATLAB and tested on a desktop computer with a 4.0 GHz processor and 8 GB of RAM.

Fig. 5 (a) and (b) show the original visible and NIR images, respectively. The LAP, RP and CVT methods are selected as references to evaluate the performance of the proposed image fusion algorithm. Fig. 5 (c)-(f) show the fusion results of the LAP, RP, CVT and DT-CWT methods, respectively. The red rectangles in the resultant images in Fig. 5 are zoomed in Fig. 6. There are serious halo artifacts at the edges of the trees in Fig. 6 (a), (b) and (c), because the LAP, RP and CVT fusion methods apply the filter iteratively, which is equivalent to a large filter kernel. The DT-CWT method has the most satisfactory edge-preserving performance with the fewest halo artifacts, as shown in Fig. 6 (d).

In this experiment, the extracted intensity of the visible image or the NIR image is decomposed into a low-frequency sub-band image and a series of high-frequency sub-band images using the DT-CWT. However, all of the above methods produce color distortion in the haze-free areas.

The color of the fusion image is regulated according to Eq. (12). To estimate the haze density, the dark channel is computed using a typical patch size of 15 x 15 [4]. Fig. 7 (a) and (b) show the dark channel of the visible image and the image after color regulation, respectively. The color-regulated image shows no distortion, but it has become lumpy.

Theoretically, the lumpy effect can be effectively reduced by shrinking the patch size. However, the dark channel no longer reflects the haze density when the patch is too small. Fig. 8 shows the dark channels with patch sizes ranging from 25 x 25 to 3 x 3. As seen in Fig. 8 (f), the lumpy effect disappears when the patch size is 3 x 3, but color distortion appears in the haze-free region, as highlighted by the red rectangle.

Guided image filtering is used to remove the lumpy effect and eliminate the small halo artifacts. Different window radii and regularization parameters are evaluated. As shown in Fig. 9, as r increases and $\epsilon$ decreases, the filtered output contains more detail from the guidance image. However, the detail of the input image is almost lost when r is too large and $\epsilon$ is too small. Thus, r = 60 and $\epsilon$ = 0.01 are selected to preserve the edge information and eliminate the blocking artifacts simultaneously. As shown in Fig. 10 (f), the distant mountains are clear while the haze-free regions are unchanged and retain the same color information as the original image.

4.2 Comparisons

In this section, comparisons on several groups of typical images are conducted to demonstrate the effectiveness of the proposed method. Fig. 10 (c)-(f) show the dehazing results of the DCP algorithm with GIF, the MDCP algorithm, the RNI algorithm and the proposed algorithm, respectively.

It is clear that the DCP and MDCP algorithms are unable to dehaze the sky region, as shown in Fig. 11 (a)-(b), because the dark channel does not work in bright areas. The RNI algorithm removes the haze in the bright area more effectively than the DCP and MDCP algorithms, but it introduces halos in the haze-free region; Fig. 11 (d) shows numerous halos at the edge of the water pipe.

The result of the proposed method contains more detail and texture for distant objects than those of the DCP and MDCP algorithms, as shown in Fig. 11 (c). The underlying reason is that NIR light penetrates haze further than the visible band. Compared with the RNI algorithm, the proposed method retains more texture information and does not produce halo artifacts in the haze-free region, as shown in Fig. 11 (d) and (e).

Fig. 12 (b) and (c) show the dehazing results of the HAL algorithm and the proposed algorithm, respectively. Compared with the HAL algorithm, the proposed algorithm performs satisfactorily on images with thick haze: the distant mountain is clear, while the result of the HAL algorithm is obscure, as shown in Fig. 13 (a) and (b), respectively. At the same time, the proposed algorithm avoids generating halos in the dehazed image, as shown in Fig. 13 (d), whereas the HAL algorithm produces halo artifacts: in Fig. 13 (c), the edge of the pine tree is blurred by halos.

Fig. 14 (b) and (c) show the dehazing results of the WLS algorithm and the proposed algorithm, respectively. Compared with the WLS method, the proposed algorithm not only enhances local details but also recovers the original color of the scene, without intensity oversaturation in the haze-free region. Fig. 15 (c) shows that the distant mountain is clear while the nearby tree retains its original color. In contrast, the WLS method introduces color distortion in the haze-free region, as shown in Fig. 15 (b).

Mean squared error (MSE) and structural similarity (SSIM) [22] are adopted as evaluation criteria for quantitative analysis of the experimental results. Using the ground-truth images of the 10 pairs of synthetic images, the methods are further compared in terms of MSE and SSIM. The MSE is defined by

$MSE = \frac{1}{3MN} \sum_{c \in \{r,g,b\}} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ I(i,j,c) - J(i,j,c) \right]^2$ (15)

where c ∈ {r, g, b}, I is the hazy image, J is the ground-truth image, and M × N is the image size. A lower MSE means better performance. One of the most commonly used measures of visual image quality is the SSIM, which is given by

$SSIM(i,j,c) = \frac{\left( 2\mu_{I(i,j,c)}\mu_{J(i,j,c)} + c_1 \right)\left( 2\sigma_{IJ(i,j,c)} + c_2 \right)}{\left( \mu_{I(i,j,c)}^2 + \mu_{J(i,j,c)}^2 + c_1 \right)\left( \sigma_{I(i,j,c)}^2 + \sigma_{J(i,j,c)}^2 + c_2 \right)}$ (16)

where $\mu_{I(i,j,c)}$ and $\sigma_{I(i,j,c)}^2$ are the local mean and variance of the haze-free image, computed on a block centered at the pixel in the i-th row and j-th column of channel c of I, and $\mu_{J(i,j,c)}$ and $\sigma_{J(i,j,c)}^2$ are their counterparts for the dehazed image J. $\sigma_{IJ(i,j,c)}$ is the covariance of the two images over the same window, given by:

$\sigma_{IJ(i,j,c)} = \frac{1}{(2R+1)^2} \sum_{m=i-R}^{i+R} \sum_{n=j-R}^{j+R} \left[ I(m,n,c) - \mu_{I(i,j,c)} \right] \left[ J(m,n,c) - \mu_{J(i,j,c)} \right]$ (17)

where R is the radius of the window. The constants $c_1 = 0.01$ and $c_2 = 0.03$ are chosen as recommended by Wang et al. [22].
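For reproducibility, the per-channel MSE of Eq. (15) and a mean SSIM can be computed as sketched below; `structural_similarity` from scikit-image (0.19 or later) is used here as an assumed stand-in for the authors' SSIM implementation, with the standard constants K1 = 0.01 and K2 = 0.03 as in [22] and 8-bit input images assumed:

```python
import numpy as np
from skimage.metrics import structural_similarity


def mse_rgb(dehazed, ground_truth):
    """Mean squared error averaged over pixels and the r, g, b channels (cf. Eq. (15))."""
    diff = dehazed.astype(float) - ground_truth.astype(float)
    return float(np.mean(diff ** 2))


def ssim_rgb(dehazed, ground_truth):
    """Mean SSIM over the color channels of two 8-bit images (cf. Eq. (16))."""
    return structural_similarity(ground_truth, dehazed,
                                 channel_axis=2, data_range=255,
                                 K1=0.01, K2=0.03)
```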

A higher SSIM means the dehazed image is more similar to the ground truth. The average MSE and SSIM of the 10 pairs of synthetic images for each method are listed in Table 1. The WLS method has the largest MSE, while the proposed method has the smallest. The RNI method and the proposed method achieve better SSIM than the other methods.

To evaluate the dehazing results more comprehensively, the blind dehazing assessment by gradient ratioing at visible edges [23] is used. The method provides three blind indicators: the rate of new visible edges e, the average gradient ratio $\bar{r}$, and the percentage $\sigma$ of pixels that become saturated (black or white). They are computed as

$e = \frac{n_r - n_o}{n_o}$ (18)

$\bar{r} = \exp\left[ \frac{1}{n_r} \sum_{P_i \in \wp_r} \log r_i \right]$ (19)

$\sigma = \frac{n_s}{\dim_x \times \dim_y}$ (20)

where $n_o$ and $n_r$ denote the numbers of visible edges in the original image $I_o$ and in the restored image $I_r$, respectively, $\wp_r$ is the set of visible edges in $I_r$, and $P_i$ is a visible-edge pixel in $I_r$. $r_i$ is the ratio of the Sobel gradient at $P_i$ in the dehazed image to that in the original image. $n_s$ is the number of pixels that are saturated after contrast restoration but were not before, and $\dim_x$ and $\dim_y$ are the width and height of the image, respectively. Good dehazing results are therefore characterized by high values of e and $\bar{r}$ and low values of $\sigma$. These indicators are computed for the 10 pairs of images in MATLAB. The average e, $\bar{r}$ and $\sigma$ are given in Table 2. The proposed method has the highest e and $\bar{r}$ and a low $\sigma$, which further demonstrates that it produces better dehazing results than the other methods.
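The sketch below computes the three blind indicators of Eqs. (18)-(20) for a pair of gray-scale images normalized to [0, 1]; note that the thresholded Sobel magnitude used as the "visible edge" detector (and the threshold value) is an assumption made here for illustration, whereas Hautière et al. [23] define edge visibility through a specific local-contrast criterion:

```python
import numpy as np
from scipy import ndimage


def sobel_magnitude(gray):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    return np.hypot(gx, gy)


def blind_indicators(I_o, I_r, edge_thresh=0.1):
    """e, r_bar, sigma of Eqs. (18)-(20); I_o, I_r are gray images in [0, 1]."""
    g_o, g_r = sobel_magnitude(I_o), sobel_magnitude(I_r)
    edges_o, edges_r = g_o > edge_thresh, g_r > edge_thresh
    n_o, n_r = edges_o.sum(), edges_r.sum()
    e = (n_r - n_o) / n_o                                   # Eq. (18)
    ratios = g_r[edges_r] / np.maximum(g_o[edges_r], 1e-6)
    r_bar = np.exp(np.mean(np.log(ratios)))                 # geometric mean of gradient ratios
    saturated = ((I_r <= 0.0) | (I_r >= 1.0)) & (I_o > 0.0) & (I_o < 1.0)
    sigma = saturated.sum() / I_r.size                      # Eq. (20)
    return e, r_bar, sigma
```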

The experiments are conducted with various image sizes; 10 images of each size are selected and the average running time is reported in Table 3. Compared with the other algorithms, the proposed algorithm significantly reduces the computation time, especially for large images.

5. Conclusion

In this study, a dehazing method is proposed that fuses a pair of visible and NIR images. Compared with existing image enhancement and model-based dehazing algorithms, the proposed DT-CWT-based fusion method obtains satisfactory dehazing results without computing a complicated atmospheric scattering model. The image color is effectively regulated according to the haze concentration estimated by the dark channel. Finally, the guided image filter is used to eliminate the blocking artifacts induced by the dark channel and to remove the halos caused by the DT-CWT, while preserving the edge information of the original image. The experimental results demonstrate that the proposed approach performs better than the other related methods. Moreover, the proposed method does not alter regions or images without haze, so it is applicable whether haze is actually present or not.

Acknowledgments

This research was supported by the Sichuan University Scientific Research Foundation for the Introduction of Talent under Grant 2082204194074, the National Natural Science Foundation of China and the Civil Aviation Authority Jointly Funded Projects under Grant U1433126, and the National High-tech R&D Program (863 Program) under Grant 2015AA016405.

References

[1] J. H. Kim, J. Y. Sim and C. S. Kim, "Single image dehazing based on contrast enhancement," in Proc. of IEEE Conf. on Acoustics, Speech and Signal Processing, pp. 1273-1276, May 2011.

[2] Y. Zhou, Q. W. Li and G. Y. Huo, "Adaptive image enhancement using nonsubsampled contourlet transform domain histogram matching," Chinese Optics Letters, vol. 12, no. s2, pp. S21002-S21005, 2014.

[3] R. Fattal, "Single image dehazing," ACM Transactions on Graphics, vol. 27, no. 3, pp. 1-9, 2008.

[4] K. M. He, J. Sun and X. O. Tang, "Single image haze removal using dark channel prior," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1956-1963, June 2009.

[5] K. B. Gibson, D. T. Vo and T. Q. Nguyen, "An investigation of dehazing effects on image and video coding," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 662-673, 2012.

[6] A. Levin, D. Lischinski and Y. Weiss, "A closed form solution to natural image matting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228-242, 2008.

[7] H. B. Liu, J. Yang, Z. P. Wu et al., "Fast single image dehazing based on image fusion," Journal of Electronic Imaging, vol. 24, no. 1, pp. 013020, 2015.

[8] Y. Y. Gao, H. M. Hu and S. H. Wang, "A fast image dehazing algorithm based on negative correction," Signal Processing, vol. 103, no. 10, pp. 380-398, 2014.

[9] D. Berman, T. Treibitz and S. Avidan, "Non-local image dehazing," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1674-1682, 2016.

[10] L. Schaul, C. Fredembach and S. Süsstrunk, "Color image dehazing using the near-infrared," in Proc. of IEEE Conf. on Image Processing, pp. 1629-1632, 2009.

[11] Z. Farbman, R. Fattal and D. Lischinski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics, vol. 27, no. 3, pp. 1-10, 2008.

[12] N. G. Kingsbury, "The dual-tree complex wavelet transform: a new technique for shift invariance and directional filters," in Proc. of IEEE Conf. on Image Processing, pp. 319-322, 1998.

[13] N. G. Kingsbury, "A dual-tree complex wavelet transform with improved orthogonality and symmetry properties," in Proc. of IEEE Conf. on Image Processing, vol. 2, pp. 375-378, 2000.

[14] K. M. He, J. Sun and X. O. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, 2013.

[15] A. T. Young, "Rayleigh scattering," Applied Optics, vol. 20, no. 4, p. 533, 1981.

[16] C. Fredembach and S. Süsstrunk, "Colouring the near-infrared," in Proc. of IS&T/SID Color Imaging Conference, pp. 176-182, 2008.

[17] W. Wang and F. Chang, "A multi-focus image fusion method based on Laplacian pyramid," Journal of Computers, vol. 6, no. 12, pp. 2559-2566, 2011.

[18] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.

[19] F. Nencini, A. Garzelli, S. Baronti and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143-156, 2007.

[20] T. Shibata, M. Tanaka and M. Okutomi, "Versatile visible and near-infrared image fusion based on high visibility area selection," Journal of Electronic Imaging, vol. 25, no. 1, pp. 013016, 2016.

[21] H. Wang, Z. L. Jing and J. X. Li, "Multi-focus image fusion using image black segment," Journal of Shanghai Jiaotong University, vol. 37, no. 11, pp. 1743-1750, 2003.

[22] Z. Wang, A. C. Bovik and H. R. Sheikh, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

[23] N. Hautière, J. P. Tarel, D. Aubert et al., "Blind contrast enhancement assessment by gradient ratioing at visible edges," Image Analysis and Stereology, vol. 27, no. 2, pp. 87-95, 2008.

Changxin Huang received the B.S. degree in Automation from the School of Automation Science and Engineering, South China University of Technology, in 2015. He is currently pursuing an M.S. degree in advanced guidance and flight simulation technology at the School of Aeronautics and Astronautics, Sichuan University. His research interests include camera sensor networks, target detection and recognition.

Wei Li received the B.S. and Ph.D. degrees from the School of Mechatronical Engineering, Beijing Institute of Technology, China, in 2003 and 2008, respectively. From 2010 to 2011, he was a postdoctoral researcher at the Center of Industrial Electronics, Polytechnic University of Madrid, Spain. He is currently an associate professor at the School of Aeronautics and Astronautics, Sichuan University, China. His research interests include quality of service of wireless sensor networks, target tracking in camera sensor networks, and surveillance in camera networks.

Songchen Han received the B.S. degree in flying vehicle control from Harbin Engineering University in 1987, his M.S. degree in general dynamics from Harbin Institute of Technology in 1990, and his Ph.D. degree in flying vehicle design from Harbin Institute of Technology in 1997. He is currently a professor with School of Aeronautics and Astronautics, Sichuan University. His research interests include air traffic control, surveillance in airport surface, and sensor networks.

Binbin Liang received the M.S. degree in Engineering from the College of Civil Aviation at Nanjing University of Aeronautics and Astronautics in 2015. He is currently pursuing a doctoral degree in advanced guidance and flight simulation technology at the School of Aeronautics and Astronautics, Sichuan University. His research interests cover artificial intelligence and civil aviation systems.

Peng Cheng received his Ph.D. degree from Sichuan University, Chengdu, China, in 2014. He is currently a Lecturer at the School of Aeronautics and Astronautics, Sichuan University, China. His research interests include image registration, image fusion and computer vision.

Changxin Huang (1), Wei Li (1,2), Songchen Han (1,2), Binbin Liang (1) and Peng Cheng (1,2)

(1) School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China

[e-mail: hchangxin@163.com]

(2) National Key Laboratory of Air Traffic Control Automation System Technology, Chengdu 610065, China

[e-mail: pengchengscu@163.com]

* Corresponding author: Peng Cheng

Received December 10, 2017; revised March 8, 2018; accepted April 15, 2018; published October 31, 2018

http://doi.org/10.3837/tiis.2018.10.022
Table 1. Average MSE and SSIM indexes of five methods.

Methods   MSE     SSIM

DCP       7.325   0.713
WLS       9.040   0.744
RNI       6.660   0.823
HAL       4.139   0.528
Ours      3.642   0.836

Table 2. Average e, r̄ and σ of five methods.

Methods   e       r̄        σ (%)

DCP       0.754   1.691       0.555
WLS       0.142   1.224       0.011
RNI       0.646   1.630       0.452
HAL       0.863   1.425       0.586
Ours      1.290   1.831       0.058

Table 3. Computational time (s) comparison.

Resolution   1200x1680   800x1100  400x600   200x300  200x200

DCP          16.554      8.879     2.442     0.585    0.411
WLS          62.136     20.443     4.937     1.479    0.542
RNI           9.777      3.802     0.998     0.388    0.256
HAL          18.315     11.589     8.292     3.751    2.322
Ours          5.141      2.341     1.350     0.145    0.081