
Gray-Scale Image Dehazing Guided by Scene Depth Information

1. Introduction

Image dehazing is a fundamental technique for restoring images of hazy or foggy scenes, and it has drawn increasing attention in recent years owing to its crucial role in numerous fields such as aerial photography. A considerable number of dehazing methods have emerged to date, and they can be categorized into two strategies: one based on image enhancement and the other on the atmospheric physical model. Both strategies, however, suffer from drawbacks that inhibit their wider application. The enhancement-based strategy primarily addresses the limited resolution of human vision while ignoring the physical origin of image degradation: the haze-free image is recovered by adjusting local and global contrasts and color coefficients. Although this kind of method offers low complexity, a wide range of application, and satisfactory results for both hazy and haze-free images, its dehazing effect varies considerably across scenes, because detail information may be enhanced excessively in images of complex scenes.

In comparison, the model-based strategy takes the physical origin of image degradation into account: a degradation model of the hazy image is built to compensate more accurately for the information lost during imaging in foggy weather. Although this strategy has become the leading trend thanks to these advantages, it suffers from high computational complexity because prior information, for example, scene depth or contrast statistics of the hazy image, must be obtained in advance. Moreover, once the prior assumption fails, the model cannot be solved mathematically.

Fattal [1] proposed estimating the scene reflection intensity by independent component analysis, assuming that the reflection intensity of a local region in a hazy image is constant and that the transmission is uncorrelated with the surface color of objects. This assumption is not solid, however, because for images with thick haze the statistics become ineffective owing to insufficient color information. He et al. [2], on the other hand, proposed the dark channel prior based on statistics of a large number of haze-free outdoor images. Following this seminal work, a corresponding fast algorithm was proposed in [3], and the halo effect was remarkably reduced in [4] by template segmentation in the dark channel calculation. Nevertheless, the dark channel approach suffers from high computational complexity, and it fails for images with thick haze or a large sky region because serious color distortion occurs there. A method that handles such images is therefore urgently needed, and providing one is one aim of this paper.

In addition, other important dehazing methods have emerged in recent years. Ancuti and Ancuti [5] first demonstrated the utility and effectiveness of a fusion-based technique for dehazing a single degraded image. Chen et al. [6] proposed a novel visibility restoration approach based on Bi-Histogram modification, which integrates a haze density estimation module and a haze formation removal module for accurate estimation of haze density in the transmission map. A seminal representative of nighttime dehazing is the work of Li et al. [7], where a framework is proposed to reduce the effect of glow in the original nighttime image, so that the image consists of direct transmission and airlight only. Huang et al. [8] presented a novel Laplacian-based visibility restoration approach that effectively solves inadequate haze thickness estimation and alleviates color cast problems. Meng et al. [9] explored an inherent boundary constraint that, combined with a weighted $L_1$-norm based contextual regularization, is modeled as an optimization problem for estimating the unknown scene transmission.

Huang et al. employed a combination of three major modules, namely, a depth estimation module, a color analysis module, and a visibility restoration module, for visibility restoration [10], and also presented an effective haze removal approach that remedies problems caused by localized light sources and color shifts, thereby achieving superior restoration results for single hazy images [11]. Zhu et al. [12] proposed a simple but powerful color attenuation prior, creating a linear model for the scene depth of the hazy image under this prior and learning the model parameters with a supervised learning method.

In terms of gray-scale image dehazing, the abovementioned methods based on the atmospheric physical model are rather restricted because they work only for color images. He et al.'s method [2] can still be applied, but only under very strict restrictions on the gray-scale image. In brief, a color image generally contains much more information than a gray-scale image, which makes the dehazing model easier to solve; for a gray-scale image, the abovementioned methods yield no determinate solution because less information is available. Meanwhile, gray-scale imaging remains an indispensable part of image processing: the resolution of a gray-scale image is almost four times that of a color image of the same image size, so gray-scale image dehazing is increasingly significant in high resolution observation fields such as the aerospace industry. Addressing it constitutes the other aim of this paper.

In this paper, we present a novel method for gray-scale image dehazing that combines the advantages of the two dehazing strategies categorized above. The performance of the proposed method is shown in Figure 1, in which Figure 1(a) is the original gray-scale image with thick haze and a large sky region, and Figures 1(b) and 1(c) illustrate the haze-free image and the image of scene depth proportion, respectively. The image of scene depth proportion is estimated directly from the original gray-scale image. Scene depth is a straightforward reflection of the enhancement demands of the various regions in the original image, so the enhancement process can be layered automatically under the guidance of the image of scene depth proportion. With this differential compensation, the invalidity of the atmospheric physical model for images with thick haze or a large sky region is reasonably resolved; meanwhile, excessive enhancement is avoided in the enhanced image, and brightness consistency is guaranteed.

The rest of this paper is organized as follows. Section 2 states the problems in existing methods and the corresponding solutions. Section 3 introduces the proposed method, and Section 4 presents experimental results demonstrating its effectiveness. Section 5 concludes the paper.

2. Issues in Existing Methods and Corresponding Solutions

2.1. Guided Image Filter and Modified Guided Image Filter. Image enhancement can be effectively realized by the guided image filter [13], which is widely used in computer vision. Assume that the original image is expressed by $p$ and that the output image $q$ is linearly related to the guide image $I$ in a local window $\omega_k$; then the output in a window of radius $r$ centered at pixel $k$ can be written as

$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$, (1)

where $(a_k, b_k)$ are constant coefficients in the window $\omega_k$, and each pixel $i$ is covered by all the overlapping windows $\omega_k$ that contain it.

Hence the values of $q_i$ in (1) computed over different windows are not equal, so the averaging strategy of [13] is adopted. Owing to the symmetry of the box window, the filtering output $q_i$ is computed by

$q_i = \bar{a}_i I_i + \bar{b}_i$, (2)

where $\bar{a}_i = (1/|\omega|) \sum_{k \in \omega_i} a_k$ and $\bar{b}_i = (1/|\omega|) \sum_{k \in \omega_i} b_k$ are the average coefficients of all windows overlapping $i$.
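For concreteness, the filter defined by (1) and (2) can be written in a few lines of NumPy. The following is a minimal illustrative sketch of He et al.'s algorithm [13], not the authors' implementation; it assumes `scipy.ndimage.uniform_filter` as the box-window mean, and the default values of `r` and `eps` are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Guided image filter of He et al. [13] (illustrative sketch).

    I   : guide image, float array with values in [0, 1]
    p   : input image to be filtered (p = I for self-guided filtering)
    r   : window radius, so each box window omega_k is (2r+1) x (2r+1)
    eps : regularization term controlling the degree of smoothing
    """
    mean = lambda x: uniform_filter(x, size=2 * r + 1)  # box-window average
    mean_I, mean_p = mean(I), mean(p)
    # closed-form per-window coefficients (a_k, b_k) of Eq. (1)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    # Eq. (2): average the coefficients of all windows overlapping pixel i
    return mean(a) * I + mean(b)
```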

Based on the definitions above, the filtering output $q$ can be obtained, and the difference $(I - q)$ is regarded as the boundary information. Image noise is neglected here because it is secondary high frequency information compared with the primary image boundary. The boundary image is then amplified several times and added back to $q$, so the enhanced image $I_e$ can be expressed as

$I_e = q + \epsilon (I - q)$, (3)

where $\epsilon$ is the enhancement coefficient. Since boundary preservation is the most prominent characteristic of the guided image filter, all boundaries of the original image $p$ are theoretically preserved in the filtering output $q$. In practice, some boundaries are removed by the filtering operation because the filtering output is only an approximate solution. In brief, boundary preservation is favorable for image filtering but adverse to image enhancement.

Because boundary information is partly retained in the filtering output $q$, the extraction of boundaries from $(I - q)$ is incomplete, and these boundaries are exactly the information most degraded in a hazy image. To tackle this problem, a modified guided image filter is built by mean filtering the guide image $I_i$ only in (2), which reduces the boundary-preserving ability for the purpose of image enhancement.
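The modification amounts to one changed line relative to the sketch above: the guide $I_i$ in (2) is replaced by its mean-filtered version. A hedged sketch follows, reusing the `guided_filter` conventions; `coeff` stands for the enhancement coefficient $\epsilon$ of (3) (renamed to avoid clashing with the regularizer `eps`), and the concrete values of `r`, `eps`, and `coeff` are illustrative assumptions, not values prescribed by the paper.

```python
def modified_guided_filter(I, p, r=8, eps=1e-3):
    """Modified guided filter: mean filtering is applied to the guide I
    only in Eq. (2), weakening boundary preservation so that more of the
    boundary information is pushed into the detail layer (I - q)."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return mean(a) * mean_I + mean(b)   # mean-filtered guide in Eq. (2)

def enhance(I, r=8, eps=1e-3, coeff=4.0):
    """Detail enhancement of Eq. (3): I_e = q + coeff * (I - q)."""
    q = modified_guided_filter(I, I, r, eps)   # self-guided filtering
    return np.clip(q + coeff * (I - q), 0.0, 1.0)
```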

We compare the performance of the common and modified guided image filtering in Figure 2. For fairness, the parameters of both filters are set to the same values. Figure 2(a) shows the original hazy image, and the boundary images extracted by the common and modified guided image filters are illustrated in Figures 2(b) and 2(c), respectively. All pixel values in Figures 2(b) and 2(c) are uniformly amplified 5 times for display. Note that the test images in this paper are taken from the Internet.

It is observed that the contrast of the extracted boundary image is obviously increased by the proposed filter, especially for boundaries with low brightness (i.e., dark pixels), as seen in Figures 2(b) and 2(c). Figure 2(d) shows another hazy image with a thicker haze layer, and the boundary images extracted by the common and modified guided image filters are shown in Figures 2(e) and 2(f), respectively, likewise amplified 5 times for display. Again, the modified guided image filter is advantageous over the common one because more details are preserved in Figure 2(f).

2.2. Dark Channel Prior and Image of Scene Depth Proportion. The McCartney atmospheric physical model is widely adopted in the second dehazing category and is defined as

$I(x) = J(x)t(x) + A(1 - t(x))$, (4)

where $I(x)$ is the original hazy image, $J(x)$ is the haze-free image, $A$ is the atmospheric light value, $t(x)$ is the medium transmission, and $x$ indexes the pixel. Image dehazing estimates $A$ and $t(x)$ from $I(x)$ and then recovers the ultimate haze-free image $J(x)$.

The dark channel prior [2] is considered the most effective way to solve the atmospheric physical model in (4) and is described by

$L^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \left( \min_{C \in \{r,g,b\}} L^{C}(y) \right)$, (5)

where $L^{C}$ is a color channel of image $L$ and $\Omega(x)$ is a template centered at $x$. He et al. [2] reported that in nonsky regions the intensity of the dark channel $L^{\mathrm{dark}} \to 0$; the complicated multiplicative term $J(x)t(x)$ in (4) can then be eliminated to estimate the transmission image $t(x)$, and ultimately an approximate haze-free image $J(x)$ is obtained from (4). Note that this solution procedure is specific to color images, which contain much more information conducive to solving the atmospheric physical model.
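As a reference point for the color-image pipeline just summarized, a minimal sketch of (5) and the subsequent recovery follows. The constants `omega = 0.95` and `t0 = 0.1` are the common choices reported in [2], not values stated here, and the code assumes a normalized H x W x 3 float image with `A` given as a length-3 array.

```python
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Eq. (5): minimum over the color channels, then over a local
    template Omega(x); for a gray-scale image the channel minimum is
    the image itself."""
    if img.ndim == 3:
        img = img.min(axis=2)               # min over C in {r, g, b}
    return minimum_filter(img, size=patch)  # min over Omega(x)

def dehaze_dcp(I, A, omega=0.95, t0=0.1):
    """He et al.'s recovery [2]: with L_dark -> 0 in nonsky regions,
    t(x) = 1 - omega * dark_channel(I / A), then Eq. (4) is inverted."""
    t = np.maximum(1.0 - omega * dark_channel(I / A), t0)
    return (I - A) / t[..., None] + A       # J(x) = (I(x) - A)/t(x) + A
```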

The soft matting adopted in [2] can be replaced by the guided image filter [13] to improve processing speed; however, both soft matting and guided image filtering merely optimize the transmission image $t(x)$. Unlike the color image case, there is no determinate solution of $t(x)$ for a gray-scale image, because a gray-scale image contains too little information to solve the atmospheric physical model, and consequently the haze-free image $J(x)$ cannot be recovered. To overcome this difficulty, this paper makes full use of the scene depth information, which is estimated directly from the original gray-scale image and whose reverse can be regarded as an approximate estimate of the transmission image. The detailed process of acquiring the image of scene depth proportion is as follows.

First, the atmospheric light value $A$ is estimated; following [14], it is taken at the furthest pixel rather than the brightest one in the original hazy image $I(x)$. On a foggy day a large quantity of fog droplets is suspended in the atmosphere, and the scattering intensity of light is then almost uniform across the visible spectrum, which renders the sky region bright white. The furthest pixel generally belongs to the sky region, but it is not always the brightest one, because pixels in nearby scenes may be bright (e.g., artificial light sources and bright white objects) and may exceed the furthest pixel value $A$ in $I(x)$. Therefore, taking the brightest pixel of a gray-scale image $I(x)$ as the estimate of $A$ is incorrect.

To exclude such nearby pixels, local minimum filtering is first applied to the gray-scale image. Unlike far-scene pixels, nearby bright pixels have their brightness markedly reduced by this minimum filtering, especially with a large filtering template, because some low-brightness pixels always surround the bright pixels of nearby scenes. As a result, the bright pixels remaining after minimum filtering are limited to the sky region, the position of the brightest pixel in the sky region is exactly the furthest pixel in $I(x)$, and its intensity can be regarded as an accurate approximation of $A$. Figures 3(a), 3(b), and 3(c) show the atmospheric light value $A$ found by the proposed method; the position of $A$ is marked in black, with the black area enlarged for display as indicated by the arrow.

Based on the analysis above, the image of scene depth proportion $D$ is obtained from the differences between $A$ and the other pixels of the minimum filtered image $L^{\mathrm{dark\,min}}$, so the concrete expression of $D$ is

$D = 1 - (A - L^{\mathrm{dark\,min}})$. (6)

Figures 3(d), 3(e), and 3(f) illustrate the $D$ images corresponding to Figures 3(a), 3(b), and 3(c), respectively. Under normal conditions, the further a pixel is, the higher its $D$ value.
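The two estimation steps above fit in a few lines, reusing `minimum_filter` imported in the dark channel sketch. The template size (here 31) is an assumed free parameter, since the paper does not state a concrete value.

```python
def depth_proportion(I, patch=31):
    """Estimate the atmospheric light A and the image of scene depth
    proportion D of Eq. (6) from a gray-scale hazy image I in [0, 1].

    A large minimum-filter template suppresses bright nearby pixels
    (artificial lights, white objects), so the brightest surviving
    pixel lies in the sky region, i.e., at the furthest pixel."""
    L_darkmin = minimum_filter(I, size=patch)  # local minimum filtering
    A = L_darkmin.max()                        # furthest (sky) pixel value
    D = 1.0 - (A - L_darkmin)                  # Eq. (6)
    return A, D
```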

The abovementioned algorithm for estimating $A$ is designed for the hazy image $I(x)$, but it can also be applied directly to the haze-free image $J(x)$ to acquire its atmospheric light value $A'$. Since the haze layer mainly reflects natural light with balanced RGB values, the overall brightness of $J(x)$ is lower than that of $I(x)$; therefore, the value of $A'$ in the haze-free image $J(x)$ is smaller than $A$ in the hazy image $I(x)$. Meanwhile, as seen from the estimation process, $A'$ lies at or near the brightest pixel value in $J(x)$, so $A'$ is larger than the pixel values of $J(x)$ as a whole. Based on (4), we have

$I(x) = J(x)t(x) + A(1 - t(x)) > J(x)t(x) + A'(1 - t(x)) > J(x)t(x) + J(x)(1 - t(x)) = J(x)$. (7)

One can see from (7) that the brightness of the haze-free image $J(x)$ is lower than that of the hazy image $I(x)$ as a whole. Mathematically, there is no obstacle to estimating $J(x)$ according to (4); visually, however, the haze-free image is inconsistent with the hazy image: comparing the brightness of the images before and after dehazing, human vision perceives them as taken under different illumination. Contrast experiments on this issue of brightness consistency are carried out in Section 4.

3. Proposed Method

Wavelet decomposition of the original image produces a low frequency layer and three high frequency layers (in the horizontal, vertical, and diagonal directions). Since different features of the original image are preserved in the low and high frequency layers, different dehazing strategies are adopted for the different layers. The low frequency layer has smoother boundaries and contains less noise than the original image, which forms the foundation of detail enhancement for image dehazing, while the three high frequency layers contain almost all the boundary information of the original image, which plays a key role in the ultimate enhancement effect. In short, a simple wavelet decomposition extracts the boundary information of the original image and preserves it in the high frequency layers, protecting it from damage in subsequent operations such as image filtering. Boundary information is essential for human vision to understand and recognize image scenes, and in a hazy image this crucial information is degraded to different degrees by the haze layer; a simple and effective dehazing method is therefore to enhance the boundary information accordingly.
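A single decomposition/reconstruction round trip can be sketched with the PyWavelets package; the Haar wavelet is an assumption for illustration, as the paper does not name the wavelet basis it uses, and `img` is a stand-in array.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)  # stand-in for a normalized gray-scale image

# one-level 2-D wavelet decomposition: a low frequency layer cA and three
# high frequency layers (horizontal cH, vertical cV, diagonal cD)
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# ... the four layers are enhanced separately, as described below ...

# wavelet reconstruction of the (enhanced) layers yields the output image
out = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```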

The scheme of the proposed method is sketched in Figure 4. With the modified guided image filtering proposed in this paper, the difference between the original image and the filtering output is called the detail layer. After amplification by a certain coefficient, the detail layer is added back to the filtering output to achieve the enhancement. This procedure is applied to each of the three high frequency layers after wavelet decomposition. Owing to the simple structure of the high frequency layers, this straightforward enhancement is effective for them; for the low frequency layer, however, the more complex content may lead to excessive enhancement in regions with obvious boundaries and insufficient enhancement in regions with fuzzy boundaries.

There is a direct relationship between the brightness of a boundary pixel and its scene depth. Light reaching the camera from a far scene is easily scattered by the haze layer, so boundary pixel values there are low by and large; conversely, light from a nearby scene is less affected by the haze layer, so boundary pixel values there are generally high. It is therefore reasonable to enhance the hazy image with the help of the image of scene depth proportion, setting the enhancement coefficient higher in far-scene regions and lower in nearby-scene regions. This enhancement guided by the scene depth information is applied to the fundamental low frequency layer.
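The depth guidance can be expressed as a per-pixel enhancement coefficient. The paper does not give an explicit formula mapping $D$ to the coefficient, so the proportional weighting below is a hypothetical choice; the downsampling of $D$ assumes a one-level Haar decomposition, under which the low frequency layer has roughly half the spatial size.

```python
def enhance_low_freq(cA, D, base=4.0):
    """Depth-guided enhancement of the low frequency layer cA: the
    coefficient of Eq. (3) varies per pixel, larger in far scenes
    (high D) and smaller in nearby scenes (low D)."""
    # match D (full resolution) to the half-size low frequency layer
    D_small = D[::2, ::2][:cA.shape[0], :cA.shape[1]]
    q = modified_guided_filter(cA, cA)
    return q + base * D_small * (cA - q)   # spatially varying coefficient
```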

The process of gray-scale image dehazing by the proposed method is illustrated in Figure 5. Figure 5(a) is the original gray-scale image, and Figures 5(b) and 5(d) are the low frequency layer and the high frequency layer in the horizontal direction, respectively. Note that only the enhancement of the horizontal high frequency layer is shown; the other high frequency layers are processed the same way. Figure 5(c) illustrates the low frequency layer enhanced 4 times by the modified guided image filtering under the guidance of the image of scene depth proportion, and Figure 5(e) shows the horizontal high frequency layer enhanced 2 times by the modified guided image filtering directly. All pixel values in Figures 5(d) and 5(e) are amplified 5 times for display.

Figure 5(f) illustrates the ultimate dehazing result, obtained by wavelet reconstruction of the enhanced low frequency layer and the three enhanced high frequency layers. The effect is remarkable compared with the original image in Figure 5(a). Because the enhancement is guided by the image of scene depth proportion, the dehazing effect in far scenes is guaranteed while excessive enhancement does not appear in nearby scenes. To verify the usefulness of the image of scene depth proportion, the modified guided image filtering was also applied directly to the original hazy image with the same enhancement coefficient of 4 for comparison; the result in Figure 5(g) shows obvious excessive enhancement of the nearby scenes. Scene depth information directly reflects the enhancement demands of the various regions of the original image, so the enhancement is layered automatically, turning a uniform enhancement method into an efficient hierarchical one.
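Putting the pieces together, the whole pipeline of Figure 4 reads as follows; the 4x and 2x coefficients are those quoted for Figure 5, while everything else inherits the assumptions of the earlier sketches.

```python
def dehaze_gray(I, low_base=4.0, high_coeff=2.0):
    """End-to-end sketch: estimate D, decompose, enhance the low
    frequency layer under depth guidance and the high frequency
    layers directly, then reconstruct (Figure 4)."""
    _, D = depth_proportion(I)
    cA, (cH, cV, cD) = pywt.dwt2(I, 'haar')
    cA_e = enhance_low_freq(cA, D, base=low_base)
    highs = []
    for c in (cH, cV, cD):
        q = modified_guided_filter(c, c)
        highs.append(q + high_coeff * (c - q))   # Eq. (3), applied directly
    J = pywt.idwt2((cA_e, tuple(highs)), 'haar')
    return np.clip(J, 0.0, 1.0)
```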

4. Experimental Results

The comparison between the image-enhancement-based method and our method was illustrated in Figure 5; in this section, classic and effective methods based on the atmospheric physical model, namely, Fattal [1], He et al. [2], Meng et al. [9], and Li et al. [7], are further compared with our method. To fully demonstrate the power of the proposed method, the major parameters of the abovementioned methods are set to appropriate values that yield satisfactory results, and the guided image filtering [13] is additionally applied to the method of He et al. [2] to improve its dehazing effect. The experiments are performed in MATLAB R2010b on a personal computer with an Intel Core i7-4510U processor.

The experiment consists of two main parts, beginning with a test image with thin haze and a small sky region, which is easier to deal with under normal circumstances. The results are shown in Figure 6. The original hazy image is illustrated in Figure 6(a); Figures 6(b), 6(c), 6(d), and 6(e) are the dehazing results by the methods of Fattal [1], He et al. [2], Meng et al. [9], and Li et al. [7], respectively. All of them exhibit an outstanding dehazing effect for this type of image; however, the inconsistency of brightness is obvious when comparing with the original image in Figure 6(a). To test the dehazing effect for the gray-scale case, we transform the original color image (Figure 6(a)) into the corresponding gray-scale one (Figure 6(f)). Figure 6(g) shows the gray-scale dehazing result by the proposed method, and the dehazing effect for the gray-scale image is clearly remarkable.

Applying the proposed method directly to each RGB channel of the color image yields the color dehazing result in Figure 6(h). The dehazing effect of the proposed method on the color image is comparable to that of the other methods but exhibits better brightness consistency between the original and haze-free images: human vision readily accepts the two images as captured under the same illumination.

To verify the performance of the proposed method further, images with thick haze and a large sky region are selected as test images, with results shown in Figure 7. Figures 7(a) and 7(f) are the original color hazy image and its corresponding gray-scale one, respectively. The results of Fattal [1], He et al. [2], Meng et al. [9], and Li et al. [7] are illustrated in Figures 7(b), 7(c), 7(d), and 7(e), respectively; serious color distortion is obvious in all of them, especially in the large sky region. Figure 7(g) shows the gray-scale dehazing result of our method, and Figure 7(h) illustrates the corresponding color result of our method for comparison with the other methods.

Figures 7(g) and 7(h) demonstrate experimentally the obvious dehazing effect of our method for both gray-scale and color images with thick haze and a large sky region; distortion is avoided effectively without changing the visual brightness. Moreover, an appropriate haze layer is automatically preserved in the distant sky region, where few boundaries exist, and the presence of appropriate haze helps human vision perceive scene depth [2].

To further verify the advantages of our method for images with thick haze and a large sky region, the gray-scale image of Figure 1(a) and its corresponding color image are used in a contrast experiment, as illustrated in Figure 8. Figure 8(a) shows the color hazy image, and Figures 8(b), 8(c), 8(d), and 8(e) are the dehazing results of Fattal [1], He et al. [2], Meng et al. [9], and Li et al. [7], respectively. The gray-scale hazy image is shown in Figure 8(f), and Figures 8(g) and 8(h) illustrate the dehazing results of our method on the gray-scale and color images, respectively. One can conclude from Figure 8 that no obvious distortion appears when our method is applied to either the gray-scale or the color image, and the visual luminance remains consistent with the original image. The proposed method is thus demonstrated to be effective in dehazing both color and gray-scale images and advantageous over the state-of-the-art dehazing methods.

5. Conclusion

In this paper, we report a novel and efficient method for gray-scale image dehazing that simultaneously takes into account the advantages of the dehazing strategies based on image enhancement and on the atmospheric physical model. From the image-enhancement-based strategy, the proposed method preserves simplicity, effectiveness, and freedom from color distortion, while overcoming its shortcoming of excessive detail enhancement caused by disregarding the image degradation model. Compared with the strategy based on the atmospheric physical model, our method also exhibits superiority: it effectively avoids color distortion and inconsistency of visual brightness, and it shows markedly higher performance when dealing with images with thick haze and a large sky region.

Most importantly, the proposed method overcomes the essential shortcoming of the abovementioned methods, which work mainly for color images. The guided image filter used for boundary preservation is also modified in this paper to improve the dehazing effect. Moreover, a high-quality image of scene depth proportion is obtained directly in the dehazing process as a byproduct.

Additional Points

Limitations. The effectiveness of our method has two limitations. First, imaging degradation under hazy weather increases with the imaging distance of the scene: objects close to the camera are less degraded by haze than those far away. Owing to the lack of obvious boundary structures in distant thick-haze regions, our method may fail to recover the true scenes there. Second, image degradation by suspended haze in the atmosphere consists in essence of two processes: (1) direct attenuation, which describes the percentage of the scene radiance reaching the camera, and (2) additive airlight, which represents nonscene light participating in the imaging process and leading to a shift of the scene colors. General enhancement methods share a common problem: although (1) is relatively easy to deal with, (2) cannot be solved effectively. We leave these problems for future research.

http://dx.doi.org/10.1155/2016/7809214

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by Foundation of Key Laboratory of Space Active Opto-Electronics Technology of Chinese Academy of Sciences (no. AOE-2016-A02), National Natural Science Foundation of China (nos. 61379010 and 41601353), Natural Science Basic Research Plan in Shaanxi Province of China (no. 2015JM6293), and Scientific Research Program Funded by Shaanxi Provincial Education Department (no. 16JK1765).

References

[1] R. Fattal, "Single image dehazing," ACM Transactions on Graphics, vol. 27, no. 3, article 72, pp. 1-9, 2008.

[2] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, 2011.

[3] F.-C. Cheng, C.-H. Lin, and J.-L. Lin, "Constant time O(1) image fog removal using lowest level channel," Electronics Letters, vol. 48, no. 22, pp. 1404-1406, 2012.

[4] T. M. Bui, H. N. Tran, W. Kim, and S. Kim, "Segmenting dark channel prior in single image dehazing," Electronics Letters, vol. 50, no. 7, pp. 516-518, 2014.

[5] C. O. Ancuti and C. Ancuti, "Single image dehazing by multi-scale fusion," IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271-3282, 2013.

[6] B.-H. Chen, S.-C. Huang, and J. H. Ye, "Hazy image restoration by bi-histogram modification," ACM Transactions on Intelligent Systems and Technology, vol. 6, no. 4, article 50, 2015.

[7] Y. Li, R. T. Tan, and M. S. Brown, "Nighttime haze removal with glow and multiple light colors," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '15), pp. 226-234, Santiago, Chile, December 2015.

[8] S.-C. Huang, J.-H. Ye, and B.-H. Chen, "An advanced single-image visibility restoration algorithm for real-world hazy scenes," IEEE Transactions on Industrial Electronics, vol. 62, no. 5, pp. 2962-2972, 2015.

[9] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 617-624, IEEE, Sydney, Australia, December 2013.

[10] S.-C. Huang, B.-H. Chen, and W.-J. Wang, "Visibility restoration of single hazy images captured in real-world weather conditions," IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 10, pp. 1814-1824, 2014.

[11] S.-C. Huang, B.-H. Chen, and Y.-J. Cheng, "An efficient-visibility enhancement algorithm for road scenes captured by intelligent transportation systems," IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 5, pp. 2321-2332, 2014.

[12] Q. Zhu, J. Mai, and L. Shao, "A fast single image haze removal algorithm using color attenuation prior," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, 2015.

[13] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, 2013.

[14] B. Jiang, W. Zhang, H. Meng, Y. Ru, Y. Zhang, and X. Ma, "Single image haze removal on complex imaging background," in Proceedings of the 6th IEEE International Conference on Software Engineering and Service Science (ICSESS '15), pp. 280-283, IEEE, Beijing, China, September 2015.

Bo Jiang, (1) Wanxu Zhang, (1) Jian Zhao, (1) Yi Ru, (1) Min Liu, (2) Xiaolei Ma, (3) Xiaoxuan Chen, (1) and Hongqi Meng (1)

(1) School of Information Science and Technology, Northwest University, Xi'an 710127, China

(2) Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China

(3) Department of Physics, Emory University, Atlanta, GA 30322, USA

Correspondence should be addressed to Hongqi Meng; mhq@stumail.nwu.edu.cn

Received 18 February 2016; Revised 14 September 2016; Accepted 26 September 2016

Academic Editor: Ruben Specogna

Caption: FIGURE 1: Gray-scale image dehazing using proposed method. (a) Gray-scale image. (b) Haze-free image. (c) Image of scene depth proportion.

Caption: FIGURE 2: Contrast results between common and modified guided image filters. (a) and (d) Hazy images. (b) and (e) Extracted detail images using common guided image filter. (c) and (f) Extracted detail images using modified guided image filter.

Caption: FIGURE 3: Image of scene depth proportion. (a), (b), and (c) Gray-scale hazy images. (d), (e), and (f) Corresponding images of scene depth proportion of (a), (b), and (c), respectively.

Caption: FIGURE 4: Scheme sketch of proposed method.

Caption: FIGURE 5: The process of gray-scale image dehazing by proposed method. (a) Original gray-scale hazy image. (b) Low frequency layer. (c) Enhanced image of (b). (d) High frequency layer in horizontal direction. (e) Enhanced image of (d). (f) Dehazing result with the image of scene depth proportion. (g) Dehazing result using modified guided image filtering directly.

Caption: FIGURE 6: Experiment on image with thin haze and small sky region. (a) Color hazy image. (b) Result by method of Fattal [1]. (c) Result by method of He et al. [2]. (d) Result by method of Meng et al. (A is acquired manually) [9]. (e) Result by method of Li et al. [7]. (f) Gray-scale hazy image. (g) Gray-scale result by proposed method. (h) Color result by proposed method.

Caption: FIGURE 7: Experiment I on image with thick haze and large sky region. (a) Color hazy image. (b) Result by method of Fattal [1]. (c) Result by method of He et al. [2]. (d) Result by method of Meng et al. (A is acquired manually) [9]. (e) Result by method of Li et al. [7]. (f) Gray-scale hazy image. (g) Gray-scale result by proposed method. (h) Color result by proposed method.

Caption: FIGURE 8: Experiment II on image with thick haze and large sky region. (a) Color hazy image. (b) Result by method of Fattal [1]. (c) Result by method of He et al. [2]. (d) Result by method of Meng et al. (A is acquired manually) [9]. (e) Result by method of Li et al. [7]. (f) Gray-scale hazy image. (g) Gray-scale result by proposed method. (h) Color result by proposed method.