A Single Image Dehazing Method Using Average Saturation Prior.

1. Introduction

Atmospheric suspended particles (aerosols, water droplets, etc.) absorb and scatter light before it reaches a camera, so outdoor images captured in bad weather (haze, fog, etc.) are significantly degraded and exhibit poor visibility: blurred scene content, reduced contrast, and faint surface color. The majority of applications in computer vision and computer graphics, such as motion estimation [1, 2], satellite imaging [3, 4], object recognition [5, 6], and intelligent vehicles [7], assume that the input images have clear visibility. Thus, eliminating these negative visual effects and recovering the true scene, often referred to as "dehazing," is highly desirable and has broad practical value. However, dehazing is a challenging problem, since the magnitude of degradation is fundamentally spatially variant.

As a challenging problem, dehazing has been addressed by a variety of methods using different strategies. The first category removes haze with traditional image processing techniques, such as histogram-based [8, 9] and Retinex-based methods [10]. However, the recovered results may suffer from haze residue and unpleasant global visual effects, since the adjustment strategies do not consider the spatial relations of the degradation mechanism. A more sophisticated category attempts to improve dehazing performance by employing multiple images taken under different atmospheric conditions [11-13]. Although the dehazing effect can be enhanced, since extra information about the hazy image is obtained through the different atmospheric properties, the limitation of these methods is evident: the acquisition step is difficult to perform. Another category estimates the haze effects from images taken by a fixed camera through a polarization filter at different orientations [14-16]. However, these methods are only valid for mist images, where polarized light is the major degradation factor [17]; moreover, they are normally time-consuming.

Recently, benefiting from the atmospheric scattering model, many state-of-the-art model-based single image dehazing methods have been proposed [11, 18-28]. Although significant progress has been made, two main limitations remain. First, the atmospheric scattering model that researchers have adopted is only valid under homogeneous atmosphere conditions, as we discuss in Section 2, and these model-based methods therefore commonly lack robustness. Second, many model-based single image dehazing methods rely on an error-prone empirical value for the atmospheric scattering coefficient, owing to the difficulty of estimating it and the complexity of the model, and are therefore limited in robustness and effectiveness.

For instance, He et al. [21] proposed the dark channel prior based on a statistical observation, which enables a direct approximate estimation of the transmission map. Despite its effectiveness in most cases, this method cannot process inhomogeneous hazy images, due to the limitation of the atmospheric scattering model, and may fail in sky regions, where the prior is broken. Building on [21], Meng et al. [23] added a boundary constraint and estimated the transmission map via weighted contextual regularization. However, it is subject to color distortion for white objects, since it cannot fundamentally resolve the ambiguity between surface color and haze. Fattal's method [19] assumes that the surface shading factor and the transmission are statistically uncorrelated in a local patch and estimates the transmission within segmented scenes of constant scene albedo. Although it achieves impressive results when recovering homogeneous mist images, it fails for dense haze and inhomogeneous hazy images, where the assumption is invalid. Tan [18] proposed a novel dehazing method by assuming that clear-day images have higher contrast than hazy images; however, the results generated using a Markov Random Field (MRF) tend to be oversaturated, since this method is, in general, similar to contrast stretching. Based on a Bayesian probabilistic model and the atmospheric scattering model, Nishino et al. [22] jointly estimated the scene albedo and depth by fully leveraging their latent statistical structures. Despite nearly perfect dehazing results for dense hazy images, the results tend to be overenhanced for mist images. Tarel et al.'s method [20] estimates the atmospheric veil using combinations of filters; its advantage is linear complexity, so it can be implemented in real time. Nevertheless, the dehazing effect tends to be invalid where depth changes drastically, since the median filter involved provides poor edge-preserving performance. Zhu et al. [24] proposed the color attenuation prior, through which, together with a linear model, the depth information can be well estimated; nevertheless, it fails under inhomogeneous atmosphere conditions, since the atmospheric scattering model it adopts is invalid there. Wang et al. [26] proposed a fusion-based method to remove haze, but haze remains when processing inhomogeneous hazy images, because the atmospheric scattering model may be invalid in these cases. Moreover, this method is based on wavelet fusion and tends to fail for dense hazy images, because the ambiguity between image color and haze cannot be well separated. Although Jiang et al. [25] introduced an efficient hierarchical method for gray-scale image dehazing, it cannot handle inhomogeneous scenes well and suffers from color distortion due to the limitations of the model, similar to [26].

In this paper, we propose an improved atmospheric scattering model and a corresponding single image dehazing method aimed at overcoming the two main aforementioned limitations. Compared with previous methods, the major contributions of our method are as follows. (1) We propose an improved atmospheric scattering model to address the limitation of the current model, which is only valid under homogeneous atmosphere conditions. By considering the inhomogeneous atmosphere, the proposed model has better validity and robustness. (2) Based on the proposed model, we create a haze density distribution map and train the relevant parameters using a supervised learning method, which enables us to segment the hazy image into scenes based on haze density similarity. The inhomogeneous atmosphere problem can thereby be effectively converted into a group of homogeneous atmosphere problems. (3) Using the segmented scenes, combined with the proposed scene weight assignment function, we can effectively improve the estimation accuracy of the atmospheric light by excluding most of the potential errors. (4) Few dehazing methods estimate the atmospheric scattering coefficient; most simply use error-prone empirical values because of the estimation difficulty and model complexity, even though this problem has been noted by many researchers. We estimate the atmospheric scattering coefficient via the proposed average saturation prior (ASP).

The remainder of this paper is structured as follows. In the next section, we propose the improved atmospheric scattering model based on the limitation analysis of the current model. In Section 3, we present a novel single image dehazing method, which includes three key steps: scene segmentation via a haze density distribution map, improved atmospheric light estimation, and scene albedo recovery via the ASP. In Section 4, we present and analyze the experimental results. In Section 5, we summarize our method.

2. Improved Atmospheric Scattering Model

In computer vision and computer graphics, most model-based dehazing methods [11, 18-28] rely on the following atmospheric scattering model, which is formulated under the homogeneous atmosphere assumption [11, 12, 29, 30]:

I(x, y) = A · ρ(x, y) · e^(-β·d(x,y)) + A · (1 - e^(-β·d(x,y))), (1)

where (x, y) is the pixel index, I(x, y) denotes the hazy image, J(x, y) = A · ρ(x, y) represents the corresponding clear-day image, A is the atmospheric light, which is a constant value throughout the whole image, ρ(x, y) is the scene albedo, and t(x, y) is the transmission, which is defined as

t(x, y) = e^(-β·d(x,y)), (2)

where d(x, y) is the scene depth and β is the atmospheric scattering coefficient, which describes the ability of a unit volume of atmosphere to scatter light in all directions [11, 24, 29].
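As an illustration, the forward model (1)-(2) and its inversion can be sketched in a few lines of Python (an illustrative sketch, not the implementation used in this paper; the transmission clamp t_min is our choice, added to avoid noise amplification at large depths):

```python
import numpy as np

def apply_haze(J, A, beta, d):
    """Synthesize a hazy image via eq. (1): I = J*t + A*(1 - t),
    with transmission t = exp(-beta * d) from eq. (2).
    J: clear-day image (H, W, 3) in [0, 1]; A: RGB atmospheric light;
    beta: scattering coefficient; d: depth map (H, W)."""
    t = np.exp(-beta * d)[..., None]
    return J * t + A * (1.0 - t)

def remove_haze(I, A, beta, d, t_min=0.05):
    """Invert eq. (1) for the clear-day image, clamping the transmission
    so that division does not blow up where the haze is very dense."""
    t = np.maximum(np.exp(-beta * d), t_min)[..., None]
    return (I - A) / t + A
```

Synthesizing a hazy image and inverting it with the same A, beta, and d recovers the clear-day image exactly, which is a useful sanity check for any implementation of (1).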

Note that the atmospheric scattering coefficient β is a fixed scalar in (1), which indicates that the attenuation magnitude is constant throughout the entire hazy image. However, according to [11, 19, 24, 29], β is determined by the density of suspended particles in the atmosphere for a particular light wavelength. Consequently, treating β as constant in (1) is only valid under homogeneous atmosphere conditions, as discussed in [21, 24, 26, 29]. That is, when we simply regard β as a constant under inhomogeneous atmosphere conditions, the transmission in some scenes is inevitably underestimated or overestimated.

In addition, the atmosphere is inhomogeneous in most practical scenarios, since haze has an inherent dynamic diffusion property according to Fick's laws of diffusion [31]. As shown in Figure 1, the haze density spatially varies between different boxes within a hazy image.

However, this problem can be alleviated. Although the haze density varies spatially across the entire image, the haze density within a particular local region is approximately the same, since haze diffusion is physically smooth [11]. For instance, the haze density is generally similar within each box in Figure 1. Thus, an inhomogeneous hazy image can be converted into a group of homogeneous scenes based on haze density similarity, and each scene can be regarded as an independent homogeneous subimage. Based on this notion and inspired by [32, 33], we redefine the atmospheric scattering coefficient β in (1) as a scene-wise variable and propose an improved atmospheric scattering model:

I(x, y) = A · ρ(x, y) · e^(-β(i)·d(x,y)) + A · (1 - e^(-β(i)·d(x,y))), (x, y) ∈ Ω(i), (3)

where i is the scene index, Ω(i) is the pixel set of the ith scene, and β(i) is the scene atmospheric scattering coefficient, which is constant within a scene but varies between scenes. With this improvement, we address the inherent limitation of the atmospheric scattering model, because all types of hazy images can be precisely modeled.
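A minimal sketch of the scene-wise model (3), in which each pixel uses the scattering coefficient of its own scene (the label map and coefficient values below are illustrative):

```python
import numpy as np

def apply_haze_scenewise(J, A, betas, d, labels):
    """Eq. (3): per-scene scattering coefficients. labels[y, x] = i
    selects beta(i) = betas[i] for every pixel (x, y) in scene Omega(i)."""
    t = np.exp(-betas[labels] * d)[..., None]
    return J * t + A * (1.0 - t)
```

At equal depth, a scene with a larger β(i) is attenuated more strongly and pulled closer to the atmospheric light, which is exactly the behavior the constant-β model (1) cannot express.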

Recovering the clear-day image is challenging because the inverse problem via (1) is fundamentally ill-posed. The problem is further aggravated because the atmospheric scattering coefficient cannot simply be set to an error-prone empirical value, as is done in most state-of-the-art dehazing methods [11, 18-28]. We resolve this problem in Section 3.

3. A Novel Single Image Dehazing Method

In this section, we present a novel single image dehazing method that is based on the proposed improved atmospheric scattering model. In this method, we first create a haze density distribution map to describe the spatial relations of the haze density of a hazy image and then segment the hazy image into scenes based on haze density similarity. Next, as a by-product of segmentation, we improve the estimation accuracy of the atmospheric light using a proposed scene weight assignment function. Finally, based on the proposed ASP and the depth information provided via [24], we estimate the scene atmospheric scattering coefficient and recover the true scene albedo.

3.1. Scene Segmentation via a Haze Density Distribution Map

3.1.1. Definition of a Haze Density Distribution Map. Based on the improved atmospheric scattering model, we need to segment the hazy image into scenes based on haze density similarity. Thus, the spatial distribution of the haze density of a hazy image must be obtained. However, to our knowledge, no pixel-based no-reference haze density distribution model yet exists that is well consistent with practical judgments of haze density. Choi et al. [34] proposed the fog aware density evaluator (FADE), a patch-based evaluator that assesses the fog density of an entire hazy image or a local patch. However, as a patch-based assessment, it is relatively computationally expensive and therefore cannot serve as an intermediate step. Thus, a highly efficient pixel-based strategy for describing the spatial relations of the haze density of a hazy image is required.

According to [34-36], the haze density representation is primarily correlated with three measurable statistical features of a hazy image I(x, y): the brightness component I′(x, y), the texture detail component ∇I(x, y), and the saturation component I°(x, y). Thus, inspired by [24], we create a linear model, named the haze density distribution map, which can be expressed as

D(x, y) = γ_1 · I′(x, y) + γ_2 · ∇I(x, y) + γ_3 · I°(x, y) + γ_4, (4)

where D(x, y) is the haze density distribution map and γ_1, γ_2, and γ_3 are the corresponding unknown parameters for each component. Inspired by [37, 38], we should further account for the representation error, such as the quantization error introduced by the three components and noise; thus, we denote the total representation error by γ_4. According to (4), all the components are combined to yield a description of the haze density distribution. Note that the three components are relatively independent, so a slight deviation in one component will not affect the others.
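A sketch of (4) in Python follows. The exact definitions of the three components are not specified above, so the HSV value channel (brightness), gradient magnitude (texture detail), and HSV saturation used here are our assumptions:

```python
import numpy as np

def haze_density_map(img, g1=0.9313, g2=0.1111, g3=-1.4634, g4=-0.0213):
    """Sketch of the haze density distribution map, eq. (4).
    img: RGB image in [0, 1]. Default parameters are the trained values
    reported in Section 3.1.1; the component definitions are assumed."""
    v = img.max(axis=2)                                      # brightness I'
    c = img.min(axis=2)
    s = np.where(v > 0, (v - c) / np.maximum(v, 1e-6), 0.0)  # saturation I deg
    gy, gx = np.gradient(v)
    grad = np.hypot(gx, gy)                                  # texture detail
    return g1 * v + g2 * grad + g3 * s + g4
```

Note the negative weight γ_3 on saturation: hazier regions are less saturated, so low saturation raises the predicted density.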

To obtain the parameters of the linear model, we employ a supervised learning method with 500 training samples; each sample consists of a hazy image and the corresponding truth haze density map (to prepare the training data, we collected 500 hazy images of various types from the Internet and used them to produce the corresponding truth haze densities). Considering the superior accuracy of FADE, we adopt it as the reference for the truth haze density representation. The training strategy is designed as follows:

ψ(γ) = (1/|Λ|) · Σ_{i′ ∈ Λ} (D(i′) - D_t(i′))^2, (5)

where D_t(i′) denotes the FADE-derived truth haze density at pixel i′.

We utilize the gradient descent algorithm to estimate the linear parameters γ_1, γ_2, γ_3, and γ_4. By taking the partial derivatives of ψ with respect to γ_1, γ_2, γ_3, and γ_4, respectively, we obtain the following expressions:

∂ψ/∂γ_1 = (2/|Λ|) · Σ_{i′ ∈ Λ} (D(i′) - D_t(i′)) · I′(i′),
∂ψ/∂γ_2 = (2/|Λ|) · Σ_{i′ ∈ Λ} (D(i′) - D_t(i′)) · ∇I(i′),
∂ψ/∂γ_3 = (2/|Λ|) · Σ_{i′ ∈ Λ} (D(i′) - D_t(i′)) · I°(i′),
∂ψ/∂γ_4 = (2/|Λ|) · Σ_{i′ ∈ Λ} (D(i′) - D_t(i′)), (6)

where |Λ| is the total number of pixels in the training hazy images and i′ is the pixel index over the training hazy images. The linear parameters are updated as follows:

γ_l := γ_l - ∂ψ/∂γ_l, l ∈ {1, 2, 3, 4}, (7)

where the notation := indicates that γ_l on the left is assigned the value of the right-hand term. After training, we obtain the following optimal model parameters (to four decimal places): γ_1 = 0.9313, γ_2 = 0.1111, γ_3 = -1.4634, and γ_4 = -0.0213. The most important advantage of this model is its linear complexity. Once the model parameters have been determined, the model can describe the haze density distribution of any hazy image.
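The training procedure (5)-(7) amounts to fitting a linear model by gradient descent; a minimal sketch follows (our assumptions: a mean-squared-error loss against the FADE reference and an explicit learning rate lr, which the text does not specify):

```python
import numpy as np

def fit_density_params(X, y, lr=0.1, iters=5000):
    """Fit the linear parameters of eq. (4) by full-batch gradient descent
    on a mean-squared-error loss against a reference density map.
    X: (n, 3) feature rows [I', |grad I|, I deg]; y: (n,) reference
    densities (e.g., FADE values). Returns (g1, g2, g3, g4)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column carries gamma_4
    g = np.zeros(4)
    for _ in range(iters):
        grad = 2.0 / len(y) * Xb.T @ (Xb @ g - y)  # gradient of the MSE loss
        g -= lr * grad                              # descent step, cf. eq. (7)
    return g
```

On noiseless synthetic data generated from known parameters, the fit recovers those parameters, which is a convenient way to validate the training loop before using real FADE targets.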

In Figure 2, we select several inhomogeneous and homogeneous hazy images with various haze densities (see Figure 2(a)) and demonstrate the corresponding haze density distribution maps (see Figure 2(b)). The dark blue areas indicate the thinnest haze scene and the dark red areas represent the densest haze scene, and the color changes from dark blue to dark red along with increasing haze density. Note that the generated haze density distribution maps are visually consistent with the spatial feature of haze density.

However, we note that the haze density distribution map contains excessive texture details. These are caused by the depth structure of the scene objects, which can affect the components (brightness, texture details, and saturation) that we adopt to model the density distribution map. The haze density distribution should be flat and independent of any image structure [33]. Although the excessive texture details reflect microscopic haze density differences, processing them would incur extra computational cost. Eliminating part of the excessive texture details slightly sacrifices accuracy; however, we consider this a reasonable trade-off. Thus, we utilize the guided total variation model [33] to refine the haze density distribution map:

[mathematical expression not reproducible] (8)

where D_ref is the refined haze density distribution map, W is the weight function, D is the haze density distribution map, and G is the guidance image, which is set to the haze density distribution map itself. According to [33], (8) can be expressed and processed in an iterative form, and we set α_1 = 1, α_2 = 12, and α_3 = 1 as the regularization parameters for the approximation term, smoothing term, and edge-preserving term, respectively. Comparing Figures 2(b) and 2(c), we note that the excessive texture details have been significantly suppressed.
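The guided total variation model of [33] is not reproduced here; as a stand-in that conveys the refinement idea of (8) (smoothing D while preserving its strong edges, with G = D as guidance), a minimal self-guided filter can be sketched as:

```python
import numpy as np

def box(a, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via integral images."""
    p = np.pad(a, r, mode='edge')
    s = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
    w = 2 * r + 1
    return (s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]) / w ** 2

def refine_density(D, r=4, eps=1e-3):
    """Edge-preserving smoothing of the density map D with itself as
    guidance (G = D), in the spirit of the refinement step (8). This is
    our illustrative stand-in, not the guided total variation of [33]."""
    G = D
    mG, mD = box(G, r), box(D, r)
    var = box(G * G, r) - mG ** 2
    cov = box(G * D, r) - mG * mD
    a = cov / (var + eps)          # near 1 at strong edges, near 0 in flat areas
    b = mD - a * mG
    return box(a, r) * G + box(b, r)
```

In flat regions the local variance is small, so the output approaches the local mean (texture suppressed), while at strong density edges the output tracks the guidance and the edge survives.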

3.1.2. Scene Segmentation. Using the refined haze density distribution map D_ref, our goal in this step is to segment the map into a group of scenes based on haze density similarity. After segmentation, pixels within a particular scene should share an approximately identical haze density, which further implies that they share the same scene atmospheric scattering coefficient. This problem is fundamentally similar to data clustering; thus, we convert the segmentation process into a clustering problem and adopt the k-means clustering algorithm [39, 40]. The clustering procedure can be expressed as

min Σ_{i=1}^{k} Σ_{(x,y) ∈ Ω(i)} (D_ref(x, y) - φ_i)^2, (9)

where k is the cluster number, Ω(i) is the ith cluster, and φ_i is the cluster center. After extensive experiments and qualitative and quantitative comparisons (demonstrated in Section 4), we obtain a relatively balanced cluster number of k = 3. The k-means clustering algorithm iteratively forms mutually exclusive clusters by minimizing the mean squared distance from each pattern to its cluster center. The within-cluster sum of squares after the jth iteration can be expressed as

E(j) = Σ_{i=1}^{k} Σ_{(x,y) ∈ Ω(i)} (D_ref(x, y) - φ_i)^2, (10)

where j is the iteration index. The iteration stops when a convergence criterion is satisfied; we adopt the typical criterion [40] of no (or minimal) change after the jth iteration; that is,

|E(j) - E(j - 1)| < ε. (11)

We set ε = 10^(-4) to terminate this procedure. Note that because the clustering step monotonically decreases the within-cluster sum of squares (WCSS) and only a finite number of such partitions exist, the algorithm must converge to a (local) optimum.
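The segmentation step can be sketched as one-dimensional k-means over the per-pixel haze densities, stopping when the WCSS change drops below ε (quantile-based initialization is our choice; the text does not specify one):

```python
import numpy as np

def segment_scenes(vals, k=3, eps=1e-4):
    """k-means over per-pixel haze densities, cf. eqs. (9)-(11):
    alternate assignment and center updates until the within-cluster
    sum of squares E(j) changes by less than eps."""
    centers = np.quantile(vals, np.linspace(0, 1, k))  # spread initial centers
    prev = np.inf
    while True:
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for i in range(k):
            if np.any(labels == i):
                centers[i] = vals[labels == i].mean()
        wcss = np.sum((vals - centers[labels]) ** 2)   # E(j)
        if abs(prev - wcss) < eps:                     # convergence test (11)
            return labels, centers
        prev = wcss
```

For an image, `vals` would be the flattened D_ref, and the returned label array reshaped back to the image grid gives the scene map.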

However, the segmentation results may exhibit instability or oversegmentation, because the k-means clustering algorithm ignores spatial location and there is no guarantee that the global optimum is reached. Thus, we further refine the result via a fast MRF method [41] and denote the refined result as D_mrf. Figure 3 shows the results D_mrf corresponding to Figure 2.

3.2. Estimation of Atmospheric Light. The atmospheric light A is an RGB vector that describes the intensity of the ambient light in the hazy image. As discussed by [42], current single image dehazing methods estimate the atmospheric light either by user interactive algorithms [19, 24] or based on the most haze-opaque (brightest) pixels [18, 21, 43-45]. Nevertheless, the located brightest pixels may belong to an interference object, such as an extra light source, white/gray objects, and high-light noise. As demonstrated in Figure 4, both He et al.'s method [21] (in the green box) and Namer et al.'s method [15] (in the blue box) locate the interference object as the atmospheric light in a challenging hazy image.

As a by-product of scene segmentation, we can cope with this challenging task by designing a scene weight assignment function. Using this function, we can locate a candidate scene that excludes most interference objects. The function is designed based on three basic observations:

(1) The probability that a scene contains the most haze-opaque (brightest) pixels is proportional to the haze density [18, 21, 44, 45]. This can be inferred from the atmospheric scattering model: when the haze density at a pixel is infinite, the pixel intensity reduces to the atmospheric light.

(2) The most haze-opaque (brightest) pixels belong to the sky region with higher probability, and interference objects, such as rivers, extra light sources, and roads, are primarily located spatially lower than the sky scene. Thus, we can avoid these types of interference objects by considering the scene's vertical index.

(3) Most existing dehazing methods are not suitable for white/gray interference objects (cars, animals, etc.) because they are not sensitive to the white/gray color [24]. However, the scene coverage ratio for these objects is significantly smaller than the scene coverage ratio for a sky scene.

Accordingly, we assign the weight to each segmented scene in [D.sub.mrf] by considering the scene haze density, scene average height, and scene coverage ratio. Thus, the scene weight assignment function is defined as

[mathematical expression not reproducible] (12)

where res is the resolution of the hazy image and μ_i and |Ω_i| are the haze density and pixel count, respectively, of each segmented scene. Based on (12), each segmented scene is assigned a weight, and we take the scene with the top weight as the candidate scene S_A. In addition, to further eliminate the effect of high-light noise, we locate the top 0.1% brightest pixels within the candidate scene S_A as the potential atmospheric light and take the average value of these pixels as the atmospheric light.
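Given the candidate scene S_A as a binary mask, the final averaging step can be sketched as follows (the top 0.1% rule follows the text; ranking pixels by gray-level brightness is our assumption):

```python
import numpy as np

def atmospheric_light(I, scene_mask, p=0.001):
    """Estimate A inside the candidate scene S_A: average the top 0.1%
    brightest pixels of the scene, as described in Section 3.2.
    I: hazy image (H, W, 3) in [0, 1]; scene_mask: (H, W) boolean mask."""
    pix = I[scene_mask]                     # (n, 3) RGB pixels of S_A
    bright = pix.mean(axis=1)               # gray-level brightness (assumed)
    n = max(1, int(round(p * len(pix))))    # at least one pixel
    idx = np.argsort(bright)[-n:]           # indices of the top-p fraction
    return pix[idx].mean(axis=0)
```

Averaging over a small top fraction, rather than taking the single brightest pixel, makes the estimate robust to isolated high-light noise.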

As shown in Table 1, we list the assigned weight (to four decimal places) of each scene (scenes 1, 2, and 3) for Figure 3 and depict the located potential atmospheric light in the red outlined areas in Figure 5. We successfully located the atmospheric light and avoided most of the interference objects, as expected.

We also tested our method on the same challenging hazy image (Figure 4) and depict our results in the red outlined areas of Figure 4. The comparison demonstrates the advantage of our method.

3.3. Scene Albedo Recovery via ASP

3.3.1. Average Saturation Prior. Hazy images often lack visual vividness because the scene contents are extremely blurred, with reduced contrast and faint surface colors. Inspired by [21, 24], we conducted a number of experiments on various types of hazy images and high-definition clear-day outdoor images to identify statistical regularities.

Interestingly, as the experimental demonstration in Figure 6 shows, the RGB histograms of a hazy image are almost identically distributed (see Figure 6(c)). Conversely, the RGB histograms of a high-definition clear-day outdoor image of the same scene are clearly distinguishable (see Figure 6(d)). As shown in Figure 6(c), we also notice that the hazy image contains nearly zero pixels that are pure black (RGB (0, 0, 0)) or pure white (RGB (1, 1, 1)), whereas the high-definition clear-day outdoor image includes numerous such pixels (see Figure 6(d)).

These observations on the RGB histograms indicate that most pixels in a hazy image are extremely similar, which causes poor visibility, and vice versa. We infer that this observation translates into statistical regularities in the average saturation distribution; thus, we performed extensive tests on various types of hazy images and high-definition clear-day outdoor images.

Similar to [21, 24], we collect a large number of hazy images and high-definition clear-day outdoor images from the Internet using several search engines (with the keywords hazy image and high-definition clear-day outdoor images). Then, we randomly select 2,000 hazy images and obtain the average saturation probability distribution (see Figure 7(a)). Next, we select 4,000 high-definition clear-day outdoor images with landscape and cityscape scenes (where haze usually occurs) and manually cut out the sky regions (considering the similarity between the sky region and the hazy image). The corresponding average saturation probability distribution of the high-definition clear-day outdoor images is depicted in Figure 7(b).

The average saturation probability distribution of hazy images, as shown in Figure 7(a), is distinctly concentrated at approximately 0.005 (more than 40% at 0.005 and with a cumulative probability of more than 70% from 0 to 0.01). This finding indicates that few pixels are nearly pure white or black, which confirms our second observation on Figure 6(c). Thus, this result strongly suggests that the average saturation for a hazy image tends to be a very small value (0 to 0.01 with an overwhelming probability).

The average saturation probability distribution of high-definition clear-day outdoor images is demonstrated in Figure 7(b). We compute the expectation of the average saturation; the results indicate that the average saturation for a high-definition clear-day outdoor image tends to be 0.106 with a high probability. As demonstrated in Section 4, to further evaluate this conclusion, we select another six possible average saturation values and further compare the dehazing effect both qualitatively and quantitatively on another 200 various types of hazy images.
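The statistic behind the ASP is simply the mean HSV saturation of an image; a minimal sketch (the HSV saturation formula is standard, but its use here as the exact definition of the paper's statistic is our assumption):

```python
import numpy as np

def average_saturation(img):
    """Mean HSV saturation of an RGB image in [0, 1]: the ASP statistic
    (near 0-0.01 for hazy images, about 0.106 for clear-day ones)."""
    v = img.max(axis=2)                                      # HSV value
    s = np.where(v > 0, (v - img.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    return float(s.mean())
```

A fully gray image scores 0 and a pure-color image scores 1, so the clear-day expectation of 0.106 sits, as one would expect, well above the near-zero hazy-image range.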

3.3.2. Scene Atmospheric Scattering Coefficient Estimation and Scene Albedo Recovery. To our knowledge, most existing dehazing methods take an error-prone empirical value as the atmospheric scattering coefficient. Despite valuable progress toward overcoming this problem, there is likely no single optimal value for all types of hazy images (even homogeneous ones), due to the variation in haze density. For instance, Zhu et al. tested numerous atmospheric scattering coefficient values in [24] in pursuit of an optimal solution; however, the coefficient is ultimately simply assumed to be 1 in that method. Shi et al. [46] also tried to address the problem by considering the impact of Earth's gravity on the atmospheric suspended particles; however, the dehazing results tend to be unstable [33].

By combining the proposed ASP with the improved atmospheric scattering model, we can effectively estimate the scene atmospheric scattering coefficient within each scene. Solving (3) for the scene albedo gives

ρ(x, y) = (I(x, y) - A · (1 - e^(-β(i)·d(x,y)))) / (A · e^(-β(i)·d(x,y))), (x, y) ∈ Ω(i). (13)

Note that I(x, y) is given and A is estimated in Section 3.2; the scene albedo ρ(x, y) is now a function of the scene atmospheric scattering coefficient β(i) and the scene depth d(x, y). Owing to significant progress in estimating scene depth [22, 24], we assume that the scene depth d(x, y) is given by [24]. Therefore, the scene albedo ρ(x, y) is a function only of the scene atmospheric scattering coefficient β(i). For convenience of expression, we rewrite (13) as

ρ(x, y) = f(β(i)), (x, y) ∈ Ω(i). (14)

Next, based on the proposed ASP, we can obtain the scene atmospheric scattering coefficient β(i) as

β(i) = arg min_{β(i)} |ζ(f(β(i))) - 0.106|, (15)

where ζ(·) is the average saturation computing function. Note that (15) is convex, so we can obtain the optimal scene atmospheric scattering coefficient using the golden section method [47], with the termination criterion set to 10^(-4) according to [48, 49]. Once the scene atmospheric scattering coefficient β(i) has been estimated for all scenes, the corresponding scattering map is obtained. Because the scene atmospheric scattering coefficient estimation is inherently a scene-wise process, we utilize the guided total variation model [33] to improve the edge consistency. Figure 8 shows four example hazy images (Figures 8(a) and 8(b) are homogeneous, and Figures 8(c) and 8(d) are inhomogeneous) and the corresponding scattering maps. Notice that the scattering maps agree well with the corresponding hazy images.
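The golden section search used for this one-dimensional minimization can be sketched as follows (the bracket [0.01, 5.0] for β and the target 0.106 follow the ASP discussion; f_albedo stands for the mapping f in (14), and avg_sat for ζ):

```python
import numpy as np

def golden_section_min(f, lo, hi, tol=1e-4):
    """Golden section search for the minimizer of a unimodal f on [lo, hi],
    terminating when the bracket is shorter than tol (cf. [47-49])."""
    g = (np.sqrt(5) - 1) / 2                  # inverse golden ratio, ~0.618
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                           # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - g * (b - a)
            fc = f(c)
        else:                                 # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + g * (b - a)
            fd = f(d)
    return (a + b) / 2

def estimate_beta(f_albedo, avg_sat, target=0.106, lo=0.01, hi=5.0):
    """Pick the scene coefficient whose recovered albedo has average
    saturation closest to the ASP value, in the spirit of eq. (15)."""
    return golden_section_min(lambda b: abs(avg_sat(f_albedo(b)) - target), lo, hi)
```

Each iteration reuses one of the two interior evaluations, so only one new evaluation of the (relatively expensive) albedo recovery is needed per step.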

According to (3), we can directly obtain the scene albedo ρ(x, y), since all the unknown coefficients have been determined, including the atmospheric light A, the scene atmospheric scattering coefficient β(i), and the scene depth d(x, y). The clear-day image can then be recovered as J(x, y) = A · ρ(x, y).
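Once A, the per-scene coefficients β(i), and the depth map are available, the scene-wise recovery can be sketched as follows (the transmission clamp t_min is our choice, added for numerical robustness):

```python
import numpy as np

def recover_clear_image(I, A, betas, d, labels, t_min=0.05):
    """Recover J = A * rho by inverting eq. (3): per scene i,
    t = exp(-beta(i) * d) and rho = (I - A*(1 - t)) / (A*t),
    which simplifies to J = (I - A)/t + A."""
    t = np.maximum(np.exp(-betas[labels] * d), t_min)[..., None]
    return (I - A) / t + A
```

Synthesizing a hazy image with scene-wise coefficients and inverting it with the same A, β values, depth, and labels recovers the input, confirming that the scene-wise inversion is consistent with (3).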

4. Experiments

Given a hazy image with N pixels that is segmented into k scenes after j iterations, the computational complexity of the proposed method is O(Nkj), once the linear parameters γ_1, γ_2, γ_3, and γ_4 in (4) have been obtained via training. In our experiments, we implemented our method in MATLAB; approximately 1.9 seconds is required to process a 600 × 400 pixel image on a personal computer with a 2.6 GHz Intel(R) Core i5 processor and 8.0 GB of RAM.

In this section, we first demonstrate the experimental procedure for determining the clustering number in (9). Then, we demonstrate the validity of the proposed ASP through qualitative and quantitative experimental comparisons. Next, to verify the effectiveness and robustness of the corresponding dehazing method, we test it on various real-world hazy images and conduct a qualitative and quantitative comparison with several state-of-the-art dehazing methods, namely, those of Tarel et al. [20], Zhu et al. [24], He et al. [21], Ju et al. [33], and Meng et al. [23]. The parameters of our method are all given in Section 3, and the parameters of the five state-of-the-art dehazing methods are set to their optimal values according to [20, 21, 23, 24, 33] for fair comparison.

For quantitative evaluation and comparison, we adopt several extensively employed indicators: the percentage of new visible edges e, the contrast restoration quality r̄, FADE ζ, and the hue fidelity H. According to [50], the indicator e measures the ratio of edges that are newly visible after restoration, and the indicator r̄ measures the average visibility enhancement obtained by the restoration. The indicator ζ was proposed in [34] as an assessment of haze removal ability. The indicator H was presented in [51] as a statistical metric of hue fidelity after restoration. Higher values of e and r̄ imply better visual improvement after restoration, lower values of ζ indicate less haze residue (i.e., better dehazing ability), and a smaller value of H indicates that the dehazing method maintains better hue fidelity.

4.1. Experimental Comparison for the Clustering Number. In Section 3.1, we propose a haze density distribution map to describe the spatial relations of the haze density of a hazy image and adopt the k-means clustering algorithm to segment it into a group of scenes based on haze density similarity. To determine a relatively balanced clustering number k, we conducted a large number of experiments on different hazy images using different values of k. We then compared the dehazing effects in terms of qualitative comparison, computational time, and quantitative comparison using three indicators (e, r̄, and ζ).

Figure 9 shows five example experimental demonstrations of the qualitative comparison using different clustering numbers k, and Figures 10-13 show the corresponding quantitative comparison results for e, [bar.r], [zeta], and computational time, respectively. Through the qualitative comparison, we find that the dehazing effect improves as k increases from 1 to 3 and tends to stabilize afterwards. As we can see, when k equals 1 (which amounts to removing haze using the current atmospheric scattering model (1)), the haze residual is obvious (see Figure 9(b): the sky region in Test 1, the upper left corner in Test 2 and Test 3, and the long-range scene in Test 4). When k equals 3 (see Figure 9(d)), the haze is completely removed, the details of the scenes are adequately restored, the recovered color is natural and visually pleasing, and no overenhancement appears. However, the dehazing effect remains essentially the same even as k continues to increase (compare Figure 9(d) with Figures 9(e)-9(j)).

This observation is consistent with the quantitative comparison results, as shown in Figures 10-12. When k increases from 1 to 3, the values of e and [bar.r] rise (see Figures 10 and 11), which means that more edges are recovered and better visibility enhancement is obtained. The value of [zeta] decreases markedly (see Figure 12), which implies that more haze is removed as k increases from 1 to 3. Despite the increased computational time shown in Figure 13, we consider it a reasonable tradeoff for the better dehazing effect. Afterwards, as the clustering number increases further (from 3 to 9), the values of e and [bar.r] tend to be stable, and the value of [zeta] fluctuates and even rises slightly, as shown in Figures 10-12. Meanwhile, the computational time keeps rising with the clustering number.

The observations for the five example experimental demonstrations are consistent with the results of more than 200 experiments. Consequently, we conclude that a clustering number of k = 3 is a balanced choice for our method.

4.2. Experimental Comparison for ASP. In Section 3.3, we propose the ASP based on statistics of extensive high-definition clear-day outdoor images; the results indicate that a high-definition clear-day outdoor image has an average saturation of 0.106 with high probability. To further verify the validity of this conclusion, we test and compare the dehazing effect using different values of the average saturation (0.01, 0.05, 0.106, 0.15, 0.2, 0.25, and 0.3) on another 200 hazy images. Four example experimental demonstrations are depicted in Figure 14. Through qualitative comparison, it is obvious that the dehazing magnitude is approximately proportional to the average saturation value, especially as it rises from 0.01 to 0.15 (see Figures 14(b)-14(e)). However, when the average saturation value goes beyond 0.15, the recovered image looks dim and the color tends to be unnatural (see Figures 14(f)-14(h): the close-range scenes in Test 1 and Test 2, the upper left corner and middle part in Test 3, and the long-range scene in Test 4). When the average saturation equals 0.106, as shown in Figure 14(d), our method unveils most of the details, recovers vivid color information, and avoids overenhancement, with minimal halo artifacts.
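For reference, the average saturation that the ASP compares against 0.106 can be computed directly from the HSV saturation channel. A minimal sketch (the function name is our own):

```python
import numpy as np

def average_saturation(rgb):
    # Per-pixel HSV saturation S = 1 - min(R,G,B)/max(R,G,B),
    # taken as 0 for black pixels, averaged over the whole image.
    rgb = np.asarray(rgb, dtype=float)
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    sat = np.where(cmax > 0, 1.0 - cmin / np.maximum(cmax, 1e-12), 0.0)
    return float(sat.mean())
```

Under the ASP, a restored outdoor image is expected to score near 0.106 on this quantity, whereas a heavily hazed input, whose colors are washed toward gray, typically scores much lower.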

The corresponding quantitative comparisons of Figure 14 are shown in Figures 15-18. In addition to e, [bar.r], and [zeta], we also measure and compare the value of indicator H.

As shown in Figures 15 and 16, as the average saturation value rises from 0.01, the values of e and [bar.r] increase, reach a high value when the average saturation value equals 0.106, and fluctuate only slightly afterwards. This indicates that more newly visible edges are obtained and a better visual effect is achieved when the average saturation value reaches 0.106. This observation is consistent with Figure 17: the value of [zeta] declines significantly and tends to be stable when the average saturation value equals 0.106, which implies that the best haze removal effect is achieved at this value. However, as shown in Figure 18, the value of H stays at a low level and then increases dramatically once the average saturation value exceeds 0.106, which means that color distortion inevitably appears.

These observations on the four example experimental demonstrations are consistent with most of the remaining experimental results; thus, the ASP is physically valid and able to handle various types of hazy images well.

4.3. Qualitative Comparison. Considering that the five state-of-the-art dehazing methods can all generate nearly perfect results on ordinary hazy images, a visual ranking of the methods on such images is difficult. Thus, we select six challenging images: a homogeneous dense haze image (Figure 19(a)), a homogeneous image with large white or gray regions (Figure 20(a)), a homogeneous image with a sky region (Figure 21(a)), a homogeneous image with rich texture details (Figure 22(a)), an inhomogeneous long-range image (Figure 23(a)), and an inhomogeneous close-range image (Figure 24(a)).

Figures 19-24 demonstrate the qualitative comparison of the five state-of-the-art dehazing methods with our method. The original hazy images are displayed in column (a); columns (b) to (g), from left to right, depict the dehazing results and the corresponding zoom-in patches of the methods of Tarel et al., Zhu et al., He et al., Ju et al., Meng et al., and our method, respectively.

As shown in Figure 19(b), Tarel et al.'s method is obviously unable to process dense hazy images. This is because it uses a geometric criterion to decide whether an observed white region belongs to the haze or to a scene object, which is unreliable under dense haze conditions. In Figures 20(b) and 22(b), we can see that Tarel et al.'s results suffer from overenhancement; because this method is based on He et al.'s method, the transmission is inevitably overestimated, as discussed in [21]. In addition, haze clearly remains around sharp edges in Tarel et al.'s results, as shown in the zoom-in patches of Figures 21(b) and 23(b), since the median filter involved has poor edge-preserving behavior.

Zhu et al.'s method is a prior-based dehazing method and therefore cannot handle dense hazy images (see Figure 19(c)), since the underlying color attenuation prior fails for dense haze, where the haze density is independent of the scene depth. Although Zhu et al.'s method yields almost perfect results when processing homogeneous mist images, its dehazing effect is unstable for inhomogeneous hazy images (haze remains in the mountain scene in Figure 23(c) and in the zoom-in patches of Figure 24(c)). This is clearly because the adopted atmospheric scattering model is invalid under inhomogeneous atmospheric conditions.

Due to the inherent problem of the dark channel prior, He et al.'s method cannot be applied to regions where the brightness is similar to the atmospheric light (the sky region in Figure 21(d) is significantly overenhanced). Moreover, similar to Zhu et al.'s method, He et al.'s method tends to be unreliable when processing inhomogeneous hazy images; as we can see from the zoom-in patches of Figures 23(d) and 24(d), the haze cannot be removed globally.

Although Ju et al.'s method achieves quite good results, overexposure (see the zoom-in patches of Figures 22(e) and 24(e)) and color distortion (see the upper parts of Figures 22(e) and 23(e)) appear, since the transmission estimation method is parameter-sensitive. As shown in Figure 20(e), Ju et al.'s method recovers most of the scene objects but suffers from overenhancement.

Meng et al.'s method is based on [21] and further improves the dehazing effect by adding a boundary constraint, but the ambiguity between the image color and the haze remains, so the method fails for the sky region in Figure 21(f). In addition, Meng et al.'s results suffer significantly from overall color distortion, as illustrated in Figures 21(f), 22(f), and 23(f).

In contrast, our method removes most of the haze, unveils the scene objects well, maintains the color fidelity, and eliminates overenhancement, with minimal halo artifacts. Note that, by taking advantage of the proposed improved atmospheric scattering model, our method is effective for both homogeneous and inhomogeneous hazy images.

4.4. Quantitative Comparison. To quantitatively assess and rate the five state-of-the-art dehazing methods and our method, we compute four indicators (e, [bar.r], [zeta], and H) for the dehazing effects of Figures 19-24 and list the corresponding results in Tables 2-5. For convenience, we indicate the top value in bold and italics and the second-highest values in bold.

According to Table 2, our results yield the top value for Figure 24, which is a typical inhomogeneous hazy image. Although our results only achieve the second top value for Figures 19, 20, 22, and 23, this indicator must be interpreted with caution, because a large number of recovered visible edges can also reflect noise amplification. For instance, Tarel et al.'s results have the highest values for Figures 20-23, but the corresponding visual effects are either overenhanced or suffer from halo artifacts. Conversely, our results avoid most of these negative effects.

As shown in Table 3, our dehazing results achieve the top values for both inhomogeneous hazy images (Figures 23 and 24) and the second top values for Figures 19 and 22, which verifies the validity of the proposed atmospheric scattering model and the effectiveness of our method. Although we only obtain the third top values for Figures 20 and 21, our results are more visually pleasing: while Ju et al.'s and Tarel et al.'s results achieve the top and second top values for Figures 20 and 21, overenhancement is evident in Figures 20(b) and 20(e), significant haze remains in Figure 21(b), and the corners of the sky region in Figure 21(e) tend to be dark.

Table 4 assesses the ability of the dehazing methods to maintain color fidelity. He et al.'s results achieve the best values for all six images, our results achieve the second-best values for three hazy images (Figures 19, 21, and 23), and our results are very close to the second-best scores for Figures 20 and 22. Thus, our method generally maintains the color fidelity for most of the challenging hazy images. However, this indicator only partially reveals the ability of a dehazing method and is not sensitive to overenhancement. For instance, He et al.'s results suffer from overenhancement (refer to Figure 21(d)), and Tarel et al.'s results are overenhanced for Figure 22 yet achieve the second-best score. Thus, an integrated indicator that is consistent with human visual judgement remains to be explored.

Because the indicator [zeta] correlates well with human judgements of fog density [34], we compute its value for all dehazing results and list them in Table 5. As shown in Table 5, our method outperforms the other methods for Figures 20-24 and has the second-best value for Figure 19. This finding verifies the outstanding dehazing effect of our method, and this conclusion is consistent with our observations in the qualitative comparison. Importantly, it demonstrates the power of our method for dehazing inhomogeneous hazy images, an advantage we attribute to the proposed improved atmospheric scattering model and the corresponding dehazing method.

In Table 6, we provide a comparison of the computational times. Note that our method is significantly faster than most of the other methods and relatively close to the computation time of Zhu et al.'s method. The high efficiency of our method is primarily attributed to the linear model, which describes the haze density distribution and therefore simplifies the estimation procedure using a scene-based method instead of a per-pixel or patch-based strategy.

5. Discussion and Conclusions

In this paper, we have proposed an improved atmospheric scattering model to overcome the inherent limitation of the current model. This improved model is physically valid and advantageous with respect to effectiveness and robustness. Based on the proposed model, we further improve the effectiveness of the corresponding single image dehazing method, since we abandon the assumption-based atmospheric scattering coefficient and instead estimate it via the proposed ASP.

In this method, by means of the proposed haze density distribution map and the scene segmentation, an inhomogeneous problem can be converted into a group of homogeneous ones. We then propose the ASP based on statistics of extensive high-definition outdoor images and, for the first time, estimate the scene atmospheric scattering coefficient via the ASP. Next, as a by-product of the scene segmentation, we effectively increase the estimation accuracy of the atmospheric light by defining a scene weight assignment function. Experimental results verify the robustness of the proposed improved atmospheric scattering model and the effectiveness of the corresponding dehazing method.

Although we have overcome the inherent limitation of the current atmospheric scattering model and have identified a method for estimating the scene atmospheric scattering coefficient based on the proposed ASP, one problem remains unsolved. Despite extensive experimental assessment and comparison, finding the optimal solution for scene segmentation (the clustering problem) remains a difficult mathematical task due to the variety of hazy images. Machine learning methods could be considered for this task, and we leave it for future research.

https://doi.org/10.1155/2017/6851301

Competing Interests

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported in part by the National Natural Science Foundation of China [61571241], Industry-University-Research Prospective Joint Project of Jiangsu Province [BY014014], Major Projects of Jiangsu Province University Natural Science Research [15KJA510002], and Top-Notch Academic Programs Project of Jiangsu Higher Education Institutions [PPZY015C242].

References

[1] G. Botella and C. Garcia, "Real-time motion estimation for image and video processing applications," Journal of Real-Time Image Processing, vol. 11, no. 4, pp. 625-631, 2016.

[2] G. Botella, U. Meyer-Baese, A. Garcia, and M. Rodriguez, "Quantization analysis and enhancement of a VLSI gradient-based motion estimation architecture," Digital Signal Processing: A Review Journal, vol. 22, no. 6, pp. 1174-1187, 2012.

[3] P. S. Chavez Jr., "An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data," Remote Sensing of Environment, vol. 24, no. 3, pp. 459-479, 1988.

[4] Y. Zhang, B. Guindon, and J. Cihlar, "An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images," Remote Sensing of Environment, vol. 82, no. 2-3, pp. 173-187, 2002.

[5] L. Shao, L. Liu, and X. Li, "Feature learning for image classification via multiobjective genetic programming," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 7, pp. 1359-1371, 2014.

[6] Y. Luo, T. Liu, D. Tao, and C. Xu, "Decomposition-based transfer distance metric learning for image classification," IEEE Transactions on Image Processing, vol. 23, no. 9, pp. 3789-3801, 2014.

[7] A. De La Escalera, J. M. Armingol, J. M. Pastor, and F. J. Rodriguez, "Visual sign information extraction and identification by deformable models for intelligent vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 2, pp. 57-68, 2004.

[8] T. K. Kim, J. K. Paik, and B. S. Kang, "Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering," IEEE Transactions on Consumer Electronics, vol. 44, no. 1, pp. 82-87, 1998.

[9] J. A. Stark, "Adaptive image contrast enhancement using generalizations of histogram equalization," IEEE Transactions on Image Processing, vol. 9, no. 5, pp. 889-896, 2000.

[10] E. H. Land and J. J. McCann, "Lightness and retinex theory," Journal of the Optical Society of America, vol. 61, no. 1, pp. 1-11, 1971.

[11] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713-724, 2003.

[12] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), pp. 820-827, IEEE, Kerkyra, Greece, September 1999.

[13] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), vol. 2, pp. 598-605, Hilton Head Island, SC, USA, June 2000.

[14] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization-based vision through haze," Applied Optics, vol. 42, no. 3, pp. 511-525, 2003.

[15] S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1984-1991, June 2006.

[16] Y. Y. Schechner and Y. Averbuch, "Regularized image recovery in scattering media," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, pp. 1655-1660, 2007.

[17] C. O. Ancuti and C. Ancuti, "Single image dehazing by multiscale fusion," IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271-3282, 2013.

[18] R. T. Tan, "Visibility in bad weather from a single image," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), IEEE, Anchorage, Alaska, USA, June 2008.

[19] R. Fattal, "Single image dehazing," ACM Transactions on Graphics, vol. 27, no. 3, article no. 72, 2008.

[20] J. P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 2201-2208, IEEE, Kyoto, Japan, September 2009.

[21] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, 2011.

[22] K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," International Journal of Computer Vision, vol. 98, no. 3, pp. 263-278, 2012.

[23] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 617-624, IEEE, Sydney, Australia, December 2013.

[24] Q. Zhu, J. Mai, and L. Shao, "A fast single image haze removal algorithm using color attenuation prior," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, 2015.

[25] B. Jiang, W. Zhang, J. Zhao et al., "Gray-scale image Dehazing guided by scene depth information," Mathematical Problems in Engineering, vol. 2016, Article ID 7809214, 10 pages, 2016.

[26] W. Wang, W. Li, Q. Guan, and M. Qi, "Multiscale single image dehazing based on adaptive wavelet fusion," Mathematical Problems in Engineering, vol. 2015, Article ID 131082, 14 pages, 2015.

[27] D. Nan, D.-Y. Bi, C. Liu, S.-P. Ma, and L.-Y. He, "A Bayesian framework for single image dehazing considering noise," The Scientific World Journal, vol. 2014, Article ID 651986, 13 pages, 2014.

[28] P. Jidesh and A. A. Bini, "An image dehazing model considering multiplicative noise and sensor blur," Journal of Computational Engineering, vol. 2014, Article ID 125356, 9 pages, 2014.

[29] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal of Computer Vision, vol. 48, no. 3, pp. 233-254, 2002.

[30] S. G. Narasimhan and S. K. Nayar, "Removing weather effects from monochrome images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. 2, pp. II186-II193, December 2001.

[31] F. P. Miller, A. F. Vandome, and J. Mcbrewster, Fick's Laws of Diffusion, Alphascript Publishing, 2010.

[32] D.-Y. Zhang, M.-Y. Ju, and X.-M. Wang, "A fast image haze removal algorithm using dark channel prior," Acta Electronica Sinica, vol. 43, no. 7, pp. 1437-1443, 2015.

[33] M. Ju, D. Zhang, and X. Wang, "Single image dehazing via an improved atmospheric scattering model," The Visual Computer, 2016.

[34] L. K. Choi, J. You, and A. C. Bovik, "Referenceless prediction of perceptual fog density and perceptual image defogging," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3888-3901, 2015.

[35] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.

[36] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a "completely blind" image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, 2013.

[37] O. Sarbishei and K. Radecka, "Analysis of Mean-Square-Error (MSE) for fixed-point FFT units," in Proceedings of the IEEE International Symposium of Circuits and Systems (ISCAS '11), pp. 1732-1735, IEEE, Rio de Janeiro, Brazil, May 2011.

[38] A. Valdessalici, G. Frassi, and A. Bellini, "Efficient implementation of a spectrum analyzer for fixed point architectures," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 5, pp. 109-112, Philadelphia, Pa, USA, March 2005.

[39] S. Z. Selim and M. A. Ismail, "K-means-type algorithms: a generalized convergence theorem and characterization of local optimality," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 1, pp. 81-87, 1984.

[40] J. Hagauer and G. Rote, "Three-clustering of points in the plane," Computational Geometry: Theory and Applications, vol. 8, no. 2, pp. 87-95, 1997.

[41] J. Tohka, "FAST-PVE: extremely fast markov random field based brain MRI tissue classification," in Proceedings of the Scandinavian Conference on Image Analysis (SCIA '13), vol. 7944, pp. 266-276, Espoo, Finland, June 2013.

[42] M. Sulami, I. Glatzer, R. Fattal, and M. Werman, "Automatic recovery of the atmospheric light in hazy images," in Proceedings of the 6th IEEE International Conference on Computational Photography (ICCP '14), pp. 1-11, IEEE, Santa Clara, Calif, USA, May 2014.

[43] S. G. Narasimhan and S. K. Nayar, "Interactive (de) weathering of an image using physical models," in Proceedings of the IEEE Workshop on Color and Photometric Methods in Computer Vision, vol. 6, pp. 1-8, Nice, France, October 2003.

[44] B. Xie, F. Guo, and Z. Cai, "Improved single image dehazing using dark channel prior and multi-scale retinex," in Proceedings of the International Conference on Intelligent System Design and Engineering Application (ISDEA '10), vol. 1, pp. 848-851, October 2010.

[45] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. 1, p. I, December 2001.

[46] Z. Shi, J. Long, W. Tang, and C. Zhang, "Single image dehazing in inhomogeneous atmosphere," Optik, vol. 125, no. 15, pp. 3868-3875, 2014.

[47] A. P. Stakhov, "The Generalized Principle of the Golden Section and its applications in mathematics, science, and engineering," Chaos, Solitons & Fractals, vol. 26, no. 2, pp. 263-289, 2005.

[48] C. H. Tsai, J. Kolibal, and M. Li, "The golden section search algorithm for finding a good shape parameter for meshless collocation methods," Engineering Analysis with Boundary Elements, vol. 34, no. 8, pp. 738-746, 2010.

[49] T. K. Sharma, M. Pant, and V. P. Singh, "Improved local search in artificial bee colony using golden section search," https://arxiv.org/abs/1210.6128.

[50] N. Hautiere, J.-P. Tarel, D. Aubert, and E. Dumont, "Blind contrast enhancement assessment by gradient ratioing at visible edges," Image Analysis and Stereology, vol. 27, no. 2, pp. 87-95, 2008.

[51] D. J. Jobson, Z. U. Rahman, and G. A. Woodell, "Statistics of visual representation," in AeroSense 2002, pp. 25-35, International Society for Optics and Photonics, 2002.

Zhenfei Gu, (1,2) Mingye Ju, (1) and Dengyin Zhang (1)

(1) School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing, China

(2) Nanjing College of Information Technology, Nanjing, China

Correspondence should be addressed to Dengyin Zhang; zhangdy@njupt.edu.cn

Received 29 November 2016; Revised 13 February 2017; Accepted 15 February 2017; Published 5 March 2017

Academic Editor: Guillermo Botella-Juan

Caption: Figure 1: Examples of hazy images with inhomogeneous atmosphere. Haze density is approximately the same within a box but varies between boxes.

Caption: Figure 2: (a) Various types of inhomogeneous and homogeneous hazy images. (b) Corresponding haze density distribution maps. (c) Relevant refined haze density distribution maps.

Caption: Figure 3: Relevant final segmentation results of Figure 2. Each hazy image is segmented into three scenes based on the haze density similarity; we denote the segmented scenes using 1, 2, and 3.

Caption: Figure 4: A challenging hazy image for the atmospheric light locating. The result of [21] is depicted in the green box, the result of [15] is depicted in the blue box, and our result is depicted in the red outlined areas.

Caption: Figure 5: Located potential atmospheric light using our method (in the red outlined areas).

Caption: Figure 6: (a) Hazy image. (b) Clear-day image. (c) RGB histograms of (a). (d) RGB histograms of (b).

Caption: Figure 7: (a) Average saturation probability distribution of hazy images. (b) Average saturation probability distribution of high-definition clear-day outdoor images.

Caption: Figure 8: Hazy images (homogeneous and inhomogeneous) and the corresponding scattering map.

Caption: Figure 9: Five example experimental demonstrations of qualitative comparison using different clustering number k. (a) Hazy image. (b-j) Left to right: the recovered images using the value of k from 1 to 9, respectively.

Caption: Figure 10: Values of e using different clustering number k.

Caption: Figure 11: Values of [bar.r] using different clustering number k.

Caption: Figure 12: Values of [zeta] using different clustering number k.

Caption: Figure 13: Computational time using different clustering number k.

Caption: Figure 14: Four example experimental demonstrations of qualitative comparison using different values of the average saturation. (a) Hazy image. (b-h) Left to right: recovered images using average saturations of 0.01, 0.05, 0.106, 0.15, 0.2, 0.25, and 0.3, respectively.

Caption: Figure 15: Values of e using different average saturation values.

Caption: Figure 16: Values of [bar.r] using different average saturation values.

Caption: Figure 17: Values of [zeta] using different average saturation values.

Caption: Figure 18: Values of H using different average saturation values.

Caption: Figure 19: Qualitative comparison of the homogeneous dense hazy image. (a) Hazy image. (b) Tarel et al.'s result. (c) Zhu et al.'s result. (d) He et al.'s result. (e) Ju et al.'s result. (f) Meng et al.'s result. (g) Our result.

Caption: Figure 20: Qualitative comparison of the homogeneous image with large white or gray regions. (a) Hazy image. (b) Tarel et al.'s result. (c) Zhu et al.'s result. (d) He et al.'s result. (e) Ju et al.'s result. (f) Meng et al.'s result. (g) Our result.

Caption: Figure 21: Qualitative comparison of the homogeneous image with sky region. (a) Hazy image. (b) Tarel et al.'s result. (c) Zhu et al.'s result. (d) He et al.'s result. (e) Ju et al.'s result. (f) Meng et al.'s result. (g) Our result.

Caption: Figure 22: Qualitative comparison of the homogeneous image with rich texture details. (a) Hazy image. (b) Tarel et al.'s result. (c) Zhu et al.'s result. (d) He et al.'s result. (e) Ju et al.'s result. (f) Meng et al.'s result. (g) Our result.

Caption: Figure 23: Qualitative comparison of the inhomogeneous long-range image. (a) Hazy image. (b) Tarel et al.'s result. (c) Zhu et al.'s result. (d) He et al.'s result. (e) Ju et al.'s result. (f) Meng et al.'s result. (g) Our result.

Caption: Figure 24: Qualitative comparison of the inhomogeneous close-range image. (a) Hazy image. (b) Tarel et al.'s result. (c) Zhu et al.'s result. (d) He et al.'s result. (e) Ju et al.'s result. (f) Meng et al.'s result. (g) Our result.
Table 1: Assigned weight of each scene in Figure 3.

Weight      (a)       (b)       (c)       (d)      (e)      (f)

Scene 1   -0.3822    0.0805   -0.1065   -0.2837   -0.3827   -0.5711
Scene 2   -1.0301   -0.6728   -0.3528   -0.8370   -0.6412   -0.9283
Scene 3   -1.6284   -1.0299   -0.8407   -1.1495   -1.0578   -1.3305

Table 2: Value of indicator e for the dehazing results of
Figures 19(a)-24(a) using different methods.

e           Tarel et al.'s   Zhu et al.'s   He et al.'s   Ju et al.'s   Meng et al.'s   Our
            method [20]      method [24]    method [21]   method [33]   method [23]     method

Figure 19      10.3350          9.1313        51.9002       47.7362       50.0129       50.0986
Figure 20       1.4785          1.0081         1.0650        1.2816        1.0323        1.4758
Figure 21       0.4016          0.1519         0.1491        0.0603        0.3040        0.1469
Figure 22       0.6243          0.2225         0.3097        0.1753        0.3232        0.3494
Figure 23       0.4573          0.0713         0.0958        0.0589        0.1446        0.1463
Figure 24       4.2303          1.3941         2.8247        2.9906        1.7313        4.4613

Table 3: Value of indicator [bar.r] for the dehazing results of
Figures 19(a)-24(a) using different methods.

[bar.r]     Tarel et al.'s   Zhu et al.'s   He et al.'s   Ju et al.'s   Meng et al.'s   Our
            method [20]      method [24]    method [21]   method [33]   method [23]     method

Figure 19       2.2456          2.7717         5.0828        6.0780        4.2383        5.1630
Figure 20       2.1193          1.4425         1.4451        3.8099        1.7171        1.9012
Figure 21       2.3780          1.5599         1.5730        3.3111        1.6098        1.6143
Figure 22       2.1833          1.4152         1.5034        2.4242        1.3657        1.6909
Figure 23       1.4632          1.1635         1.5654        1.5703        1.2689        1.7143
Figure 24       3.0146          1.7835         1.7323        4.4592        1.5042        4.7029

Table 4: Value of indicator H for the dehazing results of
Figures 19(a)-24(a) using different methods.

H           Tarel et al.'s   Zhu et al.'s   He et al.'s   Ju et al.'s   Meng et al.'s   Our
            method [20]      method [24]    method [21]   method [33]   method [23]     method

Figure 19       0.0622          0.1152         0.0001        0.2971        0.3748        0.0209
Figure 20       0.0918          0.0318         0.0026        0.5505        0.2152        0.0517
Figure 21       0.0546          0.0631         0.0013        0.2548        0.0729        0.0539
Figure 22       0.0442          0.0770         0.0006        0.1469        0.2370        0.0544
Figure 23       0.0373          0.0154         0.0028        0.3324        0.0351        0.0116
Figure 24       0.0504          0.0415         0.0006        0.0493        0.0177        0.0389

Table 5: Value of indicator [zeta] for the dehazing results of
Figures 19(a)-24(a) using different methods.

[zeta]       Tarel et al.'s   Zhu et al.'s   He et al.'s   Ju et al.'s   Meng et al.'s    Our
             method [20]      method [24]    method [21]   method [33]   method [23]     method

Figure 19        2.1654         2.4913         0.6674        0.3553         0.7728       0.4962
Figure 20        0.2649         0.3001         0.2853        0.2807         0.3279       0.2172
Figure 21        0.3879         0.5218         0.3247        0.3121         0.3156       0.3034
Figure 22        0.1792         0.2864         0.2294        0.2645         0.2067       0.1664
Figure 23        0.1814         0.4503         0.3402        0.2287         0.2636       0.1794
Figure 24        0.2176         0.5004         0.3266        0.3106         0.4070       0.1293

Table 6: Computational time for Figures 19(a)-24(a) using
different methods.

Computational time (s)   Tarel et al.'s   Zhu et al.'s   He et al.'s   Ju et al.'s   Meng et al.'s    Our
                         method [20]      method [24]    method [21]   method [33]   method [23]     method

Figure 19 (845 x 496)        19.02            3.30          207.00        7.20           5.37        4.85
Figure 20 (768 x 497)        15.12            3.10          189.80        6.70           4.85        3.63
Figure 21 (400 x 600)         6.66            2.44          122.20        4.57           3.12        1.90
Figure 22 (512 x 460)         5.13            2.72          119.76        4.30           3.25        1.75
Figure 23 (512 x 384)         4.72            2.21           99.30        4.20           2.90        1.60
Figure 24 (629 x 420)         7.44            2.47          171.45        4.20           4.03        2.14
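As a quick sanity check on Table 6, the per-method average runtime over the six test images can be computed directly from the listed values. The following is a small illustrative script (not part of the original article); the dictionary keys are shorthand labels for the compared methods:

```python
# Runtimes (seconds) per method over Figures 19(a)-24(a), taken from Table 6.
times = {
    "Tarel et al. [20]": [19.02, 15.12, 6.66, 5.13, 4.72, 7.44],
    "Zhu et al. [24]":   [3.30, 3.10, 2.44, 2.72, 2.21, 2.47],
    "He et al. [21]":    [207.00, 189.80, 122.20, 119.76, 99.30, 171.45],
    "Ju et al. [33]":    [7.20, 6.70, 4.57, 4.30, 4.20, 4.20],
    "Meng et al. [23]":  [5.37, 4.85, 3.12, 3.25, 2.90, 4.03],
    "Ours":              [4.85, 3.63, 1.90, 1.75, 1.60, 2.14],
}

# Average runtime per method, printed fastest-first.
averages = {name: sum(t) / len(t) for name, t in times.items()}
for name, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{name}: {avg:.2f} s")
```

On these numbers, the proposed method has the lowest average runtime, narrowly ahead of Zhu et al.'s method [24], while He et al.'s method [21] is roughly two orders of magnitude slower.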
COPYRIGHT 2017 Hindawi Limited

Article Details
Title Annotation: Research Article
Author: Gu, Zhenfei; Ju, Mingye; Zhang, Dengyin
Publication: Mathematical Problems in Engineering
Article Type: Report
Date: Jan 1, 2017