Incident Light Frequency-Based Image Defogging Algorithm.

1. Introduction

Images captured by cameras in foggy weather suffer from poor visibility and low contrast, which causes considerable trouble for image segmentation and target detection and prevents various outdoor monitoring systems, such as video surveillance, from working reliably in bad weather. Improving the reliability and robustness of outdoor monitoring systems with a simple and effective image defogging algorithm is therefore an important research topic, and many researchers have studied it extensively, achieving a series of theoretical and applied results [1-19].

Image defogging methods fall into two categories [20, 21]: image enhancement and physical model-based restoration. Image enhancement does not consider the cause of image degradation in foggy weather; it only treats the typical characteristics of foggy images, such as their low contrast and poor visibility. This can weaken the effect of fog on images, improve the visibility of scenes, and enhance image contrast. The most commonly used enhancement method is histogram equalization, which effectively enhances contrast; however, because the scene depth in a foggy image is uneven, that is, different scenes are affected by fog to different degrees, global histogram equalization cannot fully remove the fog effect, and some details remain blurred. In [22], the sky is first separated by local histogram equalization, and depth information is then matched in the non-sky region by a moving template. This algorithm overcomes the shortcomings of global histogram equalization in detail processing and avoids the influence of sky noise, but the subimage selection it relies on easily produces blocking artifacts and therefore cannot improve the visual effect considerably.

The physical model-based restoration method analyzes the degradation process of foggy images in depth. To restore a degraded image, it analyzes the formation process of the degradation and builds a model of how atmospheric scattering attenuates image contrast. Generally speaking, because the physical degradation process is taken into account, foggy images can be enhanced more effectively in this way than by simple image processing. Yitzhaky et al. [23] were the first to consider the cause of foggy image degradation: based on a detailed analysis of the influence of the atmosphere on image degradation, they proposed an image degradation model in which the atmospheric effect is treated as a degrading system, which can then be inverted to remove the influence of weather on the image. However, the key to building such a model is to determine an atmospheric modulation transfer function, and the relative contributions of atmospheric turbulence and aerosol particles to this transfer function depend on the meteorological conditions at the time the image was taken, so the corresponding local meteorological parameters must be obtained from a meteorological station. These parameters are usually hard to obtain because of the demanding additional conditions.

In recent years, He et al. [24] proposed an algorithm based on the dark channel prior, which has attracted broad attention for its simplicity and efficacy. The dark channel prior enables quick estimation of the transmittance at each point of the original image, making real-time defogging possible, which is a valuable feature for outdoor surveillance. In practice, however, its results often suffer from color oversaturation, leading to image distortion. For this reason, this paper proposes an improved algorithm. First, the effect of the incident light frequency on the transmittance of the different color channels is analyzed according to the Beer-Lambert law, from which a proportional relation among the channel transmittances is derived. Then, the image is downsampled before the transmittance is refined, and the original size is restored afterwards, to improve the computational efficiency of the algorithm. Finally, the transmittance of every color channel is obtained according to the derived proportion, and each channel is restored with its own transmittance.

The remainder of this paper is structured as follows. Section 2 outlines the principle of the dark channel prior-based defogging algorithm; Section 3 analyzes the shortcomings of the original algorithm and derives an improved one; Section 4 demonstrates the validity of the improved algorithm through experimental comparison with existing algorithms.

2. Dark Channel Prior-Based Defogging Algorithm

2.1. Atmospheric Scattering Model. The atmospheric scattering model proposed by Narasimhan et al. [25-29] describes the degradation process of foggy images:

$I(x) = J(x)t(x) + A(1 - t(x))$, (1)

where $I$ is the intensity of the observed image, $J$ is the intensity of the scene light, $A$ is the atmospheric light at infinity, and $t$ is the transmittance. The first term $J(x)t(x)$ is the direct attenuation term, and $A(1 - t(x))$ is the atmospheric light term. The aim of image defogging is to recover $J$ from $I$.
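
As an illustration of the model in (1), the following NumPy sketch (not part of the original paper; the function name and the assumption that images are stored as floating-point RGB arrays in [0, 1] are ours) synthesizes a foggy observation from a clear scene, a transmittance map, and an atmospheric light vector:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Simulate a foggy observation with the scattering model of eq. (1):
    I(x) = J(x) t(x) + A (1 - t(x)).

    J : H x W x 3 clear scene, values in [0, 1]
    t : H x W transmittance map, values in (0, 1]
    A : length-3 atmospheric light vector
    """
    t = t[..., np.newaxis]                      # broadcast over the color channels
    return J * t + np.asarray(A) * (1.0 - t)
```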

2.2. Dark Channel Prior. The dark channel prior comes from statistical observations of a large number of outdoor fogless images: in most local patches of such images, there are always some pixels with a very low value in at least one color channel. This prior can be defined as follows:

$J^{\mathrm{dark}}(x) = \min_{c \in \{r,g,b\}} \left( \min_{y \in \Omega(x)} J^{c}(y) \right)$. (2)

$J^{c}$ denotes a color channel of $J$, and $\Omega(x)$ is a square window centered at pixel $x$. If $J$ is an outdoor fogless image, $J^{\mathrm{dark}}$ is called the dark channel of $J$, and the empirical law obtained from the observations above is known as the dark channel prior: the value of $J^{\mathrm{dark}}$ is always very low and close to 0.
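
The dark channel of (2) can be computed as a per-pixel minimum over the color channels followed by a minimum filter over the window $\Omega(x)$. The following is a minimal NumPy/SciPy sketch (not from the original paper; the function name and the 15-pixel window are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, window=15):
    """Dark channel of eq. (2): minimum over the color channels, then a
    minimum filter over a square window Omega(x).

    img : H x W x 3 RGB array with values in [0, 1]
    """
    per_pixel_min = img.min(axis=2)                     # min over c in {r, g, b}
    return minimum_filter(per_pixel_min, size=window)   # min over y in Omega(x)
```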

2.3. Defogging by Dark Channel Prior. Suppose the atmospheric light $A$ has been estimated and the transmittance is constant within a local region. Applying the minimum operator over the window to (1) and dividing both sides by $A$ gives

$\min_{y \in \Omega(x)} \left( \frac{I^{c}(y)}{A^{c}} \right) = \tilde{t}(x) \min_{y \in \Omega(x)} \left( \frac{J^{c}(y)}{A^{c}} \right) + 1 - \tilde{t}(x)$, (3)

where the superscript $c$ denotes a color channel component and $\tilde{t}(x)$ denotes the roughly estimated transmittance. Taking the minimum over the color channels $c$ gives

$\min_{c} \left( \min_{y \in \Omega(x)} \frac{I^{c}(y)}{A^{c}} \right) = \tilde{t}(x) \min_{c} \left( \min_{y \in \Omega(x)} \frac{J^{c}(y)}{A^{c}} \right) + 1 - \tilde{t}(x)$. (4)

According to the dark channel prior, the dark channel $J^{\mathrm{dark}}$ of an outdoor fogless image should approach 0:

$J^{\mathrm{dark}}(x) = \min_{c} \left( \min_{y \in \Omega(x)} J^{c}(y) \right) \to 0$. (5)

A rough transmittance can be estimated if the above equation is substituted into (4):

$\tilde{t}(x) = 1 - \min_{c} \left( \min_{y \in \Omega(x)} \frac{I^{c}(y)}{A^{c}} \right)$. (6)

In practice it has been found that if the fog is removed completely, the image looks unnatural and the sense of depth is lost. A constant $\omega$ ($0 < \omega \le 1$) is therefore introduced into (6) to retain some fog:

$\tilde{t}(x) = 1 - \omega \min_{c} \left( \min_{y \in \Omega(x)} \frac{I^{c}(y)}{A^{c}} \right)$. (7)
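
A minimal sketch of the rough estimate in (7), reusing the `dark_channel` helper sketched above (not from the original paper; $\omega = 0.95$ is the value used later in the experiments, and the argument names are ours):

```python
import numpy as np

def rough_transmittance(img, A, omega=0.95, window=15):
    """Rough transmittance of eq. (7): 1 - omega * dark channel of I / A.

    img : H x W x 3 foggy RGB image in [0, 1]
    A   : length-3 atmospheric light vector
    """
    normalized = img / np.asarray(A)[np.newaxis, np.newaxis, :]
    return 1.0 - omega * dark_channel(normalized, window)
```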

Equation (7) gives only a rough estimate of the transmittance, so to improve its accuracy the original paper refined it with an image matting algorithm [30], which amounts to solving the following linear system:

$(L + \lambda U)\, t = \lambda \tilde{t}$, (8)

where $\lambda$ is a correction parameter, $U$ is an identity matrix of the same size as $L$, and $L$ is the Laplacian matrix of the image matting algorithm, which is usually a large sparse matrix.
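
Constructing the matting Laplacian $L$ follows [30] and is not reproduced here; assuming it is already available as a SciPy sparse matrix, the refinement of (8) reduces to a single sparse linear solve. A minimal sketch under that assumption (the default $\lambda = 10^{-4}$ is a commonly used value, not one stated in this paper):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def refine_transmittance(L, t_rough, lam=1e-4):
    """Solve (L + lambda * U) t = lambda * t_rough (eq. (8)).

    L       : (N, N) sparse matting Laplacian, N = H * W (built as in [30])
    t_rough : H x W rough transmittance map
    lam     : correction parameter lambda
    """
    h, w = t_rough.shape
    system = L + lam * sp.identity(h * w, format="csr")
    rhs = lam * t_rough.ravel()
    t = spla.spsolve(system, rhs)   # a conjugate-gradient solver also works for large images
    return t.reshape(h, w)
```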

After the refined transmittance $t(x)$ is obtained, the defogged result $J(x)$ is calculated with the equation below:

$J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A$, (9)

where the atmospheric light $A$ is estimated as follows: sort the pixels of the dark channel $J^{\mathrm{dark}}$ in descending order of brightness, take the top 0.1% of them, and, among the corresponding pixels of the original image $I$, choose the brightest one as the atmospheric light $A$.
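
The following NumPy sketch illustrates this atmospheric light estimate and the recovery step of (9) (not from the original paper; the lower bound $t_0 = 0.1$ is a commonly used value, assumed here for illustration):

```python
import numpy as np

def estimate_atmospheric_light(img, dark, top=0.001):
    """Pick A as described in the text: among the 0.1% brightest dark-channel
    pixels, take the brightest corresponding pixel of the original image I."""
    n_top = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n_top:]          # brightest dark-channel pixels
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]

def recover(img, t, A, t0=0.1):
    """Recover the scene radiance J with eq. (9), clamping t from below at t0."""
    t_clamped = np.maximum(t, t0)[..., np.newaxis]
    return (img - A) / t_clamped + A
```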

For most outdoor foggy images, the above algorithm achieves a good defogging effect, but for some images it causes color distortion or excessive color saturation. Moreover, the algorithm runs very slowly; for instance, it takes 39.96 seconds to process an image of size 600 * 455. To solve these problems, this paper improves the original algorithm.

3. Incident Light Frequency-Based Algorithm

The existing defogging models [25-29] adopt the imaging formula widely used in the machine vision field, which is derived from the Beer-Lambert law [31]:

$I(x) = L_{\infty}\rho(x)e^{-\beta d(x)} + L_{\infty}\left(1 - e^{-\beta d(x)}\right)$, (10)

where $I(x)$ is the brightness of the pixel at coordinate $x$ in the image, $L_{\infty}$ is the atmospheric light at infinity, $\rho(x)$ is the reflection coefficient of the object surface, and $e^{-\beta d(x)}$ is the transmittance $t$, which describes the degree to which the light energy is attenuated as it propagates through the atmosphere.

As can be seen from the derivation of the Beer-Lambert law, the transmittance is obtained from the equation below:

$t(x) = \exp\left(-\int_{0}^{d(x)} y(v, z)\,dz\right)$, (11)

where $v$ is the incident light frequency and $z$ is a point on the propagation path of the incident light. As (11) shows, the transmittance $t$ depends on the medium attribute $y(v, z)$ at every point on the propagation path of the incident light.

3.1. Original Algorithm Hypotheses. To reduce the complexity of the existing defogging model, two hypotheses are made about the transmittance $t$.

3.1.1. Constant Frequency. When incident light frequency v is constant, (11) is simplified into

$t(x) = \exp\left(-\int_{0}^{d(x)} D(z)\,dz\right)$. (12)

As can be seen in the equation above, the medium attribute function on the propagation path is simplified from the bivariate function $y(v, z)$ to the single-variable function $D(z)$.

3.1.2. Homogeneous Atmospheric Media. Suppose there are homogeneous atmospheric media on the propagation path of incident light; then (12) is further simplified into

$t(x) = e^{-\beta d(x)}$, (13)

where d(x) represents the field depth at point x in the image, namely, the spatial distance between object and imaging device.

3.2. Improvement Direction. Although the above two hypotheses greatly simplify the defogging model, they also reduce the quality of defogging. To further improve the defogging quality, we reintroduce the effect of the incident light frequency $v$ on the attenuation coefficient $\beta$ into the atmospheric light imaging formula (10), which gives the improved formula below:

$I(x) = L_{\infty}\rho(x)e^{-\beta(v) d(x)} + L_{\infty}\left(1 - e^{-\beta(v) d(x)}\right)$. (14)

At this point, the attenuation coefficient $\beta$ becomes a function $\beta(v)$ of the incident light frequency $v$.

According to the distance between object and imaging device (field depth), foggy images can fall into three categories.

(i) Long-Field Images. The overwhelming majority of scenes in the images are in the range of long-field depth (objects are over 500 m away from the camera).

(ii) Short-Field Images. The overwhelming majority of scenes in the images are in the range of short-field depth (objects are less than 500 m away from the camera).

(iii) Mixed-Field Images. Long and short-field scenes exist side by side in the images.

For long-field images, the field depth is large, and the fog concentration along the propagation path of the incident light varies in an increasingly complex way with distance. The precondition of (14) therefore no longer holds: the medium is no longer uniformly distributed along the propagation path, and a new atmospheric light model would have to be built. Observation of the real world shows that the farther an object is from the observer, the harder its light is to perceive, being gradually replaced by atmospheric light (the sum of the various light beams from the environment that interact with each other). Thus, the analysis of such images can be translated into the analysis of the atmospheric light distribution.

For short-field images, due to the small field depth, the concentration of fog within this range can be considered constant, so such foggy images can be processed by (14).

For mixed-field images, zones can be partitioned according to field depth, and then images can be processed separately by scene types.

This paper proposes an improved method for processing short-field images. As can be seen from (14), the key to processing short-field images lies in the calculation of the attenuation coefficient $\beta(v)$, which is the focus of this paper.

3.3. Attenuation Coefficient. Since current cameras sense incident light only in the three frequency bands red (R), green (G), and blue (B), the analysis of $\beta(v)$ can be restricted to these bands. Typical frequency values $v_r$, $v_g$, and $v_b$ are taken from the R, G, and B bands, with corresponding attenuation coefficients $\beta(v_r)$, $\beta(v_g)$, and $\beta(v_b)$. The values of the attenuation coefficients can be obtained by statistically analyzing the attenuation of pure R, G, and B light in foggy weather.

After the values of $\beta(v_r)$, $\beta(v_g)$, and $\beta(v_b)$ are calculated, their ratio can be computed. The result is denoted by

$\beta(v_r) : \beta(v_g) : \beta(v_b) = \beta_r : \beta_g : \beta_b$. (15)

The statistical experiments show that the image restoration effect is comparatively ideal when $\beta_r : \beta_g : \beta_b = 1 : 1.28 : 1.61$.

Suppose the transmittance $t_c$ of some color channel $c$ is known; the transmittance $t_{\tilde{c}}$ of any other channel $\tilde{c}$ can then be calculated from it. Since every channel satisfies $t_c(x) = e^{-\beta_c d(x)}$, the unknown field depth $d(x)$ cancels when the channels are compared, giving

$t_{\tilde{c}}(x) = \left(t_{c}(x)\right)^{\lambda}$, (16)

where $c, \tilde{c} \in \{r, g, b\}$ and $\lambda = \beta_{\tilde{c}} / \beta_{c}$.

As can be seen from (16), once one color channel's transmittance $t_c$ has been worked out, the transmittance of the other color channels can be calculated from it, as shown below:

$t_{r}(x) = t_{c}(x)^{\beta_r/\beta_c}, \quad t_{g}(x) = t_{c}(x)^{\beta_g/\beta_c}, \quad t_{b}(x) = t_{c}(x)^{\beta_b/\beta_c}$. (17)
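
A minimal sketch of (17), assuming the statistical ratio $\beta_r : \beta_g : \beta_b = 1 : 1.28 : 1.61$ from (15); the function and map names are illustrative:

```python
import numpy as np

# Statistical attenuation-coefficient ratio of eq. (15): beta_r : beta_g : beta_b
BETA = {"r": 1.0, "g": 1.28, "b": 1.61}

def channel_transmittances(t_known, known_channel):
    """Derive every channel's transmittance from one known channel's map
    via t_c'(x) = t_c(x) ** (beta_c' / beta_c), as in eq. (17)."""
    beta_c = BETA[known_channel]
    return {c: np.power(t_known, BETA[c] / beta_c) for c in ("r", "g", "b")}
```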

3.4. An Improved Image Restoration Method. Based on the analysis above, this section puts forward a new transmittance calculation method.

The main steps are as follows:

(1) Reduce the image size by downsampling the input image $I(x)$ to $\tilde{I}(x)$.

(2) Acquire the dark channel pixel value $\tilde{I}_d(x)$ and the color channel $d$ in which it is located according to the formula below:

$\tilde{I}_d(x) = \min_{c \in \{r,g,b\}} \left( \min_{y \in \Omega(x)} \tilde{I}^{c}(y) \right)$. (18)

(3) Calculate the rough transmittance $\tilde{t}_d(x)$ corresponding to the color channel $d$ in which the dark channel is located, using the formula below:

$\tilde{t}_d(x) = 1 - \omega \frac{\tilde{I}_d(x)}{A^{d}}$. (19)

(4) Use (8) to refine the rough transmittance $\tilde{t}_d(x)$ into the refined transmittance $\hat{t}_d(x)$.

(5) Restore $\hat{t}_d(x)$ to the size of the original image by interpolation to obtain $t_d(x)$.

(6) Calculate the transmittance of all color channels by (17).

(7) Restore original image J(x) in all color channels using the formula below:

$J^{c}(x) = \frac{I^{c}(x) - A^{c}}{\max\left(t_{c}(x), t_0\right)} + A^{c}, \quad c \in \{r, g, b\}$. (20)

The rough operating process of the defogging algorithm is shown in Figure 1.
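
The sketch below strings steps (1)-(7) together in Python (not the authors' MATLAB implementation). It reuses the helper sketches given earlier, uses OpenCV only for resizing, treats `build_matting_laplacian`-style construction as out of scope by accepting an optional precomputed Laplacian, and, for simplicity, reduces the per-pixel channel $d$ of step (2) to the single channel that most often attains the minimum; the scale factor and other parameter values are illustrative assumptions:

```python
import numpy as np
import cv2  # used only for resizing; any resampling routine would work

def defog(img, L=None, scale=0.25, omega=0.95, window=15, t0=0.1):
    """Illustrative sketch of steps (1)-(7) for an RGB image in [0, 1].

    dark_channel, estimate_atmospheric_light, refine_transmittance, and
    channel_transmittances are the helper sketches given earlier.
    L is an optional precomputed matting Laplacian of the downsampled image
    (its construction follows [30] and is not reproduced here).
    """
    h, w = img.shape[:2]

    # (1) downsample the input image I(x) to a smaller working copy
    small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    # (2) dark channel of the small image; the per-pixel channel d of the paper
    #     is simplified here to the channel that most often attains the minimum
    dark = dark_channel(small, window)
    d = int(np.bincount(small.argmin(axis=2).ravel()).argmax())

    A = estimate_atmospheric_light(small, dark)

    # (3) rough transmittance of channel d, as in eq. (19)
    t_rough = 1.0 - omega * dark / A[d]

    # (4) refine via eq. (8) when a matting Laplacian is available
    t_small = refine_transmittance(L, t_rough) if L is not None else t_rough

    # (5) restore the transmittance map to the original size by interpolation
    t_d = cv2.resize(t_small, (w, h), interpolation=cv2.INTER_CUBIC)

    # (6) transmittance of all color channels via eq. (17)
    t_all = channel_transmittances(t_d, "rgb"[d])

    # (7) per-channel restoration with eq. (20)
    J = np.empty_like(img)
    for i, c in enumerate("rgb"):
        J[..., i] = (img[..., i] - A[i]) / np.maximum(t_all[c], t0) + A[i]
    return np.clip(J, 0.0, 1.0)
```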

4. Experiments and Analysis

To test the effect of this algorithm, this paper selected six images: Tiananmen (600 * 455); House (441 * 450); Swan (835 * 557); Sweden (600 * 400); NY (800 * 431); Gugong (400 * 600).

To demonstrate the superiority of the proposed algorithm, four well-known defogging algorithms were used for comparison: CLAHE [19], Tarel's [18], Meng's [17], and He's [24]. The code of CLAHE was downloaded from Xu's website [32], and its parameter ClipLimit is set to 0.02. The code of Tarel's algorithm was downloaded from Tarel's website [33]; the percentage of restoration is set to 0.95, balance to 0.5, smax to 1, and gfactor to 1.3. The code of Meng's algorithm was downloaded from Meng's home page [34]; the parameter method of the Airlight function is set to our and the window size to 15; the parameters C0 and C1 of the Boundcon function are set to 30 and 300, respectively, with a window size of 3; the parameter lambda of the CalTransmission function is set to 2 and param to 0.5; the parameter delta of the Dehazefun function is set to 0.85. He's algorithm was implemented in MATLAB; the window $\Omega$ used for the dark channel calculation is set to 5 x 5, and the constant $\omega$ of (7) is set to 0.95. The soft matting algorithm [30] is used to refine the transmittance.

The experimental platform was as follows: the hardware was an Intel(R) Core(TM) i3-3220 CPU with a base frequency of 3.3 GHz and 8 GB of DDR3-1600 memory; the software platform was 64-bit MATLAB R2016a.

The original algorithm [24] refines the transmittance with image matting. This method inherently has high time and space complexity, since it requires solving a large sparse linear system. However, the effect of this step on the restoration is no more than softening the edges of the transition regions between foreground and background to weaken edge artifacts. The algorithm proposed in this paper therefore reduces the image size significantly, refines the transmittance by image matting at the reduced size, and finally restores the refined transmittance map to the original size by bicubic interpolation.

Figure 2 compares the transmittance maps of the improved algorithm and the original algorithm. As the figure shows, there is little difference between the two in edge softening, while the size reduction greatly improves the computational efficiency of the restoration algorithm.

As can be seen from Table 1, He's algorithm consumes the most time; it takes nearly 70 seconds to process an image of roughly 800 * 600 (a typical size for monitoring images). The CLAHE algorithm consumes the least time, but its defogging effect is poor because of serious color distortion, and Tarel's algorithm suffers from the same color distortion as CLAHE. Meng's algorithm and our algorithm consume almost the same time and both achieve a good defogging effect; however, as the local magnification in Figure 4 shows, our algorithm preserves detail better than Meng's. It can also be seen that, by downsampling before refining the transmittance, the operating efficiency of our algorithm is as much as 49 times that of the original algorithm.

According to the before-and-after results shown in Figure 3, once the transmittance of each channel has been corrected, the algorithm proposed in this paper resolves the slightly excessive color saturation of the existing algorithm and achieves a better and more natural visual effect than the original algorithm.

5. Conclusions

This paper presented a theoretical analysis and experimental study of the dark channel prior-based defogging algorithm and found, from its theoretical basis, that the existing algorithm ignores the effect of the incident light frequency on the transmittance. Starting from the derivation of the transmittance, we derived the relation between the transmittances of the different color channels and used it to enhance the defogging algorithm. The experimental results suggest that the improved algorithm produces more natural colors in the restoration results. Since the roughly calculated dark channel is still used to compute the relationship between the channel transmittances, a slight block effect appears in our restoration results; eliminating it is left to future work.

https://doi.org/10.1155/2017/9739201

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] H. Zhang, X. Liu, and Y. Cheung, "Efficient single image dehazing via scene-adaptive segmentation and improved dark channel model," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '16), Vancouver, Canada, July 2016.

[2] M. Ju, D. Zhang, and X. Wang, "Single image dehazing via an improved atmospheric scattering model," The Visual Computer, pp. 1-13, 2016.

[3] H. Lu, Y. Li, S. Nakashima, and S. Serikawa, "Single image dehazing through improved atmospheric light estimation," Multimedia Tools and Applications, vol. 75, no. 24, pp. 17081-17096, 2016.

[4] Z. Mi, H. Zhou, Y. Zheng, and M. Wang, "Single image dehazing via multi-scale gradient domain contrast enhancement," IET Image Processing, vol. 10, no. 3, pp. 206-214, 2016.

[5] T. Cui, L. Qu, J. Tian, and Y. Tang, "Single image haze removal based on luminance weight prior," in Proceedings of the IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER '16), pp. 332-336, Chengdu, China, June 2016.

[6] Z. Tang, X. Zhang, X. Li, and S. Zhang, "Robust image hashing with ring partition and invariant vector distance," IEEE Transactions on Information Forensics and Security, vol. 11, no. 1, pp. 200-214, 2016.

[7] Z. Gao and Y. Bai, "Single image haze removal algorithm using pixel-based airlight constraints," in Proceedings of the 22nd International Conference on Automation and Computing (ICAC '16): Tackling the New Challenges in Automation and Computing, no. 1, pp. 267-272, Colchester, UK, 2016.

[8] S. Santra and B. Chanda, "Single image dehazing with varying atmospheric light intensity," in Proceedings of the 5th National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG '15), December 2015.

[9] Q. Zhu, J. Mai, and L. Shao, "A fast single image haze removal algorithm using color attenuation prior," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, 2015.

[10] B. Huo, F. Yin, and B. Polytechnic, "Image dehazing with dark channel prior and novel estimation," International Journal of Multimedia and Ubiquitous Engineering, vol. 10, no. 3, pp. 13-22, 2015.

[11] C. Science and M. Studies, "Image enhancement techniques for different atmospheric conditions," International Journal of Advance Research in Computer Science and Management Studies, vol. 3, no. 2, pp. 49-52, 2015.

[12] L. K. Choi, J. You, and A. C. Bovik, "Referenceless prediction of perceptual fog density and perceptual image defogging," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3888-3901, 2015.

[13] H. Zhao, C. Xiao, J. Yu, and X. Xu, "Single image fog removal based on local extrema," IEEE/CAA Journal of Automatica Sinica, vol. 2, no. 2, pp. 158-165, 2015.

[14] L. Zhang, X. Li, B. Hu, and X. Ren, "Research on fast smog free algorithm on single image," in Proceedings of the 1st International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA '15), pp. 177-182, IEEE, Yilan, Taiwan, December 2015.

[15] Y.-K. Wang and C.-T. Fan, "Single image defogging by multi-scale depth fusion," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4826-4837, 2014.

[16] Z. Tang, X. Zhang, and S. Zhang, "Robust perceptual image hashing based on ring partition and NMF," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 3, pp. 711-724, 2014.

[17] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 617-624, December 2013.

[18] J.-P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '09), pp. 2201-2208, IEEE, Kyoto, Japan, September 2009.

[19] K. Zuiderveld, "Contrast limited adaptive histogram equalization," in Graphics Gems IV, P. S. Heckbert, Ed., pp. 474-485, Academic Press Professional, San Diego, Calif, USA, 1994.

[20] Y. Xu, J. Wen, L. Fei, and Z. Zhang, "Review of video and image defogging algorithms and related studies on image restoration and enhancement," IEEE Access, vol. 4, pp. 165-188, 2016.

[21] S. Lee, S. Yun, J.-H. Nam, C. S. Won, and S.-W. Jung, "A review on dark channel prior based image dehazing algorithms," EURASIP Journal on Image and Video Processing, vol. 2016, article 4, 2016.

[22] Z. Pei, Z. Hong, Q. I. Xue-ming, and L. Han, "An image clearness method for fog," Journal of Image and Graphics, vol. 9, no. 1, pp. 124-128, 2004.

[23] Y. Yitzhaky, I. Dror, and N. S. Kopeika, "Restoration of atmospherically blurred images according to weather-predicted atmospheric modulation transfer functions," Optical Engineering, vol. 36, no. 11, pp. 3064-3072, 1997.

[24] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, 2011.

[25] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713-724, 2003.

[26] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal of Computer Vision, vol. 48, no. 3, pp. 233-254, 2002.

[27] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. 1, pp. I325-I332, December 2001.

[28] F. Cozman and E. Krotkov, "Depth from scattering," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 801-806, June 1997.

[29] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proceedings of the 17th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 820-827, Kerkyra, Greece, 1999.

[30] A. Levin, D. Lischinski, and Y. Weiss, "A closed-form solution to natural image matting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228-242, 2008.

[31] D. F. Swinehart, "The Beer-Lambert law," Journal of Chemical Education, vol. 39, no. 7, pp. 333-335, 1962.

[32] Y. Xu, J. Wen, L. Fei, and Z. Zhang, "Implementation code of CLAHE," http://www.yongxu.org/code/the%20survey%20of%20defogging.zip.

[33] J.-P. Tarel and N. Hautiere, "Implementation code of fast visibility restoration from a single color or gray level image," http://perso.lcpc.fr/tarel.jean-philippe/visibility/visibresto2.zip.

[34] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Implementation code of efficient image dehazing with boundary constraint and contextual regularization," http://wwwescience.cn/people/menggaofeng/research.html.

Wenbo Zhang and Xiaorong Hou

School of Energy Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

Correspondence should be addressed to Wenbo Zhang; stroot@163.com and Xiaorong Hou; houxr@uestc.edu.cn

Received 12 December 2016; Revised 29 March 2017; Accepted 2 April 2017; Published 10 April 2017

Academic Editor: Jean Jacques Loiseau

Caption: Figure 1: Algorithm processing flow.

Caption: Figure 2: Comparison of the transmission maps. From top to bottom: soft matting [30] and our method.

Caption: Figure 3: Comparison of dehazing results. From left to right: original, CLAHE [19], Tarel's [18], Meng's [17], He's [24], and our image.

Caption: Figure 4: Local amplification of Swan's dehazing results. From left to right: original, CLAHE [19], Tarel's [18], Meng's [17], He's [24], and our image.
Table 1: Comparison of the time consumed by each algorithm.

Image name    Width * height    CLAHE (s)    Tarel (s)    Meng (s)    He (s)    Proposed (s)

Tiananmen     600 * 455         0.09         7.25         3.60        39.96     5.50
House         441 * 450         0.10         3.67         3.55        30.97     6.60
Swan          835 * 557         0.13         19.54        7.01        70.10     6.39
Sweden        600 * 400         0.08         6.75         3.32        35.04     5.04
NY            800 * 431         0.11         16.50        5.08        54.95     5.13
Gugong        400 * 600         0.17         6.91         6.39        37.63     5.06