
A multi-resolution image fusion algorithm based on multi-factor weights.

1. Introduction

Real-world scenes usually have a high dynamic range (HDR) of brightness, in contrast to scenes captured by traditional digital cameras, which can record only limited contrast, brightness, and color information [1,2]. A portion of a scene's dynamic range can be captured by changing the exposure level [3], but a single photo can never record all the detail of a scene. To tackle this issue, a series of photos of the same scene can be taken at different exposure levels, providing more detail information. Low-exposure photos provide details of bright areas, while high-exposure images show details in the shadows of a scene, so the different details of the scene can be combined into a single photo by multi-exposure image fusion [4, 5] (as shown in Figure 1).

Several techniques have been proposed to combine multi-exposure images. One approach is to create an HDR image from the multi-exposure images. This process is carried out in three steps, which we review from the text of Szeliski [6]: estimating the radiometric response function, estimating a radiance map by selecting or blending pixels from different exposures, and tone mapping the resulting HDR image back into a displayable gamut. The major limitation of this approach is that it requires digital camera parameters (exposure times) to calculate the radiometric response function, which might not be known to the user. Another approach is to fuse the LDR images directly with an appropriate fusion technique, without obtaining an intermediate HDR image. Goshtasby [7] proposed a region-based fusion technique in which the images are partitioned into sub-blocks, which makes the algorithm computationally complex and slow.

Pyramid transforms [8] and wavelet transforms [9] are the main multi-scale decomposition methods for image fusion. For better fusion, the digital image is decomposed through multi-resolution analysis into sub-images at different scales and orientations, and these sub-images represent different characteristics of the original image [10]. This paper proposes a multi-resolution image fusion algorithm based on multi-factor weights: it first generates a weight diagram for each original multi-exposure image from several measuring factors, then produces a high dynamic range image by weighted averaging on the Laplacian pyramid, and finally obtains the fused image by the inverse Laplacian transform. The weights are computed pixel by pixel and reuse the existing pyramid decomposition without additional filtering, so the algorithm is fast and computationally efficient. The experiments show that the resulting image quality is comparable to popular algorithms, and the fused image can even achieve higher information entropy and contrast.

The rest of the paper is organized as follows. Section 2 introduces the Laplacian pyramid transform; Section 3 describes the multi-resolution image fusion algorithm based on multi-factor weights; Section 4 describes the fusion result evaluation methods; Section 5 presents the experimental results; and Section 6 concludes.

2. Laplacian Pyramid Transform Method

To fuse digital images, each image is first decomposed by the Laplacian pyramid in two steps: Gauss pyramid decomposition, followed by construction of the Laplacian pyramid from the Gauss pyramid [11, 12].

2.1 Gauss Pyramid Decomposition

In order to construct the Lth layer, the original digital image $G_0$ is taken as the bottom of the Gauss pyramid. The process is as follows.

Firstly, the (L-1)th layer $G_{L-1}$ is convolved with a low-pass window function; secondly, the convolution result is down-sampled by a factor of two:

$$G_L(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2}\omega(m,n)\,G_{L-1}(2i+m,\,2j+n), \quad 0 < L \le N,\ 0 \le i < C_L,\ 0 \le j < R_L \quad (1)$$

where $N$ is the index of the top layer of the Gauss pyramid, $C_L$ is the column number of the Lth layer image, $R_L$ is its row number, and $\omega(m,n)$ is a $5 \times 5$ window function.

The window function $\omega(m,n)$ is the separable generating kernel:

$$\omega = \frac{1}{256}\begin{bmatrix}1\\4\\6\\4\\1\end{bmatrix}\begin{bmatrix}1&4&6&4&1\end{bmatrix} \quad (2)$$

Introducing a reduction operator named Reduce, (1) can be written as:

$$G_L = \mathrm{Reduce}(G_{L-1}) \quad (3)$$

Thus a Gauss pyramid is constructed from $G_0, G_1, \ldots, G_N$, in which $G_0$ is the base, $G_N$ is the top, and the total number of layers is $N + 1$. The (L+1)th layer image, obtained from the Lth layer, can be regarded as a low-pass filtered version of it. As a result, the image becomes gradually more blurred from the bottom layer to the top, and a great deal of redundant information exists between adjacent layers.
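As an illustrative sketch (ours, not from the original paper), the Reduce operation of (1) and (3) can be written in Python with NumPy and SciPy, assuming the standard generating kernel of (2):

```python
import numpy as np
from scipy.ndimage import convolve

# 5 x 5 generating kernel of (2): outer product of [1, 4, 6, 4, 1] / 16
_w = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
KERNEL = np.outer(_w, _w)

def reduce_layer(g):
    """One Reduce step, (1)/(3): low-pass filter, then keep every other row/column."""
    blurred = convolve(g.astype(np.float64), KERNEL, mode='nearest')
    return blurred[::2, ::2]

def gauss_pyramid(g0, n_levels):
    """Build the Gauss pyramid [G_0, G_1, ..., G_N] from the base image G_0."""
    pyr = [g0.astype(np.float64)]
    for _ in range(n_levels):
        pyr.append(reduce_layer(pyr[-1]))
    return pyr
```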

2.2 The Establishment of the Laplacian Pyramid

The Laplacian pyramid aims to remove the large amount of redundant information between the layers of the Gauss pyramid, so a band-pass filtered image is obtained as the difference between two adjacent layers. The detailed process is as follows. First, the image $G_L$ is enlarged by interpolation, where each interpolated gray value is a weighted average of gray values of the original image, yielding an enlarged image $G_L^*$ of the same size as the (L-1)th layer. Introducing an enlarging operator named Expand:

$$G_L^* = \mathrm{Expand}(G_L) \quad (4)$$

Corresponding to (1), the enlarging operator is defined as:

$$G_L^*(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2}\omega(m,n)\,G_L\!\left(\frac{i+m}{2},\,\frac{j+n}{2}\right), \quad 0 < L \le N,\ 0 \le i < C_{L-1},\ 0 \le j < R_{L-1} \quad (5)$$

where only the terms for which $(i+m)/2$ and $(j+n)/2$ are integers enter the sum.

The Expand and Reduce operators are formally inverse to each other. However, since $G_L$ is obtained from $G_{L-1}$ by convolving and down-sampling, $G_L^*$ has the same size as $G_{L-1}$ but contains less information, so the two are not equal. The Laplacian pyramid is therefore defined by their difference:

$$LP_L = G_L - \mathrm{Expand}(G_{L+1}), \quad 0 \le L < N; \qquad LP_N = G_N \quad (6)$$

where $N$ is the index of the top layer of the Laplacian pyramid and $LP_L$ denotes the Lth layer of the Laplacian pyramid decomposition. The Laplacian pyramid is made up of $LP_0, LP_1, \ldots, LP_N$. Except for the top layer, which retains the low-frequency information of the original digital image, the layers show the band-pass characteristics of the original image at different spatial resolutions, so the characteristic information is distributed across layers at different scales.
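Continuing the sketch above, Expand (5) and the decomposition (6) can be written as follows; the factor of 4 compensates for the zeros inserted during up-sampling:

```python
def expand_layer(g, target_shape):
    """One Expand step, (4)/(5): up-sample by inserting zeros, then low-pass filter."""
    up = np.zeros(target_shape, dtype=np.float64)
    up[::2, ::2] = g
    return 4.0 * convolve(up, KERNEL, mode='nearest')

def laplacian_pyramid(g0, n_levels):
    """Build [LP_0, ..., LP_N] per (6): band-pass differences plus the low-pass top."""
    gauss = gauss_pyramid(g0, n_levels)
    lap = [gauss[l] - expand_layer(gauss[l + 1], gauss[l].shape)
           for l in range(n_levels)]
    lap.append(gauss[-1])  # LP_N = G_N retains the low-frequency residual
    return lap
```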

2.3 Reconstruction of the Original Digital Image by Laplacian Pyramid

$$\begin{cases} G_N = LP_N \\ G_L = LP_L + \mathrm{Expand}(G_{L+1}), & 0 \le L < N \end{cases} \quad (7)$$

According to (7), each layer of the Laplacian pyramid is enlarged by interpolation to the size of the next lower layer and added back, proceeding from the top down. The original digital image is thereby reconstructed exactly, which shows that the Laplacian pyramid decomposition is a complete representation of the original image.
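The inverse transform (7) then looks like this, continuing the sketch above:

```python
def reconstruct(lap):
    """Invert the Laplacian pyramid per (7): start at LP_N and expand-add downward."""
    g = lap[-1]  # G_N = LP_N
    for lp in reversed(lap[:-1]):
        g = lp + expand_layer(g, lp.shape)
    return g
```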

3. Multi-resolution Image Fusion Algorithm Based on Multi-factor Weights

In a sequence of images taken at different exposures, any given area of the scene is rendered better at some exposure than at a single fixed exposure value, so a fused image built with suitable weights can contain more detail than any single image. We assume the image sequence has been registered or captured with a tripod, since this paper focuses on the fusion itself. To generate the weight diagram of each original multi-exposure image, we consider three measuring factors of an image: standard deviation, gradient, and moderate exposure.

Then the fused pyramid is obtained by computing the weighted average with the weight diagrams according to the pyramid principle. Lastly, the final fused image is obtained by applying the inverse Laplacian pyramid transform to the fused pyramid. The steps are as follows: (1) compute the weight value W of each pixel according to the weight definition; (2) perform multi-resolution fusion of the image sequence according to the weights obtained (as shown in Figure 2).

3.1 Weight Definition

1. Image Gradient

The gradient values of the pixels in a differently exposed image indicate the richness of its detail. A better exposure produces clearer contours and larger gradients, so a larger gradient weight carries the detail into the fused image well; conversely, the gradient of under-exposed or over-exposed areas is smaller, so their weight is smaller too. The gradient computation in this paper is based entirely on the existing pyramid decomposition: the Lth layer of the Laplacian pyramid decomposition in (6) is used as the gradient of the Lth layer image, denoted $T_L$:

$$T_L = G_L - \mathrm{Expand}(G_{L+1}), \quad 0 \le L < N \quad (8)$$

2. The Standard Deviation of an Image

The standard deviation of a digital image reflects the dispersion of its pixel values: the greater the standard deviation, the clearer the image; a small standard deviation indicates a fuzzy image. For an image of size $M \times N$, the standard deviation $S_L$ is:

$$S_L = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(f_L(i,j) - \bar{U}_L\right)^2} \quad (9)$$

where $\bar{U}_L = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} f_L(i,j)$ is the mean gray value of the Lth layer image.

3. The Moderate Exposure

The moderate exposure [13] is based on the visual properties of the human eye and also refers to the spatial frequency characteristics of a digital image. When the exposure is moderate, the human eye can clearly distinguish the highlighted information of an image, so pixels whose values lie near the middle of the normalized intensity range are assigned a larger weight. The calculation is as follows:

$$E_L = \exp\left(-\frac{(f_L - 0.5)^2}{2\sigma^2}\right) \quad (10)$$

where $f_L$ is the normalized pixel value in the Lth layer of the image and $\sigma$ is set to 0.02 in the experiments.

The weight combines the three information measuring factors as the joint product of each image's Lth-layer gradient, standard deviation, and moderate exposure:

$$W_L^K = \left(T_L^K \cdot S_L^K \cdot E_L^K\right)^{x} \quad (11)$$

where $W_L^K$, $T_L^K$, $S_L^K$, and $E_L^K$ respectively denote the weight, gradient, standard deviation, and moderate exposure of the Lth pyramid layer of the Kth image; $x$ is the weight-adjusting parameter, with $x = 2$ in the experiments.

The weights $W_L^K$ of the M multi-exposure images are normalized as follows:

$$\hat{W}_L^K = \left[\sum_{K=1}^{M} W_L^K\right]^{-1} W_L^K \quad (12)$$

Finally, the corresponding weight diagrams for the differently exposed images are obtained.
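A sketch of the weight maps (8)-(12), continuing the code above. Two details are our assumptions, since the paper leaves them implicit: we take the magnitude of the Laplacian response as the gradient factor, because (8) can be negative, and we normalize 8-bit pixel values to [0, 1] before applying (9) and (10):

```python
def layer_weights(gauss_pyrs, lap_pyrs, sigma=0.02, x=2.0):
    """Normalized per-pixel weights (8)-(12) at each level below the top.

    gauss_pyrs, lap_pyrs: one Gauss and one Laplacian pyramid per input image.
    """
    m = len(gauss_pyrs)
    n_levels = len(lap_pyrs[0]) - 1        # top layer (L = N) is fused separately
    weights = []
    for l in range(n_levels):
        stack = []
        for k in range(m):
            t = np.abs(lap_pyrs[k][l])                        # gradient factor (8)
            f = gauss_pyrs[k][l] / 255.0                      # normalized gray values
            s = f.std()                                       # standard deviation (9)
            e = np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2))  # moderate exposure (10)
            stack.append((t * s * e) ** x)                    # combined weight (11)
        w = np.stack(stack)
        weights.append(w / (w.sum(axis=0) + 1e-12))           # normalization (12)
    return weights
```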

3.2 Image Fusion Principle

First, the M multi-exposure images are decomposed into pyramids. Then the weight diagram of each original multi-exposure image is generated from the gradient, standard deviation, and moderate exposure factors, yielding weight layers of the same size as the layers of the original image's Laplacian pyramid. Finally, the decomposed original images are averaged, weighted by the decomposed weight diagrams. In particular, for the top layer of the pyramid, where L = N, i.e. the low-frequency information, the arithmetic average is used to compute the fused layer:

$$F_L = \begin{cases} \dfrac{1}{M}\displaystyle\sum_{K=1}^{M} LP_N^K, & L = N \\[1ex] \displaystyle\sum_{K=1}^{M} \hat{W}_L^K \cdot LP_L^K, & 0 \le L < N \end{cases} \quad (13)$$

where M is the number of multi-exposure images, N is the total number of pyramid decomposition levels, and L is the current layer, $L \in \{0, 1, 2, \ldots, N\}$.
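Under the same assumptions as the earlier snippets (registered grayscale inputs), the whole pipeline of (13) followed by the inverse transform (7) might look like:

```python
def fuse_exposures(images, n_levels=5):
    """Fuse M registered grayscale exposures per (13), then reconstruct via (7)."""
    gauss_pyrs = [gauss_pyramid(img, n_levels) for img in images]
    lap_pyrs = [laplacian_pyramid(img, n_levels) for img in images]
    weights = layer_weights(gauss_pyrs, lap_pyrs)

    fused = []
    for l in range(n_levels):  # band-pass layers: weighted average per (13)
        fused.append(sum(weights[l][k] * lap_pyrs[k][l] for k in range(len(images))))
    fused.append(sum(lp[-1] for lp in lap_pyrs) / len(images))  # top: arithmetic mean
    return np.clip(reconstruct(fused), 0.0, 255.0)
```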

4. The Fusion Result Evaluation Method

Fusion results should be compared and evaluated in order to assess the efficiency and performance of different fusion methods. Judging digital image quality by human subjective impression is qualitative evaluation, which is simple, visual, and convenient for assessing the obvious information in an image; however, human vision is not sensitive to all variations, and such judgments are subjective and incomplete. A comprehensive evaluation therefore also requires quantitative, objective measures. Quantitative evaluation judges image quality by mathematical feature values, which are objective, and many such methods exist; different evaluation methods serve different fusion purposes. In this paper, information entropy, average gradient, and contrast are used to evaluate the fused image.

4.1 Information Entropy of a Digital Image

Suppose the fused image to be evaluated is F, with image function f(x, y), size $M \times N$, and L gray levels. Information entropy, an important indicator of the information richness of an image, is calculated as follows:

$$EN = -\sum_{i=0}^{L-1} P_i \log P_i \quad (14)$$

where EN is the information entropy and $P_i = N_i / (M \times N)$ is the ratio of $N_i$, the number of pixels with gray level i, to the total number of pixels $M \times N$. The set $P = \{P_0, P_1, \ldots, P_{L-1}\}$ is the probability distribution of the gray levels of the image.

A larger entropy means the fused image contains richer information and the fusion quality is better.
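A minimal helper for (14), assuming an 8-bit gray-level image (the function name is ours):

```python
import numpy as np

def information_entropy(img, levels=256):
    """Information entropy (14); empty gray levels are skipped so the log is defined."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```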

4.2 The Average Gradient of a Digital Image

The average gradient sensitively reflects an image's ability to express the contrast of tiny details and the characteristics of texture variation, and is used to evaluate the degree of image blur. The calculation formula is as follows:

$$\bar{G} = \frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{\Delta f_x^2(i,j) + \Delta f_y^2(i,j)}{2}} \quad (15)$$

where $\Delta f_x$ and $\Delta f_y$ are the first differences of the image function in the x and y directions.

In a digital image, the larger the rate of gray-level change in a given direction, the bigger the gradient and the clearer the image.
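A corresponding sketch of (15), continuing the helper above:

```python
def average_gradient(img):
    """Average gradient (15): RMS of the horizontal/vertical first differences."""
    f = img.astype(np.float64)
    dx = f[1:, 1:] - f[:-1, 1:]  # difference in the x (row) direction
    dy = f[1:, 1:] - f[1:, :-1]  # difference in the y (column) direction
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
```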

4.3 The Contrast of an Image

Contrast indicates the clarity of the digital image: the greater the contrast, the more gradation levels there are from black to white, and the richer the rendering. The calculation formula is as follows:

$$C = \sum_{\delta} \delta(i,j)^2\, P_\delta(i,j) \quad (16)$$

where $\delta(i,j) = |i - j|$ is the gray-level difference between adjacent pixels, and $P_\delta(i,j)$ is the distribution probability of adjacent pixel pairs whose gray-level difference is $\delta$.
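A sketch of (16), reading "adjacent pixels" as 4-connected horizontal and vertical neighbors (our assumption; other neighborhoods are possible):

```python
def contrast(img):
    """Contrast (16): mean squared gray-level difference over adjacent pixel pairs."""
    f = img.astype(np.float64)
    dv = (f[1:, :] - f[:-1, :]) ** 2  # vertical neighbor differences
    dh = (f[:, 1:] - f[:, :-1]) ** 2  # horizontal neighbor differences
    return (dv.sum() + dh.sum()) / (dv.size + dh.size)
```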

5. Results and Analysis of the Experiments

Two groups of differently exposed digital images are selected for multi-resolution fusion using the Laplacian pyramid, the contrast pyramid [14], the ratio of low-pass pyramid, the gradient pyramid [15], the wavelet transform [16, 17], and the method proposed in this paper. High dynamic range images of differing quality are obtained, as shown in Figure 3 and Figure 5.

Viewed intuitively, the fusion results for the two groups of multi-exposure images in Figure 3 and Figure 5 are similar. Figure 4 and Figure 6 are local zoom-ins of Figure 3 and Figure 5 respectively. In Figure 4, many spurious areas are visible in Figure 4(a), the fusion result of the contrast pyramid, and the contrast of Figure 4(d) is clearly better than that of Figure 4(b) and Figure 4(c). In Figure 6, ghosting is obvious in Figure 6(a) and Figure 6(b); Figure 6(c) has no ghosting, but its boundaries are over-enhanced even though the contours are clear; Figure 6(d) shows slight ghosting but looks better than the other methods mentioned above.

The fusion results are quantitatively analyzed in Table 1 and Table 2 (the data in the tables are normalized). In terms of information entropy, the fusion methods are very close; the result of the proposed method is slightly worse than that of the contrast pyramid, but this is in fact pseudo information entropy, since the spurious areas in the contrast pyramid result inflate the entropy. In terms of average gradient and contrast, the result of the proposed method is better than the others and the obtained image is clearer.

Thus, by both the qualitative and the quantitative analysis, the fusion effect of the proposed method is superior to that of the other methods.

6. Conclusion

HDR images can be generated by merging digital images of the same scene taken at different exposures with the same digital camera; that is, they are constructed from lower dynamic range images. This paper presented a multi-exposure image fusion algorithm that achieves a good fusion effect with very low computational complexity. It operates at different scales, spatial resolutions, and decomposition layers, combining three measuring factors: gradient, standard deviation, and moderate exposure. The proposed algorithm is fully automatic and requires neither knowledge of digital camera parameters nor any human intervention. The experimental results show that the algorithm preserves details even in very dark or very bright regions, and qualitative and quantitative analysis shows that the proposed fusion algorithm achieves a wide dynamic range while keeping high information entropy.

7. Acknowledgements

This research was funded by a grant (2014JK1014) from Shaanxi Natural Science Foundation.

References

[1] Raman, S., Chaudhuri, S. (2010). Low Dynamic Range Solutions to the High Dynamic Range Imaging Problem. Journal of Measurement Science and Instrumentation, 1 [1] 32-36.

[2] Kuk, J. G., Cho, N. I., Lee, S. U. (2011). High dynamic range (HDR) imaging by gradient domain fusion. In: Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEE, p. 1461-1464.

[3] Durand, F., Dorsey, J. (2002). Fast bilateral filtering for the display of high-dynamic-range images. ACM Transactions on Graphics, 21 (3) 257-266.

[4] Mertens, T., Kautz, J., Van Reeth, F. (2007). Exposure fusion. In: Computer Graphics and Applications, PG'07. 15th Pacific Conference on. IEEE, p. 382-390.

[5] Shen, Rui., Cheng, Irene., Shi, Jian-bo., et al. (2011). Generalized random walks for fusion of multi-exposure images. IEEE Transactions on Image Processing, 20 (12) 3634-3646.

[6] Szeliski, R. (2010). Computer vision: algorithms and applications. Springer, New York.

[7] Goshtasby, A. A. (2005). Fusion of multi-exposure images. Image and Vision Computing, 23 (6) 611-618.

[8] Rui-hui, Zhu., Min, Wan., Guo-bin, Fan. (2007). An Image Fuse Method Based on Pyramid Transformation. Computer Simulation, 24 (12) 178-192.

[9] Malik, M. H., Asif, S., Gilani, M. (2008). Wavelet based exposure fusion. In: Proceedings of the World Congress on Engineering, WCE'08, International Association of Engineers, p. 688-693.

[10] Chen, Yi-Hui., Chin-Chen, Chang. (2012). Image Tamper Detection and Recovery Based on Dual Watermarks Sharing Strategy. Journal of Digital Information Management,10 (1) 39-49.

[11] Jiang Tie, Zhu Gui-bin, Sun Ao. (2013). Multi-exposure image fusion based on pyramid transformation. Computer Technology and Development, 23 (1) 95-98.

[12] Hao, Chen., Yan-jie, Wang. (2009). Research on image fusion algorithm based on Laplacian pyramid transform. Laser Infrared, 39(4) 339-442.

[13] Guang-xin, Li., Ke, Wang., Li-bao, Zhang. (2005). Computationally efficient algorithm of multi-resolution image fusion with weighted average fusion rule, Journal of Image and Graphics, 10 (12) 1529-1536

[14] Tian, Pu., Qing-zhe, Fang., Guo-qiang, Ni. (2000). Contrast-Based Multi-resolution Image Fusion. Acta Electronica Sinica, 28 (12) 116-118.

[15] Xin, Xu., Qiang, Cheng., Huan-jiang, Sun et al. (2011). Image visualization improvement based on gradient fusion, Journal of Image and Graphics, 16 (2) 278-286.

[16] Jing-hua, Wang., De, Xu., Congyan, Lang. (2011). Exposure fusion based on shift-invariant discrete wavelet transform. Journal of Information Science and Engineering, 27(1) 197-211.

[17] Ting, Liu., Jian, Cheng. (2013). Remote sensing image fusion with wavelet transform and sparse representation. Journal of Image and Graphics, 18 (8) 1045-1053.

Zhengfang FU (1,2), Hong ZHU (1)

(1) School of Automation and Information Engineering, Xi'an University of Technology, Xi'an, China

(2) Department of Electronics and Information Engineering, Ankang University, Ankang, China
fzf9797@163.com

Table 1. Entropy, gradient, contrast parameter list of Figure 3

             (a)       (b)       (c)       (d)       (e)       (f)

Entropy    0.16611   0.16843   0.16583   0.16642   0.16681   0.1664
Gradient   0.18209   0.16209   0.1178    0.17732   0.17854   0.18216
Contrast   0.19256   0.15186   0.08938   0.18318   0.18132   0.2017

Table 2. Entropy, gradient, contrast parameter list of Figure 5

             (a)       (b)       (c)        (d)       (e)       (f)

Entropy    0.16409   0.16856    0.1694    0.16505   0.16881   0.16409
Gradient   0.18204   0.14236   0.13605    0.17611   0.17859   0.18485
Contrast   0.16722   0.10408   0.096089   0.15662   0.16002   0.31597