
A Precise-Mask-Based Method for Enhanced Image Inpainting.

1. Introduction

Image inpainting, the technique of restoring missing or damaged regions of an image so that the repair is undetectable to observers, has drawn considerable attention in recent years, driven by the demand to restore damaged paintings and photographs and make them legible again [1]. Remote sensing images are commonly degraded by motion blur, random noise, and cloud cover, which has made their restoration a popular topic in the remote sensing field [2]. Image inpainting can recover a high-quality image from a degraded one, including regions corrupted by random noise. In this paper, noise-like white or off-white regions are located and then inpainted using the CDD model or other inpainting models. The fundamental process of image inpainting is to construct a mask that locates the boundary of the damaged region, followed by the inpainting itself. State-of-the-art methods have focused on the inpainting model, while the mask of the damaged region is usually selected manually or by a conventional threshold-based method. Manual selection is time-consuming, and the threshold-based method does not achieve consistent precision across different images. It is therefore of vital importance to automatically construct a precise mask for the damaged region so that the missing content can be fully restored, which is the aim of this paper.

To date, numerous approaches [3-8] have been proposed for image inpainting since the seminal BSCB work [3, 4]: for example, the classical Total Variation (TV) [7] and Curvature-Driven Diffusion (CDD) [8] models, which solve a Partial Differential Equation (PDE) to preserve the directions of the isophotes in the missing region. In practice, it is assumed that the base layer, such as a photographic print or a wall, is white or off-white, and so is the damaged region. A range of near-white values is then taken as the mask by applying a threshold to the pixels of the image to be inpainted.

However, the boundary between the damaged and undamaged regions is usually inconspicuous, which leads to two main drawbacks of the threshold method: (1) some pixels belonging to the damaged region may be missed when the threshold is high, which leaves a bleached inpainting region; (2) some bright pixels in the undamaged region may be wrongly treated as pixels to be inpainted when the threshold is low. The validity of the conventional threshold method is therefore strongly restricted; it works only when the boundary of the damaged region is conspicuous. In this work, we introduce a new method to construct a more precise mask based on guided filtering and L0 smoothing in order to locate the boundary of the damaged region precisely. Experimental results demonstrate that our method outperforms the conventional method in image inpainting, especially for images whose damaged region has an inconspicuous boundary.

2. Related Work

The conventional method for image inpainting comprises two main steps.

Step 1. Construction of the mask image: two values, 255 and 0, represent pure white and black, respectively. Assuming that the region to be inpainted approaches white, a pixel is labeled as damaged when each of its RGB values exceeds the limit value, and the corresponding pixel in the mask is set to 255; otherwise it is set to 0. The principle of constructing the mask with the threshold method is shown in Figure 1, where I and M denote the damaged image and the mask, respectively.
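The threshold rule above can be sketched in a few lines of NumPy; this is a minimal illustration, assuming an 8-bit RGB image of shape (H, W, 3), and the default limit value of 230 is an assumption (the paper does not fix one).

```python
import numpy as np

def threshold_mask(image, limit=230):
    """Label a pixel as damaged (255) when every RGB channel exceeds `limit`."""
    damaged = np.all(image > limit, axis=2)        # True where all three channels are bright
    return np.where(damaged, 255, 0).astype(np.uint8)

# Tiny demo: one near-white pixel, one gray pixel.
demo = np.array([[[250, 248, 251], [120, 120, 120]]], dtype=np.uint8)
demo_mask = threshold_mask(demo)
```

The returned mask is the grayscale image M of Figure 1, with white marking the region to be inpainted.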

Step 2. After the mask image is constructed, the damaged region is inpainted iteratively by the CDD model using that mask. CDD inpainting resembles heat diffusion, with a diffusion intensity that depends on both the gradient and the curvature. The intensity is high where the curvature is large and decreases as the isophotes are progressively smoothed. Hence, the CDD model eliminates large curvature and stabilizes small curvature.
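CDD weights the diffusion by isophote curvature, which takes considerably more code; as a much simpler stand-in that still shows the iterative "diffuse only inside the mask" structure of Step 2, the sketch below runs plain heat diffusion restricted to the masked pixels. It is not the CDD model itself, only the surrounding iteration scheme.

```python
import numpy as np

def diffuse_inpaint(gray, mask, iters=200, dt=0.2):
    """gray: float image in [0, 1]; mask: bool array, True = damaged pixel."""
    u = gray.copy()
    for _ in range(iters):
        # 5-point discrete Laplacian (heat-diffusion driving term).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u[mask] += dt * lap[mask]          # update damaged pixels only
    return u

# Demo: a flat 0.5 image with a bright hole is pulled back toward its surround.
img = np.full((9, 9), 0.5)
hole = np.zeros((9, 9), bool); hole[4, 4] = True
img[hole] = 1.0
out = diffuse_inpaint(img, hole)
```

Replacing the Laplacian with a curvature-weighted divergence term recovers the actual CDD update.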

3. The Proposed Method

As mentioned above, it is difficult to build a mask when the boundary of the damaged region is inconspicuous. Inspired by the dark channel prior [9], a new method is proposed to deal with this issue. The dark channel is defined as

J^dark(x) = min_{c ∈ {r,g,b}} (min_{y ∈ Ω(x)} J^c(y)), (1)

where J^c is a color channel of J, Ω(x) is a local patch centered at x, and J^dark is the corresponding dark channel image. The size of Ω(x) is 1 x 1 in this paper. Since natural outdoor images are usually colorful and full of shadows, their dark channels are quite dark, whereas bright parts have high intensity in every RGB channel, so the brightest point in the dark channel image is usually close to white.
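Equation (1) can be sketched directly; with the 1 x 1 patch used in this paper, the patch minimum reduces to a per-pixel minimum over the three channels. The float RGB layout is an assumption.

```python
import numpy as np

def dark_channel(image, patch=1):
    """Dark channel of Eq. (1) for an (H, W, 3) float RGB array."""
    per_pixel_min = image.min(axis=2)          # min over the r, g, b channels
    if patch == 1:                             # the 1x1 case used in the paper
        return per_pixel_min
    # For larger patches, also take the minimum over a patch x patch neighborhood.
    from scipy.ndimage import minimum_filter
    return minimum_filter(per_pixel_min, size=patch)

rgb = np.array([[[0.9, 0.8, 1.0], [0.2, 0.5, 0.3]]])
dc = dark_channel(rgb)
```

The brightest pixel of `dc` then serves as the reference point A introduced below.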

Since the damaged region (such as a noise or cloud region) in an image is generally white, it is assumed that the damaged region approaches white and that its boundary is inconspicuous. Consequently, the brightest pixel is considered the most obviously damaged pixel. In addition, the colors of the damaged region and its undamaged neighborhood differ visibly, so the two regions can be distinguished by eye. The most obviously damaged pixel is labeled as reference point A. Pixels whose RGB channels are approximately balanced may be damaged pixels. Therefore, pixels are selected as damaged according to their degree of RGB balance and their closeness to reference point A.

The method mentioned above is sufficient for building a mask when the boundary of the damaged region is clear, but it fails when the boundary is inconspicuous. Motivated by this challenge, we propose a new method to build a precise mask for the boundary of the damaged region based on guided filtering [10] and L0 smoothing [11, 12]. In brief, guided filtering is employed to enhance image details, and L0 smoothing is then used to sharpen major edges and eliminate low-amplitude structures. These two processes form the first two steps of our proposed method as illustrated in Figure 2, and the theoretical details are provided as follows.

In [10], the guided filter assumes that the output image q is locally a linear transform of the guide image I; in a window ω_k of radius r centered at pixel k, the output can be expressed as q_i = a_k I_i + b_k, where the linear coefficients (a_k, b_k) are constant in ω_k. This linear model ensures that q and I have the same edges, since ∇q = a∇I.

To determine the linear coefficients (a_k, b_k), a constraint from the filtering input p is imposed: the difference between the output q and the input p is modeled as noise, expressed as

q_i = p_i - n_i, (2)

and the linear coefficients (a_k, b_k) are then solved by minimizing the cost function

E(a_k, b_k) = Σ_{i ∈ ω_k} ((a_k I_i + b_k - p_i)² + ε a_k²), (3)

where ε is a regularization parameter. The guided filter behaves as an edge-preserving smoothing operator when I and p are identical. In this case, the filtered-out information is the difference (I - q), which represents the detail layer of I and is used in the following enhancement step.
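A minimal guided filter following the closed-form minimizer of Eq. (3) (He et al. [10]) can be written with box filters; the sketch below assumes grayscale float arrays for the guide I and input p, and uses `scipy.ndimage.uniform_filter` as the box filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter of [10]: output q for guide I and input p."""
    box = lambda x: uniform_filter(x, size=2 * r + 1, mode='reflect')
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)       # minimizer of Eq. (3): a_k = cov/(var + eps)
    b = mean_p - a * mean_I          # b_k = mean(p) - a_k * mean(I)
    return box(a) * I + box(b)       # average a, b over all overlapping windows

# Self-guided case (I == p): acts as an edge-preserving smoother.
ramp = np.linspace(0, 1, 64).reshape(8, 8)
q = guided_filter(ramp, ramp)
```

With a smooth ramp as input, the self-guided output stays close to the input, as the linear model predicts.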

The enhancement via guided filtering is written as

I_e = q + ξ(I - q), (4)

where I_e is the result of the enhancement through guided filtering and ξ is the enhancement factor. As seen from (4), the detail layer, and therefore the boundary of the damaged region, is amplified more strongly as ξ increases; when ξ is small, the enhancement is weak.
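The enhancement of Eq. (4) can be sketched by inlining a self-guided filter (I as its own guide) and boosting the detail layer (I - q); the parameter defaults here are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance(I, xi=3.0, r=2, eps=1e-2):
    """Detail enhancement I_e = q + xi * (I - q), q from a self-guided filter."""
    box = lambda x: uniform_filter(x, size=2 * r + 1, mode='reflect')
    mean_I = box(I)
    var_I = box(I * I) - mean_I ** 2
    a = var_I / (var_I + eps)        # self-guided filter coefficients
    b = (1 - a) * mean_I
    q = box(a) * I + box(b)          # smoothed base layer
    return q + xi * (I - q)          # Eq. (4): boost the detail layer

step = np.zeros((8, 8)); step[:, 4:] = 1.0      # a step edge
boosted = enhance(step)
```

Across the step edge the boosted image overshoots on both sides, which is exactly the boundary sharpening exploited in the next step.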

L0 smoothing is used to suppress low-amplitude details and sharpen salient boundaries. It is realized by

min_S {Σ_p (S_p - L_p)² + λ C(S)}, (5)

where L is the input image, S is the output image, and the gradient of S at pixel p is ∇S_p = (∂_x S_p, ∂_y S_p)^T, with ∂_x and ∂_y the partial derivatives in the x and y directions. C(S) counts the pixels whose gradient is nonzero, that is, those with |∂_x S_p| + |∂_y S_p| ≠ 0, and λ weights this count against the fidelity term. Minimizing C(S) sharpens the major boundaries, so the boundary of the damaged region is preserved in S and can then be found in the gradient of S.
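Problem (5) is usually solved with the half-quadratic splitting of Xu et al. [11]: alternately threshold the gradient field (zeroing gradients whose energy is below λ/β) and solve a screened Poisson equation for S in the Fourier domain while β grows. The following is a compact single-channel sketch under those standard choices; the parameter defaults follow common practice and are assumptions.

```python
import numpy as np

def l0_smooth(L, lam=0.02, beta0=0.04, kappa=2.0, beta_max=1e4):
    """L0 gradient minimization of Eq. (5) for a grayscale float image L."""
    S = L.copy()
    H, W = L.shape
    # OTFs of the backward-difference kernels [1, -1] along x and y.
    otf_x = np.fft.fft2(np.array([[1.0, -1.0]]), s=(H, W))
    otf_y = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(H, W))
    denom_grad = np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2
    F_L = np.fft.fft2(L)
    beta = beta0
    while beta < beta_max:
        # (h, v) subproblem: keep a gradient only if its energy beats lam/beta.
        h = S - np.roll(S, 1, axis=1)
        v = S - np.roll(S, 1, axis=0)
        small = (h ** 2 + v ** 2) < lam / beta
        h[small] = 0.0; v[small] = 0.0
        # S subproblem: screened Poisson equation, solved via FFT.
        rhs = F_L + beta * (np.fft.fft2(h) * np.conj(otf_x) +
                            np.fft.fft2(v) * np.conj(otf_y))
        S = np.real(np.fft.ifft2(rhs / (1 + beta * denom_grad)))
        beta *= kappa
    return S

# Demo: a unit step with a low-amplitude ripple; the ripple is flattened,
# the step survives.
noisy_step = np.zeros((16, 16)); noisy_step[:, 8:] = 1.0
noisy_step += 0.05 * np.sin(np.arange(16))[None, :]
flat = l0_smooth(noisy_step)
```

This is precisely the behavior the mask construction relies on: low-amplitude texture vanishes while the damaged-region boundary is kept as a strong gradient.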

Figure 2 shows the sketch of our method and the steps are listed below.

Step 1. The damaged image I is enhanced by guided filtering, and the result is denoted by I_e.

Step 2. I_e is smoothed by L0 smoothing, and the result is denoted by I_s.

Step 3. The interim mask M_1 is built from I_s using the RGB balance and the degree of closeness to the reference point A.

Step 4. The mask M_2 is built from M_1 and the gradient map of I_s.

Step 5. M_3 is given by (M_1 + M_2).

First, the original damaged image I is enhanced by (4), and the result is denoted by I_e. The L0-smoothed result of I_e is denoted by I_s.

The brightest pixel in the dark channel of I_s is taken as the reference point A mentioned above. For the construction of mask M_1, a pixel is treated as damaged when the following two conditions are satisfied: (1) the pixel's RGB imbalance does not exceed the given coefficient δ_1; (2) the difference between the pixel's RGB average and that of A does not exceed the coefficient δ_2. Otherwise, the pixel is treated as undamaged. As stated in Section 2, the pixels of the damaged region are all set to 255 and the rest to 0, yielding a new grayscale image denoted as mask M_1. Generally, the white region in M_1 covers mainly the inner part of the damaged region; other pixels, especially those at the boundary of the damaged region, are probably not included in M_1. Therefore, M_1 acts as an interim mask and is used to construct more precise masks in the following steps.
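One possible implementation of the interim mask M_1 is sketched below, assuming float RGB in [0, 1]; the paper does not give exact formulas for the two conditions, so the spread measure (max-minus-min channel value) and the default δ_1, δ_2 values are assumptions.

```python
import numpy as np

def build_m1(image, delta1=0.1, delta2=0.15):
    """Interim mask M_1: 255 where both damage conditions hold, else 0."""
    dark = image.min(axis=2)                                # 1x1 dark channel
    ref = np.unravel_index(np.argmax(dark), dark.shape)     # reference point A
    spread = image.max(axis=2) - image.min(axis=2)          # condition (1): RGB balance
    close = np.abs(image.mean(axis=2) - image[ref].mean())  # condition (2): closeness to A
    damaged = (spread <= delta1) & (close <= delta2)
    return np.where(damaged, 255, 0).astype(np.uint8)

img = np.array([[[0.95, 0.96, 0.94], [0.2, 0.8, 0.3]],
                [[0.90, 0.91, 0.92], [0.1, 0.1, 0.1]]])
m1 = build_m1(img)
```

Near-white balanced pixels are selected; the saturated pixel fails condition (1) and the dark pixel fails condition (2).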

The next step is to locate the boundary of the damaged region more precisely using the guided filtering enhancement and L0 smoothing. To this end, a more precise mask than M_1 is constructed as follows. A small patch of M_1 containing the inner damaged region is used to match the gradient map of I_s, and the boundary of the damaged region found there is recorded as mask M_2. As before, the boundary is set to 255 and the rest to 0 in M_2. On this basis, a more precise mask M_3 is obtained, defined as (M_1 + M_2). Note that the patch should not be large, or the resulting boundary will be too thick.
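The patch-matching step is only loosely specified, so the following is a hedged sketch of one plausible reading: keep strong-gradient pixels of I_s that lie within a small neighborhood of M_1, call that M_2, and take M_3 as the union of the two. The dilation radius and gradient threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def build_m3(m1, smoothed, grad_thresh=0.2, radius=1):
    """M_2 from strong gradients of I_s near M_1; M_3 = M_1 + M_2 (union)."""
    gy, gx = np.gradient(smoothed)
    strong = np.hypot(gx, gy) > grad_thresh                 # candidate boundary pixels
    near_m1 = binary_dilation(m1 > 0, iterations=radius)    # small band around M_1
    m2 = np.where(strong & near_m1, 255, 0).astype(np.uint8)
    return np.maximum(m1, m2)

inner = np.zeros((5, 5), np.uint8); inner[2, 2] = 255       # toy interim mask
s = np.zeros((5, 5)); s[1:4, 1:4] = 1.0                     # toy smoothed image I_s
m3 = build_m3(inner, s)
```

Keeping the radius small reflects the paper's caveat that a large patch would make the recovered boundary too thick.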

4. Experimental Results

In this section, we test the performance of the proposed method using MATLAB R2011b on a personal computer with an Intel Core i7 processor. The damaged regions, such as white text and curves, are manually created and superimposed on undamaged images with Photoshop.

We compared the proposed method with the traditional method, and the results are shown in Figure 3. Figures 3(a) and 3(b) are the undamaged and damaged images, respectively. Figures 3(c) and 3(d) illustrate the inpainting regions (the orange regions) found by our method and the threshold method, respectively. As can be seen from Figure 3(d), some undamaged regions (e.g., the mountains indicated by the red arrow) are wrongly selected for inpainting by the threshold method. In comparison, far fewer undamaged regions are selected by the proposed method, as shown in Figure 3(c).

Figures 3(e) and 3(f) show the inpainting results obtained with the proposed and threshold methods, respectively, both based on the CDD model. The zoomed-in views of the red rectangle patches in Figures 3(e) and 3(f) are shown in Figures 3(g) and 3(h), respectively. Evidently, some damaged regions are left uninpainted by the threshold method, as indicated by the red arrows in Figure 3(h). In contrast, our method is superior to the threshold method, especially for images whose damaged region has an inconspicuous boundary.

5. Conclusion

In this paper, we report a new method for building a precise mask of the damaged region for image inpainting. With the precise mask and the combination of guided filtering enhancement and L0 smoothing, our method exhibits satisfactory performance in image inpainting and outperforms conventional methods on images whose damaged region has an inconspicuous boundary. It would also be interesting to apply our method to the inpainting of images with colorful damaged regions; meanwhile, a precise cloud region can be detected and acquired as a byproduct of image inpainting. We leave these problems for future research.

DOI: 10.1155/2016/6104196

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61503300, Natural Science Foundation of Shaanxi Province of China under Grant 2014JQ8327, China Postdoctoral Science Foundation under Grant 2014M560801, Scientific Research Program Funded by Shaanxi Provincial Education Department under Grant 14JK1750, Science Foundation of Northwest University under Grant 13NW40, and Foundation of Key Laboratory of Space Active Opto-Electronics Technology of Chinese Academy of Sciences under Grant AOE-2016-A02.


[1] C. Guillemot and O. Le Meur, "Image inpainting: overview and recent advances," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 127-144, 2014.

[2] H.-Y. Ding and Z.-F. Bian, "Remote sensing image restoration based on TV regularization and local constraints," Acta Photonica Sinica, vol. 38, no. 6, pp. 1577-1580, 2009.

[3] A. Bugeau, M. Bertalmio, V. Caselles, and G. Sapiro, "A comprehensive framework for image inpainting," IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2634-2645, 2010.

[4] A. Telea, "An image inpainting technique based on the fast marching method," Journal of Graphics Tools, vol. 9, no. 1, pp. 23-34, 2004.

[5] J.-F. Cai, R. H. Chan, and Z. Shen, "A framelet-based image inpainting algorithm," Applied and Computational Harmonic Analysis, vol. 24, no. 2, pp. 131-149, 2008.

[6] C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, and J. Verdera, "Filling-in by joint interpolation of vector fields and gray levels," IEEE Transactions on Image Processing, vol. 10, no. 8, pp. 1200-1211, 2001.

[7] T. Chan and J. Shen, "Mathematical models for local nontexture inpaintings," SIAM Journal on Applied Mathematics, vol. 62, no. 3, pp. 1019-1043, 2002.

[8] T. F. Chan and J. Shen, "Nontexture inpainting by curvature-driven diffusions," Journal of Visual Communication and Image Representation, vol. 12, no. 4, pp. 436-449, 2001.

[9] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, 2011.

[10] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, 2013.

[11] L. Xu, C. Lu, Y. Xu, and J. Jia, "Image smoothing via L0 gradient minimization," ACM Transactions on Graphics, vol. 30, no. 6, article 174, 2011.

[12] F. Kou, W. Chen, Z. Li, and C. Wen, "Content adaptive image detail enhancement," IEEE Signal Processing Letters, vol. 22, no. 2, pp. 211-215, 2015.

Wanxu Zhang, (1) Yi Ru, (1) Hongqi Meng, (1) Min Liu, (2) Xiaolei Ma, (3) Lin Wang, (1) and Bo Jiang (1)

(1) School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710127, China

(2) Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China

(3) Department of Physics, Emory University, Atlanta, GA 30322, USA

Correspondence should be addressed to Bo Jiang;

Received 5 January 2016; Accepted 9 February 2016

Academic Editor: Maria Gandarias

Caption: Figure 1: Process of building mask by using the threshold method.

Caption: Figure 2: Sketch map of the proposed method.

Caption: Figure 3: Comparison of inpainted results. (a) Original undamaged image. (b) Damaged image. (c) and (d) are the inpainting regions found by our method and threshold method, respectively. (e) and (f) are the inpainting results using our method and threshold method, respectively. (g) and (h) are the zoom-in patch of (e) and (f), respectively.
COPYRIGHT 2016 Hindawi Limited

Publication: Mathematical Problems in Engineering
Date: January 1, 2016