
A General Proximal Alternating Minimization Method with Application to Nonconvex Nonsmooth 1D Total Variation Denoising.

1. Introduction

In the past few years, increasing attention has been paid to convex optimization problems that consist of minimizing a sum of convex or smooth functions [1-3]. Each of the objective functions enjoys appreciated properties, such as strong convexity or Lipschitz continuity, which usually lead to great computational advantages. Meanwhile, work on such convex problems has provided a sound theoretical foundation. Both the theoretical and the computational advantages benefit practical use, most notably in signal/image processing, machine learning, and computer vision. However, it deserves special attention that convex or smooth models are often approximations of nonconvex nonsmooth problems. For example, the nonconvex $\ell_0$-norm in sparse recovery problems is routinely relaxed to the convex $\ell_1$-norm, and many related works have been developed along this line [4,5]. Although the difference between a nonconvex nonsmooth problem and its approximation vanishes in certain cases, it is sometimes nonnegligible, as in the problem studied in [6]. On the other hand, the excellent numerical performance of various nonconvex nonsmooth algorithms encourages researchers to pursue the nonconvex methodology directly.

Nonconvex nonsmooth optimization problems are ubiquitous in different disciplines, including signal denoising [7], image deconvolution [8,9], and other ill-posed inverse problems [10], to name a few. In this paper, we aim at solving the following generic nonconvex nonsmooth optimization problem, formulated in real Hilbert spaces $X$ and $\{U_m\}_{m=1,2,\ldots,M}$, for some $M \in \mathbb{N}^{+}$:

$\min_{x \in X} \; f(x) + \sum_{m=1}^{M} h_m(L_m x)$, (1)

where (i) $f : X \rightarrow \mathbb{R} \cup \{+\infty\}$ and $h_m : U_m \rightarrow \mathbb{R} \cup \{+\infty\}$ are proper lower semicontinuous functions; (ii) the operators $L_m : X \rightarrow U_m$ are linear; and (iii) the set of minimizers is assumed to be nonempty.

It is quite meaningful to find a common convergent point in the optimal set of sums of simple functions [2, 11]. Insightful studies for nonconvex problems are presented in [12, 13]: if a nonconvex structured function of the type $L(x, y) = f(x) + Q(x, y) + g(y)$ has the Kurdyka-Lojasiewicz (K-L) property, then each bounded sequence generated by a proximal alternating minimization algorithm converges to a critical point of $L$. Our main work builds on this convergence result and introduces a generic proximal minimization method for the general model (1). Note that if some of the functions in (1) are zero and the linear operators $L_m$ are the identity $I_d$, the model reduces to the common one in compressed sensing [14]. The model we consider generalizes many application problems, such as the well-known "lasso" problem [15] and the composite models in [2, 16]. Here, we provide a few typical examples.

Example 1 (1D total variation minimization [7]). In this application, we need to solve the following denoising problem:

$\min_{x \in X} \; \frac{1}{2}\|y - x\|_2^2 + \lambda \|Dx\|_0$, (2)

where $y \in X$ is the input signal, $X = \mathbb{R}^n$, $\|z\|_0$ counts the number of nonzero elements of $z$, and $Dx := (x_2 - x_1, x_3 - x_2, \ldots, x_n - x_{n-1})$ is the discrete derivative of the original signal $x$.
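To make the objective in (2) concrete, here is a minimal sketch that evaluates it under the reconstruction above; the function name tv_l0_objective is illustrative only and not part of the paper.

```python
import numpy as np

def tv_l0_objective(x, y, lam):
    """Evaluate (2): 0.5*||y - x||_2^2 + lam*||Dx||_0,
    where Dx is the vector of first-order differences of x."""
    dx = np.diff(x)                         # Dx = (x_2 - x_1, ..., x_n - x_{n-1})
    fidelity = 0.5 * np.sum((y - x) ** 2)
    sparsity = lam * np.count_nonzero(dx)   # ||Dx||_0
    return fidelity + sparsity

# A piecewise constant signal has few nonzero differences:
x = np.array([1.0, 1.0, 1.0, 3.0, 3.0, 0.0])
print(np.count_nonzero(np.diff(x)))         # 2 jumps
```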

Noise removal is the basis and prerequisite of subsequent applications. Among the regularizers used for this task, the total variation (TV) regularizer is of great importance since it can efficiently handle noisy signals with sparse derivatives (or gradients), for instance, piecewise constant (PWC) signals consisting of flat sections separated by a number of abrupt jumps. The 1D total variation minimization can also be extended to the related two-dimensional image restoration problem.

Example 2 (group lasso [17]). In this application, one needs to solve

$\min_{x} \; \frac{1}{2}\|y - Ax\|_2^2 + \lambda \sum_{i=1}^{n} \|x_i\|_2$, (3)

where $x^T = (x_1^T, x_2^T, \ldots, x_n^T)$, $x_i \in \mathbb{R}^{P_i}$ are the decision block variables, and $\sum_{i=1}^{n} P_i = N$, with $P_i$ the corresponding block size.
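A minimal sketch evaluating the group lasso objective, assuming the standard least-squares form written in (3) above; the function name and the block partition in the example are purely illustrative.

```python
import numpy as np

def group_lasso_objective(x, y, A, block_sizes, lam):
    """Evaluate 0.5*||y - A x||_2^2 + lam * sum_i ||x_i||_2, where x is
    partitioned into consecutive blocks of the given sizes (sum = N)."""
    fidelity = 0.5 * np.sum((y - A @ x) ** 2)
    penalty = 0.0
    start = 0
    for p in block_sizes:                       # x_i in R^{P_i}
        penalty += np.linalg.norm(x[start:start + p])
        start += p
    return fidelity + lam * penalty

# Example with N = 5 split into blocks of sizes (2, 3):
A = np.eye(5)
x = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
print(group_lasso_objective(x, A @ x, A, [2, 3], lam=0.1))  # only the first block contributes
```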

In the past few years, research on structured sparse signal recovery has been very popular, and group lasso is a typical representative of these problems. It attracts much attention in face recognition, multiband signal processing, and other machine learning problems. The general case also applies to many other kinds of structured sparse recovery problems, like $\ell_{1,2}$-minimization [18] and block sparse recovery [19].

Example 3 (image deconvolution [20]). In this application, one needs to solve

$\min_{X} \; \frac{1}{2}\|A(X) - Y\|_F^2 + \lambda \, \mathrm{TV}(X)$, (4)

where $Y$ denotes the observed image, $A$ the blur operator, $X = \mathbb{R}^{m \times n}$, and $\|X\|_F = (\sum_i \sum_j X_{i,j}^2)^{1/2}$. The discrete total variation, denoted by TV in (4), is defined as follows. We define the $r \times r$ matrix

$D_r = \begin{pmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \\ & & & & 0 \end{pmatrix}$, (5)

and the discrete gradient operator $D : X \rightarrow X^2$ is defined as

$(DX)_{i,j} = ((dx)_{i,j}, (dy)_{i,j}), \qquad dx = D_m X, \qquad dy = X D_n^T$. (6)

Then we have $\mathrm{TV}(X) = \|DX\|_{1,2} = \sum_i \sum_j \sqrt{|(dx)_{i,j}|^2 + |(dy)_{i,j}|^2}$.
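A minimal sketch computing the discrete gradient and the isotropic TV just defined, assuming the forward-difference convention adopted for $D_r$ in (5)-(6); the function names are illustrative.

```python
import numpy as np

def discrete_gradient(X):
    """Forward differences along rows and columns, zero at the last row/column,
    matching the convention assumed for D_r in (5)-(6)."""
    dx = np.zeros_like(X)
    dy = np.zeros_like(X)
    dx[:-1, :] = X[1:, :] - X[:-1, :]    # dx = D_m X (differences down columns)
    dy[:, :-1] = X[:, 1:] - X[:, :-1]    # dy = X D_n^T (differences along rows)
    return dx, dy

def tv_isotropic(X):
    """TV(X) = sum_{i,j} sqrt(|dx_{i,j}|^2 + |dy_{i,j}|^2)."""
    dx, dy = discrete_gradient(X)
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```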

The concept of deconvolution finds many applications in signal and image processing [21-23]. In this paper, it is considered only as a specific instance of problem (1), although studying problem (4) in its own right, with further details, is equally important.

The main difficulty in solving (1) lies in the fact that $x$ is coupled with the operators $L_m$. To surmount this computational barrier, we introduce new auxiliary variables and split the problem into two sequences of subproblems, described in detail in the next section. Our problem is then an extension of the problem considered in [12]. This paper aims to give a generic proximal alternating minimization method for the class of nonconvex nonsmooth problems (1), applicable in many fields. The key idea is the introduction of auxiliary variables, which splits the original problem into two kinds of easier nonconvex nonsmooth subproblems. Recent studies often place a reasonable assumption on the regularizer $h_m$, namely, that the proximal map of $h_m$ is easy to compute. The convergence analysis can then be extended from the framework of [12]. In the last section, we show an application to nonconvex nonsmooth 1D total variation signal denoising.

2. Algorithm

In order to reduce the computational complexity caused by the composition of the nonconvex function $h_m$ and the operator $L_m$, we introduce a sequence of auxiliary variables $\{\theta_m\}_{m=1}^{M}$, with $\theta_m \in U_m$ for each $m \in \{1, \ldots, M\}$. The problem in (1) can then be represented equivalently as

$\min_{x, \theta} \; P(x, \theta)$, (7)

where $\{\theta_m\}_{m=1}^{M}$ is written compactly as $\theta$ and

$P(x, \theta) := f(x) + \sum_{m=1}^{M} h_m(\theta_m) + \sum_{m=1}^{M} \frac{\tau_m}{2} \|\theta_m - L_m x\|_2^2$. (8)

The last term $\sum_{m=1}^{M} (\tau_m/2)\|\theta_m - L_m x\|_2^2$ enforces a high similarity between $\theta_m$ and $L_m x$ ($m = 1, \ldots, M$), and this quadratic minimization can be solved easily. Hence, the original complex composition is split into two simpler objectives.

This form can now be solved by the following alternating proximal minimization method [12]: for each $m \in \{1, \ldots, M\}$,

$x^{k+1} \in \arg\min_{u \in X} \; f(u) + \sum_{m=1}^{M} \frac{\tau_m}{2}\|\theta_m^k - L_m u\|_2^2 + \frac{1}{2\zeta_k}\|u - x^k\|_2^2$; (9a)

$\theta_m^{k+1} \in \arg\min_{v_m \in U_m} \; h_m(v_m) + \frac{\tau_m}{2}\|v_m - L_m x^{k+1}\|_2^2 + \frac{1}{2\mu_k}\|v_m - \theta_m^k\|_2^2$. (9b)

We then make the following assumption (H) on (9a) and (9b):

[mathematical expression not reproducible]

for some positive constants $0 < r_- < r_+$, the stepsize sequences satisfy $r_- < \zeta_k, \mu_k < r_+$ for all $k \geq 0$.

This assumption is weak and can easily be met. The three examples given in the first section all satisfy the first two conditions, and the third condition holds once proper parameters are set in practical tests. Besides, the proximal terms $(1/2\zeta_k)\|u - x^k\|_2^2$ and $(1/2\mu_k)\|v_m - \theta_m^k\|_2^2$ in (9a) and (9b) are used to ensure the decrease condition on the objective function. If we omit them, the algorithm still performs well in practice, but the general convergence theory no longer applies directly; in that case the scheme reduces to an alternating projection minimization method.
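For concreteness, here is a minimal sketch of the alternating scheme (9a)-(9b). The two subproblem solvers are problem-specific and are passed in as user-supplied functions; all names, signatures, and default values below are illustrative rather than part of the paper.

```python
import numpy as np

def proximal_alternating_minimization(x0, thetas0, solve_x_step, solve_theta_step,
                                      zeta=1e3, mu=1e3, max_iter=500, tol=1e-8):
    """Generic sketch of (9a)-(9b).

    solve_x_step(x_k, thetas_k, zeta)         -> x_{k+1}        (subproblem (9a))
    solve_theta_step(m, x_kp1, theta_m_k, mu) -> theta_m^{k+1}  (subproblem (9b))
    """
    x = x0.copy()
    thetas = [t.copy() for t in thetas0]
    for _ in range(max_iter):
        x_new = solve_x_step(x, thetas, zeta)                      # (9a)
        thetas_new = [solve_theta_step(m, x_new, thetas[m], mu)    # (9b)
                      for m in range(len(thetas))]
        # Stop when successive iterates stop moving (cf. Corollary 5(ii)).
        move = np.linalg.norm(x_new - x) + sum(
            np.linalg.norm(tn - t) for tn, t in zip(thetas_new, thetas))
        x, thetas = x_new, thetas_new
        if move < tol:
            break
    return x, thetas
```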

3. Convergence Results

In fact, the problem is now a proximal alternating minimization case whose global convergence has been analyzed in detail in [12]. That paper concentrates on the theoretical analysis of the problem $L(x, y) = f(x) + Q(x, y) + g(y)$, solved by the following scheme:

$x^{k+1} \in \arg\min_{u} \; f(u) + Q(u, y^k) + \frac{1}{2\lambda_k}\|u - x^k\|_2^2, \qquad y^{k+1} \in \arg\min_{v} \; g(v) + Q(x^{k+1}, v) + \frac{1}{2\mu_k}\|v - y^k\|_2^2$. (10)

If $L$ has the Kurdyka-Lojasiewicz (K-L) property, then each bounded sequence generated by the above algorithm converges to a critical point of $L$. Even the convergence rate of the algorithm can be estimated; it depends on the geometrical properties of $L$ around its critical points. It is remarkable that the K-L assumption can be verified for many common functions.

The difference between the algorithm of [12] and ours is that our minimization is not over two variables $(x, y)$ but over $x$ and a sequence of variables $\{\theta_m\}_{m=1}^{M}$. In this section, our task is to state the analogous convergence results for algorithms (9a) and (9b).

3.1. Preliminary

Definition 4 (subdifferentials [24, 25]). Let $g : \mathbb{R}^N \rightarrow (-\infty, +\infty]$ be a proper lower semicontinuous function.

(1) For a given $x \in \operatorname{dom}(g)$, the Fréchet subdifferential of $g$ at $x$, written $\hat{\partial} g(x)$, is the set of all vectors $u \in \mathbb{R}^N$ that satisfy

$\liminf_{y \to x,\, y \neq x} \frac{g(y) - g(x) - \langle u, y - x \rangle}{\|y - x\|} \geq 0$. (11)

When $x \notin \operatorname{dom}(g)$, we set $\hat{\partial} g(x) = \emptyset$.

(2) The "limiting" subdifferential, or simply the subdifferential, of J at x [member of] [R.sup.N], written as [partial derivative]g(x), is defined through the following closure process:

$\partial g(x) := \{u \in \mathbb{R}^N : \exists\, x^k \to x,\ g(x^k) \to g(x),\ u^k \in \hat{\partial} g(x^k),\ u^k \to u\}$. (12)

A necessary condition for $x \in \mathbb{R}^N$ to be a minimizer of $g$ is

$0 \in \partial g(x)$. (13)

A point satisfying (13) is called a limiting-critical point or simply a critical point. The set of critical points of $g$ is denoted by $\operatorname{crit} g$.

Given $(x^0, \theta^0) \in X \times \prod_{m=1}^{M} U_m$, recall that the sequence generated by (9a) and (9b) is of the form $(x^k, \theta^k) \rightarrow (x^{k+1}, \theta^k) \rightarrow (x^{k+1}, \theta^{k+1})$. According to the basic properties of $P$, we can deduce a few important points.

Corollary 5. Assume that the sequences $(x^k, \theta^k)$ are generated by (9a) and (9b) under assumption (H); then they are well defined. Moreover, the following hold:

(i) The following estimate holds:

$P(x^{k+1}, \theta^{k+1}) + \frac{1}{2\zeta_k}\|x^{k+1} - x^k\|_2^2 + \frac{1}{2\mu_k}\sum_{m=1}^{M}\|\theta_m^{k+1} - \theta_m^k\|_2^2 \leq P(x^k, \theta^k)$; (14)

hence $P(x^k, \theta^k)$ does not increase.

(ii)

$\sum_{k=0}^{\infty} \left( \|x^{k+1} - x^k\|_2^2 + \sum_{m=1}^{M} \|\theta_m^{k+1} - \theta_m^k\|_2^2 \right) < +\infty$; (15)

hence $\lim_{k \to \infty} \|x^k - x^{k-1}\|_2 + \sum_{m=1}^{M} \|\theta_m^k - \theta_m^{k-1}\|_2 = 0$.

(iii) For k [greater than or equal to] 1, we have

[mathematical expression not reproducible], (16)

where the $0$ above is a multivector of the same dimension as $\prod_{m=1}^{M} \theta_m$.

Besides, for every bounded subsequence, [mathematical expression not reproducible].

Corollary 6. Assume that (H) holds. Let $(x^k, \theta^k)$ be a sequence complying with (9a) and (9b), and let $\omega(x^0, \theta^0)$ denote the (possibly empty) set of its limit points. Then

(i) if $(x^k, \theta^k)$ is bounded, then $\omega(x^0, \theta^0)$ is a nonempty compact connected set and $\operatorname{dist}((x^k, \theta^k), \omega(x^0, \theta^0)) \rightarrow 0$ as $k \rightarrow +\infty$;

(ii) $\omega(x^0, \theta^0) \subset \operatorname{crit} P$;

(iii) $P$ is finite and constant on $\omega(x^0, \theta^0)$, equal to $\inf_{k \in \mathbb{N}} P(x^k, \theta^k) = \lim_{k \rightarrow +\infty} P(x^k, \theta^k)$.

The above corollaries give convergence results for the sequences generated by (7), (9a), and (9b). Point (ii) guarantees that all limit points produced by (9a) and (9b) are limiting-critical points, and (iii) shows that the objective $P$ converges to a finite constant value.

3.2. Convergence to a Critical Point. This part gives a more precise convergence analysis of the proximal algorithms (9a) and (9b).

Let $f : X \rightarrow \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous function. For $-\infty < \eta_1 < \eta_2 \leq +\infty$, set $[\eta_1 < f < \eta_2] = \{x \in X : \eta_1 < f(x) < \eta_2\}$. We now recall an important definition from optimization theory.

Definition 7 (K-L property [12]). The function $f$ is said to have the K-L property at $\bar{x} \in \operatorname{dom} \partial f$ if there exist $\eta \in (0, +\infty]$, a neighborhood $U$ of $\bar{x}$, and a continuous concave function $\varphi : [0, \eta) \rightarrow \mathbb{R}^{+}$ such that

(i) $\varphi(0) = 0$;

(ii) $\varphi$ is $C^1$ on $(0, \eta)$;

(iii) for all $s \in (0, \eta)$, $\varphi'(s) > 0$;

(iv) for all $x \in U \cap [f(\bar{x}) < f < f(\bar{x}) + \eta]$, the Kurdyka-Lojasiewicz inequality holds:

$\varphi'(f(x) - f(\bar{x})) \, \operatorname{dist}(0, \partial f(x)) \geq 1$. (17)

To verify that a function has the K-L property, one has to exhibit $\eta$, $U$, and $\varphi$. Many convex functions, for instance, satisfy the above property with $U = \mathbb{R}^n$ and $\eta = +\infty$. Besides, a number of nonconvex examples are given in [12].
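As a simple worked check (not taken from the source), the convex function $f(x) = x^2$ satisfies Definition 7 at $\bar{x} = 0$ with $U = \mathbb{R}$, $\eta = +\infty$, and $\varphi(s) = \sqrt{s}$:

```latex
% K-L inequality (17) checked for f(x) = x^2 at \bar{x} = 0,
% with U = \mathbb{R}, \eta = +\infty, and \varphi(s) = \sqrt{s}.
\begin{align*}
  f(x) - f(\bar{x}) &= x^2, &
  \operatorname{dist}\bigl(0, \partial f(x)\bigr) &= 2|x|, &
  \varphi'(s) &= \frac{1}{2\sqrt{s}}, \\
  \varphi'\bigl(f(x) - f(\bar{x})\bigr)\,
  \operatorname{dist}\bigl(0, \partial f(x)\bigr)
  &= \frac{2|x|}{2\sqrt{x^2}} = 1 \ge 1
  && \text{for all } x \neq \bar{x}.
\end{align*}
```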

Below, we give the convergence analysis toward a critical point.

Theorem 8 (convergence). Assume that P satisfies (H) and has the Kurdyka-Lojasiewicz property at each point of the domain of f. Then

(i) either $\|(x^k, \theta^k)\|_2$ tends to infinity,

(ii) or $(x^k - x^{k-1}, \theta^k - \theta^{k-1})$ is $\ell^1$, that is,

$\sum_{k=1}^{\infty} \left( \|x^k - x^{k-1}\|_2 + \|\theta^k - \theta^{k-1}\|_2 \right) < +\infty$, (18)

and, as a consequence, $(x^k, \theta^k)$ converges to a critical point of $P$.

The proof of the above theorem follows the same line of analysis as in [12]; therefore we only state the convergence results and omit the proofs.

4. Application to 1D TV Denoising

In practical scientific and engineering contexts, noise removal is the basis and prerequisite of subsequent applications and has received extensive attention. A range of computational algorithms have been proposed to solve the denoising problem [26-28]. Among these, the total variation (TV) regularizer is of great importance since it can efficiently deal with noisy signals that have sparse derivatives (or gradients). For instance, a noisy piecewise constant (PWC) signal [29], whose derivative is sparse relative to the signal dimension, can be denoised by the powerful TV denoising method.

In the 1D TV denoising problem [10], one needs to solve model (2). TV denoising minimizes a composite of two parts: the first keeps the error between the observed data and the original signal as small as possible, and the second promotes sparsity of the gradients. Usually, the denoising model combines a quadratic data fidelity term with either a convex regularization term or a differentiable regularization term, for example, the convex but nonsmooth problem [30]

$\min_{x} \; \frac{1}{2}\|y - x\|_2^2 + \lambda \|Dx\|_1$ (19)

or the differentiable but nonconvex problem [7]

$\min_{x} \; \frac{1}{2}\|y - x\|_2^2 + \lambda \sum_{i=1}^{n-1} \phi((Dx)_i)$, (20)

where $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ and $\phi$ is a differentiable nonconvex penalty, such as the logarithmic or arctangent penalty. Exact solutions of the above two types can be obtained by very fast direct algorithms [7, 30]. In fact, the convex $\ell_1$-norm in (19) is a replacement of the nonconvex $\ell_0$-norm, since convex optimization techniques have been deeply studied. The latter type can be solved by a majorization-minimization (MM) iteration, in which the total objective function (including the data fidelity and regularization terms) must remain strictly convex.

In this test, we apply our algorithms (9a) and (9b) to this example. The auxiliary variable $\theta$ is introduced to reduce the complexity of the composition $\|Dx\|_0$. Then (2) is represented as

$\min_{x, \theta} \; \frac{1}{2}\|y - x\|_2^2 + \lambda\|\theta\|_0 + \frac{\tau}{2}\|\theta - Dx\|_2^2$. (21)

Apparently this problem satisfies the convergence conditions of [12]. The concrete steps of algorithms (9a) and (9b) are

$x^{k+1} \in \arg\min_{u} \; \frac{1}{2}\|y - u\|_2^2 + \frac{\tau}{2}\|\theta^k - Du\|_2^2 + \frac{1}{2\zeta_k}\|u - x^k\|_2^2$; (22a)

$\theta^{k+1} \in \arg\min_{v} \; \lambda\|v\|_0 + \frac{\tau}{2}\|v - Dx^{k+1}\|_2^2 + \frac{1}{2\mu_k}\|v - \theta^k\|_2^2$. (22b)

In practice, $\zeta_k$ and $\mu_k$ can be set to very large constants, so that the last proximal terms $(1/2\zeta_k)\|u - x^k\|_2^2$ and $(1/2\mu_k)\|v - \theta^k\|_2^2$ in each step become negligible. The computation of (22a) and (22b) is then as follows.

Computation of (22a). The first step (22a) minimizes a quadratic function and can be computed by many techniques, such as gradient descent.
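Since the objective in (22a) is quadratic, one simple option is to set its gradient to zero and solve the resulting linear system. The following sketch assumes the reconstructed form of (22a) above; the helper names are illustrative.

```python
import numpy as np

def first_difference_matrix(n):
    """(n-1) x n matrix D with (Dx)_i = x_{i+1} - x_i."""
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D

def solve_x_subproblem(y, theta, x_prev, tau, zeta, D):
    """Exact minimizer of (22a): setting the gradient of the quadratic
    objective to zero gives the linear system
        (I + tau*D^T D + (1/zeta)*I) u = y + tau*D^T theta + x_prev/zeta."""
    n = y.size
    A = (1.0 + 1.0 / zeta) * np.eye(n) + tau * (D.T @ D)
    b = y + tau * (D.T @ theta) + x_prev / zeta
    return np.linalg.solve(A, b)
```

Because $D^T D$ is tridiagonal, a banded or sparse solver would be more efficient for long signals; the dense solve above is kept only for clarity.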

Computation of (22b). Apparently, the second step (22b) can be written in terms of the proximal operator of the function $g(\theta) = \lambda\|\theta\|_0$ [12]. Consider the proximal operator $\operatorname{prox}_{\gamma\|\cdot\|_0}(u) = \arg\min_{v} \gamma\|v\|_0 + \frac{1}{2}\|v - u\|_2^2$. When $n = 1$, the $\ell_0$ norm reduces to $|x|_0$ (equal to $0$ if $x = 0$ and $1$ otherwise), and one easily establishes that

$\operatorname{prox}_{\gamma|\cdot|_0}(u) = \begin{cases} 0, & |u| \leq \sqrt{2\gamma}, \\ u, & |u| > \sqrt{2\gamma}. \end{cases}$ (23)

When $n$ is arbitrary, trivial algebraic manipulations give, with $u = (u_1, u_2, \ldots, u_n) \in \mathbb{R}^n$,

$\operatorname{prox}_{\gamma\|\cdot\|_0}(u) = \left( \operatorname{prox}_{\gamma|\cdot|_0}(u_1), \operatorname{prox}_{\gamma|\cdot|_0}(u_2), \ldots, \operatorname{prox}_{\gamma|\cdot|_0}(u_n) \right)$, (24)

and thus $\operatorname{prox}_{\gamma\|\cdot\|_0}$ is a perfectly known object.
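Putting the two steps together, here is a minimal end-to-end sketch of the resulting TV-$\ell_0$ denoiser. It uses the simplified updates discussed above (the extra proximal terms are dropped, corresponding to large $\zeta_k$ and $\mu_k$); the function names and default parameter values are illustrative only and do not come from the paper.

```python
import numpy as np

def prox_l0(u, gamma):
    """Proximal operator of gamma*||.||_0, applied componentwise as in (23)-(24):
    each entry u_i is kept if |u_i| > sqrt(2*gamma) and set to 0 otherwise."""
    v = u.copy()
    v[np.abs(u) <= np.sqrt(2.0 * gamma)] = 0.0
    return v

def tv_l0_denoise(y, lam, tau=1.0, max_iter=500, tol=1e-8):
    """Sketch of the alternating scheme (22a)-(22b) for problem (21).
    x-step:     (I + tau*D^T D) x = y + tau*D^T theta     (quadratic subproblem)
    theta-step: theta = prox_{(lam/tau)||.||_0}(D x)      (hard thresholding)"""
    n = y.size
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
    A = np.eye(n) + tau * (D.T @ D)
    x, theta = y.copy(), D @ y
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, y + tau * (D.T @ theta))   # simplified (22a)
        theta_new = prox_l0(D @ x_new, lam / tau)             # simplified (22b)
        move = np.linalg.norm(x_new - x) + np.linalg.norm(theta_new - theta)
        x, theta = x_new, theta_new
        if move < tol:
            break
    return x
```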

Total variation denoising examples with three convex and nonconvex regularization instances (the two others are the convex algorithm of [30] and the nonconvex-but-smooth algorithm of [7]) are shown in Figure 1. The original piecewise constant signal $x^0 \in \mathbb{R}^n$ with length $n = 256$ is generated with MakeSignal from [7]. The noisy data is obtained by adding white Gaussian noise (AWGN) with $\sigma = 0.5$. For both the convex and the nonconvex cases, we set $\lambda = \sqrt{n}\,\sigma/4$, consistent with the range suggested in [30] for standard (convex) TV denoising, and the nonconvexity parameter is set to its maximal value, $a = 1/(4\lambda)$, the default in [7]. These settings lead to the best denoising results in the respective papers. All other settings are consistent with [7]. The maximum number of iterations is 500 in all cases, and all codes are tested on the same computer.

According to the comparison between our algorithm for the TV-$\ell_0$ model and the algorithms proposed in [7, 30], our algorithm gives a better result with a smaller Root-Mean-Square Error (RMSE), where $\mathrm{RMSE}(x) = \sqrt{(1/n)\sum_{i=1}^{n}(x_i - x_i^0)^2}$. Referring to Figure 1, the best RMSE results for 1D TV denoising with the convex $\ell_1$ penalty [30] and the smooth log penalty [7] are 0.2720 and 0.2611, respectively, while ours is 0.1709, much better than the convex and smooth cases.
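For reference, a minimal sketch of the RMSE measure defined above (the function name is illustrative):

```python
import numpy as np

def rmse(x, x0):
    """RMSE(x) = sqrt((1/n) * sum_i (x_i - x0_i)^2), as defined above."""
    return np.sqrt(np.mean((np.asarray(x) - np.asarray(x0)) ** 2))
```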

5. Conclusion

Nonconvex nonsmooth algorithms find interesting applications in many fields. In this paper, we give a general proximal alternating minimization method for a class of nonconvex nonsmooth problems with complex compositions. It has a concise form, sound theoretical guarantees, and promising numerical results. For the specific 1D standard TV denoising problem, the improvement over the existing algorithms [7, 30] is substantial. Besides, our algorithm also applies to other nonconvex nonsmooth problems, such as block sparse recovery, group lasso, and image deconvolution, among many others.

http://dx.doi.org/10.1155/2016/5053434

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China under Grant no. 61571008.

References

[1] P. L. Combettes and J.-C. Pesquet, "Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators," Set-Valued and Variational Analysis, vol. 20, no. 2, pp. 307-330, 2012.

[2] L. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms," Journal of Optimization Theory and Applications, vol. 158, no. 2, pp. 460-479, 2013.

[3] J. E. Esser, Primal dual algorithms for convex models and applications to image restoration, registration and nonlocal inpainting [Ph.D. thesis], University of California, Los Angeles, Calif, USA, 2010.

[4] Y. B. Zhao and D. Li, "Reweighted $\ell_1$-minimization for sparse solutions to underdetermined linear systems," SIAM Journal on Optimization, vol. 22, no. 3, pp. 1065-1088, 2012.

[5] J. Wright and Y. Ma, "Dense error correction via $\ell_1$-minimization," IEEE Transactions on Information Theory, vol. 56, no. 7, pp. 3540-3560, 2010.

[6] T. Sun, H. Zhang, and L. Cheng, "Subgradient projection for sparse signal recovery with sparse noise," Electronics Letters, vol. 50, no. 17, pp. 1200-1202, 2014.

[7] I. W. Selesnick, A. Parekh, and I. Bayram, "Convex 1-D total variation denoising with non-convex regularization," IEEE Signal Processing Letters, vol. 22, no. 2, pp. 141-144, 2015.

[8] L. He and S. Schaefer, "Mesh denoising via $L_0$ minimization," ACM Transactions on Graphics, vol. 32, no. 4, article 64, 2013.

[9] L. Xu, C. Lu, Y. Xu, and J. Jia, "Image smoothing via $L_0$ gradient minimization," ACM Transactions on Graphics, vol. 30, no. 6, article 174, 2011.

[10] C. Brandt, H.-P. Seidel, and K. Hildebrandt, "Optimal spline approximation via $\ell_0$-minimization," Computer Graphics Forum, vol. 34, no. 2, pp. 617-626, 2015.

[11] H. Zhang, L. Cheng, and W. Yin, "A dual algorithm for a class of augmented convex models," https://arxiv.org/abs/1308.6337.

[12] H. Attouch, J. Bolte, P. Redont, and A. Soubeyran, "Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Lojasiewicz inequality," Mathematics of Operations Research, vol. 35, no. 2, pp. 438-457, 2010.

[13] H. Attouch, J. Bolte, and B. F. Svaiter, "Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods," Mathematical Programming, vol. 137, no. 1-2, pp. 91-129, 2013.

[14] J. Bolte, S. Sabach, and M. Teboulle, "Proximal alternating linearized minimization for nonconvex and nonsmooth problems," Mathematical Programming, vol. 146, no. 1-2, pp. 459-494, 2014.

[15] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society. Series B (Methodological), vol. 58, no. 1, pp. 267-288, 1996.

[16] G. Chen and M. Teboulle, "A proximal-based decomposition method for convex minimization problems," Mathematical Programming, vol. 64, no. 1-3, pp. 81-101, 1994.

[17] L. Meier, S. van de Geer, and P. Bühlmann, "The group Lasso for logistic regression," Journal of the Royal Statistical Society. Series B. Statistical Methodology, vol. 70, no. 1, pp. 53-71, 2008.

[18] X. Liao, H. Li, and L. Carin, "Generalized alternating projection for weighted-$\ell_{2,1}$ minimization with applications to model-based compressive sensing," SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 797-823, 2014.

[19] E. Elhamifar and R. Vidal, "Block-sparse recovery via convex optimization," IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4094-4107, 2012.

[20] L. Condat, "A generic proximal algorithm for convex optimization--application to total variation minimization," IEEE Signal Processing Letters, vol. 21, no. 8, pp. 985-989, 2014.

[21] T. F. Chan and C.-K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370-375, 1998.

[22] L. He, A. Marquina, and S. J. Osher, "Blind deconvolution using TV regularization and Bregman iteration," International Journal of Imaging Systems and Technology, vol. 15, no. 1, pp. 74-83, 2005.

[23] N. Dey, L. Blanc-Feraud, C. Zimmer et al., "Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution," Microscopy Research and Technique, vol. 69, no. 4, pp. 260-266, 2006.

[24] B. S. Mordukhovich, Variational Analysis and Generalized Differentiation I: Basic Theory, Springer Science & Business Media, Berlin, Germany, 2006.

[25] R. T. Rockafellar and R. J. B. Wets, Variational Analysis, Springer Science & Business Media, Berlin, Germany, 2009.

[26] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1-4, pp. 259-268, 1992.

[27] M. Lysaker, A. Lundervold, and X.-C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1579-1590, 2003.

[28] M. Lysaker, S. Osher, and X.-C. Tai, "Noise removal using smoothed normals and surface fitting," IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1345-1357, 2004.

[29] M. A. Little and N. S. Jones, "Generalized methods and solvers for noise removal from piecewise constant signals. I. Background theory," Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 467, no. 2135, pp. 3088-3114, 2011.

[30] L. Dumbgen and A. Kovac, "Extensions of smoothing via taut strings," Electronic Journal of Statistics, vol. 3, pp. 41-75, 2009.

Xiaoya Zhang, (1) Tao Sun, (1) and Lizhi Cheng (1,2)

(1) College of Science, National University of Defense Technology, Changsha, Hunan 410073, China

(2) The State Key Laboratory for High Performance Computation, National University of Defense Technology, Changsha, Hunan 410073, China

Correspondence should be addressed to Xiaoya Zhang; suezhang246@163.com

Received 15 June 2016; Accepted 10 October 2016

Academic Editor: Maria L. Gandarias

Caption: Figure 1: Total variation denoising with the nonconvex nonsmooth $\ell_0$ penalty (RMSE = 0.1709), compared with the convex $\ell_1$ penalty [30] (RMSE = 0.2720) and the smooth log penalty [7] (RMSE = 0.2611).