
Vector Extrapolation Based Landweber Method for Discrete Ill-Posed Problems.

1. Introduction

Ill-posed inverse problems arise in many important applications, including astronomy, medical tomography, geophysics, sound source detection, and image deblurring (see, e.g., [1-6] and the references therein). In this paper, we consider linear discrete ill-posed problems of the form

$b = Ax_{\mathrm{true}} + \eta$, (1)

where $b \in \mathbb{R}^m$ is an observed vector, $A \in \mathbb{R}^{m \times m}$ is a severely ill-conditioned matrix, and $\eta \in \mathbb{R}^m$ is unknown additive noise. Popular methods for computing an approximation of $x_{\mathrm{true}}$ are iterative regularization methods, which are well suited to large-scale problems [3, 7]. Owing to their fast convergence, Krylov subspace methods (e.g., CGLS [8, pp. 288-293], LSQR [9], and LSMR [10]) are usually considered superior to the Landweber method (see Brianzi et al. [11] for an excellent overview of the regularizing properties of Krylov subspace methods). However, the semiconvergence of Krylov subspace methods is much more pronounced than that of the Landweber method, so without a good stopping criterion one runs a high risk of computing a poor solution.
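To make the setting concrete, a small synthetic instance of (1) can be generated as follows. This is our own illustration, not taken from the paper: the 1-D Gaussian blurring operator, the smooth signal, and the 1% noise level are all illustrative choices.

```python
import numpy as np

# Small synthetic instance of (1): a 1-D Gaussian blurring matrix A
# (severely ill-conditioned), a smooth true signal x_true, and 1% additive
# white noise eta.  All concrete choices here are illustrative.
m = 64
t = np.arange(m)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)              # normalize the rows of the blur

x_true = np.sin(np.pi * t / m) ** 2            # smooth signal to recover
rng = np.random.default_rng(0)
b_exact = A @ x_true
eta = rng.standard_normal(m)
eta *= 0.01 * np.linalg.norm(b_exact) / np.linalg.norm(eta)   # 1% noise level
b = b_exact + eta                              # observed right-hand side

# The large condition number is what makes naive inversion hopeless and
# regularization necessary.
print(f"cond(A) = {np.linalg.cond(A):.2e}")
```

Because $\mathrm{cond}(A)$ is huge, $A^{-1}b$ amplifies the noise $\eta$ enormously, which is exactly why the regularization methods discussed below are needed.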

Owing to its simplicity and stability, the Landweber method remains competitive with Krylov subspace methods in some real applications [12, 13]; its main practical disadvantage is slow convergence. Recently, the adaptive Landweber method, which updates its parameter at each iteration, was proposed to speed up convergence [14]. Without a good preset parameter, however, the adaptive Landweber method performs almost the same as the plain Landweber method. Vector extrapolation methods are often effective at accelerating the convergence of vector sequences. This motivates us to combine vector extrapolation methods with the Landweber method so as to accelerate its convergence.

The rest of this paper is organized as follows. In Section 2, a brief review of popular vector extrapolation methods is provided. To accelerate the convergence of Landweber method, the vector extrapolation based Landweber method is established in Section 3. Moreover, a restarted version of the vector extrapolation based Landweber method is proposed for practical considerations. Numerical experiments from image deblurring are reported to show the performance of the vector extrapolation based Landweber method in Section 4. Finally, the paper closes with conclusions in Section 5.

2. Vector Extrapolation Methods

In this section, we briefly review popular vector extrapolation methods; surveys and practical applications can be found in [15-19]. Most popular vector extrapolation methods fall into two categories: polynomial-type methods and ε-algorithms [16]. In general, polynomial-type methods are more economical than ε-algorithms for solving large-scale systems of algebraic equations, in terms of both computation and storage. In the current study, we use the minimal polynomial extrapolation (MPE) method, an efficient polynomial-type method.

Let $\{x_j\}$ be the vector sequence generated via

$x_{j+1} = Tx_j + b, \quad j = 0, 1, \ldots,$ (2)

where $x_0 \in \mathbb{C}^n$ is an initial vector. The MPE approximation $s_{n,k}$ to $s$, the desired limit or antilimit, is defined as

$s_{n,k} = \sum_{i=0}^{k} \gamma_i x_{n+i}$, (3)

with $\sum_{i=0}^{k} \gamma_i = 1$. Define

$u_i = \Delta x_i = x_{i+1} - x_i$. (4)

Then the $\gamma_i$ are determined as follows:

(i) Compute the least-squares solution of the overdetermined linear system

$U_{k-1}^{(n)} c = -u_{n+k}$, (5)

where $U_{k-1}^{(n)} = [\,u_n \mid u_{n+1} \mid \cdots \mid u_{n+k-1}\,]$ and $c = (c_0, c_1, \ldots, c_{k-1})^T$.

(ii) Set $c_k = 1$, and compute the $\gamma_i$ by

$\gamma_i = c_i \big/ \sum_{j=0}^{k} c_j, \quad 0 \le i \le k$. (6)

Based on QR decomposition and LU decomposition, two fast and numerically stable implementations of the MPE method were presented in [20,21].
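The two steps above amount to one least-squares solve followed by a normalization. A compact NumPy sketch (our own transcription of steps (i)-(ii), not the optimized QR/LU implementations of [20, 21]) is:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation, transcribing steps (i)-(ii).
    X holds the k+2 consecutive iterates x_n, ..., x_{n+k+1} as columns,
    so that all differences u_n, ..., u_{n+k} in (4)-(5) are available."""
    U = np.diff(X, axis=1)                      # u_i = x_{i+1} - x_i
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)   # solve (5)
    c = np.append(c, 1.0)                       # step (ii): c_k = 1
    gamma = c / c.sum()                         # (6); the gamma_i sum to 1
    return X[:, :-1] @ gamma                    # s_{n,k} = sum_i gamma_i x_{n+i}

# Sanity check: for x_{j+1} = T x_j + b with I - T nonsingular, MPE of full
# order k = dim recovers the fixed point s = (I - T)^{-1} b up to roundoff.
rng = np.random.default_rng(1)
dim = 5
T = 0.5 * rng.standard_normal((dim, dim)) / np.sqrt(dim)
b = rng.standard_normal(dim)
s_exact = np.linalg.solve(np.eye(dim) - T, b)

X = np.zeros((dim, dim + 2))                    # iterates x_0, ..., x_{dim+1}
for j in range(dim + 1):
    X[:, j + 1] = T @ X[:, j] + b
print("MPE error:", np.linalg.norm(mpe(X) - s_exact))
```

The printed error is tiny because full-order MPE is exact (up to roundoff) whenever $k$ reaches the degree of the minimal polynomial of $T$ with respect to the initial error; in practice one uses $k \ll n$, as discussed below.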

The following theorem in [22] illustrates the accelerating effect of the MPE method.

Theorem 1. Let the vector sequence $\{x_n\}$ be such that

$x_n = s + \sum_{i=1}^{p} v_i \lambda_i^n$, (7)

where $v_1, v_2, \ldots, v_p$ are linearly independent vectors and the $\lambda_i$ are distinct nonzero scalars satisfying

$\lambda_i \ne 1 \quad \forall i$ (8)

and are ordered such that

$|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_p|$. (9)

Assume that, for some integer $k < p$,

$|\lambda_k| > |\lambda_{k+1}|$. (10)

Then $s_{n,k}$ exists for all sufficiently large $n$, and

$s_{n,k} - s = O(\lambda_{k+1}^n)$ as $n \to \infty$. (11)

Remark 2. When the matrix $T$ is diagonalizable and $I - T$ is nonsingular, vector sequences $\{x_n\}$ generated by (2) satisfy the conditions of Theorem 1 [17]. In this case, all eigenvalues of $T$ differ from 1, the scalars $\lambda_1, \ldots, \lambda_p$ are some or all of the distinct nonzero eigenvalues of $T$, and the vectors $v_i$ are corresponding eigenvectors.

3. Vector Extrapolation Based Landweber Method

The Landweber method is one of the simplest iterative regularization methods for solving linear discrete ill-posed problems (1). It takes the form

$x_j = x_{j-1} + \omega A^T (b - A x_{j-1})$, (12)

where $x_0$ is an initial vector and $\omega$ is a real parameter satisfying $0 < \omega < 2\|A^T A\|_2^{-1}$. The filter factors for (12) are

$f_i^{(j)} = 1 - (1 - \omega\sigma_i^2)^j, \quad i = 1, \ldots, m$, (13)

where $\sigma_i$ denotes the $i$th singular value of $A$ and the iteration number $j$ plays the role of a regularization parameter [3].
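The equivalence between the recurrence (12) started from $x_0 = 0$ and the filtered SVD expansion with factors (13) can be verified numerically. The small Gaussian-blur test problem below is our own illustration:

```python
import numpy as np

# Check the Landweber iteration (12) against the filter-factor formula (13)
# via the SVD of A.  The Gaussian-blur test problem is illustrative only.
rng = np.random.default_rng(0)
m = 32
t = np.arange(m)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(np.pi * t / m)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

omega = 1.0 / np.linalg.norm(A.T @ A, 2)   # satisfies 0 < omega < 2/||A^T A||_2
x = np.zeros(m)                            # x_0 = 0
n_iter = 50
for _ in range(n_iter):
    x = x + omega * A.T @ (b - A @ x)      # one Landweber step (12)

# Starting from x_0 = 0, the j-th iterate equals the filtered SVD solution
#   x_j = sum_i f_i^(j) (u_i^T b / sigma_i) v_i.
U, sig, Vt = np.linalg.svd(A)
f = 1.0 - (1.0 - omega * sig**2) ** n_iter              # filter factors (13)
ratio = np.divide(f, sig, out=np.zeros_like(sig), where=sig > 0)
x_filt = Vt.T @ (ratio * (U.T @ b))
print("agreement:", np.linalg.norm(x - x_filt))         # small up to roundoff
```

The printed norm is small: the recurrence and the spectral-filter view are two descriptions of the same iterate, which is what makes $j$ act as a regularization parameter.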

Remark 3. At each iteration, the leading computational cost of the Landweber method is two matrix-vector multiplications. When boundary conditions are imposed, the matrix-vector multiplication can be implemented efficiently via the corresponding fast transforms [23]. In this paper we employ Neumann boundary conditions and the corresponding fast two-dimensional discrete cosine transform. The choice of boundary conditions may slightly affect the accuracy of image deblurring; for further discussion we refer the interested reader to [23].

Remark 4. When solving the regularized linear system $(A^T A + \lambda L^T L)x = A^T b$ by the Landweber method, the matrix $I - T$ is nonsingular provided that $N(A) \cap N(L) = \{0\}$. Moreover, if the point spread function (PSF) is symmetric, the matrices $A$, $L$, and $A^T A + \lambda L^T L$ can be diagonalized by the two-dimensional discrete cosine transform. Thus, from Remark 2, we conclude that the vector sequences $\{x_j\}$ generated by (12) satisfy the conditions of Theorem 1 when solving this regularized linear system.

Although its convergence behavior is stable, the Landweber method converges too slowly to be practical on its own. The adaptive Landweber method, which adaptively updates its parameter, has been proposed to speed up convergence [14], but numerical experiments showed that, without a good preset parameter, it performs almost the same as the plain Landweber method. An economical way to accelerate the convergence of vector sequences is to apply vector extrapolation methods. This motivates us to combine the Landweber method with vector extrapolation for practical use. The vector extrapolation based Landweber method for ill-posed problems (1) is proposed as follows.

Algorithm 5.

(1) Choose the integers $k$ and $p$, the discrepancy principle factor $\nu_{dp}$, the initial vector $x_0$, and the parameter $\omega$ satisfying $0 < \omega < 2\|A^T A\|_2^{-1}$.

(2) Generate $\{x_1, \ldots, x_{p-1}\}$ by (12); set $x_{n-1} = x_{p-1}$.

(3) Generate $\{x_n, \ldots, x_{n+k}\}$ by (12), and compute the MPE approximation $s_{n,k}$ by (3).

(4) If $\|A s_{n,k} - b\|_2 \le \nu_{dp}\|\eta\|_2$, stop.

Otherwise set $x_{i-1} = x_i$ for $i = n, \ldots, n + k$, and go to step (3).
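The steps above can be sketched in NumPy as follows. This is our own reading of Algorithm 5: the MPE approximation is recomputed over a sliding window of iterates, and we keep $k+2$ iterates so that the difference $u_{n+k}$ needed in (5) is available. The test problem at the bottom is illustrative.

```python
import numpy as np

def mpe_approx(X):
    # k-th order MPE from the k+2 consecutive iterates stored as columns of X
    U = np.diff(X, axis=1)                          # u_i = x_{i+1} - x_i
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)   # system (5)
    c = np.append(c, 1.0)                           # c_k = 1
    return X[:, :-1] @ (c / c.sum())                # s_{n,k}, weights sum to 1

def ve_landweber(A, b, eta_norm, omega, k=4, p=5, nu_dp=1.05, max_iter=2000):
    """Sketch of Algorithm 5: Landweber iterates are produced one at a time,
    a k-th order MPE approximation is formed over a sliding window, and the
    iteration stops by the discrepancy principle."""
    step = lambda v: v + omega * A.T @ (b - A @ v)  # one Landweber step (12)
    x = np.zeros(A.shape[1])
    for _ in range(p - 1):                          # step (2): warm-up
        x = step(x)
    X = np.empty((A.shape[1], k + 2))
    X[:, 0] = x
    for j in range(k + 1):                          # step (3): fill the window
        X[:, j + 1] = step(X[:, j])
    for _ in range(max_iter):
        s = mpe_approx(X)
        if np.linalg.norm(A @ s - b) <= nu_dp * eta_norm:   # step (4): stop
            break
        X[:, :-1] = X[:, 1:]                        # otherwise slide window
        X[:, -1] = step(X[:, -2])
    return s

# Demo on a small synthetic deblurring problem (our own test data).
rng = np.random.default_rng(0)
m = 32
t = np.arange(m)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(np.pi * t / m)
eta = rng.standard_normal(m)
eta *= 0.01 * np.linalg.norm(A @ x_true) / np.linalg.norm(eta)  # 1% noise
b = A @ x_true + eta
omega = 1.0 / np.linalg.norm(A.T @ A, 2)
s = ve_landweber(A, b, np.linalg.norm(eta), omega)
print("relative error:", np.linalg.norm(s - x_true) / np.linalg.norm(x_true))
```

The default values $k = 4$, $p = 5$, and $\nu_{dp} = 1.05$ are illustrative; the paper leaves these choices to the user.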

Remark 6. The discrepancy principle is a parameter-choice rule well suited to iterative regularization methods [7]. The stable convergence behavior of the vector extrapolation based Landweber method makes it easy to terminate with the simplified discrepancy principle $\|A s_{n,k} - b\|_2 \le \nu_{dp}\|\eta\|_2$. Recent developments in parameter-choice methods (i.e., stopping rules) for Landweber-type methods are discussed in [24].

Remark 7. For a $k$th-order MPE approximation, an $m \times k$ overdetermined linear system must be solved, with complexity $O(k^2 m)$ (see [8] for more details), and $k + 2$ vectors must be stored. Thus one has to take the computational and storage requirements into account when choosing the order $k$. We also observed that, in practice, it is useful to perform some iterations before applying vector extrapolation.

It is not computationally desirable to solve an $m \times k$ overdetermined linear system after every single Landweber iteration, as in Algorithm 5. For practical purposes, we can instead compute the $k$th-order MPE approximation once per $k + 1$ Landweber iterations and then restart the procedure using the MPE approximation as the initial vector. The restarted version of the vector extrapolation based Landweber method is proposed as follows.

Algorithm 8.

(1) Choose the integers $p$ and $k$, the discrepancy principle factor $\nu_{dp}$, the initial vector $x_0$, and the parameter $\omega$ satisfying $0 < \omega < 2\|A^T A\|_2^{-1}$.

(2) Generate $\{x_1, \ldots, x_{p-1}\}$ by (12); set $x_{n-1} = x_{p-1}$.

(3) Generate $\{x_n, \ldots, x_{n+k}\}$ by (12), and compute the MPE approximation $s_{n,k}$ by (3).

(4) If $\|A s_{n,k} - b\|_2 \le \nu_{dp}\|\eta\|_2$, stop.

Otherwise set $x_{n-1} = s_{n,k}$, and go to step (3).
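A sketch of the restarted cycle, again in our own NumPy transcription: each cycle runs $k+1$ Landweber steps, forms one MPE approximation, and restarts from it. As above we keep $k+2$ iterates per cycle so that $u_{n+k}$ in (5) is available; the test problem and parameter values are illustrative.

```python
import numpy as np

def mpe_approx(X):
    # k-th order MPE from the k+2 consecutive iterates stored as columns of X
    U = np.diff(X, axis=1)
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)   # system (5)
    c = np.append(c, 1.0)                           # c_k = 1
    return X[:, :-1] @ (c / c.sum())                # s_{n,k}

def restarted_ve_landweber(A, b, eta_norm, omega, k=4, p=5, nu_dp=1.05,
                           max_cycles=400):
    """Sketch of Algorithm 8: per cycle, k+1 Landweber steps (12), one MPE
    approximation, then a restart from s_{n,k} unless the discrepancy
    principle is satisfied."""
    step = lambda v: v + omega * A.T @ (b - A @ v)
    x = np.zeros(A.shape[1])
    for _ in range(p - 1):                          # step (2): warm-up
        x = step(x)
    for _ in range(max_cycles):
        X = np.empty((A.shape[1], k + 2))
        X[:, 0] = x
        for j in range(k + 1):                      # step (3)
            X[:, j + 1] = step(X[:, j])
        s = mpe_approx(X)
        if np.linalg.norm(A @ s - b) <= nu_dp * eta_norm:   # step (4)
            return s
        x = s                                       # restart from s_{n,k}
    return s

# Demo on a small synthetic deblurring problem (our own test data).
rng = np.random.default_rng(0)
m = 32
t = np.arange(m)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(np.pi * t / m)
eta = rng.standard_normal(m)
eta *= 0.01 * np.linalg.norm(A @ x_true) / np.linalg.norm(eta)  # 1% noise
b = A @ x_true + eta
omega = 1.0 / np.linalg.norm(A.T @ A, 2)
s = restarted_ve_landweber(A, b, np.linalg.norm(eta), omega)
print("relative error:", np.linalg.norm(s - x_true) / np.linalg.norm(x_true))
```

Compared with the sliding-window scheme of Algorithm 5, each least-squares solve (5) is now amortized over $k + 1$ Landweber steps, which is the computational saving that motivates the restart.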

4. Numerical Experiments

In this section, we illustrate the performance of the vector extrapolation based Landweber method on typical linear discrete ill-posed problems (1) arising from image deblurring. In all tests, 1% white noise is added to the blurred images. The relative error (ReErr) is used to measure the quality of the computed solutions; it is defined as

$\mathrm{ReErr} = \|x_{\mathrm{com}} - x_{\mathrm{true}}\|_2 \big/ \|x_{\mathrm{true}}\|_2$, (14)

where $x_{\mathrm{com}}$ and $x_{\mathrm{true}}$ are the computed solution and the true solution, respectively. All tests were carried out on a ThinkPad laptop with an Intel(R) Core(TM) i3 CPU (2.27 GHz) and 2 GB RAM, under the Windows 7 operating system. All experimental results were obtained with a MATLAB v7.10 (R2010a) implementation in double-precision floating-point arithmetic.

Example 9. The first example is a test problem from the RestoreTools package [25]. The original image, the point spread function (PSF), and the blurred and noisy image are illustrated in Figure 1. The convergence histories of CGLS, the Landweber method, and Algorithm 5 in Figure 2 show that Algorithm 5 converges faster than the Landweber method and also behaves more stably than CGLS. Moreover, the restored images by the Landweber method and Algorithm 5 in Figure 3 vividly illustrate that Algorithm 5 is superior to the Landweber method.

Example 10. The aim of the second example is to compare the adaptive Landweber method with Algorithm 5. Two original images, together with the corresponding blurred and noisy images produced by a Gaussian-blur PSF and a non-strongly symmetric PSF (see Figure 4), are shown in Figure 5. The convergence histories in Figure 6 clearly show that Algorithm 5 converges faster than both the Landweber method and the adaptive Landweber method, and that, without a good preset parameter $\omega$, the adaptive Landweber method performs almost the same as the Landweber method. The corresponding restored images by the Landweber method, the adaptive Landweber method, and Algorithm 5 are given in Figure 7. We can safely conclude that Algorithm 5 outperforms both the Landweber method and the adaptive Landweber method in terms of convergence speed and quality of recovery.

Example 11. In the third example, we compare the vector extrapolation based Landweber method with its restarted version. For the test problem of Example 9, the vector extrapolation based Landweber method reaches a relative error of 0.3747 with 100 iterations and 97 MPE approximations, while the restarted version reaches a relative error of 0.3539 with 100 iterations and only 31 MPE approximations. The corresponding restored image by the restarted version (100 Landweber iterations and 31 MPE approximations) is given in Figure 8. These experiments confirm the favorable performance of the restarted version, both visually and in terms of relative error and the number of MPE approximations.

5. Conclusions

In this paper, we have combined the vector extrapolation technique with the Landweber method, a classical iterative method for solving ill-posed problems (1), in order to accelerate its convergence. Both the properties and the practical implementation of the proposed method (including its restarted version) have been investigated. Numerical experiments show that the proposed methods are feasible, robust, and efficient.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors' Contributions

All authors contributed equally and significantly in writing this article. All authors wrote, read, and approved the final manuscript.

Acknowledgments

This research is supported by NSFC (61370147, 61402082, and 61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132 and ZYGX2016J138).

https://doi.org/10.1155/2017/1375716

References

[1] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, CRC Press, Boca Raton, FL, USA, 1998.

[2] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Mathematics and Its Applications, Springer Netherlands, Dordrecht, The Netherlands, 1996.

[3] P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM Monographs on Mathematical Modeling and Computation, SIAM, Philadelphia, PA, USA, 1998.

[4] J.-J. Mei, Y. Dong, T.-Z. Huang, and W. Yin, "Cauchy noise removal by nonconvex ADMM with convergence guarantees," Journal of Scientific Computing, pp. 1-24, 2017.

[5] X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, and R. J. Plemmons, "Deblurring and sparse unmixing for hyperspectral images," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 7, pp. 4045-4058, 2013.

[6] X.-L. Zhao, F. Wang, and M. K. Ng, "A new convex optimization model for multiplicative noise and blur removal," SIAM Journal on Imaging Sciences, vol. 7, no. 1, pp. 456-475, 2014.

[7] P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, SIAM, Philadelphia, PA, USA, 2010.

[8] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, USA, 1996.

[9] C. C. Paige and M. A. Saunders, "LSQR: an algorithm for sparse linear equations and sparse least squares," ACM Transactions on Mathematical Software, vol. 8, no. 1, pp. 43-71, 1982.

[10] D. C. Fong and M. Saunders, "LSMR: an iterative algorithm for sparse least-squares problems," SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2950-2971, 2011.

[11] P. Brianzi, P. Favati, O. Menchi, and F. Romani, "A framework for studying the regularizing properties of Krylov subspace methods," Inverse Problems, vol. 22, no. 3, pp. 1007-1021, 2006.

[12] C. Vonesch and M. Unser, "A fast thresholded Landweber algorithm for wavelet-regularized multidimensional deconvolution," IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 539-549, 2008.

[13] H. Zhang and L. Z. Cheng, "Projected Landweber iteration for matrix completion," Journal of Computational and Applied Mathematics, vol. 235, no. 3, pp. 593-601, 2010.

[14] L. Liang and Y. Xu, "Adaptive Landweber method to deblur images," IEEE Signal Processing Letters, vol. 10, no. 5, pp. 129-132, 2003.

[15] X.-M. Gu, T.-Z. Huang, H.-B. Li, S.-F. Wang, and L. Li, "Two CSCS-based iteration methods for solving absolute value equations," Journal of Applied Analysis and Computation, vol. 7, no. 4, pp. 1336-1356, 2017.

[16] K. Jbilou and H. Sadok, "Vector extrapolation methods. Applications and numerical comparison," Journal of Computational and Applied Mathematics, vol. 122, no. 1-2, pp. 149-165, 2000.

[17] A. Sidi, "Vector extrapolation methods with applications to solution of large systems of equations and to PageRank computations," Computers & Mathematics with Applications, vol. 56, no. 1, pp. 1-24, 2008.

[18] D. A. Smith, W. F. Ford, and A. Sidi, "Extrapolation methods for vector sequences," SIAM Review, vol. 29, no. 2, pp. 199-233, 1987.

[19] H.-F. Zhang, T.-Z. Huang, C. Wen, and Z.-L. Shen, "FOM accelerated by an extrapolation method for solving PageRank problems," Journal of Computational and Applied Mathematics, vol. 296, pp. 397-409, 2016.

[20] K. Jbilou and H. Sadok, "LU implementation of the modified minimal polynomial extrapolation method for solving linear and nonlinear systems," IMA Journal of Numerical Analysis, vol. 19, no. 4, pp. 549-561, 1999.

[21] A. Sidi, "Efficient implementation of minimal polynomial and reduced rank extrapolation methods," Journal of Computational and Applied Mathematics, vol. 36, no. 3, pp. 305-337, 1991.

[22] A. Sidi, "Convergence and stability properties of minimal polynomial and reduced rank extrapolation algorithms," SIAM Journal on Numerical Analysis, vol. 23, no. 1, pp. 197-209, 1986.

[23] M. Donatelli and S. Serra-Capizzano, "Anti-reflective boundary conditions and re-blurring," Inverse Problems, vol. 21, no. 1, pp. 169-182, 2005.

[24] T. Elfving and T. Nikazad, "Stopping rules for Landweber-type iteration," Inverse Problems, vol. 23, no. 4, article no. 004, pp. 1417-1432, 2007.

[25] J. G. Nagy, K. Palmer, and L. Perrone, "Iterative methods for image deblurring: a Matlab object-oriented approach," Numerical Algorithms, vol. 36, no. 1, pp. 73-93, 2004.

Xi-Le Zhao, (1) Ting-Zhu Huang, (1) Xian-Ming Gu, (2,3) and Liang-Jian Deng (1)

(1) School of Mathematical Sciences/Research Center for Image and Vision Computing, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

(2) School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China

(3) Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, Nijenborgh 9, P.O. Box 407, 9700 AK Groningen, Netherlands

Correspondence should be addressed to Xian-Ming Gu; guxianming@live.cn

Received 10 June 2017; Revised 12 September 2017; Accepted 18 September 2017; Published 16 November 2017

Academic Editor: Qingling Zhang

Caption: Figure 1: The true image (a), the point spread function (b), and the blurred and noisy image (c).

Caption: Figure 2: Convergence histories of CGLS, Landweber method, and Algorithm 5 with $\omega = 1.7$.

Caption: Figure 3: Restored images by Landweber method and Algorithm 5.

Caption: Figure 4: The non-strongly symmetric PSF.

Caption: Figure 5: The original image (a) and the corresponding blurred and noisy image with the Gaussian-blur PSF (b). The original image (c) and the corresponding blurred and noisy image with the non-strongly symmetric PSF (d).

Caption: Figure 6: Convergence histories of Landweber method, the adaptive Landweber method, and Algorithm 5 with $\omega = 1.7$ (a) and $\omega = 0.7$ (b).

Caption: Figure 7: The corresponding restored images by Landweber method ((a) and (d)), the adaptive Landweber method ((b) and (e)), and Algorithm 5 ((c) and (f)) at iteration 35, with $\omega = 1.7$ for (a)-(c) and $\omega = 0.7$ for (d)-(f).

Caption: Figure 8: Restored image by the restarted version of the vector extrapolation based Landweber method with $\omega = 1.7$.
Publication: Mathematical Problems in Engineering (Research Article), Jan 1, 2017. Copyright 2017 Hindawi Limited.