Tikhonov regularization with nonnegativity constraint.

Abstract. Many numerical methods for the solution of ill-posed problems are based on Tikhonov regularization. Recently, Rojas and Steihaug [15] described a barrier method for computing nonnegative Tikhonov-regularized approximate solutions of linear discrete ill-posed problems. Their method is based on solving a sequence of parameterized eigenvalue problems. This paper describes how the solution of parameterized eigenvalue problems can be avoided by computing bounds that follow from the connection between the Lanczos process, orthogonal polynomials, and Gauss quadrature.

Key words. ill-posed problem, inverse problem, solution constraint, Lanczos methods, Gauss quadrature.

AMS subject classifications. 65F22, 65F10, 65R30, 65R32, 65R20.

1. Introduction. The solution of large-scale linear discrete ill-posed problems continues to receive considerable attention. Linear discrete ill-posed problems are linear systems of equations

$Ax = b, \qquad A \in \mathbb{R}^{m \times n}, \quad x \in \mathbb{R}^{n}, \quad b \in \mathbb{R}^{m},$ (1.1)

with a matrix of ill-determined rank. In particular, $A$ has singular values that "cluster" at the origin. Thus, $A$ is severely ill-conditioned and may be singular. We allow $m \neq n$. The right-hand side vector $b$ of linear discrete ill-posed problems that arise in the applied sciences and engineering typically is contaminated by an error $e \in \mathbb{R}^{m}$, which, e.g., may stem from measurement errors. Thus, $b = \hat{b} + e$, where $\hat{b}$ is the unknown error-free right-hand side vector associated with $b$.

We would like to compute a solution of the linear discrete ill-posed problem with error-free right-hand side,

$Ax = \hat{b}.$ (1.2)

If $A$ is singular, then we may be interested in computing the solution of minimal Euclidean norm. Let $\hat{x}$ denote the desired solution of (1.2). We will refer to $\hat{x}$ as the exact solution.

Let $A^{\dagger}$ denote the Moore-Penrose pseudoinverse of $A$. Then $x_{0} := A^{\dagger}b$ is the least-squares solution of minimal Euclidean norm of (1.1). Due to the error $e$ in $b$ and the ill-conditioning of $A$, the vector $x_{0}$ generally satisfies

$\|x_{0}\| \gg \|\hat{x}\|,$ (1.3)

and then is not a meaningful approximation of $\hat{x}$. Throughout this paper $\|x\|$ denotes the Euclidean vector norm or the associated induced matrix norm. We assume that an estimate of $\|\hat{x}\|$, denoted by $\Delta$, is available and that the components of $\hat{x}$ are known to be nonnegative. We say that the vector $\hat{x}$ is nonnegative, and write $\hat{x} \geq 0$. For instance, we may be able to determine $\Delta$ from knowledge of the norm of the solution of a related problem already solved, or from physical properties of the inverse problem to be solved. Recently, Ahmad et al. [1] considered the solution of inverse electrocardiography problems and advocated that known constraints on the solution, among them a bound on the solution norm, be imposed, instead of regularizing by Tikhonov's method.

The matrix $A$ is assumed to be so large that its factorization is infeasible or undesirable. The numerical methods for computing an approximation of $\hat{x}$ discussed in this paper only require the evaluation of matrix-vector products with $A$ and its transpose $A^{T}$.

Rojas and Steihaug [15] recently proposed that an approximation of [??] be determined by solving the constrained minimization problem

$\min_{\|x\| \leq \Delta,\; x \geq 0} \|Ax - b\|$ (1.4)

and they presented a barrier function method for the solution of (1.4).

Let $x_{0}^{+}$ denote the orthogonal projection of $x_{0}$ onto the set

$\mathbb{R}^{n,+} := \{x \in \mathbb{R}^{n} : x \geq 0\},$ (1.5)

i.e., we obtain $x_{0}^{+}$ by setting all negative entries of $x_{0}$ to zero. In view of (1.3), it is reasonable to assume that the inequality

$\|x_{0}^{+}\| > \Delta$ (1.6)

holds. Then the minimization problems (1.4) and

$\min_{\|x\| = \Delta,\; x \geq 0} \|Ax - b\|$ (1.7)

have the same solution. Thus, for almost all linear discrete ill-posed problems of interest, the minimization problems (1.4) and (1.7) are equivalent. Indeed, the numerical method described by Rojas and Steihaug [15, Section 3] solves the problem (1.7).

The present paper describes a new approach to the solution of (1.7). Our method makes use of the connection between the Lanczos process, orthogonal polynomials, and quadrature rules of Gauss-type to compute upper and lower bounds for certain functionals. This connection makes it possible to avoid the solution of large parameterized eigenvalue problems. A nice survey of how the connection between the Lanczos process, orthogonal polynomials, and Gauss quadrature can be exploited to bound functionals is provided by Golub and Meurant [6].

Recently, Rojas and Sorensen [14] proposed a method for solving the minimization problem

$\min_{\|x\| \leq \Delta} \|Ax - b\|,$ (1.8)

without nonnegativity constraint, based on the LSTRS method. LSTRS is a scheme for the solution of large-scale quadratic minimization problems that arise in trust-region methods for optimization. The LSTRS method expresses the quadratic minimization problem as a parameterized eigenvalue problem, whose solution is determined by an implicitly restarted Arnoldi method. Matlab code for the LSTRS method has been made available by Rojas [13].

The solution method proposed by Rojas and Steihaug [15] for minimization problems of the form (1.7) with nonnegativity constraint is an extension of the scheme used for the solution of minimization problems of the form (1.8) without nonnegativity constraint, in the sense that the solution of (1.7) is computed by solving a sequence of minimization problems of the form (1.8). Rojas and Steihaug [15] solve each one of the latter minimization problems by applying the LSTRS method.

Similarly, our solution method for (1.7) is an extension of the scheme for the solution of

$\min_{\|x\| = \Delta} \|Ax - b\|,$ (1.9)

described in [5], because an initial approximate solution of (1.7) is determined by first solving (1.9), using the method proposed in [5], and then setting negative entries in the computed solution to zero. Subsequently, we determine improved approximate solutions of (1.7) by solving a sequence of minimization problems without nonnegativity constraint of a form closely related to (1.9). The methods used for solving the minimization problems without nonnegativity constraint are modifications of a method presented by Golub and von Matt [7]. We remark that (1.6) yields $\|x_{0}\| > \Delta$, and the latter inequality implies that the minimization problems (1.8) and (1.9) have the same solution.

This paper is organized as follows. Section 2 reviews the numerical scheme described in [5] for the solution of (1.9), and Section 3 presents an extension of this scheme, which is applicable to the solution of the nonnegatively constrained problem (1.7). A few numerical examples with the latter scheme are described in Section 4, where also a comparison with the method of Rojas and Steihaug [15] is presented. Section 5 contains concluding remarks.

Ill-posed problems with nonnegativity constraints arise naturally in many applications, e.g., when the components of the solution represent energy, concentrations of chemicals, or pixel values. The importance of these problems is evident from the many numerical methods that have recently been proposed for their solution; besides the method by Rojas and Steihaug [15], see Bertero and Boccacci [2, Section 6.3], Hanke et al. [8], Nagy and Strakos [11], and references therein. Code for some methods for the solution of nonnegatively constrained least-squares problems has been made available by Nagy [10]. There probably is not one best method for all large-scale nonnegatively constrained ill-posed problems. It is the purpose of this paper to describe a variation of the method by Rojas and Steihaug [15] which can reduce the computational effort for some problems.

2. Minimization without nonnegativity constraint. In order to compute a meaningful approximation of the minimal-norm least-squares solution $\hat{x}$ of (1.2), given $\{A, b\}$, the linear system (1.1) has to be modified to be less sensitive to the error $e$ in $b$. Such a modification is commonly referred to as regularization, and one of the most popular regularization methods is due to Tikhonov. In its simplest form, Tikhonov regularization replaces the solution of the linear system of equations (1.1) by the solution of the Tikhonov equations

$(A^{T}A + \lambda I)x = A^{T}b.$ (2.1)

For each positive value of the regularization parameter $\lambda$, equation (2.1) has the unique solution

$x_{\lambda} := (A^{T}A + \lambda I)^{-1}A^{T}b.$ (2.2)

It is easy to see that

$\lim_{\lambda \searrow 0} x_{\lambda} = x_{0}, \qquad \lim_{\lambda \to \infty} x_{\lambda} = 0.$

These limits generally do not provide meaningful approximations of $\hat{x}$. Therefore the choice of a suitable bounded positive value of the regularization parameter $\lambda$ is essential. The value of $\lambda$ determines how sensitive the solution $x_{\lambda}$ of (2.1) is to the error $e$, how large the discrepancy $b - Ax_{\lambda}$ is, and how close $x_{\lambda}$ is to the desired solution $\hat{x}$ of (1.2). For instance, the matrix $A^{T}A + \lambda I$ is more ill-conditioned, i.e., its condition number $\kappa(A^{T}A + \lambda I) := \|A^{T}A + \lambda I\|\,\|(A^{T}A + \lambda I)^{-1}\|$ is larger, the smaller $\lambda > 0$ is. Hence, the solution $x_{\lambda}$ is more sensitive to the error $e$, the smaller $\lambda > 0$ is.
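As an aside, the solution (2.2) is straightforward to compute for a given $\lambda > 0$ when $A$ is of moderate size. The sketch below (in Python, with a function name of our choosing) solves the augmented least-squares problem whose normal equations are exactly (2.1), thereby avoiding the explicit formation of $A^{T}A$:

```python
import numpy as np

def tikhonov_solution(A, b, lam):
    """Solve the Tikhonov equations (2.1), i.e. compute x_lambda of (2.2).

    (A^T A + lam I) x = A^T b is the normal-equations form of
        min || [A; sqrt(lam) I] x - [b; 0] ||,
    which is solved here without explicitly forming A^T A.
    """
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

For small $\lambda$ the augmented matrix inherits the ill-conditioning of $A$, in agreement with the discussion above.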

The following proposition establishes the connection between the minimization problem (1.9) and the Tikhonov equations (2.1).

PROPOSITION 2.1. ([7]) Assume that $\|x_{0}\| > \Delta$. Then the constrained minimization problem (1.9) has a unique solution $x_{\lambda_{\Delta}}$ of the form (2.2) with $\lambda_{\Delta} > 0$. In particular,

$\|x_{\lambda_{\Delta}}\| = \Delta.$ (2.3)

Introduce the function

$\phi(\lambda) := \|x_{\lambda}\|^{2}, \qquad \lambda > 0.$ (2.4)

PROPOSITION 2.2. ([5]) Assume that $A^{\dagger}b \neq 0$. The function (2.4) can be expressed as

$\phi(\lambda) = b^{T}A(A^{T}A + \lambda I)^{-2}A^{T}b, \qquad \lambda > 0,$ (2.5)

which shows that $\phi(\lambda)$ is strictly decreasing and convex for $\lambda > 0$. Moreover, the equation

$\phi(\lambda) = \tau$ (2.6)

has a unique solution $\lambda$, such that $0 < \lambda < \infty$, for any $\tau$ that satisfies $0 < \tau < \|A^{\dagger}b\|^{2}$.

We would like to determine the solution $\lambda_{\Delta}$ of the equation

$\phi(\lambda) = \Delta^{2}.$ (2.7)

Since the value of $\Delta$, in general, is only an available estimate of $\|\hat{x}\|$, it is typically not meaningful to compute a very accurate approximation of $\lambda_{\Delta}$. We outline how a few steps of Lanczos bidiagonalization applied to $A$ yield inexpensively computable upper and lower bounds for $\phi(\lambda)$. These bounds are used to determine an approximation of $\lambda_{\Delta}$.

Application of $l < \min\{m, n\}$ steps of Lanczos bidiagonalization to the matrix $A$ with initial vector $b$ yields the decompositions

$AV_{l} = U_{l+1}C_{l+1,l}, \qquad A^{T}U_{l} = V_{l}C_{l}^{T}, \qquad b = \sigma_{1}U_{l}e_{1},$ (2.8)

where $V_{l} \in \mathbb{R}^{n \times l}$ and $U_{l+1} \in \mathbb{R}^{m \times (l+1)}$ satisfy $V_{l}^{T}V_{l} = I_{l}$ and $U_{l+1}^{T}U_{l+1} = I_{l+1}$. Further, $U_{l}$ consists of the first $l$ columns of $U_{l+1}$, and $\sigma_{1} = \|b\|$. Throughout this paper $I_{j}$ denotes the $j \times j$ identity matrix and $e_{j}$ is the $j$th axis vector. The matrix $C_{l+1,l}$ is bidiagonal,

$C_{l+1,l} = \begin{bmatrix} \rho_{1} & & & \\ \sigma_{2} & \rho_{2} & & \\ & \sigma_{3} & \ddots & \\ & & \ddots & \rho_{l} \\ & & & \sigma_{l+1} \end{bmatrix} \in \mathbb{R}^{(l+1) \times l},$ (2.9)

with positive subdiagonal entries $\sigma_{2}, \sigma_{3}, \ldots, \sigma_{l+1}$; $C_{l}$ denotes the leading $l \times l$ submatrix of $C_{l+1,l}$. The evaluation of the partial Lanczos bidiagonalization (2.8) requires $l$ matrix-vector product evaluations with each of the matrices $A$ and $A^{T}$. We tacitly assume that the number of Lanczos bidiagonalization steps $l$ is small enough so that the decompositions (2.8) with the stated properties exist with $\sigma_{l+1} > 0$. If $\sigma_{l+1}$ vanishes, then the development simplifies; see [5] for details.
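The decomposition (2.8) can be generated by the classical Golub-Kahan recursions. The following sketch, with naming and layout of our own choosing, includes the full reorthogonalization mentioned in the storage discussion of Section 4:

```python
import numpy as np

def golub_kahan(A, b, l):
    """Partial Lanczos bidiagonalization (2.8) of A with initial vector b.

    Returns V (n x l) and U (m x (l+1)) with orthonormal columns and the
    lower bidiagonal matrix C ((l+1) x l) of (2.9), so that A V = U C and
    b = ||b|| * U e_1.
    """
    m, n = A.shape
    U = np.zeros((m, l + 1))
    V = np.zeros((n, l))
    C = np.zeros((l + 1, l))
    U[:, 0] = b / np.linalg.norm(b)              # sigma_1 = ||b||
    for k in range(l):
        v = A.T @ U[:, k]
        if k > 0:
            v -= C[k, k - 1] * V[:, k - 1]       # subtract sigma_{k+1} term
        v -= V[:, :k] @ (V[:, :k].T @ v)         # reorthogonalize against V
        C[k, k] = np.linalg.norm(v)              # rho_{k+1}
        V[:, k] = v / C[k, k]
        u = A @ V[:, k] - C[k, k] * U[:, k]
        u -= U[:, :k + 1] @ (U[:, :k + 1].T @ u) # reorthogonalize against U
        C[k + 1, k] = np.linalg.norm(u)          # sigma_{k+2}
        U[:, k + 1] = u / C[k + 1, k]
    return V, U, C
```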

Let $C_{l+1,l} = Q_{l+1,l}R_{l}$ denote the QR-factorization of $C_{l+1,l}$, i.e., $Q_{l+1,l} \in \mathbb{R}^{(l+1) \times l}$ has orthonormal columns and $R_{l} \in \mathbb{R}^{l \times l}$ is upper bidiagonal. Let $R_{l-1,l}$ denote the leading $(l-1) \times l$ submatrix of $R_{l}$, and introduce the functions

$\phi_{l}^{-}(\lambda) := \|A^{T}b\|^{2}\, e_{1}^{T}(R_{l}^{T}R_{l} + \lambda I_{l})^{-2}e_{1},$ (2.10)

$\phi_{l}^{+}(\lambda) := \|A^{T}b\|^{2}\, e_{1}^{T}(R_{l-1,l}^{T}R_{l-1,l} + \lambda I_{l})^{-2}e_{1},$ (2.11)

defined for $\lambda > 0$. Using the connection between the Lanczos process and orthogonal polynomials, the functions $\phi_{l}^{\pm}(\lambda)$ can be interpreted as Gauss-type quadrature rules associated with an integral representation of $\phi(\lambda)$. This interpretation yields

$\phi_{l}^{-}(\lambda) < \phi(\lambda) < \phi_{l}^{+}(\lambda), \qquad \lambda > 0;$ (2.12)

details are presented in [5]. Here we just remark that the factor $\|A^{T}b\|$ in (2.10) and (2.11) can be computed as $\|A^{T}b\| = \rho_{1}\sigma_{1}$, where the right-hand side is defined by (2.8) and (2.9).

We now turn to the zero-finder used to determine an approximate solution of (2.7). Evaluation of the function $\phi(\lambda)$ for several values of $\lambda$ can be very expensive when the matrix $A$ is large. Our zero-finder only requires evaluation of the functions $\phi_{l}^{\pm}(\lambda)$ and of the derivative

$\frac{d}{d\lambda}\phi_{l}^{+}(\lambda) = -2\|A^{T}b\|^{2}\, e_{1}^{T}(R_{l-1,l}^{T}R_{l-1,l} + \lambda I_{l})^{-3}e_{1}$ (2.13)

for several values of $\lambda$. When the Lanczos decomposition (2.8) is available, the evaluation of the functions $\phi_{l}^{\pm}(\lambda)$ and of the derivative (2.13) requires only $O(l)$ arithmetic floating point operations for each value of $\lambda$; see [5].
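A direct, if naive, way to evaluate the bounds (2.10)-(2.11) and the derivative (2.13) is sketched below. The small dense solves are for clarity only and cost more than the $O(l)$ recursions of [5]; the function name is ours:

```python
import numpy as np

def phi_bounds(C, ATb_norm, lam):
    """Evaluate the bounds (2.10)-(2.11) and the derivative (2.13).

    C is the bidiagonal matrix (2.9) and ATb_norm = rho_1 * sigma_1.
    Small dense l x l systems are solved for clarity; O(l) recursions
    exploiting the bidiagonal structure of R_l exist but are not shown.
    """
    l = C.shape[1]
    R = np.linalg.qr(C, mode='r')              # R_l from C_{l+1,l} = Q R_l
    Rp = R[:l - 1, :]                          # R_{l-1,l}
    e1 = np.zeros(l); e1[0] = 1.0
    M_lo = R.T @ R + lam * np.eye(l)
    M_up = Rp.T @ Rp + lam * np.eye(l)
    w_lo = np.linalg.solve(M_lo, e1)           # M^{-1} e_1
    w_up = np.linalg.solve(M_up, e1)
    phi_lo = ATb_norm**2 * (w_lo @ w_lo)       # e_1^T M^{-2} e_1
    phi_up = ATb_norm**2 * (w_up @ w_up)
    dphi_up = -2.0 * ATb_norm**2 * (w_up @ np.linalg.solve(M_up, w_up))
    return phi_lo, phi_up, dphi_up
```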

We seek to find a value of $\lambda$ such that

$\Delta^{2}\eta^{2} \leq \phi(\lambda) \leq \Delta^{2},$ (2.14)

where the constant $0 < \eta \leq 1$ determines the accuracy of the computed solution of (2.7). As already pointed out above, it is generally not meaningful to solve equation (2.7) exactly, or equivalently, to let $\eta := 1$.

Let $\lambda$ be a computed approximate solution of (2.7) which satisfies (2.14). It follows from Proposition 2.2 that $\lambda$ is bounded below by $\lambda_{\Delta}$; this guarantees that the matrix $A^{T}A + \lambda I$ does not have a larger condition number than the matrix $A^{T}A + \lambda_{\Delta}I$.

We determine a value of $\lambda$ that satisfies (2.14) by computing a pair $\{l, \lambda\}$, such that

$\Delta^{2}\eta^{2} \leq \phi_{l}^{-}(\lambda), \qquad \phi_{l}^{+}(\lambda) \leq \Delta^{2}.$ (2.15)

It follows from (2.12) that the inequalities (2.15) imply (2.14). For many linear discrete ill-posed problems, the value of $l$ in a pair $\{l, \lambda\}$ that satisfies (2.15) can be chosen fairly small.

Our method for determining a pair $\{l, \lambda\}$ that satisfies (2.15) is designed to keep the number of Lanczos bidiagonalization steps $l$ small. The method starts with $l = 2$ and then increases $l$ if necessary. Thus, for a given value of $l \geq 2$, we determine approximations $\lambda_{l}^{(j)}$, $j = 0, 1, 2, \ldots$, of the largest zero, denoted by $\lambda_{l}$, of the function

$h_{l}^{+}(\lambda) := \phi_{l}^{+}(\lambda) - \Delta^{2}.$

PROPOSITION 2.3. ([5]) The function $\phi_{l}^{+}(\lambda)$, defined by (2.11), is strictly decreasing and convex for $\lambda > 0$. The equation

$\phi_{l}^{+}(\lambda) = \tau$ (2.16)

has a unique solution $\lambda$, such that $0 < \lambda < \infty$, for any $\tau$ that satisfies $0 < \tau < \|A^{\dagger}b\|^{2}$.

The proposition shows that equation (2.16) has a unique solution whenever equation (2.7) has one. Let the initial approximation $\lambda_{l}^{(0)}$ of $\lambda_{l}$ satisfy $\lambda_{l} < \lambda_{l}^{(0)}$. We use the quadratically convergent zero-finder by Golub and von Matt [7, equations (75)-(78)] to determine a monotonically decreasing sequence of approximations $\lambda_{l}^{(j)}$, $j = 1, 2, 3, \ldots$, of $\lambda_{l}$. The iterations with the zero-finder are terminated as soon as an approximation $\lambda_{l}^{(p)}$, such that

$\frac{1}{10}(\eta^{2} - 1)\Delta^{2} + \Delta^{2} \leq \phi_{l}^{+}(\lambda_{l}^{(p)}) \leq \Delta^{2},$ (2.17)

has been found. We used this stopping criterion in the numerical experiments reported in Section 4. A factor different from 1/10 could have been used in the negative term in (2.17). The factor has to be between zero and one; a larger factor may reduce the number of iterations with the zero-finder, but increase the number of Lanczos bidiagonalization steps.
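The sketch below illustrates the resulting iteration with a plain Newton step standing in for the rational zero-finder of [7, equations (75)-(78)], which we do not reproduce; it accepts $\lambda$ as soon as the test (2.17) is met:

```python
def find_lambda(phi_plus, Delta, eta, lam0, max_iter=50):
    """Approximate the largest zero of h(lam) = phi_l^+(lam) - Delta^2.

    phi_plus(lam) must return the upper bound phi_l^+(lam) and its
    derivative, e.g. lambda lam: phi_bounds(C, ATb_norm, lam)[1:].
    A plain Newton iteration stands in for the zero-finder of [7];
    it stops as soon as the acceptance test (2.17) is satisfied.
    """
    lo = Delta**2 + (eta**2 - 1.0) * Delta**2 / 10.0    # left end in (2.17)
    lam = lam0                                          # require lam0 > zero sought
    for _ in range(max_iter):
        val, der = phi_plus(lam)
        if lo <= val <= Delta**2:
            return lam                                  # (2.17) holds
        lam = max(lam - (val - Delta**2) / der, 1e-15)  # Newton step, kept positive
    raise RuntimeError("zero-finder did not converge; increase max_iter")
```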

If $\lambda_{l}^{(p)}$ also satisfies

$\Delta^{2}\eta^{2} \leq \phi_{l}^{-}(\lambda_{l}^{(p)}),$ (2.18)

then both inequalities (2.15) hold for $\lambda = \lambda_{l}^{(p)}$, and we accept $\lambda_{l}^{(p)}$ as an approximation of the solution $\lambda_{\Delta}$ of equation (2.7).

If the inequalities (2.17) hold, but (2.18) does not, then we carry out one more Lanczos bidiagonalization step and seek to determine an approximation of the largest zero, denoted by $\lambda_{l+1}$, of the function

$h_{l+1}^{+}(\lambda) := \phi_{l+1}^{+}(\lambda) - \Delta^{2},$

using the same zero-finder as above with initial approximate solution $\lambda_{l+1}^{(0)} := \lambda_{l}^{(p)}$.

Assume that $\lambda_{l}^{(p)}$ satisfies both (2.17) and (2.18). We then solve

$(R_{l}^{T}R_{l} + \lambda_{l}^{(p)}I_{l})y = \|A^{T}b\|e_{1}.$ (2.19)

The solution, denoted by $\tilde{y}$, yields the approximation

$\tilde{x} := V_{l}\tilde{y}$ (2.20)

of $x_{\lambda_{l}^{(p)}}$. The vector $\tilde{x}$ is a Galerkin approximation of $x_{\lambda_{l}^{(p)}}$, such that $\|\tilde{x}\|^{2} = \phi_{l}^{-}(\lambda_{l}^{(p)})$; see [3, Theorem 5.1]. We remark that in actual computations, $\tilde{y}$ is determined by solving a least-squares problem, whose associated normal equations are (2.19); see [5] for details.
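Assembled from the sketches above, the final step of the Section 2 method may be written as follows; for brevity the normal equations (2.19) are solved directly, rather than via the least-squares formulation just mentioned:

```python
import numpy as np

def galerkin_solution(V, C, ATb_norm, lam):
    """Form the approximation (2.20) by solving (2.19) and setting x = V y.

    The normal equations (2.19) are solved directly for brevity; the
    equivalent least-squares formulation is numerically preferable.
    """
    l = C.shape[1]
    R = np.linalg.qr(C, mode='r')
    e1 = np.zeros(l); e1[0] = 1.0
    y = np.linalg.solve(R.T @ R + lam * np.eye(l), ATb_norm * e1)
    return V @ y
```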

3. Minimization with nonnegativity constraint. This section describes our solution method for (1.7). Our scheme is a variation of the barrier function method used by Rojas and Steihaug [15] for solving linear discrete ill-posed problems with nonnegativity constraint. Instead of solving a sequence of parameterized eigenvalue problems, as proposed in [15], we solve a sequence of Tikhonov equations. As in Section 2, we use the connection between the Lanczos process, orthogonal polynomials, and Gauss quadrature to determine how many steps of the Lanczos process to carry out. Analogously to Rojas and Steihaug [15], we introduce the function

$f_{\mu}(x) := \frac{1}{2}x^{T}A^{T}Ax - b^{T}Ax - \mu\sum_{k=1}^{n}\ln\xi_{k},$ (3.1)

which is defined for all vectors $x = [\xi_{1}, \xi_{2}, \ldots, \xi_{n}]^{T}$ in the interior of the set (1.5). Such vectors are said to be positive, and we write $x > 0$. The barrier parameter $\mu > 0$ determines how much the solution of the minimization problem

$\min_{\|x\| \leq \Delta,\; x > 0} f_{\mu}(x)$ (3.2)

is penalized for being close to the boundary of the set (1.5). We determine an approximate solution of (1.7) by solving several minimization problems related to (3.2) for a sequence of parameter values $\mu = \mu^{(j)}$, $j = 0, 1, 2, \ldots$, that decrease towards zero as $j$ increases.

As in Rojas and Steihaug [15], we simplify the minimization problem (3.2) by replacing the function $f_{\mu}(x)$ by the first few terms of its Taylor series expansion at $x = [\xi_{1}, \xi_{2}, \ldots, \xi_{n}]^{T}$,

$q_{\mu}(x + h) := f_{\mu}(x) + (\nabla f_{\mu}(x))^{T}h + \frac{1}{2}h^{T}\nabla^{2}f_{\mu}(x)h,$

where

$\nabla f_{\mu}(x) = A^{T}Ax - A^{T}b - \mu X^{-1}c, \qquad \nabla^{2}f_{\mu}(x) = A^{T}A + \mu X^{-2},$

i.e., $X = \operatorname{diag}[\xi_{1}, \xi_{2}, \ldots, \xi_{n}]$. Here and below $c = [1, 1, \ldots, 1]^{T} \in \mathbb{R}^{n}$. For $\mu > 0$ and $x$ fixed, we solve the quadratic minimization problem with respect to $h$,

$\min_{\|x + h\| \leq \Delta} q_{\mu}(x + h).$

This is a so-called trust-region subproblem associated with the minimization problem (3.2). Letting $z := x + h$ yields the equivalent quadratic minimization problem with respect to $z$,

$\min_{\|z\| \leq \Delta}\left\{\frac{1}{2}z^{T}(A^{T}A + \mu X^{-2})z - (A^{T}b + 2\mu X^{-1}c)^{T}z\right\},$ (3.3)

where the linear term uses the identity $X^{-2}x = X^{-1}c$.

We determine an approximate solution of the minimization problem (1.7) by solving several minimization problems of the form (3.3) associated with a sequence of pairs $\{\mu, x\} = \{\mu^{(j)}, x^{(j)}\}$, $j = 0, 1, 2, \ldots$, of positive parameter values and positive approximate solutions of (1.7), such that the former converge to zero as $j$ increases. The solution $z^{(j)}$ of (3.3) associated with the pair $\{\mu^{(j)}, x^{(j)}\}$ determines a new approximate solution $x^{(j+1)}$ of (1.7) and a new value $\mu^{(j+1)}$ of the barrier parameter; see below for details.
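Since $A$ is large, the subproblem (3.3) is handled matrix-free. The sketch below assembles its Hessian operator and linear term for a given pair $\{\mu, x\}$; the function name and the use of scipy's LinearOperator are our own choices:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def quadratic_model(A, b, x, mu):
    """Matrix-free setup of the subproblem (3.3) for the pair {mu, x}.

    Returns a LinearOperator applying z -> (A^T A + mu X^{-2}) z and the
    linear-term vector A^T b + 2 mu X^{-1} c, where X = diag[x] and x > 0.
    """
    n = A.shape[1]
    w = mu / x**2                          # diagonal of mu X^{-2}

    def hess_mv(z):
        return A.T @ (A @ z) + w * z

    H = LinearOperator((n, n), matvec=hess_mv, dtype=float)
    g = A.T @ b + 2.0 * mu / x             # A^T b + 2 mu X^{-1} c
    return H, g
```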

We turn to the description of our method for solving (3.3). The method is closely related to the scheme described in Section 2.

The Lagrange functional associated with (3.3) shows that necessary and sufficient conditions for a feasible point $z$ to be a solution of (3.3) and $\lambda \in \mathbb{R}$ to be a Lagrange multiplier are that the matrix $A^{T}A + \mu X^{-2} + \lambda I$ be positive semidefinite and

(i) $(A^{T}A + \mu X^{-2} + \lambda I)z = A^{T}b + 2\mu X^{-1}c,$ $\qquad$ (ii) $\lambda \geq 0, \quad \lambda(\|z\| - \Delta) = 0.$ (3.4)

For each parameter value $\mu > 0$ and diagonal matrix $X$ with positive diagonal entries, the linear system of equations (3.4)(i) is of a similar type as (2.1). The solution $z$ depends on the parameter $\lambda$, and we therefore sometimes denote it by $z_{\lambda}$. Introduce the function

$\psi(\lambda) := \|z_{\lambda}\|^{2}, \qquad \lambda > 0.$

Equation (3.4)(i) yields

$\psi(\lambda) = (A^{T}b + 2\mu X^{-1}c)^{T}(A^{T}A + \mu X^{-2} + \lambda I)^{-2}(A^{T}b + 2\mu X^{-1}c),$ (3.5)

and we determine a value of $\lambda > 0$, such that equation (3.4)(ii) is approximately satisfied, by computing upper and lower bounds for the right-hand side of (3.5) using the Lanczos process, analogously to the approach of Section 2. Application of $l$ steps of Lanczos tridiagonalization to the matrix $A^{T}A + \mu X^{-2}$ with initial vector $A^{T}b + 2\mu X^{-1}c$ yields the decomposition

$(A^{T}A + \mu X^{-2})W_{l} = W_{l}T_{l} + f_{l}e_{l}^{T},$ (3.6)

where $W_{l} \in \mathbb{R}^{n \times l}$ and $f_{l} \in \mathbb{R}^{n}$ satisfy

$W_{l}^{T}W_{l} = I_{l}, \qquad W_{l}^{T}f_{l} = 0, \qquad W_{l}e_{1} = \frac{A^{T}b + 2\mu X^{-1}c}{\|A^{T}b + 2\mu X^{-1}c\|}.$

The matrix $T_{l} \in \mathbb{R}^{l \times l}$ is symmetric and tridiagonal, and since $A^{T}A + \mu X^{-2}$ is positive definite for $\mu > 0$, so is $T_{l}$.

Assume that $f_{l} \neq 0$; otherwise the formulas simplify. Introduce the symmetric tridiagonal matrix $\hat{T}_{l+1} \in \mathbb{R}^{(l+1) \times (l+1)}$ with leading principal submatrix $T_{l}$, last sub- and superdiagonal entries $\|f_{l}\|$, and last diagonal entry chosen so that $\hat{T}_{l+1}$ is positive semidefinite with one zero eigenvalue. The last diagonal entry can be computed in $O(l)$ arithmetic floating point operations.
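A sketch of the tridiagonalization (3.6) and of the last diagonal entry of $\hat{T}_{l+1}$ follows. The entry is obtained by zeroing the Schur complement, $t = \|f_{l}\|^{2}e_{l}^{T}T_{l}^{-1}e_{l}$, which makes the augmented matrix singular and positive semidefinite; all names are ours, and the dense tridiagonal solve stands in for the $O(l)$ recursion:

```python
import numpy as np

def lanczos_tridiag(H_mv, g, l):
    """l steps of Lanczos tridiagonalization (3.6); H is given as a matvec.

    Returns W (n x l) with orthonormal columns, the diagonal alpha and
    off-diagonal beta of the symmetric tridiagonal T_l, and the residual
    vector f_l, so that H W = W T_l + f_l e_l^T.
    """
    n = g.shape[0]
    W = np.zeros((n, l))
    alpha = np.zeros(l)
    beta = np.zeros(l - 1)
    W[:, 0] = g / np.linalg.norm(g)
    for k in range(l):
        f = H_mv(W[:, k])
        alpha[k] = W[:, k] @ f
        f -= alpha[k] * W[:, k]
        if k > 0:
            f -= beta[k - 1] * W[:, k - 1]
        f -= W[:, :k + 1] @ (W[:, :k + 1].T @ f)   # full reorthogonalization
        if k < l - 1:
            beta[k] = np.linalg.norm(f)
            W[:, k + 1] = f / beta[k]
    return W, alpha, beta, f

def radau_last_entry(alpha, beta, f_norm):
    """Last diagonal entry of the augmented matrix T-hat_{l+1}.

    Chosen so that the Schur complement vanishes,
        t = f_norm^2 * e_l^T T_l^{-1} e_l,
    which makes T-hat_{l+1} positive semidefinite with one zero eigenvalue
    when T_l is positive definite.
    """
    l = alpha.shape[0]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    el = np.zeros(l); el[-1] = 1.0
    return f_norm**2 * (el @ np.linalg.solve(T, el))
```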

Introduce the functions, analogous to (2.10) and (2.11),

$\psi_{l}^{-}(\lambda) := \|A^{T}b + 2\mu X^{-1}c\|^{2}\, e_{1}^{T}(T_{l} + \lambda I_{l})^{-2}e_{1},$ (3.7)

$\psi_{l}^{+}(\lambda) := \|A^{T}b + 2\mu X^{-1}c\|^{2}\, e_{1}^{T}(\hat{T}_{l+1} + \lambda I_{l+1})^{-2}e_{1}.$ (3.8)

As in Section 2, the connection between the Lanczos process and orthogonal polynomials makes it possible to interpret the functions $\psi_{l}^{\pm}(\lambda)$ as Gauss-type quadrature rules associated with an integral representation of the right-hand side of (3.5). This interpretation yields, analogously to (2.12),

$\psi_{l}^{-}(\lambda) < \psi(\lambda) < \psi_{l}^{+}(\lambda), \qquad \lambda > 0;$ (3.9)

see [4] for proofs of these inequalities and for further properties of the functions $\psi_{l}^{\pm}(\lambda)$.

Let $0 < \eta \leq 1$ be the same constant as in equation (2.14). As in Section 2, we seek to determine a value of $\lambda > 0$, such that

$\Delta^{2}\eta^{2} \leq \psi(\lambda) \leq \Delta^{2}.$ (3.10)

This condition replaces equation (3.4)(ii) in our numerical method. We determine a pair $\{l, \lambda\}$, such that

$\Delta^{2}\eta^{2} \leq \psi_{l}^{-}(\lambda), \qquad \psi_{l}^{+}(\lambda) \leq \Delta^{2}.$ (3.11)

The value of $\lambda$ so obtained satisfies (3.10), because it satisfies both (3.9) and (3.11). For many linear systems of equations (3.4)(i), the inequalities (3.11) can be satisfied already for fairly small values of $l$.

For $l = 2, 3, 4, \ldots$, until a sufficiently accurate approximate solution of (1.7) has been found, we determine the largest zero, denoted by $\lambda_{l}$, of the function

$g_{l}^{+}(\lambda) := \psi_{l}^{+}(\lambda) - \left(\Delta^{2} + \frac{1}{20}(\eta^{2} - 1)\Delta^{2}\right),$ (3.12)

using the quadratically and monotonically convergent zero-finders discussed by Golub and von Matt [7]. The iterations with the zero-finders are terminated when an approximate zero $\lambda_{l}^{(p)}$ has been found, such that

$\frac{1}{10}(\eta^{2} - 1)\Delta^{2} + \Delta^{2} \leq \psi_{l}^{+}(\lambda_{l}^{(p)}) \leq \Delta^{2},$ (3.13)

cf. (2.17). The purpose of the term $\frac{1}{20}(\eta^{2} - 1)\Delta^{2}$ in (3.12) is to make the values $\psi_{l}^{+}(\lambda_{l}^{(j)})$, $j = 0, 1, 2, \ldots$, converge to the midpoint of the interval of acceptable values; cf. (3.13).

If, in addition, $\lambda_{l}^{(p)}$ satisfies

$\Delta^{2}\eta^{2} \leq \psi_{l}^{-}(\lambda_{l}^{(p)}),$ (3.14)

then both inequalities (3.11) hold for $\lambda = \lambda_{l}^{(p)}$, and we accept $\lambda_{l}^{(p)}$ as an approximation of the largest zero of $g_{l}^{+}(\lambda)$.

If (3.13) holds, but (3.14) is violated, then we carry out one more Lanczos tridiagonalization step and seek to determine an approximation of the largest zero, denoted by $\lambda_{l+1}$, of the function $g_{l+1}^{+}(\lambda) := \psi_{l+1}^{+}(\lambda) - \Delta^{2}$.

Let $\lambda_{l}^{(p)}$ satisfy both inequalities (3.11). Then we determine an approximate solution $\tilde{z}$ of the linear system of equations (3.4)(i) using the Lanczos decomposition (3.6). Thus, let $\tilde{y}$ denote the solution of

$(T_{l} + \lambda_{l}^{(p)}I_{l})y = \|A^{T}b + 2\mu X^{-1}c\|e_{1}.$

Then

$\tilde{z} := W_{l}\tilde{y}$ (3.15)

is a Galerkin approximation of the solution $z_{\lambda_{l}^{(p)}}$ of (3.4)(i).

We are in a position to discuss the updating of the pair $\{\mu^{(j)}, x^{(j)}\}$. Let $\tilde{z}$ be the computed approximate solution of the minimization problem (3.3) determined by $\mu = \mu^{(j)}$ and $x = x^{(j)}$. Define $h^{(j)} := \tilde{z} - x^{(j)}$ and compute the candidate solution

$\breve{x} := x^{(j)} + d\,h^{(j)}$ (3.16)

of (1.7), where the constant $d > 0$ is chosen so that $\breve{x}$ is positive. As in [15], we let

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.17)

Let $\delta > 0$ be a user-specified constant, and introduce the vector

$x^{(j+1)} = [\xi_{1}, \xi_{2}, \ldots, \xi_{n}]^{T}, \qquad \xi_{k} := \max\{\delta, e_{k}^{T}\breve{x}\}, \quad 1 \leq k \leq n,$ (3.18)

and the matrix $X = \operatorname{diag}[x^{(j+1)}]$. The purpose of the parameter $\delta$ is to prevent $x^{(j+1)}$ from having components very close to zero, since this would make $\|X^{-1}\|$ very large. Following Rojas and Steihaug [15], we compute

$s := \mu(X^{-2}\breve{x} - 2X^{-1}c)$ (3.19)

and update the value of the barrier parameter according to

$\mu^{(j+1)} := \frac{\sigma}{n}\,|s^{T}x^{(j+1)}|, \qquad \sigma := 1 \times 10^{-2}.$ (3.20)
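In code, one outer update may be organized as follows. Since (3.17) is not reproducible above, the step length uses a standard fraction-to-the-boundary rule as a stand-in, and $s$ follows (3.19) as rendered in this section; treat both as assumptions rather than the authors' exact formulas:

```python
import numpy as np

def outer_update(x, z_tilde, mu, delta, sigma=1e-2):
    """One update of the pair {mu^(j), x^(j)} from the subproblem solution.

    The step length d is a fraction-to-the-boundary rule standing in for
    (3.17); s follows (3.19) as rendered above, with X = diag[x^(j+1)].
    """
    n = x.shape[0]
    h = z_tilde - x                               # h^(j) := z~ - x^(j)
    neg = h < 0
    d = 1.0
    if neg.any():                                 # keep x + d*h strictly positive
        d = min(1.0, 0.9995 * np.min(-x[neg] / h[neg]))
    x_breve = x + d * h                           # candidate solution (3.16)
    x_next = np.maximum(delta, x_breve)           # componentwise floor (3.18)
    s = mu * (x_breve / x_next**2 - 2.0 / x_next) # dual estimate (3.19)
    mu_next = (sigma / n) * abs(s @ x_next)       # barrier update (3.20)
    return x_breve, x_next, s, mu_next
```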

In our computed examples, we use the same stopping criteria as Rojas and Steihaug [15]. Let $f(x)$ be the quadratic part of the function $f_{\mu}(x)$ defined by (3.1), i.e.,

$f(x) := \frac{1}{2}x^{T}A^{T}Ax - b^{T}Ax.$

The computations are terminated, and the vector $\breve{x}$ given by (3.16) is accepted as an approximate solution of (1.7), as soon as we find a vector $x^{(j+1)}$, defined by (3.18), which satisfies at least one of the conditions

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.21)

Here $s$ is defined by (3.19), and $\varepsilon_{f}$, $\varepsilon_{x}$, and $\varepsilon_{s}$ are user-supplied constants.

We briefly comment on the determination of the matrix $X$ and parameter $\mu = \mu^{(1)}$ in the first system of equations (3.4)(i) that we solve. Before solving (3.4)(i), we compute an approximate solution $\tilde{x}$, given by (2.20), of the minimization problem (1.9) as described in Section 2. Let $\tilde{\lambda} = \lambda_{l}^{(p)}$ denote the corresponding value of the regularization parameter, and let $\tilde{x}^{+}$ be the orthogonal projection of $\tilde{x}$ onto the set (1.5). If $\tilde{x}^{+}$ is a sufficiently accurate approximate solution of (1.7), then we are done; otherwise we improve $\tilde{x}^{+}$ by the method described in this section. Define $x^{(1)}$ by (3.18) with $j = 0$ and $\breve{x}$ replaced by $\tilde{x}$, let $X = \operatorname{diag}[x^{(1)}]$, and let $\mu^{(1)}$ be given by (3.20) with $j = 0$ and $s := A^{T}b - (A^{T}A + \tilde{\lambda}I)x^{(1)}$. We now can determine and solve (3.4). The following algorithm summarizes how the computations are organized.
ALGORITHM 3.1. Constrained Tikhonov Regularization

1. Input: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $\delta$, $\varepsilon_{f}$, $\varepsilon_{x}$, $\varepsilon_{s}$, $\eta$.
   Output: $\lambda$, $\mu$, approximate solution $\breve{x}$ of (1.7).

2. Apply the method of Section 2 to determine an approximate solution $\tilde{x}$ of (1.9). Compute the associated orthogonal projection $\tilde{x}^{+}$ onto the set (1.5) and let $\breve{x} := \tilde{x}^{+}$. If $\breve{x}$ is a sufficiently accurate approximate solution of (1.7), then exit.

3. Determine the initial positive approximate solution $x^{(1)}$ by (3.18), let $X = \operatorname{diag}[x^{(1)}]$, and compute $\mu = \mu^{(1)}$ as described above. Define the linear system (3.4)(i). Let $j := 1$.

4. Compute the approximate solution $\tilde{z}$, given by (3.15), of the linear system (3.4)(i) with $\lambda = \lambda_{l}^{(p)}$ and the number of Lanczos tridiagonalization steps $l$ chosen so that the inequalities (3.13) and (3.14) hold.

5. Determine $\breve{x}$ according to (3.16) with $d$ given by (3.17), and $x^{(j+1)}$ using (3.18). Compute $s$ by (3.19). If the pair $\{x^{(j+1)}, s\}$ satisfies one of the inequalities (3.21), then accept the vector $\breve{x}$ as an approximate solution of (1.7) and exit.

6. Let $X = \operatorname{diag}[x^{(j+1)}]$ and let $\mu = \mu^{(j+1)}$ be given by (3.20). Define a new linear system of equations (3.4)(i) using $\{\mu, X\}$. Let $j := j + 1$. Go to 4.


4. Computed examples. We illustrate the performance of Algorithm 3.1 when applied to a few typical linear discrete ill-posed problems, such that the desired solution $\hat{x}$ of the associated linear system of equations with error-free right-hand side (1.2) is known to be nonnegative. All computations were carried out using Matlab with approximately 16 significant decimal digits. In all examples, we let $\Delta := \|\hat{x}\|$ and chose the initial values $l = 2$ and $\lambda_{2}^{(0)} = 10$. Then $\phi_{2}^{+}(\lambda_{2}^{(0)}) < \Delta^{2}$, i.e., $\lambda_{2}^{(0)}$ is larger than $\lambda_{2}$, the largest zero of $h_{2}^{+}(\lambda)$. The error vectors $e$ used in the examples have normally distributed random entries with zero mean and are scaled so that $e$ is of the desired norm.

[FIGURE 4.1 OMITTED]

Example 4.1. Consider the Fredholm integral equation of the first kind

$\int_{-6}^{6}\kappa(\tau, \sigma)x(\sigma)\,d\sigma = b(\tau), \qquad -6 \leq \tau \leq 6,$ (4.1)

discussed by Phillips [12]. Its solution, kernel, and right-hand side are given by

$x(\sigma) := \begin{cases} 1 + \cos\left(\frac{\pi\sigma}{3}\right), & |\sigma| < 3, \\ 0, & |\sigma| \geq 3, \end{cases} \qquad \kappa(\tau, \sigma) := x(\tau - \sigma),$ (4.2)

$b(\tau) := (6 - |\tau|)\left(1 + \frac{1}{2}\cos\left(\frac{\pi\tau}{3}\right)\right) + \frac{9}{2\pi}\sin\left(\frac{\pi|\tau|}{3}\right).$ (4.3)

We discretize the integral equation using the Matlab code phillips from the program package Regularization Tools by Hansen [9]. Discretization by a Galerkin method using 300 orthonormal box functions as test and trial functions yields the symmetric indefinite matrix $A \in \mathbb{R}^{300 \times 300}$ and the right-hand side vector $\hat{b} \in \mathbb{R}^{300}$. The code phillips also determines a discretization of the solution (4.2). We consider this discretization the exact solution $\hat{x} \in \mathbb{R}^{300}$. An error vector $e$ is added to $\hat{b}$ to give the right-hand side $b$ of (1.1) with relative error $\|e\|/\|\hat{b}\| = 5 \times 10^{-3}$. This corresponds to $\|e\| = 7.6 \times 10^{-2}$.
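For readers without the Matlab package [9], a comparable test problem can be built directly from (4.1)-(4.3) by midpoint quadrature, as in the sketch below. This is a stand-in, not Hansen's Galerkin code phillips, so the resulting matrix and vectors differ slightly from those used in the experiments:

```python
import numpy as np

def phillips_problem(n=300):
    """Midpoint-quadrature discretization of (4.1) with data (4.2)-(4.3)."""
    h = 12.0 / n
    t = -6.0 + (np.arange(n) + 0.5) * h                  # quadrature nodes

    def sol(s):                                          # solution (4.2)
        return np.where(np.abs(s) < 3.0, 1.0 + np.cos(np.pi * s / 3.0), 0.0)

    A = h * sol(t[:, None] - t[None, :])                 # kernel k(tau, sigma) = x(tau - sigma)
    x_hat = sol(t)
    b_hat = (6.0 - np.abs(t)) * (1.0 + 0.5 * np.cos(np.pi * t / 3.0)) \
        + 9.0 / (2.0 * np.pi) * np.sin(np.pi * np.abs(t) / 3.0)
    return A, x_hat, b_hat

# contaminate the right-hand side as in the text: relative error 5e-3
A, x_hat, b_hat = phillips_problem()
rng = np.random.default_rng(0)
e = rng.standard_normal(b_hat.shape[0])
e *= 5e-3 * np.linalg.norm(b_hat) / np.linalg.norm(e)
b = b_hat + e
```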

[FIGURE 4.2 OMITTED]

Let $\Delta := \|\hat{x}\|$ and $\eta := 0.999$. Then Step 2 of Algorithm 3.1 yields an approximate solution $\tilde{x}$, defined by (2.20), using only 8 Lanczos bidiagonalization steps; thus only 8 matrix-vector product evaluations are required with each of the matrices $A$ and $A^{T}$. Since the vector $\tilde{x}$ is not required to be nonnegative, it is an oscillatory approximate solution of (1.2); see the black curves in Figures 4.1 and 4.2. The relative error in $\tilde{x}$ is $\|\tilde{x} - \hat{x}\|/\|\hat{x}\| = 1.91 \times 10^{-2}$.

Let $\tilde{x}^{+}$ denote the orthogonal projection of $\tilde{x}$ onto the set (1.5). The magenta curves in Figures 4.1 and 4.2 display $\tilde{x}^{+}$. Note that $\tilde{x}^{+}$ agrees with $\tilde{x}$ wherever the latter is nonnegative. Clearly, $\tilde{x}^{+}$ is a better approximation of $\hat{x}$ than $\tilde{x}$; we have $\|\tilde{x}^{+} - \hat{x}\|/\|\hat{x}\| = 1.36 \times 10^{-2}$.

Let the coefficients in the stopping criteria (3.21) for Algorithm 3.1 be given by $\varepsilon_{f} := 1 \times 10^{-5}$, $\varepsilon_{x} := 1 \times 10^{-5}$, and $\varepsilon_{s} := 1 \times 10^{-12}$; these are the values used in [15]. Let $\delta := 1 \times 10^{-3}$ in (3.18). These parameters are required in Step 5 of Algorithm 3.1. The red curves of Figures 4.1 and 4.2 show the approximate solution $\breve{x}$ determined by Steps 4-6 of the algorithm. The computation of $\breve{x}$ required the execution of each of these steps twice, at the cost of 21 Lanczos tridiagonalization steps the first time and 8 Lanczos tridiagonalization steps the second time. In total, 79 matrix-vector product evaluations were required for the computation of $\breve{x}$; this includes the work for computing $\tilde{x}$. The relative error $\|\breve{x} - \hat{x}\|/\|\hat{x}\| = 5.42 \times 10^{-3}$ is smaller than for $\tilde{x}^{+}$; this is also obvious from Figure 4.2. Specifically, the method of Section 3 gives a nonnegative approximate solution $\breve{x}$ whose error is about $1/3$ of the error in $\tilde{x}^{+}$.

Example 4.2. This example differs from Example 4.1 only in that no error vector $e$ is added to the right-hand side vector $b$ determined by the code phillips. Thus, $b = \hat{b}$. Rojas and Steihaug [15] have also considered this example. In the absence of an error $e$ in $b$, other than round-off errors, fairly stringent stopping criteria should be used. Letting $\eta := 0.9995$ yields, after 6 Lanczos bidiagonalization steps, the approximate solution $\tilde{x}$ in Step 2 of Algorithm 3.1. The relative error in $\tilde{x}$ is $\|\tilde{x} - \hat{x}\|/\|\hat{x}\| = 7.61 \times 10^{-3}$. The associated orthogonal projection $\tilde{x}^{+}$ onto (1.5) has relative error $\|\tilde{x}^{+} - \hat{x}\|/\|\hat{x}\| = 5.50 \times 10^{-3}$. The computation of $\tilde{x}$ and $\tilde{x}^{+}$ requires the evaluation of 6 matrix-vector products with $A$ and 6 matrix-vector products with $A^{T}$. Rojas and Steihaug [15] report the approximate solution determined by their method to have a relative error of $1.01 \times 10^{-2}$ and its computation to require the evaluation of 631 matrix-vector products.

[FIGURE 4.3 OMITTED]

The approximate solution $\tilde{x}^{+}$ can be improved by the method of Section 3, however, at a fairly high price. Using $\varepsilon_{f} := 1 \times 10^{-9}$, $\varepsilon_{x} := 1 \times 10^{-5}$, $\varepsilon_{s} := 1 \times 10^{-13}$, and $\delta := 1 \times 10^{-3}$, we obtain the approximate solution $\breve{x}$ with relative error $\|\breve{x} - \hat{x}\|/\|\hat{x}\| = 5.15 \times 10^{-3}$. The evaluation of $\breve{x}$ requires the computation of 129 matrix-vector products, with at most 19 consecutive Lanczos tridiagonalization steps.

We conclude that when the relative error in $b$ is fairly large, such as in Example 4.1, the method of the present paper can determine an approximate solution $\breve{x}$ with significantly smaller error than the projected approximate solution $\tilde{x}^{+}$. However, when the relative error in $b$ is small, the method of the present paper might only give minor improvements at high cost.

Example 4.3. Consider the blur- and noise-free image shown in Figure 4.5. The figure depicts three hemispheres, but because of the scaling of the axes, they look like hemi-ellipsoids. The image is represented by $256 \times 256$ pixels, whose values range from 0 to 255. The pixel values are stored row-wise in the vector $\hat{x} \in \mathbb{R}^{65536}$, which we subsequently scale to have unit length. After scaling, the largest entry of $\hat{x}$ is $1.5 \times 10^{-2}$. The image in Figure 4.5 is assumed not to be available; only $\Delta := \|\hat{x}\|$ and a contaminated version of the image, displayed in Figure 4.6, are known. We would like to restore the available image in Figure 4.6 to obtain (an approximation of) the image in Figure 4.5.

[FIGURE 4.4 OMITTED]

[FIGURE 4.5 OMITTED]

The image in Figure 4.6 is contaminated by noise and blur, with the blurring operator represented by a nonsymmetric Toeplitz matrix $A \in \mathbb{R}^{65536 \times 65536}$ of ill-determined rank; thus, $A$ is numerically singular. Due to the special structure of $A$, only the first row and column have to be stored. Matrix-vector products with $A$ and $A^{T}$ are computed by using the fast Fourier transform. The vector $\hat{b} := A\hat{x}$ represents a blurred but noise-free image. The error vector $e \in \mathbb{R}^{65536}$ represents noise and has normally distributed entries with zero mean. The vector is scaled to yield the relative error $\|e\|/\|\hat{b}\| = 1 \times 10^{-3}$ in the available right-hand side vector $b = \hat{b} + e$. The latter represents the blurred and noisy image shown in Figure 4.6. The largest entry of $b$ is $1.0 \times 10^{-2}$.
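The standard device for multiplying by a Toeplitz matrix stored via its first row and column is circulant embedding. The paper does not list its implementation; the sketch below, with a name of our choosing, shows the usual construction:

```python
import numpy as np

def toeplitz_matvec(col, row, v):
    """Product of a Toeplitz matrix with a vector in O(n log n) time.

    col and row hold the first column and first row of T (col[0] == row[0]),
    which is all that needs to be stored.  T is embedded in a circulant
    matrix of order 2n, whose action is diagonalized by the FFT.  A product
    with T^T is obtained by calling toeplitz_matvec(row, col, v).
    """
    n = col.shape[0]
    c = np.concatenate([col, [0.0], row[:0:-1]])   # first column of the circulant
    w = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([v, np.zeros(n)])))
    return w[:n].real
```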

[FIGURE 4.6 OMITTED]

[FIGURE 4.7 OMITTED]

The method of Section 2 with $\eta = 0.99$ requires 26 Lanczos bidiagonalization steps to determine the vector $\tilde{x}$, given by (2.20) and displayed in Figure 4.7, with relative error $\|\tilde{x} - \hat{x}\|/\|\hat{x}\| = 8.3 \times 10^{-2}$. Note the oscillations around the bases of the hemispheres. The nonnegative vector $\tilde{x}^{+}$, obtained by projecting $\tilde{x}$ onto (1.5), is shown in Figure 4.8. It has relative error $\|\tilde{x}^{+} - \hat{x}\|/\|\hat{x}\| = 8.1 \times 10^{-2}$. The largest entry of $\tilde{x}^{+}$ is $1.5 \times 10^{-2}$.

[FIGURE 4.8 OMITTED]

[FIGURE 4.9 OMITTED]

The oscillations around the hemispheres can be reduced by the method of Section 3. With $\delta := 5 \times 10^{-5}$, $\varepsilon_{f} := 1 \times 10^{-3}$, $\varepsilon_{x} := 1 \times 10^{-3}$, and $\varepsilon_{s} := 1 \times 10^{-10}$, we obtain the vector $\breve{x}$, given by (3.16), with relative error $\|\breve{x} - \hat{x}\|/\|\hat{x}\| = 7.3 \times 10^{-2}$. Figure 4.9 depicts $\breve{x}$. Note that the oscillations around the bases of the hemispheres are essentially gone. The largest component of $\breve{x}$ is $1.5 \times 10^{-2}$. The computation of $\breve{x}$ required the completion of Steps 4 and 5 of Algorithm 3.1 once, and demanded 27 Lanczos tridiagonalization steps. The evaluation of $\breve{x}$ required that a total of 110 matrix-vector products with $A$ or $A^{T}$ be computed, including the 52 matrix-vector product evaluations needed to compute $\tilde{x}^{+}$.

[FIGURE 4.10 OMITTED]

[FIGURE 4.11 OMITTED]

[FIGURE 4.12 OMITTED]

[FIGURE 4.13 OMITTED]

Example 4.4. The data for this example was developed at the US Air Force Phillips Laboratory and has been used to test the performance of several available algorithms for computing regularized nonnegative solutions. The data consists of the noise- and blur-free image of the satellite shown in Figure 4.10, the point spread function which defines the blurring operator, and the blurred and noisy image of the satellite displayed in Figure 4.11. The images are represented by $256 \times 256$ pixels. The pixel values for the noise- and blur-free image and for the contaminated image are stored in the vectors $\hat{x}$ and $b$, respectively, where $b$ is the right-hand side of (1.1). Thus, $\hat{x}$ and $b$ are of dimension $256^{2} = 65536$. The matrix $A$ in (1.1) represents the blurring operator and is determined by the point spread function; $A$ is a block-Toeplitz matrix with Toeplitz blocks, of size $65536 \times 65536$. The matrix is not explicitly stored; matrix-vector products with $A$ and $A^{T}$ are evaluated by the fast Fourier transform. Similarly as in Example 4.3, the vector $\hat{b} := A\hat{x}$ represents a blurred noise-free image. We consider the difference $e := b - \hat{b}$ to be noise, and found that $\|b - \hat{b}\| = 3.3 \times 10^{-4}$ and $\|b - \hat{b}\|/\|\hat{b}\| = 4.7 \times 10^{-2}$. Thus $b$ is contaminated by a significant amount of noise.

[FIGURE 4.14 OMITTED]

Let $\Delta := \|\hat{x}\|$, and assume that the blur- and noise-free image of Figure 4.10 is not available. Given $A$, $b$, and $\Delta$, we would like to determine an approximation of this image. We let $\eta := 0.935$ in Algorithm 3.1.

The method of Section 2 requires 47 Lanczos bidiagonalization steps to determine the vector $\tilde{x}$, given by (2.20) and shown in Figure 4.12. The relative error in $\tilde{x}$ is $\|\tilde{x} - \hat{x}\|/\|\hat{x}\| = 0.354$. The projection $\tilde{x}^{+}$ of $\tilde{x}$ onto the set (1.5) has relative error $\|\tilde{x}^{+} - \hat{x}\|/\|\hat{x}\| = 0.346$. As expected, this error is smaller than the relative error in $\tilde{x}$. Figure 4.13 displays $\tilde{x}^{+}$. We remark that the gray background in Figure 4.12 is of no practical significance. It is caused by negative entries in the vector $\tilde{x}$. This vector is rescaled by Matlab to have entries in the interval $[0, 255]$ before plotting. In particular, zero entries are mapped to a positive pixel value and are displayed in gray.

The accuracy of the approximate solution $\tilde{x}^{+}$ can be improved by the method of Section 3. We use the same values of the parameters $\delta$, $\varepsilon_{f}$, $\varepsilon_{x}$, and $\varepsilon_{s}$ as in Example 4.3. After execution of Steps 4-6 of Algorithm 3.1 followed by Steps 4-5, a termination condition is satisfied, and the algorithm yields $\breve{x}$ with relative error $\|\breve{x} - \hat{x}\|/\|\hat{x}\| = 0.339$. Figure 4.14 shows $\breve{x}$. Step 4 requires the evaluation of 113 Lanczos tridiagonalization steps the first time, and 51 Lanczos tridiagonalization steps the second time. The computation of $\breve{x}$ demands a total of 427 matrix-vector product evaluations with either $A$ or $A^{T}$, including the 94 matrix-vector products required to determine $\tilde{x}$ and $\tilde{x}^{+}$.

The relative error in $\breve{x}$ is smaller than the relative errors in computed approximations of $\hat{x}$ determined by several methods recently considered in the literature. For instance, the smallest relative error achieved by the methods presented in [8] is 0.356, and the relative error reported in [15] is 0.358.

We conclude this section with some comments on the storage requirement of the method of the present paper. The implementation used for the numerical examples reorthogonalizes the columns of the matrices $V_{l}$, $U_{l}$, and $W_{l}$ in the decompositions (2.8) and (3.6). This secures numerical orthonormality of the columns and may somewhat reduce the number of matrix-vector products required to solve the problems (compared with no reorthogonalization). However, reorthogonalization requires storage of all the generated columns. For instance, in Example 4.3, Lanczos bidiagonalization with reorthogonalization requires storage of $V_{26}$ and $U_{26}$, and Lanczos tridiagonalization with reorthogonalization requires storage of $W_{26}$. The latter matrix may overwrite the former. We generally apply reorthogonalization when it is important to keep the number of matrix-vector product evaluations as small as possible, and when sufficient computer storage is available for the matrices $V_{l}$, $U_{l}$, and $W_{l}$. The effect of the loss of numerical orthogonality that may arise when no reorthogonalization is carried out requires further study. Without reorthogonalization, the method of the present paper can be implemented to require storage of only a few columns of $V_{l}$, $U_{l}$, and $W_{l}$ simultaneously, at the expense of having to compute the matrices $V_{l}$, $U_{l}$, and $W_{l}$ twice. The scheme by Rojas and Steihaug [15] is based on the implicitly restarted Arnoldi method, and therefore its storage requirement can be kept below a predetermined bound.

5. Conclusion. The computed examples illustrate that our numerical method for Tikhonov regularization with nonnegativity constraint can give a more pleasing approximation of the exact solution $\hat{x}$ than the scheme of Section 2, when the latter gives an oscillatory solution. Our method is closely related to a scheme recently proposed by Rojas and Steihaug [15]. A careful comparison between their method and ours is difficult, because few details on the numerical experiments are provided in [15]. Nevertheless, we feel that our approach to Tikhonov regularization with nonnegativity constraint, based on the connection between orthogonal polynomials, Gauss quadrature, and the Lanczos process, is of independent interest. Moreover, Examples 4.2 and 4.4 indicate that our method may be competitive.

REFERENCES

[1] G. F. AHMAD, D. H. BROOKS, AND R. S. MACLEOD, An admissible solution approach to inverse electrocardiography, Ann. Biomed. Eng., 26 (1998), pp. 278-292.

[2] M. BERTERO AND P. BOCCACCI, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing, Bristol, 1998.

[3] D. CALVETTI, G. H. GOLUB, AND L. REICHEL, Estimation of the L-curve via Lanczos bidiagonalization, BIT, 39 (1999), pp. 603-619.

[4] D. CALVETTI AND L. REICHEL, Gauss quadrature applied to trust region computations, Numer. Algorithms, 34 (2003), pp. 85-102.

[5] D. CALVETTI AND L. REICHEL, Tikhonov regularization with a solution constraint, SIAM J. Sci. Comput., 26 (2004), pp. 224-239.

[6] G. H. GOLUB AND G. MEURANT, Matrices, moments and quadrature, in Numerical Analysis 1993, D. F. Griffiths and G. A. Watson, eds., Longman, Essex, England, 1994, pp. 105-156.

[7] G. H. GOLUB AND U. VON MATT, Quadratically constrained least squares and quadratic problems, Numer. Math., 59 (1991), pp. 561-580.

[8] M. HANKE, J. NAGY, AND C. VOGEL, Quasi-Newton approach to nonnegative image restorations, Linear Algebra Appl., 316 (2000), pp. 223-236.

[9] P. C. HANSEN, Regularization tools: A Matlab package for analysis and solution of discrete ill-posed problems, Numer. Algorithms, 6 (1994), pp. 1-35. Software is available in Netlib at http://www.netlib.org.

[10] J. G. NAGY, RestoreTools: An object oriented Matlab package for image restoration. Available at the web site http://www.mathcs.emory.edu/~nagy/.

[11] J. NAGY AND Z. STRAKOS, Enforcing nonnegativity in image reconstruction algorithms, in Mathematical Modeling, Estimation and Imaging, D. C. Wilson et al., eds., Society of Photo-Optical Instrumentation Engineers (SPIE), 4121, The International Society for Optical Engineering, Bellingham, WA, 2000, pp. 182-190.

[12] D. L. PHILLIPS, A technique for the numerical solution of certain integral equations of the first kind, J. ACM, 9 (1962), pp. 84-97.

[13] M. ROJAS, LSTRS Matlab software. Available at the web site http://www.math.wfu.edu/~mrojas/software.html.

[14] M. ROJAS AND D. C. SORENSEN, A trust-region approach to regularization of large-scale discrete forms of ill-posed problems, SIAM J. Sci. Comput., 23 (2002), pp. 1842-1860.

[15] M. ROJAS AND T. STEIHAUG, An interior-point trust-region-based method for large-scale non-negative regularization, Inverse Problems, 18 (2002), pp. 1291-1307.

* Received February 10, 2004. Accepted for publication October 11, 2004. Recommended by R. Plemmons.

D. CALVETTI (†), B. LEWIS (‡), L. REICHEL (§), AND F. SGALLARI (¶)

(†) Department of Mathematics, Case Western Reserve University, Cleveland, OH 44106, U.S.A. E-mail: dxc57@po.cwru.edu. Research supported in part by NSF grant DMS-0107841 and NIH grant GM-66309-01.

(‡) Rocketcalc, 100 W. Crain Ave., Kent, OH 44240, U.S.A. E-mail: blewis@rocketcalc.com.

(§) Department of Mathematical Sciences, Kent State University, Kent, OH 44242, U.S.A. E-mail: reichel@math.kent.edu. Research supported in part by NSF grant DMS-0107858.

(¶) Dipartimento di Matematica, Università di Bologna, Piazza P.ta S. Donato 5, 40127 Bologna, Italy. E-mail: sgallari@dm.unibo.it.