
A note on preconditioners and scalar products in Krylov subspace methods for self-adjoint problems in Hilbert space.

1. Introduction. In this note we consider Krylov subspace methods for the solution of linear equations

(1.1)   $Ax = b$ in $X^*$.

The unknown $x$ belongs to a Hilbert space $X$, and $X^*$ denotes its dual. The linear operator $A$ maps $X$ to $X^*$, and it is assumed to be bounded, i.e., $A \in \mathcal{L}(X, X^*)$, and self-adjoint. The right-hand side $b$ belongs to $X^*$. This is a natural setting for a variational formulation of some boundary-value problems involving linear second-order elliptic partial differential equations, as illustrated by the examples below.

We shall consider the solution of (1.1) by the conjugate gradient (CG) and minimal residual (MINRES) methods, depending on whether or not positive definiteness (coercivity) of $A$ is assumed. These methods were originally introduced in [14, 20] for the case $X = \mathbb{R}^n$. Generalizations to infinite-dimensional Hilbert spaces have been considered in [1, 5, 7, 8, 11, 13, 18, 19] and the references therein; see also the references in [12].

In most of the above publications, a problem $Bx = c$ is considered, where $B$ is assumed to map $X$ into itself. Clearly, this complies with the present setting when we set $B := RA$, where $R \in \mathcal{L}(X^*, X)$ denotes the Riesz isomorphism. This point of view is taken, at least implicitly, in [1, 11, 16] and in [2, Section 3]. In this presentation, we prefer the setting (1.1), and we keep the Riesz map explicit. This makes it easier to pinpoint the fact that the Riesz isomorphism takes precisely the role of a preconditioner. (Also in finite dimensions it is often preferred to keep the preconditioner explicit rather than to merge it with the system matrix.)

As a matter of fact, Krylov subspace methods in Hilbert spaces for (1.1) cannot be formulated without referring to the Riesz isomorphism/preconditioner, since powers such as $A^2$ have no meaning due to non-matching mapping properties. Since in turn the Riesz map is defined by the scalar product in $X$ (or $X^*$), we conclude that selecting a preconditioner for problem (1.1) is the same as selecting the scalar product in $X$ (or $X^*$). If $X = \mathbb{R}^n$, an "unpreconditioned" method is one where implicitly the Euclidean scalar product is used.

In addition to this insight, we obtain an elegant derivation of the preconditioned CG and MINRES methods, which avoids altogether the temporary use of Cholesky factors. Finally, our work also explains why it is natural to have positive definite preconditioners when solving self-adjoint indefinite problems with MINRES.

The significance of the Riesz map or the scalar product, respectively, is pointed out in many publications to varying degrees. We mention [10, Chapter 16] and [11], where the CG method is described in a variational setting and the Riesz isomorphism appears implicitly in the conversion of the residual to the gradient. In [16], the importance of the Riesz map is substantiated by the role it plays in the proof of the Lax-Milgram lemma, which applies to the case of positive definite but possibly non-self-adjoint operators A. Also recently in [25], appropriate scalar products are studied, which can guide the choice of block-diagonal preconditioners for self-adjoint saddle-point problems.

We are thus aware that our observation, namely that preconditioning and choosing the inner product in $X$ are one and the same for CG and MINRES, would be considered folklore by some. Nevertheless, it is our experience that this fact is not as widely known as one would expect, and it seems difficult to find references which pinpoint it. This note was written to serve as a reference.

The following two prototypical examples illustrate that considering $A \in \mathcal{L}(X, X^*)$ is natural for variational problems. The first example is Poisson's equation $-\Delta u = f$ on a $d$-dimensional bounded domain $\Omega$, endowed for simplicity with homogeneous Dirichlet boundary conditions. Its variational formulation is given by (1.1) with $X = H^1_0(\Omega)$ and

$\langle Au, v\rangle = \int_\Omega \nabla u \cdot \nabla v \, dx \qquad \text{and} \qquad \langle b, v\rangle = \int_\Omega f v \, dx.$

As usual, $H^1_0(\Omega)$ denotes the space of Sobolev functions which are square integrable up to their weak first-order derivatives and which have zero boundary values in the sense of traces. The Poisson problem gives rise to a positive definite operator (when $X$ is endowed with the standard $H^1$-norm or $H^1$-seminorm) so that (1.1) is amenable to solution by the CG method.
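As an aside, the discrete counterpart of this operator is the familiar stiffness matrix. The following minimal sketch (a hypothetical 1D analogue on $\Omega = (0,1)$ with $f \equiv 1$, not taken from the paper) assembles the piecewise-linear finite element system and checks it against the exact solution:

```python
import numpy as np

# Hypothetical 1D analogue of the Poisson example: -u'' = f on (0,1)
# with u(0) = u(1) = 0, discretized by piecewise-linear finite elements
# on n interior nodes. The stiffness matrix is the discrete version of
# the operator A mapping the FE subspace of H^1_0(0,1) into its dual.
n = 99
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector for f = 1: <b, phi_i> = integral of phi_i = h.
b = h * np.ones(n)

u = np.linalg.solve(A, b)

# For -u'' = 1 the exact solution is u(x) = x(1 - x)/2, and 1D linear
# finite elements reproduce it exactly at the nodes.
x = np.linspace(h, 1.0 - h, n)
err = np.max(np.abs(u - x * (1.0 - x) / 2.0))
print(err)
```

The nodal exactness observed here is a special feature of the 1D problem; the point is only that the discretized $A$ acts between a finite element space and its dual.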

The Stokes problem in fluid dynamics, by contrast, leads to an indefinite system which can be solved by MINRES. The associated variational formulation employs the Hilbert space $X = H^1_0(\Omega) \times \cdots \times H^1_0(\Omega) \times L^2_0(\Omega)$ and is given by

$\langle A(u, p), (v, q)\rangle = \int_\Omega \nabla u : \nabla v \, dx - \int_\Omega p \, \operatorname{div} v \, dx - \int_\Omega q \, \operatorname{div} u \, dx$

and $b$ as above; see, for instance, [6, Section 5]. Here $L^2_0(\Omega)$ is the space of square integrable functions with zero mean.

This paper is structured as follows. Section 2 addresses the CG method in Hilbert spaces for the self-adjoint and positive definite (coercive) case. Section 3 is devoted to the MINRES method in Hilbert spaces for self-adjoint and possibly indefinite problems.

Throughout, $\mathcal{L}(X, X^*)$ and $\mathcal{L}(X)$ denote the spaces of bounded linear operators $X \to X^*$ or $X \to X$, respectively. Moreover, $\langle \cdot, \cdot\rangle_{X^*\!,X}$ or, for short, $\langle \cdot, \cdot\rangle$ denotes the duality pairing of the Hilbert space $X$ and its dual $X^*$. We shall denote by $R \in \mathcal{L}(X^*, X)$ the Riesz map. Given $b \in X^*$, its Riesz representation $Rb$ is defined by $(Rb, x)_X = \langle b, x\rangle$ for all $x \in X$, where $(\cdot, \cdot)_X$ is the scalar product in $X$. Clearly, the Riesz map $R$ depends on the choice of the scalar product in $X$.

Let us mention that we do not address here the CG method for indefinite systems in so-called non-standard inner products as treated, for instance, in [4, 22, 23, 24].

2. The conjugate gradient method. In this section, $A \in \mathcal{L}(X, X^*)$ is assumed self-adjoint, i.e., $\langle Ax, y\rangle = \langle Ay, x\rangle$ for all $x, y \in X$, and positive definite (coercive), i.e., $\langle Ax, x\rangle \ge \delta \, \|x\|_X^2$ for some $\delta > 0$ and all $x \in X$. This implies that $A$ induces a norm, $\|x\|_A = \langle Ax, x\rangle^{1/2}$, which is equivalent to the norm $\|\cdot\|_X$. The unique solution of (1.1) is denoted by $x_*$.

The CG method, developed in [14], can be conceived in several ways. We follow here the derivation based on one-dimensional minimization of the error in the $A$-norm along a predetermined search direction, where the search directions are updated using the direction of steepest descent while maintaining their $A$-conjugacy.

Given an iterate $x_k \in X$ and a search direction $p_k \in X$, the CG method seeks to minimize the value of

$\phi(x) = \tfrac{1}{2}\langle Ax, x\rangle - \langle b, x\rangle = \tfrac{1}{2}\|x - x_*\|_A^2 - \tfrac{1}{2}\|x_*\|_A^2$

along the line $x_k + \alpha p_k$. This minimum is attained at

$\alpha_k := \frac{\langle r_k, p_k\rangle}{\langle A p_k, p_k\rangle},$

where $r_k := b - Ax_k \in X^*$ denotes the residual. The search direction $p_k$ is chosen as a linear combination of the previous direction $p_{k-1}$ and the direction of steepest descent $d_k$ for $\phi$ at $x_k$.

2.1. The direction of steepest descent. It is now important to realize that the steepest descent direction depends upon the scalar product in $X$. Let us denote by $\phi'(x_k)$ the (Fréchet) derivative of $\phi$ at $x_k$, which is, by definition, an element of $X^*$. The directional derivative becomes $\phi'(x_k)\,d = \langle Ax_k - b, d\rangle$ for arbitrary directions $d \in X$. The direction of steepest descent $d_k$ has, by its definition, the property that

(2.1)   $d_k$ minimizes $\phi'(x_k)\,d = \langle Ax_k - b, d\rangle = -\langle r_k, d\rangle$

over all $d \in X$ of constant norm. To solve the problem (2.1), we apply the Riesz map to obtain the representation

$\phi'(x_k)\,d = -(R r_k, d)_X.$

The Cauchy-Schwarz inequality now readily shows $d_k = R r_k$ to be the direction of steepest descent for $\phi$ at $x_k$. Since the Riesz map depends on the scalar product in $X$, so does the direction of steepest descent $d_k$. Note that the direction of steepest descent of $\phi$ is also known as its gradient. We point out that even in $\mathbb{R}^n$ it is useful to distinguish the derivative of a function (which does not depend on the scalar product) from the gradient (which does).

For instance, let $A \in \mathbb{R}^{n \times n}$ be symmetric, and consider it as a linear map ($x \mapsto Ax$) from $X = \mathbb{R}^n$ to its dual. The directional derivative of $\phi(x) = \tfrac{1}{2}x^T A x - b^T x$ in the direction $d \in \mathbb{R}^n$ is $\phi'(x)\,d = (Ax - b)^T d$. Then we distinguish the derivative $\phi'(x) = Ax - b \in X^*$ from the gradient $\nabla\phi(x)$. The latter depends on the scalar product in $X$, which is represented by a symmetric positive definite matrix $P$. We write $(x, y)_X$ or $(x, y)_P$ for the scalar product of two elements $x, y \in X$. According to (2.1), the direction $d \in X$ is the gradient of $\phi$ at $x$ if it minimizes

$\phi'(x)\,d = (Ax - b)^T d = (P^{-1}(Ax - b), d)_P$

over all $d \in X$ of constant norm. This shows that $\nabla\phi(x) = P^{-1}(Ax - b)$.
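This distinction is easy to verify numerically. The sketch below uses hypothetical random matrices $A$ (symmetric) and $P$ (symmetric positive definite) and checks that, among directions of unit $P$-norm, none descends faster than the negative gradient $-P^{-1}(Ax - b)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Hypothetical data: symmetric A, SPD scalar-product matrix P, point x.
M = rng.standard_normal((n, n))
A = M + M.T
Q = rng.standard_normal((n, n))
P = Q @ Q.T + n * np.eye(n)
x = rng.standard_normal(n)
b = rng.standard_normal(n)

deriv = A @ x - b                  # derivative phi'(x): independent of P
grad = np.linalg.solve(P, deriv)   # gradient nabla phi(x) = P^{-1}(Ax - b)

def p_norm(d):
    return np.sqrt(d @ P @ d)

# Among directions of unit P-norm, -grad/|grad|_P yields the smallest
# (most negative) directional derivative phi'(x) d.
d_star = -grad / p_norm(grad)
best = deriv @ d_star
for _ in range(1000):
    d = rng.standard_normal(n)
    d /= p_norm(d)
    assert deriv @ d >= best - 1e-12   # no direction descends faster
print(best)
```

Changing $P$ changes `grad` and `d_star`, while `deriv` stays fixed, which is exactly the point of the paragraph above.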

2.2. The conjugate gradient method finalized. Using a linear combination of $d_k$ and the previous search direction $p_{k-1}$, i.e.,

(2.2)   $p_k = R r_k + \beta_k p_{k-1},$

the requirement $\langle A p_k, p_{k-1}\rangle = 0$ of $A$-conjugacy of search directions leads to the choice

$\beta_k := -\frac{\langle A p_{k-1}, R r_k\rangle}{\langle A p_{k-1}, p_{k-1}\rangle}.$

The procedure outlined above generates the following iterates and Krylov subspaces of dimension $k \ge 1$:

$x_k \in x_0 + \mathcal{K}_k(RA;\, R r_0), \qquad \mathcal{K}_k(RA;\, R r_0) := \operatorname{span}\{R r_0,\, (RA) R r_0,\, \ldots,\, (RA)^{k-1} R r_0\} \subset X.$

Here, the primal and dual Krylov subspaces are related by $\mathcal{K}_k(RA;\, R r_0) = R\,\mathcal{K}_k(AR;\, r_0)$, where $\mathcal{K}_k(AR;\, r_0) := \operatorname{span}\{r_0,\, (AR) r_0,\, \ldots,\, (AR)^{k-1} r_0\} \subset X^*$.
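The identity $(RA)^j R r_0 = R (AR)^j r_0$ behind this relation can be checked directly; the sketch below uses small random stand-ins for $A$ and $R$ (both hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 4
# Hypothetical stand-ins: symmetric A and an SPD Riesz map R = P^{-1}.
M = rng.standard_normal((n, n))
A = M + M.T
Q = rng.standard_normal((n, n))
R = np.linalg.inv(Q @ Q.T + n * np.eye(n))
r0 = rng.standard_normal(n)

# Primal Krylov vectors (RA)^j R r0 and dual Krylov vectors (AR)^j r0.
primal = [np.linalg.matrix_power(R @ A, j) @ (R @ r0) for j in range(k)]
dual = [np.linalg.matrix_power(A @ R, j) @ r0 for j in range(k)]

# (RA)^j R r0 = R (AR)^j r0, hence K_k(RA; R r0) = R K_k(AR; r0).
for pj, dj in zip(primal, dual):
    assert np.allclose(pj, R @ dj)
print("Krylov subspace identity holds")
```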

It follows by standard arguments that the iterates also satisfy

$\langle r_k, p_j\rangle = 0$ for all $j = 0, 1, \ldots, k-1$,

$\langle r_k, R r_j\rangle = 0$ for all $j = 0, 1, \ldots, k-1$,

$\langle A p_k, p_j\rangle = 0$ for all $j = 0, 1, \ldots, k-1$,

and that $x_k$ minimizes $\phi$ over the entire affine space $x_0 + \mathcal{K}_k(RA;\, R r_0)$. Using these relations, one arrives at the final form of the CG method in a Hilbert space; see Algorithm 2.1.
ALGORITHM 2.1 (CG Method for (1.1) in a Hilbert Space).

 1: Set $r_0 := b - Ax_0 \in X^*$
 2: Set $p_0 := R r_0 \in X$
 3: Set $k := 0$
 4: while not converged do
 5:   Set $\alpha_k := \langle r_k, R r_k\rangle / \langle A p_k, p_k\rangle$
 6:   Set $x_{k+1} := x_k + \alpha_k p_k$
 7:   Set $r_{k+1} := r_k - \alpha_k A p_k$
 8:   Set $\beta_{k+1} := \langle r_{k+1}, R r_{k+1}\rangle / \langle r_k, R r_k\rangle$
 9:   Set $p_{k+1} := R r_{k+1} + \beta_{k+1} p_k$
10:  Set $k := k + 1$
11: end while
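In finite dimensions, Algorithm 2.1 can be transcribed almost line by line. The sketch below is one possible rendering, with a hypothetical Jacobi-type scalar product $P = \operatorname{diag}(A)$, so that applying $R$ means dividing by the diagonal:

```python
import numpy as np

def cg(apply_A, b, riesz, x0, tol=1e-10, maxiter=200):
    """Transcription of Algorithm 2.1 with the Riesz map as preconditioner.

    apply_A maps X to X* (the operator A); riesz maps X* back to X
    (application of R, i.e., of P^{-1} for a preconditioner P).
    """
    x = x0.copy()
    r = b - apply_A(x)             # residual r_0, an element of X*
    z = riesz(r)                   # R r_0, an element of X
    p = z.copy()
    rz = r @ z                     # <r_k, R r_k>
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rz / (Ap @ p)      # <r_k, R r_k> / <A p_k, p_k>
        x += alpha * p
        r -= alpha * Ap
        z = riesz(r)
        rz_new = r @ z
        if np.sqrt(rz_new) < tol:  # R-norm of the residual
            break
        p = z + (rz_new / rz) * p  # beta_{k+1} = <r_{k+1}, Rr_{k+1}> / <r_k, Rr_k>
        rz = rz_new
    return x

# Hypothetical test problem: an SPD matrix A_mat with the scalar product
# in X represented by P = diag(A_mat).
rng = np.random.default_rng(2)
n = 50
M = rng.standard_normal((n, n))
A_mat = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
d = np.diag(A_mat)
x = cg(lambda v: A_mat @ v, b, lambda s: s / d, np.zeros(n))
print(np.linalg.norm(A_mat @ x - b))
```

Note that the code never forms $RA$; the operator and the Riesz map stay separate, as in the Hilbert space formulation.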

Comparing Algorithm 2.1 to the standard forms of the CG method in the literature, it stands out that the Riesz map $R$ takes precisely the role of the application of a preconditioner $P$, i.e., the evaluation of $P^{-1}$ times a vector. Recalling that the Riesz map depends on the scalar product, we conclude that the choice of a preconditioner in the traditional preconditioned CG method is actually equivalent to the choice of the scalar product in $X$. Even if $X = \mathbb{R}^n$, the preconditioned and unpreconditioned variants of the CG method are one and the same; they merely differ in the choice of the scalar product in $X$. The unpreconditioned CG method corresponds to the implicit choice of the Euclidean scalar product.

In infinite dimensions, the use of the Riesz map is essential in formulating the CG method for the solution of (1.1). Without it, the residual $r_k \in X^*$ cannot be used in (2.2) to update the search direction $p_k \in X$ since these two vectors belong to different spaces.

The convergence properties of Algorithm 2.1 in a Hilbert space depend on the spectrum of the preconditioned operator $RA \in \mathcal{L}(X)$, which is self-adjoint with respect to $(\cdot, \cdot)_X$, i.e.,

$(RAx, y)_X = (x, RAy)_X.$

The spectrum is the complement of $\{\mu \in \mathbb{R} : (RA - \mu I)^{-1} \text{ exists in } \mathcal{L}(X)\}$ in $\mathbb{R}$. In the finite-dimensional situation, the spectrum consists of the eigenvalues of the generalized eigenvalue problem $Ax = \lambda R^{-1} x$. Using the condition number, i.e., the quotient of the extremal points in the spectrum of $RA$, one can reproduce the linear convergence bound of the finite-dimensional preconditioned CG method; see, e.g., [6, Section 2.2] and [5, Section 1.2]. We emphasize, however, that linear bounds based on the condition number are not, in general, descriptive for the practical convergence behaviour of the CG method; see [17, Section 5.6] or [9, 15].
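In finite dimensions this generalized eigenvalue problem can be solved directly. The following sketch (random SPD test matrix and a Jacobi preconditioner, both hypothetical) compares the spectrum of the symmetric-definite pencil $(A, P)$ with the eigenvalues of $P^{-1}A$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 40
# Hypothetical SPD operator A and preconditioner P = R^{-1} (Jacobi).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
P = np.diag(np.diag(A))

# The spectrum of RA = P^{-1}A is that of the generalized eigenvalue
# problem A x = lambda P x, which scipy's eigh solves directly.
lam = eigh(A, P, eigvals_only=True)      # ascending eigenvalues
cond_precond = lam[-1] / lam[0]          # condition number of RA
print(cond_precond, np.linalg.cond(A))
```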

The significance of (the inverse of) R as a preconditioner was recently also pointed out in [16] and [18]. Good preconditioners (Riesz maps) are those which are induced by scalar products in X close to the A-scalar product. This statement continues to hold when X is replaced by one of its finite-dimensional subspaces as one does, e.g., in conforming finite element discretizations of (1.1). We thus infer that practical preconditioners/scalar products for discretized versions of the operator equation Ax = b can be derived by studying the undiscretized problem.

3. The minimal residual method. In this section, $A \in \mathcal{L}(X, X^*)$ is assumed self-adjoint but it may be indefinite (non-coercive). Note that $X^*$ is naturally endowed with the scalar product

(3.1)   $(\cdot, \cdot)_{X^*} = \langle \cdot, R\,\cdot\rangle,$

in which the Riesz map pulls one of the factors from $X^*$ back into $X$. We sometimes refer to the induced norm $\|r\|_{X^*} = \langle r, Rr\rangle^{1/2} = \|r\|_R$ as the $R$-norm.

The MINRES method, introduced in [20], uses the same Krylov subspaces as the CG method, but it seeks to minimize the $R$-norm of the residual $r_k = b - Ax_k \in X^*$, i.e.,

$\|r_k\|_R = \langle r_k, R r_k\rangle^{1/2},$

over the shifted dual Krylov subspaces $r_0 + A\,\mathcal{K}_k(RA;\, R r_0) = r_0 + AR\,\mathcal{K}_k(AR;\, r_0) \subset X^*$. To carry out this minimization, MINRES builds an orthonormal basis (with respect to the inner product (3.1)) of $\mathcal{K}_k(AR;\, r_0) \subset X^*$, which is denoted by $V_k \in \mathcal{L}(\mathbb{R}^k, X^*)$ with "columns" $v_i \in X^*$, $i = 1, \ldots, k$.

Orthonormality, i.e., $\langle v_i, R v_j\rangle = \delta_{ij}$, is obtained via the Lanczos recursion

(3.2)   $AR\, V_k = V_k T_k + \gamma_{k+1} v_{k+1} \hat e_k^T = V_{k+1} \widehat T_k$

with $\hat e_k = (0, \ldots, 0, 1)^T \in \mathbb{R}^k$ and a coefficient matrix of the form

$\widehat T_k = \begin{pmatrix} \delta_1 & \gamma_2 & & \\ \gamma_2 & \delta_2 & \ddots & \\ & \ddots & \ddots & \gamma_k \\ & & \gamma_k & \delta_k \\ & & & \gamma_{k+1} \end{pmatrix} \in \mathbb{R}^{(k+1) \times k}.$

The matrix $T_k \in \mathbb{R}^{k \times k}$ in (3.2) equals $\widehat T_k$ without the last row.
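The recursion is straightforward to transcribe in finite dimensions. The sketch below (random symmetric $A$ and SPD Riesz map $R$, both hypothetical) runs the Lanczos process in the $R$-inner product and verifies the $R$-orthonormality of the basis together with the recursion itself:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 30, 6
# Hypothetical symmetric (possibly indefinite) A and SPD Riesz map R.
M = rng.standard_normal((n, n))
A = M + M.T
Q = rng.standard_normal((n, n))
R = np.linalg.inv(Q @ Q.T + n * np.eye(n))
r0 = rng.standard_normal(n)

# Lanczos for AR in the inner product (v, w)_R = v^T R w of (3.1).
V = np.zeros((n, k + 1))
T = np.zeros((k + 1, k))          # the matrix T_hat_k
V[:, 0] = r0 / np.sqrt(r0 @ R @ r0)
v_old = np.zeros(n)
gam = 0.0
for j in range(k):
    w = A @ (R @ V[:, j]) - gam * v_old   # subtract previous gamma term
    delta = w @ R @ V[:, j]               # diagonal coefficient delta
    w -= delta * V[:, j]
    v_old = V[:, j]
    gam = np.sqrt(w @ R @ w)              # next off-diagonal gamma
    T[j, j] = delta
    T[j + 1, j] = gam
    if j + 1 < k:
        T[j, j + 1] = gam                 # symmetric tridiagonal part
    V[:, j + 1] = w / gam

# Columns of V are R-orthonormal and AR V_k = V_{k+1} T_hat_k.
orth_err = np.max(np.abs(V.T @ R @ V - np.eye(k + 1)))
rec_err = np.max(np.abs(A @ R @ V[:, :k] - V @ T))
print(orth_err, rec_err)
```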

Using the basis $V_k$, the iterates $x_k \in x_0 + \mathcal{K}_k(RA;\, R r_0)$ and residuals $r_k$ can be written as

$x_k = x_0 + R V_k \hat y_k,$

$r_k = r_0 - A R V_k \hat y_k = r_0 - V_{k+1} \widehat T_k \hat y_k,$

for some $\hat y_k \in \mathbb{R}^k$. The objective of minimizing the residual in the $R$-norm can thus be expressed as the minimization over all $\hat y \in \mathbb{R}^k$ of

$\|r_0 - V_{k+1} \widehat T_k \hat y\|_R = \|\gamma_1 e_1 - \widehat T_k \hat y\|_2, \qquad e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^{k+1},$

where we used $r_0 = \gamma_1 v_1$ and the $R$-orthonormality of the columns of $V_{k+1}$.
We conclude that the minimization of the residual in the $R$-norm leads to a least-squares problem in $\mathbb{R}^{k+1}$ with respect to the Euclidean norm, regardless of the space in which the original problem $Ax = b$ is posed. Therefore, from here on, the derivation of the Hilbert space version of MINRES parallels the derivation of the classical finite-dimensional method. We only mention that the least-squares problem is solved by maintaining a QR factorization of the matrices $\widehat T_k$ by means of Givens rotations. For convenience, we state the complete algorithm as Algorithm 3.1. It coincides with the (preconditioned) implementation given in [6, Algorithm 6.1] except that in Algorithm 3.1 we scale both quantities $v_k$ and $z_k$ such that $\|v_k\|_R = 1$ and $z_k = R v_k$ are maintained throughout.
ALGORITHM 3.1 (MINRES Method for (1.1) in a Hilbert Space).

 1: Set $v_0 := 0 \in X^*$ and $w_0 := w_1 := 0 \in X$
 2: Set $v_1 := b - Ax_0 \in X^*$
 3: Set $z_1 := R v_1$
 4: Set $\gamma_1 := \langle v_1, z_1\rangle^{1/2}$
 5: Set $z_1 := z_1/\gamma_1$ and $v_1 := v_1/\gamma_1$
 6: Set $\eta_0 := \gamma_1$, $s_0 := s_1 := 0$, $c_0 := c_1 := 1$
 7: Set $k := 1$
 8: while not converged do
 9:   Set $\delta_k := \langle A z_k, z_k\rangle$
10:   Set $v_{k+1} := A z_k - \delta_k v_k - \gamma_k v_{k-1}$
11:   Set $z_{k+1} := R v_{k+1}$
12:   Set $\gamma_{k+1} := \langle v_{k+1}, z_{k+1}\rangle^{1/2}$
13:   Set $z_{k+1} := z_{k+1}/\gamma_{k+1}$ and $v_{k+1} := v_{k+1}/\gamma_{k+1}$
14:   Set $\alpha_0 := c_k \delta_k - c_{k-1} s_k \gamma_k$ and $\alpha_1 := (\alpha_0^2 + \gamma_{k+1}^2)^{1/2}$
15:   Set $\alpha_2 := s_k \delta_k + c_{k-1} c_k \gamma_k$ and $\alpha_3 := s_{k-1} \gamma_k$
16:   Set $c_{k+1} := \alpha_0/\alpha_1$ and $s_{k+1} := \gamma_{k+1}/\alpha_1$
17:   Set $w_{k+1} := (1/\alpha_1)\,[z_k - \alpha_3 w_{k-1} - \alpha_2 w_k]$
18:   Set $x_k := x_{k-1} + c_{k+1} \eta_{k-1} w_{k+1}$
19:   Set $\eta_k := -s_{k+1} \eta_{k-1}$
20:   Set $k := k + 1$
21: end while
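Rather than re-implementing Algorithm 3.1, one can observe the same structure in an off-the-shelf preconditioned MINRES. The sketch below (a hypothetical diagonal indefinite $A$ with the SPD preconditioner $P = |A|$, so that $RA$ has spectrum $\{\pm 1\}$) uses scipy, whose `M` argument plays exactly the role of the Riesz map:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

# Hypothetical symmetric indefinite system: a diagonal A with both
# positive and negative eigenvalues.
n = 50
diag = np.r_[np.linspace(1.0, 100.0, n), -np.linspace(1.0, 100.0, n)]
A = np.diag(diag)
rng = np.random.default_rng(4)
b = rng.standard_normal(2 * n)

# The preconditioner must be SPD: with P = |A|, the Riesz map R = P^{-1}
# is SPD, (3.1) is a genuine scalar product, and RA has spectrum {-1, +1},
# so MINRES converges in two iterations.
R = LinearOperator((2 * n, 2 * n), matvec=lambda v: v / np.abs(diag),
                   dtype=float)

x, info = minres(A, b, M=R)      # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```

An indefinite `M` would destroy the scalar product (3.1) and with it the minimization property, which is the point made in the paragraph below.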

We mention that the quantity

$|\eta_k| = \|r_k\|_R$

gives access to the R-norm of the residual, which is minimized over the sequence of growing shifted Krylov subspaces.

As we have observed for the CG method, we conclude that the choice of the preconditioner is equivalent to choosing the Riesz map, i.e., equivalent to choosing the scalar product in $X$. This observation also recently guided the study of appropriate scalar products for symmetric saddle-point problems in [25]. Finally, the exposition above explains why indefinite systems, which are to be solved by MINRES, require self-adjoint positive definite preconditioners; compare, e.g., [3, Table 9.1].

REMARK 3.2. For non-symmetric problems in finite dimensions, the replacement of the symmetric Lanczos process by the Arnoldi process leads from MINRES to the generalized minimal residual method (GMRES) introduced in [21]. The coefficient matrix $\widehat T_k$ will be upper Hessenberg; see, for instance, [6, Section 4.1.1] or [17, Section 2.5.5]. Precisely the same modifications lead to GMRES for non-self-adjoint problems of type (1.1) in a Hilbert space setting.

4. Conclusions. In this paper we consider the CG and MINRES methods for self-adjoint operator equations $Ax = b$ with $A \in \mathcal{L}(X, X^*)$, where $X$ is a Hilbert space. We present a comprehensive derivation of these methods, which shows that both CG and MINRES inevitably depend on the scalar product one chooses in $X$. We also see that the "preconditioned" and "unpreconditioned" versions of these algorithms, which are often presented as distinct methods, are actually one and the same. The "unpreconditioned" versions simply correspond to the implicit choice of the Euclidean scalar product, which, of course, is only possible if $X \cong \mathbb{R}^n$. We emphasize that the choice of a preconditioner for CG and MINRES is equivalent to the choice of the scalar product in $X$. This choice, even for discretized versions of the problem, can be guided by an analysis of the spectrum carried out for the undiscretized problem, e.g., at the level of the partial differential equation to be solved. This is the central idea behind operator preconditioning techniques developed, for instance, in [2, 15, 18]. Condition numbers bounded independently of the mesh size can be achieved in this way; such bounds are, however, not in one-to-one correspondence with the fast convergence of Krylov subspace methods.

In a forthcoming publication we will continue this work and address convergence results for CG and MINRES including a new superlinear convergence result for MINRES.

Acknowledgments. This work was supported by a DAAD travel grant 50743976 Pre-conditioners in PDE-Constrained Optimization. The authors would like to thank Andy Wathen for fruitful discussions and the Numerical Analysis group at Oxford University for their hospitality. They also thank Martin Stoll, Zdenek Strakos, and one anonymous referee for their valuable comments on earlier versions of the manuscript, which helped to improve the presentation significantly.


[1] O. AXELSSON AND J. KARÁTSON, On the rate of convergence of the conjugate gradient method for linear operators in Hilbert space, Numer. Funct. Anal. Optim., 23 (2002), pp. 285-302.

[2] --, Equivalent operator preconditioning for elliptic problems, Numer. Algorithms, 50 (2009), pp. 297-380.

[3] M. BENZI, G. GOLUB, AND J. LIESEN, Numerical solution of saddle point problems, Acta Numer., 14 (2005), pp. 1-137.

[4] J. BRAMBLE AND J. PASCIAK, A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems, Math. Comp., 50 (1988), pp. 1-17.

[5] J. W. DANIEL, The conjugate gradient method for linear and nonlinear operator equations, SIAM J. Numer. Anal., 4 (1967), pp. 10-26.

[6] H. ELMAN, D. SILVESTER, AND A. WATHEN, Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics, Oxford University Press, New York, 2005.

[7] O. ERNST, Minimal and Orthogonal Residual Methods and their Generalizations for Solving Linear Operator Equations. Habilitation Thesis, Faculty of Mathematics and Computer Sciences, TU Bergakademie Freiberg, 2000.

[8] Z. FORTUNA, Some convergence properties of the conjugate gradient method in Hilbert space, SIAM J. Numer. Anal., 16 (1979), pp. 380-384.

[9] T. GERGELITS AND Z. STRAKOŠ, Composite convergence bounds based on Chebyshev polynomials and finite precision conjugate gradient computations, Numer. Algorithms, in press, published online June 2, 2013, doi 10.1007/s11075-013-9713-z.

[10] R. GLOWINSKI, Finite element methods for incompressible viscous flow, in Handbook of Numerical Analysis, Vol. IX. Numerical Methods for Fluids, P. G. Ciarlet and J. L. Lions, eds., North-Holland, Amsterdam, 2003, pp. 3-1176.

[11] R. GLOWINSKI AND S. LAPIN, Iterative solution of linear variational problems in Hilbert spaces: some conjugate gradients success stories, in Conjugate Gradient Algorithms and Finite Element Methods, M. Křížek, P. Neittaanmäki, R. Glowinski, and S. Korotov, eds., Springer, Berlin, 2004, pp. 223-245.

[12] G. H. GOLUB AND D. P. O'LEARY, Some history of the conjugate gradient and Lanczos algorithms: 1948-1976, SIAM Rev., 31 (1989), pp. 50-102.

[13] R. M. HAYES, Iterative methods of solving linear problems on Hilbert space, National Bureau of Standards Applied Mathematics Series No. 39 (1954), pp. 71-103.

[14] M. R. HESTENES AND E. STIEFEL, Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards, 49 (1952), pp. 409-436.

[15] R. HIPTMAIR, Operator preconditioning, Comput. Math. Appl., 52 (2006), pp. 699-706.

[16] R. KIRBY, From functional analysis to iterative methods, SIAM Rev., 52 (2010), pp. 269-293.

[17] J. LIESEN AND Z. STRAKOŠ, Krylov Subspace Methods, Oxford University Press, Oxford, 2013.

[18] K.-A. MARDAL AND R. WINTHER, Preconditioning discretizations of systems of partial differential equations, Numer. Linear Algebra Appl., 18 (2011), pp. 1-40.

[19] O. NEVANLINNA, Convergence of Iterations for Linear Equations, Birkhäuser, Basel, 1993.

[20] C. PAIGE AND M. SAUNDERS, Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12 (1975), pp. 617-629.

[21] Y. SAAD AND M. H. SCHULTZ, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869.

[22] J. SCHÖBERL AND W. ZULEHNER, Symmetric indefinite preconditioners for saddle point problems with applications to PDE-constrained optimization, SIAM J. Matrix Anal. Appl., 29 (2007), pp. 752-773.

[23] M. STOLL AND A. WATHEN, Combination preconditioning and the Bramble-Pasciak + preconditioner, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 582-608.

[24] M. TANI AND V. SIMONCINI, Refined spectral estimates for preconditioned saddle point linear systems in a non-standard inner product, ANZIAM J. Electron. Suppl., 54 (2012), pp. C291-C308.

[25] W. ZULEHNER, Non-standard norms and robust estimates for saddle point problems, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 536-560.

ANDREAS GÜNNEL ([dagger]), ROLAND HERZOG ([dagger]), AND EKKEHARD SACHS ([double dagger])

* Received March 30, 2013. Accepted November 25, 2013. Published online on February 24, 2014. Recommended by Z. Strakos.

([dagger]) Technische Universität Chemnitz, Faculty of Mathematics, D-09107 Chemnitz, Germany ({andreas.guennel, roland.herzog}@mathematik.

([double dagger]) University of Trier, Department of Mathematics, D-54286 Trier, Germany (
Electronic Transactions on Numerical Analysis, 2014.