# Spectral deflation in Krylov solvers: a theory of coordinate space based methods.

1. Introduction. Krylov space solvers are the standard tool for solving very large sparse linear systems $Ax = b$ by iteration. But for many real-world problems they converge in a reasonable number of iterations only if a suitable preconditioning technique is applied. This is particularly true for problems where the matrix $A$ has eigenvalues of small absolute value, a situation that is very common in practice. A complementary technique for dealing with such problems can be viewed as applying a singular left preconditioner that deflates the matrix in the sense that small eigenvalues are replaced by zero eigenvalues. We first have to identify an approximately invariant subspace $\mathcal{Z}$ that belongs to a set of such small eigenvalues. Ways to do that have been discussed extensively in the literature and will therefore not be a topic of this paper; see, e.g., [1, 3, 6, 9, 12, 37, 42, 43, 44, 45, 48, 57, 62]. By using an orthogonal projection $P$ whose nullspace is $\mathcal{Z}$, the Krylov space solver is then applied only to the orthogonal complement $\mathcal{Z}^\perp$ by restricting the operator $A$ accordingly. The basis constructed implicitly or explicitly by this restricted operator is augmented by a set of basis vectors for $\mathcal{Z}$. In some algorithms based on short recurrences, $\mathcal{Z}$ may also include eigenvectors that the iteration has already identified well and which might subsequently cause loss of orthogonality if new basis vectors were not reorthogonalized against them. In practice, the dimension of the deflation space $\mathcal{Z}$ may be increased during the solution process, or the space may be adapted, in particular if a restarted algorithm is employed. In this paper we assume for simplicity that $\mathcal{Z}$ is fixed. A relevant detail of the approach discussed here is that the basis of $\mathcal{Z}$ is assumed to be given as the columns of a matrix of the form $Z = AU$. So the preimage of the basis, the columns of $U$, is assumed to be known.
In practice this means that we first choose the matrix $U$, whose columns span an approximately invariant subspace $\mathcal{U}$ for the chosen eigenvalues, and then compute the image $Z = AU$. This implies that the restriction $A|_{\mathcal{Z}}$ of $A$ to $\mathcal{Z}$ can be inverted trivially: if, say, $y = Zk \in \mathcal{Z}$, then $A^{-1}y = A^{-1}Zk = Uk \in \mathcal{U}$.
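This trivial inversion on $\mathcal{Z}$ is easy to verify numerically. The following sketch uses an arbitrary stand-in matrix $A$ and a random $U$ (illustrative data, not from the paper) and confirms that $A^{-1}(Zk) = Uk$ without any linear solve:

```python
import numpy as np

# Illustrative stand-in data: A is a random, diagonally shifted (hence
# nonsingular) matrix; U is a random preimage basis.
rng = np.random.default_rng(0)
N, kdim = 8, 3
A = rng.standard_normal((N, N)) + N * np.eye(N)
U = rng.standard_normal((N, kdim))        # preimage basis: columns of U
Z = A @ U                                 # image basis: Z = A U

# For y = Z k in the subspace Z, inverting A is trivial: A^{-1} y = U k.
k = rng.standard_normal(kdim)
y = Z @ k
x_trivial = U @ k                         # no solve with A needed
x_direct = np.linalg.solve(A, y)          # reference solution

assert np.allclose(x_trivial, x_direct)
```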

Applying a Krylov space solver to a linear system $Ax = b$ means constructing a sequence of approximate solutions $x_n$ of the form $x_n \in x_0 + \mathcal{K}_n(A, r_0)$, where $x_0$ is a chosen initial approximation, $r_0 := b - Ax_0$ is the corresponding residual, and $\mathcal{K}_n(A, r_0)$ is the $n$th Krylov subspace generated by $A$ from $r_0$. (For its definition see Section 2.) Then $r_n \in r_0 + A\mathcal{K}_n(A, r_0)$, and the goal is to make $r_n$ small in some norm. Therefore, solving the linear system with a Krylov space solver can be understood as successively approximating $r_0$ by elements of the subspaces $A\mathcal{K}_n(A, r_0)$.

In the methods described here first, $A\mathcal{K}_n(A, r_0)$ will be replaced by the subspace $A\mathcal{K}_n(\hat A, \hat r_0)$, where the deflated operator $\hat A := PAP$ is singular and $\hat r_0 := Pr_0 \in \mathcal{Z}^\perp$, so that we will have $\mathcal{K}_n(\hat A, \hat r_0) \subseteq \mathcal{Z}^\perp$. Note that on $\mathcal{Z}^\perp$, and thus also on the Krylov subspace, the restriction of $\hat A$ is equal to the restriction of $PA$; thus only one application of $P$ is needed for applying $\hat A$. On the other hand, as search space for the approximate solutions $x_n$, this Krylov subspace will be augmented by $\mathcal{U}$, that is,

(1.1) $x_n \in x_0 + \mathcal{K}_n(\hat A, \hat r_0) + \mathcal{U}.$

If $\mathcal{Z}^\perp$ is $A$-invariant, $A\mathcal{K}_n(\hat A, \hat r_0) \subseteq \mathcal{Z}^\perp$, so we can view the approach chosen here as splitting up $r_0$ into its two orthogonal components $\hat r_0 \in \mathcal{Z}^\perp$ and $r_0 - \hat r_0 \in \mathcal{Z}$. The preimage of the latter component can be computed in the trivial way outlined before, while the preimage of $\hat r_0$ is computed approximately with a Krylov space solver for $\hat A \hat x = \hat r_0$ acting only in $\mathcal{Z}^\perp$. However, some complications occur if $\mathcal{Z}^\perp$ is not $A$-invariant, which is the usual case. Treating these complications suitably is the main aim of this paper. In any case, we will see that we can first solve the restricted problem $\hat A \hat x = \hat r_0$ by a standard method such as GMRes [53] and subsequently compute the still 'missing' component of the solution by solving a small triangular linear system.
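For the idealized situation of an exactly $A$-invariant $\mathcal{Z}^\perp$, the two-component splitting can be sketched numerically. The block-diagonal matrix below is our own illustrative stand-in (not an example from the paper); the two coordinate blocks make $\mathcal{Z}$ and $\mathcal{Z}^\perp$ both $A$-invariant:

```python
import numpy as np

# Illustrative stand-in: block-diagonal A, so that Z-perp is A-invariant.
rng = np.random.default_rng(4)
n1, kdim = 5, 2
N = n1 + kdim
A = np.zeros((N, N))
A[:n1, :n1] = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)  # 'large' block
A[n1:, n1:] = 0.01 * np.eye(kdim)                              # small eigenvalues
b = rng.standard_normal(N)

U0 = np.eye(N)[:, n1:]                 # preimage basis for the deflated block
Z, R = np.linalg.qr(A @ U0)            # Z: orthonormal basis of Z = A*U
U = U0 @ np.linalg.inv(R)              # adjust preimages so that A U = Z
P = np.eye(N) - Z @ Z.conj().T         # orthogonal projection onto Z-perp

# Split r0 = b (taking x0 = 0) into P b in Z-perp and Q b = Z (Z^H b) in Z.
# The preimage of the Z-component is available trivially as U (Z^H b).
x_Z = U @ (Z.conj().T @ b)
# The other component solves A x = P b; since Z-perp is A-invariant here,
# this solve stays inside Z-perp and could be done by a Krylov method there.
x_perp = np.linalg.solve(A, P @ b)

assert np.allclose(P @ x_perp, x_perp)        # stays in Z-perp
assert np.allclose(A @ (x_perp + x_Z), b)     # the two preimages sum to A^{-1} b
```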

While we will also quickly review the 'symmetric case', where the linear system is Hermitian (or real and symmetric), we are here mostly interested in the 'nonsymmetric case', where our main message is that it may be preferable to replace the orthogonal decomposition of $r_0$ by a nonorthogonal one. To this end, $P$ must be chosen as an oblique projection with the property that when its nullspace $\mathcal{Z}$ is $A$-invariant, so is its range $\widetilde{\mathcal{Z}}^\perp$. In this way, we can not only annul eigenvalues but also deflate the corresponding left and right invariant subspaces. This choice then leads in a straightforward way to a 'truly deflated' GMRes and to deflated QMR [28]. As in the symmetric case, if $\mathcal{Z}^\perp$ is $A$-invariant, the convergence speed of the deflated method is fully determined by the nondeflated eigenvalues of $A$ and the corresponding invariant subspace. There is no need to derive new convergence estimates unless we want to estimate the influence of an inexact choice of the subspace.

Our general approach can be used to define deflated versions of any Krylov space solver. But in this paper we concentrate on coordinate space based methods such as GMRes, MinRes [49], and QMR, where the Arnoldi or the Lanczos method is used to generate a series of bases of the nested Krylov subspaces. As is well known, this allows us to reformulate a minimum residual problem as an equivalent or approximately equivalent least squares problem in coordinate space, which can be solved by updating the QR decomposition of a Hessenberg or tridiagonal matrix.

Orthogonal and biorthogonal residual methods such as CG [34] and BiCG [40, 23] can also be realized in this way, but are then normally considered less attractive, perhaps due to the possible nonexistence of some of the iterates. Here, at the end, we only introduce related deflated quasi-(bi)orthogonal residual methods.

A further main goal of this paper is to present all these methods in a common framework that relies on a splitting of the space into two complementary subspaces, which can be chosen in various ways. We favor here the above-mentioned choice reflecting a partition of the spectrum, but in the nonsymmetric case this leads to a conflict with the choice imposed by residual minimization. In contrast to our treatment, the excellent general treatment and review of augmentation methods by Eiermann, Ernst, and Schneider [16] is mostly restricted to the application of orthogonal projections and does not capitalize upon the knowledge of bases for both $\mathcal{U}$ and $\mathcal{Z}$ assumed here (unless they are $A$-invariant and thus equal). A further difference is that their treatment aims for augmented minimal residual methods, in particular GMRes, while we will drop optimality in Sections 5-9 and replace it by some near-optimality. Another interesting discussion and review of augmentation and deflation methods is due to Simoncini and Szyld [55, §9].

It is a well-known fact about Krylov space solvers that aiming for the smallest 2-norm of the residual, that is, applying GMRes without restarts, is not only excessively memory consuming but often also not much faster than alternative methods that are suboptimal. In practice, the important point is not to find the fastest solver, but to apply an effective preconditioning or multilevel method. Augmentation and deflation are powerful options along these lines, and there are several different ways to apply the basic ideas. Moreover, it is no problem to combine them with other preconditioning techniques.

Literature. Augmentation and deflation of Krylov space solvers have been proposed in various forms in a large number of publications. Many of the methods differ not only algorithmically and numerically, but also mathematically. Some keywords associated with such methods are '(spectral) deflation', 'augmented bases', 'recycling Krylov subspaces', 'spectral preconditioning', and 'singular preconditioning'. The primary goal is always to speed up the convergence of a solver, but the application to linear systems with multiple right-hand sides and to systems with slowly changing matrix and right-hand side is also often mentioned.

To our knowledge, the first suggestion of an augmented Krylov space method that included both the deflation of the matrix and the corresponding projection of the initial residual came from Nicolaides [48], who submitted on May 13, 1985, such a deflated CG algorithm based on the three-term recursions for iterates and residuals. Independently, Dostal [13] submitted in January 1987 a mathematically equivalent deflated CG algorithm based on the well-known coupled two-term recursions; he even gave an estimate for the improvement of the condition number. In June 1987 Mansfield [41] submitted additional numerical evidence for what he referred to as Nicolaides' method of deflation, but he was actually using a two-term CG algorithm. The same algorithm was discovered again more than ten years later by Erhel and Guyomarc'h [19] (deflation of a previously constructed Krylov subspace), by Saad, Yeung, Erhel, and Guyomarc'h [54], and, independently, by Vuik, Segal, and Meijerink [61], who combined it with preconditioning by incomplete Cholesky decomposition. All three papers refer to Nicolaides [48], but not to Dostal [13] and Mansfield [41], whose articles are much closer to their work. From a Google Scholar search one can conclude that it was Kolotilina [39] who ultimately promoted Dostal's paper [13] to a larger audience, though his two related papers [14, 15] are not even mentioned by her. Early citations to Mansfield, who also had two follow-up papers, are by Fischer [22] and Kolotilina [39]. To achieve the optimality of the CG error vector in the A-norm, an oblique projection has to be used (see Sections 11 and 12); it can be viewed as an A-orthogonal projection, however, and has nothing to do with the oblique projections promoted here.
Before that, in 1992, Kharchenko and Yeremin [37], followed, in 1994, by Erhel, Burrage, and Pohl [18], suggested GMRes algorithms with an augmented basis and a corresponding nonsingular right preconditioner that moves the small eigenvalues to a multiple large eigenvalue. Later Baglama, Calvetti, Golub, and Reichel [6] constructed a left preconditioner with the same effect; see [16, pp. 286-289] for a brief comparison of these three preconditioners. Also in the mid-1990s, Morgan [43] proposed GMRes with augmented basis but no explicit deflation of the matrix, and de Sturler [11] suggested an inner-outer GMRes/GCR algorithm with augmented basis and later, in other publications, several related methods. Saad [52] put together a general analysis of Krylov space methods with augmented basis, which was further generalized in the above-mentioned survey article of Eiermann, Ernst, and Schneider [16]. Many more publications followed; see, e.g., [1, 24, 45, 57, 63] for further references. The starting point for the present paper has been the description of recycled MinRes or RMinRes by Wang, de Sturler, and Paulino [62], which, after a minor modification that does not change the mathematical properties, fits exactly into our framework. Their orthogonal projection $P$ and the corresponding deflated matrix $\hat A$ have been used before, e.g., in [11, 12, 16]. They are the basic tools of our approach in Sections 2-4. But so far the oblique projection $P$ that is the basis of our approaches of Sections 5-9 only seems to have been used for Ahuja's recycling BiCG (RBiCG) [4, 5], which does not fit into our framework; see Section 12 for how it relates to our work. In particular, the oblique projection applied by Erlangga and Nabben [20] for their version of deflated GMRes is different from ours. In fact, the projection of [20] generalizes the one that is typical for deflated CG [13, 41, 48]. The connection to some of these alternative choices will be explained in Section 11.
Our approach is also different from the one of Abdel-Rehim, Morgan, and Wilcox [2] for their deflated BiCGStab, and the one of Abdel-Rehim, Stathopoulos, and Orginos [3] for their Lanczos based combined equation and eigenvalue solver.

We must also mention that in a series of papers that culminates in [21, 47, 60] it has been shown recently that deflation, domain decomposition, and multigrid can be viewed as instances of a common algebraic framework.

Outline. We start in Section 2 by introducing the basic setting for a particular version of augmented and deflated GMRes based on an orthogonal projection that annuls approximate small eigenvalues, in the sense that they get moved to zero. The possibility of breakdowns of this method and its adaptation to symmetric problems, where GMRes turns into MinRes, are then discussed in Sections 3-4. In Sections 5-6, we modify the basic setting by introducing an oblique projection that enables us to deflate approximate (possibly generalized) eigenspaces and to introduce a truly deflated GMRes method. By making use of an adjoint Krylov space generated by $A^H$ we next explain in Sections 7-9 how we can adapt our approach to the nonsymmetric Lanczos algorithm and introduce a deflated QMR method and a simplified deflated QMR method. The latter has, e.g., a well-known application in quantum chromodynamics. Moreover, in Section 10 we describe a different way of computing the component of the solution that lies in $\mathcal{U}$, and in Section 12 we briefly point out that our framework could in principle also be used to define coordinate space based deflated (bi)orthogonal residual methods that are approximately equivalent to deflated CG and BiCG methods.

Notation. We denote the range (or image) of a matrix $M$ by $\mathcal{R}(M)$. For the nullspace (or kernel) of $M$ we write $\mathcal{N}(M)$. Sometimes we introduce the additional notation $\mathcal{M} := \mathcal{R}(M)$ for the range. As usual, the first column of the $n \times n$ unit matrix is $e_1$; additionally, $\underline{e}_1$ is $e_1$ with an extra zero component appended to it. Likewise, $\underline{H}_n$ and $\underline{T}_n$ will be $(n+1) \times n$ matrices whose top $n \times n$ submatrices are $H_n$ and $T_n$, respectively.

2. Deflation by orthogonal projection; deflated GMRes. Consider a nonsingular linear system $Ax = b$ of size $N \times N$. Let $U \in \mathbb{C}^{N \times k}$ have full rank $k$, where $1 \le k < N$, and set

$\mathcal{U} := \mathcal{R}(U), \qquad Z := AU, \qquad \mathcal{Z} := \mathcal{R}(Z) = A\,\mathcal{U},$

and

$E := Z^H Z, \qquad Q := Z E^{-1} Z^H, \qquad P := I - Q = I - Z E^{-1} Z^H.$

The subspaces $\mathcal{U}$ and $\mathcal{Z}$ will be used to augment the search spaces for the approximate solutions $x_n$ and the corresponding residuals $r_n := b - Ax_n$, respectively. Note that $Q^2 = Q$, $P^2 = P$, $Q^H = Q$, and $P^H = P$; so $Q$ is the orthogonal projection onto $\mathcal{Z}$, while $P$ is the orthogonal projection onto the orthogonal complement $\mathcal{Z}^\perp$ of $\mathcal{Z}$.

If the columns $u_j$ of $U \in \mathbb{C}^{N \times k}$ are chosen to be $A^H A$-orthonormal, so that the columns of $Z = AU$ form an orthonormal basis of $\mathcal{Z}$ (which we will assume from now on), then $E = I_k$ and the formulas for $Q$ and $P$ simplify to

(2.1) $Q = ZZ^H, \qquad P = I - Q = I - ZZ^H.$

Alternatively, we could compute a QR decomposition of AU to find a matrix Z with orthonormal columns; see Section 6, where we will temporarily apply this.
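The projector identities and the QR-based construction just mentioned are easy to illustrate. In the sketch below (our own random stand-in data), $Z$ is obtained from a QR decomposition of $AU_0$ and the preimages are rescaled so that $AU = Z$ still holds:

```python
import numpy as np

# Illustrative stand-in data for the projections (2.1).
rng = np.random.default_rng(1)
N, kdim = 8, 3
A = rng.standard_normal((N, N)) + N * np.eye(N)   # nonsingular stand-in
U0 = rng.standard_normal((N, kdim))

# QR-orthonormalize A U0: Z gets orthonormal columns (so E = Z^H Z = I_k),
# and U is adjusted so that the relation A U = Z is preserved.
Z, R = np.linalg.qr(A @ U0)
U = U0 @ np.linalg.inv(R)

Q = Z @ Z.conj().T                 # orthogonal projection onto Z
P = np.eye(N) - Q                  # orthogonal projection onto Z-perp

assert np.allclose(A @ U, Z)                            # A U = Z still holds
assert np.allclose(P @ P, P) and np.allclose(Q @ Q, Q)  # idempotent
assert np.allclose(P.conj().T, P)                       # Hermitian
assert np.allclose(P @ Z, 0)                            # Z lies in N(P)

Ahat = P @ A @ P                   # deflated operator, always singular:
assert np.linalg.matrix_rank(Ahat) <= N - kdim
```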

As mentioned in the introduction, the first basic idea is to restrict the Krylov space solver to $\mathcal{Z}^\perp$ by projecting the initial residual $r_0$ into this space and by replacing the original operator $A$ by its restriction to this space:

$\hat r_0 := P r_0, \qquad \hat A := PAP.$

A corresponding initial approximation $\hat x_0$ is not needed. (Any $\hat x_0 \in x_0 + \mathcal{U}$ would satisfy $\hat r_0 := Pr_0 = P(b - Ax_0) = P(b - A\hat x_0)$, and for theoretical purposes we could even set $\hat x_0 := A^{-1}PAx_0$ to achieve that $\hat r_0 = Pb - A\hat x_0$, or $\hat x_0 := A^{-1}(PAx_0 + Qb)$ to achieve that $\hat r_0 = b - A\hat x_0$.) Note that $\operatorname{rank} \hat A \le N - k$ since $\operatorname{rank} P = N - k$, so $\hat A$ is always singular.

Given any initial guess $x_0$, the second basic idea is to approximate the solution $x_\star := A^{-1}b$ by iterates $x_n$ from the following affine space:

(2.2) $x_n \in x_0 + \hat{\mathcal{K}}_n + \mathcal{U}$, where

(2.3) $\hat{\mathcal{K}}_n := \mathcal{K}_n(\hat A, \hat r_0) := \operatorname{span}\{\hat r_0, \hat A\hat r_0, \dots, \hat A^{n-1}\hat r_0\}$

is the $n$th Krylov subspace generated by $\hat A$ from $\hat r_0$. Since $\hat r_0 \in \mathcal{Z}^\perp$ and $\mathcal{R}(\hat A) \subseteq \mathcal{R}(P) = \mathcal{Z}^\perp$, we have $\hat{\mathcal{K}}_n \subseteq \mathcal{Z}^\perp$. The choice (2.2) implies that

(2.4) $r_n := b - Ax_n \in r_0 + A\hat{\mathcal{K}}_n + \mathcal{Z}.$

If we construct a nested sequence of orthonormal bases for the Krylov subspaces $\hat{\mathcal{K}}_n$ by an Arnoldi process started with $v_0 := \hat r_0/\beta$, where $\beta := \|\hat r_0\|_2$, we can express this, for each $n$, by the Arnoldi relation $\hat A V_n = V_{n+1}\underline{H}_n$ with a matrix $V_n := [\,v_0\ \cdots\ v_{n-1}\,]$ having orthonormal columns and an extended $(n+1) \times n$ upper Hessenberg matrix $\underline{H}_n$. But since $\mathcal{R}(V_n) = \hat{\mathcal{K}}_n \subseteq \mathcal{Z}^\perp$ we have $PV_n = V_n$, and therefore

(2.5) $\hat A V_n = PAPV_n = PAV_n,$

so that the Arnoldi relation simplifies to

(2.6) $PAV_n = V_{n+1}\underline{H}_n.$

This means that only one projection $P$ is needed for applying $\hat A$ in $\mathcal{Z}^\perp$.
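This observation can be checked with a small Arnoldi sketch for $PA$ (our own random stand-in data): each step applies $A$ once and $P$ once, and the loop below verifies the simplified relation (2.6):

```python
import numpy as np

# Illustrative stand-in data: A, U, r0 are random; P, Z as in (2.1).
rng = np.random.default_rng(2)
N, kdim, n = 8, 3, 4
A = rng.standard_normal((N, N)) + N * np.eye(N)
Z, _ = np.linalg.qr(A @ rng.standard_normal((N, kdim)))
P = np.eye(N) - Z @ Z.conj().T
r0hat = P @ rng.standard_normal(N)

# Arnoldi process for the deflated operator: since R(V_n) lies in Z-perp,
# one multiplication by A plus one projection P per step suffices.
V = np.zeros((N, n + 1))
H = np.zeros((n + 1, n))                  # extended Hessenberg matrix
V[:, 0] = r0hat / np.linalg.norm(r0hat)
for j in range(n):
    w = P @ (A @ V[:, j])                 # apply P A (equals PAP on Z-perp)
    for i in range(j + 1):                # modified Gram-Schmidt
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

# Verify the Arnoldi relation (2.6): P A V_n = V_{n+1} H_n (extended).
assert np.allclose(P @ A @ V[:, :n], V @ H)
```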

In view of (2.2) we can represent $x_n$ as

(2.7) $x_n = x_0 + V_n k_n + U m_n$

with coordinate vectors $k_n \in \mathbb{C}^n$ and $m_n \in \mathbb{C}^k$. In the usual way, multiplication by $A$ and subtraction from $b$ then yields for the residuals $r_n := b - Ax_n$ the representation

(2.8) $r_n = r_0 - AV_n k_n - Z m_n.$

Due to the Arnoldi relation (2.6) and the orthogonal decomposition $r_0 = \hat r_0 + Qr_0 = v_0\beta + Qr_0$, this becomes, with $C_n := Z^H A V_n \in \mathbb{C}^{k \times n}$ and $Q = ZZ^H$, and in analogy to the derivation for the symmetric case in [62] (1),

(2.9) $r_n = \begin{bmatrix} Z & V_{n+1} \end{bmatrix} q_n,$

where

(2.10) $q_n := \begin{bmatrix} Z^H r_0 - C_n k_n - m_n \\[2pt] \underline{e}_1\beta - \underline{H}_n k_n \end{bmatrix}$

may be called the deflated GMRes quasi-residual, in analogy to the terminology of [28]. One option is to choose $r_n$ of minimal 2-norm. Then (2.9) is the key relation for a GMRes-like approach to this problem: $r_n$ is represented in terms of the basis consisting of the columns of $Z$ and $V_{n+1}$. Since we assume $Z$ to have orthonormal columns as in (2.1), $[\,Z\ \ V_{n+1}\,]$ has orthonormal columns too, and the coordinate map is isometric in the 2-norms of $\mathcal{Z} \oplus \mathcal{R}(V_{n+1}) \subseteq \mathbb{C}^N$ and $\mathbb{C}^{k+n+1}$, respectively, so that

(2.11) $\|r_n\|_2 = \|q_n\|_2 = \left( \|Z^H r_0 - C_n k_n - m_n\|_2^2 + \|\underline{e}_1\beta - \underline{H}_n k_n\|_2^2 \right)^{1/2}.$

As in the original GMRes method [53], the minimization of $\|r_n\|_2$ reduces in the $n$th step to a least squares problem for minimizing the right-hand side of (2.11), which can be solved recursively by updating in each iteration the QR decomposition of the $(n+1) \times n$ Hessenberg matrix $\underline{H}_n$. Note that the first $k$ columns of the least squares problem are in diagonal form, hence a fortiori in upper triangular form already. Hence, the $(k+n+1) \times (k+n)$ least squares problem in (2.11) decouples from the beginning into an $(n+1) \times n$ least squares problem for $k_n$ and an explicit formula for $m_n$:

(2.12) $k_n := \operatorname{argmin}_{k \in \mathbb{C}^n} \|\underline{e}_1\beta - \underline{H}_n k\|_2, \qquad m_n := Z^H r_0 - C_n k_n.$

This decomposition of the problem suggests that we first search for a solution of the reduced least squares problem, that is, determine a suitable size $n$, the matrices $V_n$ and $\underline{H}_n$ resulting from the Arnoldi process, and the corresponding solution $k_n$ in coordinate space. This first stage can be understood as solving the singularly preconditioned system $PAx = Pb$ by standard GMRes, or as solving $\hat A\hat x = \hat r_0$ in $\mathcal{Z}^\perp$ by GMRes. Subsequently, we may calculate the related $m_n$. There is no need to compute $m_n$ for all $n$ since the 2-norm of the residual $r_n$ is not affected by the second stage if $m_n$ is chosen according to (2.12).
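The two-stage procedure can be sketched end to end (our own random stand-in data; `np.linalg.lstsq` stands in for the recursive QR update of the Hessenberg matrix). After the reduced least squares solve for $k_n$ and the explicit formula for $m_n$, the residual norm of $x_n$ equals the reduced least squares residual, as (2.11)-(2.12) predict:

```python
import numpy as np

# Illustrative stand-in system and deflation space.
rng = np.random.default_rng(3)
N, kdim, n = 12, 3, 8                   # n < N - kdim to stay before termination
A = rng.standard_normal((N, N)) + N * np.eye(N)
b = rng.standard_normal(N)
x0 = np.zeros(N)

U0 = rng.standard_normal((N, kdim))
Zmat, R = np.linalg.qr(A @ U0)
U = U0 @ np.linalg.inv(R)               # A U = Zmat, orthonormal columns
P = np.eye(N) - Zmat @ Zmat.conj().T

r0 = b - A @ x0
r0hat = P @ r0
beta = np.linalg.norm(r0hat)

# Stage 1: Arnoldi for P A on Z-perp, then the reduced least squares problem.
V = np.zeros((N, n + 1)); H = np.zeros((n + 1, n))
V[:, 0] = r0hat / beta
for j in range(n):
    w = P @ (A @ V[:, j])
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
e1beta = np.zeros(n + 1); e1beta[0] = beta
kn = np.linalg.lstsq(H, e1beta, rcond=None)[0]

# Stage 2: explicit formula for m_n from (2.12).
Cn = Zmat.conj().T @ A @ V[:, :n]
mn = Zmat.conj().T @ r0 - Cn @ kn

xn = x0 + V[:, :n] @ kn + U @ mn        # approximate solution (2.7)
rn = b - A @ xn

# With m_n from (2.12) the Z-component of the quasi-residual vanishes,
# so ||r_n|| equals the reduced least squares residual.
assert np.isclose(np.linalg.norm(rn), np.linalg.norm(e1beta - H @ kn))
assert np.linalg.norm(rn) <= np.linalg.norm(r0)
```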

We will call the resulting algorithm deflated GMRes, though it is not equivalent to the methods introduced under this name by Morgan [43] and Chapman and Saad [9]. (2) Our proposal also differs from those of Kharchenko and Yeremin [37] and Erhel, Burrage, and Pohl [18], who construct nonsingular preconditioners that move small eigenvalues away from zero. However, in Section 5 we will come up with another proposal for the nonsymmetric case, which we think is better suited to deflate approximate eigenpairs.

3. Breakdowns of deflated GMRes. Unfortunately, in general, the deflated GMRes method of Section 2 can break down, since the Arnoldi process described by the relation $\hat A V_n = V_{n+1}\underline{H}_n$, which is used to set up the least squares problem in (2.12), is applied with a singular matrix $\hat A$. The least squares problem originates from solving $\hat A\hat x = \hat r_0$ by GMRes for some $\hat x \in \mathcal{Z}^\perp$. Since $\mathcal{R}(\hat A) \subseteq \mathcal{Z}^\perp$ and $\hat r_0 \in \mathcal{Z}^\perp$, the linear system and the Arnoldi process are restricted to $\mathcal{Z}^\perp$. Hence, it is the restriction of $\hat A$ to $\mathcal{Z}^\perp$ that matters. This restriction is singular if and only if $\operatorname{rank} \hat A < N - k = \dim \mathcal{Z}^\perp$. But recall that in applications the eigenvalues of this restriction are supposed to approximate the nondeflated 'large' eigenvalues of $A$; therefore, in practice it is very unlikely that the restriction is singular and that breakdowns can occur.

If $\operatorname{rank} \hat A < N - k$, it may happen that $v_0 \in \mathcal{N}(\hat A) \cap \mathcal{Z}^\perp$ or that, for some $n > 1$, $v_{n-1} \in \mathcal{N}(\hat A) \cap \mathcal{R}(\hat A) \subseteq \mathcal{N}(\hat A) \cap \mathcal{Z}^\perp$. Then $\hat A v_{n-1} = o$ and, trivially, the component of this vector orthogonal to $\hat{\mathcal{K}}_n = \mathcal{R}(V_n)$ is also zero and cannot be normalized. Moreover, $V_n^H \hat A v_{n-1} = o$, so the last column of $\underline{H}_n$ is zero except for its undetermined $(n+1, n)$-element, which we may set equal to 0 too. In particular, the top square part $H_n$ of $\underline{H}_n$ is singular. Hence, the Arnoldi process terminates after detecting the invariant subspace $\mathcal{R}(V_n) = \hat{\mathcal{K}}_n$, and GMRes breaks down. Note that $\dim(\hat A\hat{\mathcal{K}}_n) = \operatorname{rank}(\hat A V_n) = \operatorname{rank}(V_n H_n) = n - 1$ since $\operatorname{rank} H_n = n - 1$. Is this the only type of breakdown?

The application of Krylov space methods to singular systems has been investigated in detail by Freund and Hochbruck [27, §§3-4] and others. In particular, the application of GMRes to such systems has been analyzed by Brown and Walker [8]. Lemma 2.1 of [8], adapted to our situation, reads as follows.

Lemma 1. If GMRes is applied to $\hat A\hat x = \hat r_0$ and if $\dim \hat{\mathcal{K}}_n = n$ holds for some $n \ge 1$, then exactly one of the following three statements holds:

(i) $\dim(\hat A\hat{\mathcal{K}}_n) = n - 1$ and $\hat A\hat x \ne \hat r_0$ for every $\hat x \in \hat{\mathcal{K}}_n$;

(ii) $\dim(\hat A\hat{\mathcal{K}}_n) = \dim \hat{\mathcal{K}}_{n+1} = n$, $\hat x_n := V_n k_n$ is uniquely defined, and $\hat A\hat x_n = \hat r_0$;

(iii) $\dim(\hat A\hat{\mathcal{K}}_n) = n$, $\dim \hat{\mathcal{K}}_{n+1} = n + 1$, $\hat x_n$ is uniquely defined, but $\hat A\hat x_n \ne \hat r_0$.

We call Case (i) a breakdown of GMRes, Case (ii) the termination of GMRes, and Case (iii) the continuation of GMRes. (In contrast, Brown and Walker [8] and other authors also call Case (ii) a breakdown, although in this case the aim of finding a solution of the linear system has been achieved.) Note that Case (i) implies that $\dim \hat{\mathcal{K}}_{n+1} = n$, hence also in this case the Krylov space is exhausted.

In the situation where $\hat A v_{n-1} = o$ discussed before, we obviously have Case (i), since the Arnoldi process terminates but the resulting equation $\underline{H}_n k_n = \underline{e}_1\beta$ has no solution. That this is more generally a consequence of $\dim(\hat A\hat{\mathcal{K}}_n) = n - 1$ can be seen as follows: if we had chosen for $\hat{\mathcal{K}}_n$ the so-called Krylov basis, that is,

$V_n^{(K)} := [\,\hat r_0 \ \ \hat A\hat r_0 \ \ \cdots \ \ \hat A^{n-1}\hat r_0\,],$

then, in Case (i), the Hessenberg relation resulting after $n$ steps would be $\hat A V_n^{(K)} = V_n^{(K)} H_n^{(K)}$, with a companion matrix $H_n^{(K)}$ that has a zero element in its upper right corner, so that $e_1 \notin \mathcal{R}(H_n^{(K)})$. This just reflects the fact that the restriction of $\hat A$ to $\hat{\mathcal{K}}_n$ has a zero eigenvalue: the last column of $H_n^{(K)}$ contains the coefficients of the characteristic polynomial. Note also that the basis transformation from $V_n$ to $V_n^{(K)}$ is represented by a triangular matrix and leaves the direction of the first basis vector invariant.

Clearly, $\dim(\hat A\hat{\mathcal{K}}_n) = n - 1$ (i.e., Case (i)) holds if and only if $\mathcal{N}(\hat A) \cap \hat{\mathcal{K}}_n \ne \{o\}$. Conversely, if this breakdown condition does not occur for any $n$, GMRes will ultimately terminate with Case (ii), where the unique solution of $\hat A\hat x = \hat r_0$ is found. At intermediate steps, where Case (iii) occurs, $\hat x_n = V_n k_n$ is the best least squares solution out of $\hat{\mathcal{K}}_n$.

In summary we obtain for deflated GMRes applied to Ax = b the following theorem.

Theorem 2. If $\hat r_0 \notin \mathcal{N}(\hat A)$, then as long as $\mathcal{N}(\hat A) \cap \hat{\mathcal{K}}_n = \{o\}$, the deflated GMRes method defined by (2.6)-(2.7) and (2.12) yields in the $n$th step the approximate solution $x_n \in x_0 + \hat{\mathcal{K}}_n + \mathcal{U}$ whose residual $r_n$ has minimal 2-norm.

However, if $\mathcal{N}(\hat A) \cap \mathcal{Z}^\perp \ne \{o\}$ and if $x_0$ is chosen such that $\hat r_0 \in \mathcal{N}(\hat A)$, then (and only then) deflated GMRes breaks down in the first step, where $n = 1$. Moreover, at step $n > 1$, if (and only if) $\mathcal{N}(\hat A) \cap \hat{\mathcal{K}}_n \ne \{o\}$, the method breaks down when attempting to construct $v_n$. In case of a breakdown, the search space $x_0 + \hat{\mathcal{K}}_n + \mathcal{U}$ does not contain the exact solution $x_\star$.

If $\mathcal{Z}^\perp$ is $A$-invariant, breakdowns cannot happen, $C_n = O$, and the Arnoldi relation (2.6) can be replaced by

(3.1) $AV_n = V_{n+1}\underline{H}_n.$

Proof. It remains to prove the last two sentences. First, for a proof by contradiction, assume that the search space contains $x_\star$, so $x_\star := A^{-1}b = x_0 + \hat x_\star + u_\star$, where $\hat x_\star \in \hat{\mathcal{K}}_n$ and $u_\star \in \mathcal{U}$. Then, since $PAu_\star = o$ and $PA\hat x_\star = \hat A\hat x_\star$,

$o = b - A(x_0 + \hat x_\star + u_\star) = Pr_0 - PA\hat x_\star - PAu_\star + (I - P)(r_0 - A\hat x_\star - Au_\star)$

$\phantom{o} = (\hat r_0 - \hat A\hat x_\star) + Q(r_0 - A\hat x_\star - Au_\star).$

Since the first parenthesis is in $\mathcal{Z}^\perp$, while the second term is in $\mathcal{Z}$, both must be zero. In particular, we must have $\hat A\hat x_\star = \hat r_0$. However, this contradicts Case (i) of Lemma 1, which applies when deflated GMRes breaks down and says that $\hat A\hat x \ne \hat r_0$ for every $\hat x \in \hat{\mathcal{K}}_n$.

Second, if $\mathcal{Z}^\perp$ is $A$-invariant, we have, in extension of (2.5), at the $n$th step

(3.2) $\hat A V_n = PAPV_n = PAV_n = AV_n.$

This implies that solving the system $\hat A\hat x = \hat r_0$ with GMRes (and starting vector $\hat x_0 = o$) is equivalent to solving $A\hat x = \hat r_0$ with GMRes. Since $A$ is nonsingular, there are no breakdowns (described by Case (i) of Lemma 1), and ultimately the solution will be found (i.e., Case (ii) will occur).

Finally, since $\mathcal{R}(V_n) \subseteq \mathcal{Z}^\perp$ and the latter set is assumed to be $A$-invariant, we have $\mathcal{R}(AV_n) \subseteq A\mathcal{Z}^\perp = \mathcal{Z}^\perp$, so that $C_n = Z^H AV_n = O$.

Equations (3.1) and (3.2) suggest that in the case where $\mathcal{Z}^\perp$ is $A$-invariant we might apply GMRes with $A$ instead of $PA$. But in some cases this might be risky due to round-off effects: round-off components in $\mathcal{Z}$ may grow fast since $A^{-1}$ has large eigenvalues there.

Note that the breakdown condition $\hat r_0 \in \mathcal{N}(\hat A)$ of the first step can be written as $\mathcal{N}(\hat A) \cap \hat{\mathcal{K}}_1 \ne \{o\}$, in accordance with the breakdown condition for the $n$th step.

The following simple $2 \times 2$ example, taken from [31], exemplifies a breakdown in the first step:

(3.3) $A := \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad b := e_1, \qquad x_0 := o, \qquad U := e_1,$

where $\hat A = PAP = O$ and $v_0 = \hat r_0 = r_0$, hence $\hat A v_0 = o$. So $\mathcal{Z} = \mathcal{N}(P) = \operatorname{span}\{e_2\}$, $\mathcal{Z}^\perp = \operatorname{span}\{e_1\}$, $v_0 \in \mathcal{N}(\hat A) \cap \mathcal{Z}^\perp$ here, and we have a breakdown in the first step.
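A quick numerical check of this breakdown, using a concrete $2 \times 2$ instance with the stated properties (our illustrative choice: $A$ the $2 \times 2$ exchange matrix, $U = e_1$, $b = e_1$, $x_0 = o$):

```python
import numpy as np

# A 2x2 instance with Ahat = P A P = O, Z = span{e2}, v0 = r0hat = r0.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([1.0, 0.0])
x0 = np.zeros(2)

U = np.array([[1.0], [0.0]])
Z = A @ U                               # = e2, already orthonormal
P = np.eye(2) - Z @ Z.T                 # = diag(1, 0)
Ahat = P @ A @ P

r0hat = P @ (b - A @ x0)                # = e1
v0 = r0hat / np.linalg.norm(r0hat)

assert np.allclose(Ahat, 0)             # the deflated operator vanishes
assert np.allclose(Ahat @ v0, 0)        # v0 in N(Ahat): breakdown at n = 1
```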

We will generalize this example in the Appendix, where we will show that breakdowns are also possible at any later step up to $n = N - 1$.

Based on Theorem 2 we may formulate conditions that characterize the possibility of breakdowns in case of an unlucky choice of $x_0$, that is, an unlucky $\hat r_0 \in \mathcal{Z}^\perp$.

Corollary 3. Deflated GMRes can break down in the first Arnoldi step (for determining $v_1$) if and only if the following four equivalent conditions hold:

(1) $\mathcal{N}(\hat A) \cap \mathcal{Z}^\perp \ne \{o\}$,

(2) $A\mathcal{Z}^\perp \cap \mathcal{Z} \ne \{o\}$,

(3) $A\mathcal{Z}^\perp + \mathcal{Z} \ne \mathbb{C}^N$,

(4) $\operatorname{rank} \hat A < N - k$.

If these conditions are fulfilled for some given $A$ and $\mathcal{Z}$, then we can choose $x_0$ (if $b$ is given) so that GMRes breaks down in the first step.

The equivalent Conditions (1)-(4) are also necessary for deflated GMRes to break down in a later step.

Conversely, a breakdown cannot occur in any step if equality holds in Conditions (1)-(4), or, equivalently, if $\mathcal{N}(\hat A) = \mathcal{Z}$, that is, if $A\mathcal{Z}^\perp \oplus \mathcal{Z} = \mathbb{C}^N$.

Proof. According to Theorem 2, Condition (1) characterizes the possibility of a breakdown in the first step. It says that breakdowns are possible if and only if there exists $y = Py \in \mathcal{Z}^\perp \setminus \{o\}$ with $PAy = PAPy = \hat A y = o$, that is, with $o \ne Ay \in \mathcal{N}(P) = \mathcal{Z}$. This is equivalent to Condition (2). Moreover, since $\dim \mathcal{Z} = k$ and $\dim A\mathcal{Z}^\perp = \dim \mathcal{Z}^\perp = N - k$, the second condition is equivalent to the third one. Finally, $\mathcal{Z} = \mathcal{N}(P) \subseteq \mathcal{N}(\hat A)$, and therefore Condition (1) implies that $\dim \mathcal{N}(\hat A) > \dim \mathcal{Z} = k$, that is, $\operatorname{rank} \hat A < N - k$, and vice versa.

For a breakdown at step n > 1 we need, by Theorem 2, N([??]) [intersection] [[??].sub.n] [not equal to] {o}. Since [[??].sub.n] [subset or equal to] span {[[??].sub.o]} + R([??]) [subset or equal to] [Z.sup.[perpendicular to]], Condition (1) must hold.

Conditions for the impossibility of breakdowns are obtained by negating the Conditions (1)-(4), noting that always N([??]) [contains] Z, and observing the dimension statements given above.
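Condition (4) of Corollary 3 is convenient for a numerical check. The sketch below (with randomly generated data of our own choosing) shows that for a generic pair (A, Z) the deflated operator has full rank N - k, so no breakdown is possible:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 6, 2

# Generic data: random A (nonsingular with probability 1) and Z = A U:
A = rng.standard_normal((N, N))
U = rng.standard_normal((N, k))
Z = A @ U

Z0, _ = np.linalg.qr(Z)               # orthonormal basis of Z
P = np.eye(N) - Z0 @ Z0.T             # orthogonal projection onto Zperp
A_hat = P @ A @ P

# Condition (4): a breakdown is possible iff rank(A_hat) < N - k.
# For generic data the rank is exactly N - k, so breakdowns are excluded:
rank = np.linalg.matrix_rank(A_hat)
assert rank == N - k
```

By contrast, the 2 x 2 example above has rank [??] = 0 < N - k = 1, which is why it can break down.
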

Finally, we point out the following fact.

Corollary 4. The assumption that [Z.sup.[perpendicular to]] is A-invariant is sufficient, but not necessary for guaranteeing that no breakdown can occur.

Proof. Since A is nonsingular, [Z.sup.[perpendicular to]] is A-invariant if and only if A[Z.sup.[perpendicular to]] = [Z.sup.[perpendicular to]]. This condition means that on the left-hand side of the negated Condition (3) of Corollary 3 we have an orthogonal direct sum:

$A\mathcal{Z}^{\perp} \oplus \mathcal{Z} = \mathcal{Z}^{\perp} \oplus \mathcal{Z} = \mathbb{C}^{N}.$

However, A[Z.sup.[perpendicular to]] [direct sum] Z = [C.sup.N] will hold whenever A[Z.sup.[perpendicular to]] [intersection] Z = {o}; hence, the condition that [Z.sup.[perpendicular to]] be A-invariant is not necessary for guaranteeing the absence of breakdowns. The following example proves this claim.

Example. We slightly modify the example of (3.3) by choosing

$A :\equiv \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.$

As before, Z = span {[e.sub.2]}, but now A[Z.sup.[perpendicular to]] = A span {[e.sub.1]} = span {[e.sub.1] + [e.sub.2]} [not equal to] [Z.sup.[perpendicular to]]. Hence, A[Z.sup.[perpendicular to]] [direct sum] Z = [C.sup.2]. Consequently, for any [[??].sub.o] = P[r.sub.o] [not equal to] o there will be no breakdown.
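A quick numerical check of this situation (with one concrete choice of A consistent with the text; any nonsingular A whose first column is a multiple of [e.sub.1] + [e.sub.2] behaves the same):

```python
import numpy as np

# One concrete A with A e1 = e1 + e2; as before Z = span{e2}, Zperp = span{e1}:
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
e2 = np.array([0.0, 1.0])
P = np.eye(2) - np.outer(e2, e2)   # orthogonal projection onto Zperp

A_hat = P @ A @ P
# Zperp is not A-invariant ...
assert not np.allclose(A @ P, P @ (A @ P))
# ... yet rank(A_hat) = N - k = 1, so by Corollary 3 no breakdown can occur:
assert np.linalg.matrix_rank(A_hat) == 1
```
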

Remarks. (i) Note that when A is not Hermitian, then the property that [Z.sup.[perpendicular to]] is A-invariant does not imply that Z is A-invariant, and vice versa.

(ii) In case of a breakdown we might restart deflated GMRes with a new column [z.sub.k+1] :[equivalent to] [v.sub.n] appended to Z. Repeating this measure if needed, we will ultimately find a least squares problem of type (2.11) with residual [[parallel][r.sub.n][parallel].sub.2] = 0 and with, say, the original k replaced by k + l. However, we cannot find the approximate solution [x.sub.n] from (2.7) unless we know the preimages [u.sub.k+i] satisfying [z.sub.k+i] = A[u.sub.k+i], i = 1, ..., l.

(iii) Some further results on breakdowns of deflated GMRes and on how to avoid them in deflated MinRes are given in [31].

4. Spectral deflation for symmetric problems. If A is Hermitian, then so is [??] = PAP, and therefore the Arnoldi process can be replaced by a three-term symmetric Lanczos process, and the extended Hessenberg matrix [[H.bar].sub.n] of the previous section turns into an extended tridiagonal matrix [[T.bar].sub.n], for which a symmetric Lanczos relation

(4.1) [PAV.sub.n] = [V.sub.n+1] [[T.bar].sub.n]

holds and whose upper square part [T.sub.n] is Hermitian. A deflated MinRes algorithm called RMinRes for this simplified setting has been described in detail by Wang, de Sturler, and Paulino [62]. The same update procedure as in the original MinRes method [49] can be applied to find the QR decomposition of [[T.bar].sub.n]. Wang et al. [62] also show that the approximate solutions [x.sub.n] can still be updated by short recurrences. This is also seen from the fact stressed here and in [31] that the results of RMinRes can be found by solving first the projected problem [??][??] = [[??].sub.o] in [Z.sup.[perpendicular to]] by MinRes and then adding to the solution a correction term in Z; see Section 10.

In the Hermitian case the properties of deflated GMRes given in Theorem 2 and Corollary 3 persist and also hold for deflated MinRes. In particular, the possibility of a breakdown in the first step is still illustrated by the 2 x 2 example in (3.3). The possibility of a breakdown at a later step is still proven by the example in the Appendix, since the matrix A there is real symmetric.

We can reformulate the first part of Theorem 2 for deflated MinRes as follows.

Theorem 5. Let A be Hermitian; then so is [??]. If [[??].sub.o] [not member of] N([??]) = R[([??]).sup.[perpendicular to]], then as long as N([??]) [intersection] [[??].sub.n] = {o}, the deflated MinRes method obtained by adapting deflated GMRes to the symmetric case yields in the nth step the approximate solution [x.sub.n] [member of] [x.sub.o] + [[??].sub.n] + U whose residual [r.sub.n] has minimal 2-norm.

Conversely, if N([??]) [intersection] [[??].sub.n] [not equal to] {o} for some n [greater than or equal to] 1, then (and only then) deflated MinRes breaks down in the nth step.

Again, breakdowns cannot occur if [Z.sup.[perpendicular to]] is A-invariant, and in this case the projected Lanczos relation (4.1) can be replaced by the Lanczos relation

(4.2) [AV.sub.n] = [V.sub.n+1] [[T.bar].sub.n].

A special feature of the symmetric case is that [Z.sup.[perpendicular to]] is A-invariant if and only if Z is A-invariant. This is due to the fact that eigenspaces belonging to different eigenvalues are mutually orthogonal, and higher dimensional eigenspaces can be split up into mutually orthogonal ones if needed. The definition [??] = PAP and the fact that P is the orthogonal projection onto [Z.sup.[perpendicular to]] then yield the following result on the spectral deflation of A.

Theorem 6. Let A be Hermitian. If Z is A-invariant, then [Z.sup.[perpendicular to]] is also A-invariant and the restrictions of A, [??], and O to Z and [Z.sup.[perpendicular to]] satisfy

(4.3) $\widehat{A}\big|_{\mathcal{Z}} = O\big|_{\mathcal{Z}}, \qquad \widehat{A}\big|_{\mathcal{Z}^{\perp}} = A\big|_{\mathcal{Z}^{\perp}}.$

Of course, (4.3) holds also if A is non-Hermitian, and, by chance, both Z and [Z.sup.[perpendicular to]] are A-invariant.
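The spectral statement of Theorem 6 is easy to illustrate numerically: deflating an A-invariant subspace of a Hermitian matrix replaces the deflated eigenvalues by zeros and leaves the remaining spectrum untouched. A sketch with randomly generated Hermitian data (our own construction, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 6, 2

# Random Hermitian (here: real symmetric) A with known eigenpairs:
M = rng.standard_normal((N, N))
A = (M + M.T) / 2
evals, evecs = np.linalg.eigh(A)

# Z := A-invariant subspace of the k eigenvalues of smallest modulus:
idx = np.argsort(np.abs(evals))
Z0 = evecs[:, idx[:k]]                 # orthonormal basis of Z
P = np.eye(N) - Z0 @ Z0.T              # orthogonal projection onto Zperp
A_hat = P @ A @ P

# Deflation replaces the k selected eigenvalues by zeros (cf. (4.3)):
got = np.sort(np.linalg.eigvalsh(A_hat))
expected = np.sort(np.concatenate([np.zeros(k), evals[idx[k:]]]))
assert np.allclose(got, expected)
```
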

5. Deflation by oblique projection: basic setting. So far we have based deflated GMRes and MinRes on orthogonal projections Q and P :[equivalent to] I - Q, but for GMRes and other solvers for nonsymmetric linear systems of equations it is more appropriate to consider oblique projections since the eigenspaces of A are typically not mutually orthogonal. Our approach is based on the natural splitting of [C.sup.N] into the direct sum of two A-invariant subspaces. In general, the corresponding decomposition of the residual search space will no longer be an orthogonal one. We therefore modify the setting of Section 2 as follows.

Let U [member of] [C.sup.Nxk] and [??] [member of] [C.sup.Nxk] have full rank k, and assume they are chosen such that the matrix E defined by

Z :[equivalent to] AU, E :[equivalent to] [[??].sup.H]Z

is nonsingular. Then set

U :[equivalent to]R(U), Z :[equivalent to]R(Z) = AU, [??] :[equivalent to] R([??]),

and

(5.1) Q :[equivalent to] [ZE.sup.-1] [[??].sup.H], P :[equivalent to] I - Q = I - [ZE.sup.-1] [[??].sup.H].

Note that still [Q.sup.2] = Q and [P.sup.2] = P, but now

(5.2) QZ = Z, Q[[??].sup.[perpendicular to]] = {o}, PZ = {o}, P[[??].sup.[perpendicular to]] = [[??].sup.[perpendicular to]],

where, as before, [[??].sup.[perpendicular to]] denotes the orthogonal complement of [??]. So, Q is the oblique projection onto Z along [[??].sup.[perpendicular to]], while P is the oblique projection onto [[??].sup.[perpendicular to]] along Z. In particular, N(P) = Z and R(P) = [[??].sup.[perpendicular to]]. Again, the subspaces U and Z will be used to augment the search spaces for the approximate solutions [x.sub.n] and the corresponding residuals [r.sub.n], respectively.
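The projector identities (5.2) can be verified directly. The following sketch builds Q and P from (5.1) with random full-rank data (an illustrative choice of U and [??]):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 5, 2

A = rng.standard_normal((N, N))
U = rng.standard_normal((N, k))
Zt = rng.standard_normal((N, k))          # stand-in for Z-tilde
Z = A @ U
E = Zt.T @ Z                              # must be nonsingular (our assumption)
assert abs(np.linalg.det(E)) > 1e-12

Q = Z @ np.linalg.solve(E, Zt.T)          # oblique projection onto Z
P = np.eye(N) - Q                         # oblique projection onto Zt-perp

assert np.allclose(Q @ Q, Q)              # Q^2 = Q
assert np.allclose(P @ P, P)              # P^2 = P
assert np.allclose(Q @ Z, Z)              # QZ = Z
assert np.allclose(P @ Z, 0)              # PZ = {o}

# Q annihilates Zt-perp: for any y with Zt^H y = o we get Qy = o.
y = rng.standard_normal(N)
y -= Zt @ np.linalg.solve(Zt.T @ Zt, Zt.T @ y)   # project y into Zt-perp
assert np.allclose(Q @ y, 0)
```
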

If the k columns [[??].sub.j] of [??] are chosen biorthogonal to the k columns [z.sub.j] of Z, which means that these two sets of columns form dual bases of [??] and Z, then E = [[??].sup.H]Z = [I.sub.k] and the formulas for Q and P simplify as before:

(5.3) Q = Z[[??].sup.H], P = I - Q = I - Z[[??].sup.H].

Note that this is automatically true if we choose the columns of Z as (right-hand side) eigenvectors of A and the columns of [??] as the corresponding left eigenvectors. This property even generalizes to multiple eigenvalues and defective matrices if the eigenvectors are suitably chosen.

As in Section 2 we further let

[[??].sub.o] :[equivalent to] [Pr.sub.o], [??] :[equivalent to] PAP.

Note that still

(5.4) N([??]) [[contains].bar] N(P) = Z, R([??]) [subset or equal to] R(P) = [[??].sup.[perpendicular to]],

so that the restriction of [??] to [[??].sup.[perpendicular to]] is a possibly singular endomorphism of [[??].sup.[perpendicular to]]. Consequently, the Krylov subspaces [[??].sub.n] defined in (2.3) are all subsets of [[??].sup.[perpendicular to]] since [[??].sub.o] [member of] [[??].sup.[perpendicular to]]. Therefore, we will be able to restrict a Krylov space solver to [[??].sup.[perpendicular to]].

The reason for choosing this subspace lies in the following generalization of Theorem 6. Recall that a simple A-invariant subspace is an A-invariant subspace with the property that for any eigenvector it contains, it also contains all the other eigenvectors and generalized eigenvectors that belong to the same eigenvalue; see [58]. In other words, choosing a simple A-invariant subspace induces a splitting of the characteristic polynomial into two co-prime factors and a related decomposition of the Jordan canonical form.

THEOREM 7. Assume that Z is a simple k-dimensional A-invariant subspace and [??] is the corresponding [A.sup.H]-invariant subspace, that is, for any Z, [??][member of] [C.sup.Nxk] with Z = R(Z) and [??] = R([??]) there are G, [??][member of] [C.sup.kxk] such that, with E :[equivalent to] [[??].sup.H]Z,

(5.5) $AZ = ZG, \qquad A^{H}\widetilde{Z} = \widetilde{Z}\widetilde{G}, \qquad \widetilde{G} = E^{-H}G^{H}E^{H}.$

Then [[??].sup.[perpendicular to]] is also A-invariant and Z [direct sum] [[??].sup.[perpendicular to]] = [C.sup.N]. Moreover, the restrictions of A, [??], and O to Z and [[??].sup.[perpendicular to]] satisfy

(5.6) $\widehat{A}\big|_{\mathcal{Z}} = O\big|_{\mathcal{Z}}, \qquad \widehat{A}\big|_{\widetilde{\mathcal{Z}}^{\perp}} = A\big|_{\widetilde{\mathcal{Z}}^{\perp}}.$

Proof. To fix ideas, let us first choose a special basis for Z and assume that A has a Jordan decomposition

(5.7) $A \begin{bmatrix} Z & \widetilde{Z}^{\perp} \end{bmatrix} = \begin{bmatrix} Z & \widetilde{Z}^{\perp} \end{bmatrix} \begin{bmatrix} J & O \\ O & J^{\perp} \end{bmatrix},$

where despite our notation the matrix [[??].sup.[perpendicular to]] is at this point not yet known to be related to the space [[??].sup.[perpendicular to]]. Eqn. (5.7) just reflects the fact that Z is A-invariant in the assumed sense, that J is the Jordan canonical form of A[|.sub.Z], and that Z contains the corresponding eigenvectors and generalized eigenvectors, while [J.sup.[perpendicular to]] is the Jordan canonical form of the restriction of A to R([[??].sup.[perpendicular to]]) and the columns of [[??].sup.[perpendicular to]] are the corresponding eigenvectors and generalized eigenvectors. So, [[??].sup.[perpendicular to]] just consists of the 'remaining' eigenvectors and generalized eigenvectors, and [J.sup.[perpendicular to]] consists of the 'remaining' Jordan blocks. Clearly, R([[??].sup.[perpendicular to]]) is also an A-invariant subspace, and Z [direct sum] R([[??].sup.[perpendicular to]]) is a direct sum, but in general not an orthogonal one. (Actually, we could weaken the assumption: we need the separation of the Jordan blocks of A into two sets, but the eigenvalues in the two sets need not all be different.)

As is well known, the rows of the inverse of [Z [[??].sup.[perpendicular to]]] are the left eigenvectors and generalized left eigenvectors of A or, equivalently, the complex conjugates of the right eigenvectors and generalized right eigenvectors of [A.sup.H]. To allow for another pair of bases for the induced pair of invariant subspaces of [A.sup.H], we let, for some nonsingular E [member of] [C.sup.kxk] and [E.sup.[perpendicular to]] [member of] [C.sup.(N-k)x(N-k)],

(5.8) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

so that E :[equivalent to] [[??].sup.H] Z as before, and, in addition,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

From the last two equations it follows that indeed R([Z.sup.[perpendicular to]]) = [Z.sup.[perpendicular to]] and R([[??].sup.[perpendicular to]]) = [[??].sup.[perpendicular to]], and by (5.7) the latter space was seen to be A-invariant. Moreover, multiplying (5.7) from both sides with the inverse of [ Z [[??].sup.[perpendicular to]] ] and inserting (5.8) yields

(5.9) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

So, the complex conjugates of the columns of [??] and [Z.sup.[perpendicular to]] span left-invariant subspaces. Finally, taking the Hermitian transpose leads to

(5.10) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

which implies in particular that [A.sup.H][??] = [??][E.sup.-H][J.sup.H][E.sup.H]. This establishes (5.5) in the case where G = J and [??] = [E.sup.-H][J.sup.H][E.sup.H]. The general case of G and [??] follows by noting that we nowhere made use of the Jordan structure of J and [J.sup.[perpendicular to]], but only of the 2 x 2 block diagonal structure in (5.7); that is, we referred to the Jordan structure just to ease the discussion.

On the other hand, when indeed starting from a Jordan decomposition (5.7) of A and choosing [??] and [Z.sup.[perpendicular to]] so that E = [I.sub.k] and [E.sup.[perpendicular to]] = [I.sub.N-k], we turn (5.10) into a Jordan decomposition (with lower bidiagonal Jordan blocks) of [A.sup.H].

Finally, it follows from (5.7) and the properties of P that

(5.11) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

So, [??]Z = O, and by comparison with (5.7) we find [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], which proves (5.6).
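Theorem 7 can be illustrated numerically: if Z holds right eigenvectors and [??] the corresponding left eigenvectors, the oblique projection deflates exactly the chosen eigenvalues. A sketch with a synthetic diagonalizable matrix (our own construction):

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 6, 2

# Nonsymmetric A with known simple spectrum: A = V D V^{-1}.
D = np.diag([0.01, -0.02, 1.0, 2.0, 3.0, 4.0])
V = rng.standard_normal((N, N))           # generic eigenvector basis
Vinv = np.linalg.inv(V)
A = V @ D @ Vinv

# Z: right eigenvectors of the two smallest eigenvalues (a simple A-invariant
# subspace); Zt: the corresponding left eigenvectors (rows of V^{-1}).
Z = V[:, :k]
Zt = Vinv[:k, :].T
E = Zt.T @ Z                              # = I_k by biorthogonality

P = np.eye(N) - Z @ np.linalg.solve(E, Zt.T)
A_hat = P @ A @ P

# Spectrum of A_hat: the deflated eigenvalues are replaced by zeros, the
# remaining ones are untouched (cf. (5.6)):
got = np.sort(np.linalg.eigvals(A_hat).real)
expected = np.sort(np.concatenate([np.zeros(k), np.diag(D)[k:]]))
assert np.allclose(got, expected, atol=1e-8)
```
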

But also in the typical situation where Z and [??] are not A-invariant, this pair of spaces is well chosen, as the following simple fact underlines.

Lemma 8. Let Z, [??] [member of] [C.sup.Nxk] be given such that E :[equivalent to] [[??].sup.H]Z is nonsingular, let Z :[equivalent to] R(Z) and [??] :[equivalent to] R([??]), and choose [Z.sup.[perpendicular to]], [[??].sup.[perpendicular to]] [member of] [C.sup.Nx(N-k)] such that their columns consist of bases of the orthogonal complements [Z.sup.[perpendicular to]] and [[??].sup.[perpendicular to]], respectively. Then

(5.12) $\begin{bmatrix} \widetilde{Z} & Z^{\perp} \end{bmatrix}^{H} \begin{bmatrix} Z & \widetilde{Z}^{\perp} \end{bmatrix} = \begin{bmatrix} E & O \\ O & E^{\perp} \end{bmatrix}, \qquad E^{\perp} :\equiv (Z^{\perp})^{H}\widetilde{Z}^{\perp},$

where all three matrices are nonsingular. In particular, [E.sup.[perpendicular to]] is nonsingular too, and

(5.13) $\mathcal{Z} \oplus \widetilde{\mathcal{Z}}^{\perp} = \mathbb{C}^{N}, \qquad \widetilde{\mathcal{Z}} \oplus \mathcal{Z}^{\perp} = \mathbb{C}^{N}$

are both decompositions of [C.sup.N] into (in general nonorthogonal) complements.

Proof. The block diagonal structure of the right-hand side of (5.12) holds by definition of [Z.sup.[perpendicular to]] and [[??].sup.[perpendicular to]], but we need to show that on the left-hand side the matrices [[??] [Z.sup.[perpendicular to]]] and [Z [[??].sup.[perpendicular to]]] are nonsingular, i.e., that their columns are linearly independent.

Let z [member of] Z be nonzero, so that z = Zc for some c [not equal to] o. For a proof by contradiction, let us assume that z is also a linear combination of the columns of [[??].sup.[perpendicular to]], i.e., z = [[??].sup.[perpendicular to]]k for some k [member of] [C.sup.N-k]. Then,

$Ec = \widetilde{Z}^{H}Zc = \widetilde{Z}^{H}\widetilde{Z}^{\perp}k = o,$

which implies that c = o, and thus z = o, in contrast to our assumption. It follows that [Z [[??].sup.[perpendicular to]]] is nonsingular. An analogous argument shows that [[??] [Z.sup.[perpendicular to]]] is nonsingular.

Remark. Note that, by definition, Z [direct sum] [Z.sup.[perpendicular to]] = [C.sup.N] and [??] [direct sum] [[??].sup.[perpendicular to]] = [C.sup.N] are two other decompositions of [C.sup.N], and they even feature orthogonal complements. In contrast, in general, the decompositions in (5.13) are not orthogonal, but they are adapted to the operator A if Z is exactly or nearly A-invariant. In (5.7) we assumed that Z and [[??].sup.[perpendicular to]] contain eigenvectors and generalized eigenvectors, which, in general, is not true in the setting of this and the following sections. In general, we will have

(5.14) $A \begin{bmatrix} Z & \widetilde{Z}^{\perp} \end{bmatrix} = \begin{bmatrix} Z & \widetilde{Z}^{\perp} \end{bmatrix} \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix},$

where the blocks [G.sub.12] and [G.sub.21] can be expected to contain only small elements if Z and [[??].sup.[perpendicular to]] are nearly A-invariant.

6. Deflation by oblique projection: truly deflated GMRES. Let us now come to the details of a truly deflated GMRes based on the observations of the previous section. Given an initial guess [x.sub.o], we choose as in Section 2 iterates [x.sub.n] from

(6.1) $x_{n} \in x_{o} + \widehat{\mathcal{K}}_{n} + \mathcal{U},$

where the Krylov subspaces [[??].sub.n] are still defined by (2.3). This implies that

(6.2) $r_{n} \in r_{o} + A\widehat{\mathcal{K}}_{n} + \mathcal{Z}.$

We again construct a nested sequence of orthogonal bases for the Krylov subspaces [[??].sub.n] by an Arnoldi process started with [v.sub.o] :[equivalent to] [[??].sub.o]/[beta], where now [[??].sub.o] :[equivalent to] [Pr.sub.o] [member of] [[??].sup.[perpendicular to]] and [beta] :[equivalent to] [[parallel][[??].sub.o][parallel].sub.2]. As before, this is expressed by the Arnoldi relation [??][V.sub.n] = [V.sub.n+1][[H.bar].sub.n]. Since R([V.sub.n]) = [[??].sub.n] [subset or equal to] [[??].sup.[perpendicular to]], we have [PV.sub.n] = [V.sub.n], and therefore again

(6.3) [??][V.sub.n] = [PAPV.sub.n] = [PAV.sub.n],

so that the Arnoldi relation still simplifies to

(6.4) [PAV.sub.n] = [V.sub.n+1] [[H.bar].sub.n].

However, recall that P and, hence, [??] are now defined differently.

In view of (6.1) we represent [x.sub.n] again as

(6.5) [x.sub.n] = [x.sub.o] + [V.sub.n] [k.sub.n] + [Um.sub.n]

with coordinate vectors [k.sub.n] [member of] [C.sup.n] and [m.sub.n] [member of] [C.sup.k]. Regarding the residuals, where we prefer a representation in terms of an orthonormal basis, we note that Z cannot be expected to have orthonormal columns, whence we propose to QR-decompose Z first:

(6.6) Z = [Z.sub.o] [R.sub.QR], [Z.sup.H.sub.o] [Z.sub.o] = [I.sub.k].

Then, after inserting AU = Z = [Z.sub.o][R.sub.QR], we get

(6.7) [r.sub.n] = [r.sub.o] - A[V.sub.n] [k.sub.n] - [Z.sub.o] [R.sub.QR] [m.sub.n].

Due to the Arnoldi relation (6.4) and the decomposition [r.sub.o] = [[??].sub.o] + Q[r.sub.o] = [v.sub.o][beta] + Q[r.sub.o], this becomes now, with [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(6.8) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

(6.9) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

is the truly deflated GMRes quasi-residual.

The columns of both [Z.sub.o] and [V.sub.n+1] are still orthonormal, but those of [Z.sub.o] need no longer be orthogonal to those of [V.sub.n+1]. So, in general, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], but since

(6.10) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

we have at least

(6.11) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

It is therefore tempting to minimize [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] instead of [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and as in Section 2 this amounts to solving an (n + 1) x n least squares problem with the extended Hessenberg matrix [[H.bar].sub.n] for minimizing [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], that is, for finding [k.sub.n] and subsequently choosing [m.sub.n] such that [q.sup.o.sub.n] = o:

(6.12) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

At this point we see that the QR decomposition of Z is actually not needed since we can achieve that [q.sup.o.sub.n] = o and thus [Z.sub.o][q.sup.o.sub.n] = o. In other words, we can represent [r.sub.n] as

(6.13) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with

(6.14) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and are then led to the same solution as given by (6.12). Formally, there is very little difference between this algorithm and the one of Section 2, but there is an essential mathematical improvement regarding the deflation of A. In view of Theorem 7 we call the new algorithm truly deflated GMRES.
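The two-stage computation just described -- an Arnoldi process for [??] = PAP, the coordinate least squares problem, and the explicit choice of [m.sub.n] that annihilates the Z-component of the residual -- can be sketched as follows. This is a naive dense-matrix illustration, not the paper's algorithm verbatim; in particular, [m.sub.n] = [E.sup.-1]([[??].sup.H][r.sub.o] - [C.sub.n][k.sub.n]) is our explicit form of the choice that makes [q.sup.o.sub.n] = o, and the data in the usage part are random stand-ins:

```python
import numpy as np

def truly_deflated_gmres(A, b, U, Zt, x0, nmax, tol=1e-8):
    # U holds the preimages of the deflation basis, Z = A U;
    # Zt holds a basis of Z-tilde, and E = Zt^H Z must be nonsingular.
    # Assumes no breakdown occurs (Theorem 9).
    N = A.shape[0]
    Z = A @ U
    E = Zt.conj().T @ Z
    proj = lambda v: v - Z @ np.linalg.solve(E, Zt.conj().T @ v)  # v -> P v

    r0 = b - A @ x0
    r0_hat = proj(r0)                      # projected initial residual
    beta = np.linalg.norm(r0_hat)
    V = [r0_hat / beta]
    H = np.zeros((nmax + 1, nmax))
    m = nmax
    for n in range(nmax):                  # Arnoldi for A_hat = P A P
        w = proj(A @ V[n])                 # P A v_n (P v_n = v_n already)
        for j in range(n + 1):             # modified Gram-Schmidt
            H[j, n] = np.vdot(V[j], w)
            w = w - H[j, n] * V[j]
        H[n + 1, n] = np.linalg.norm(w)
        if H[n + 1, n] <= tol * beta:      # invariant Krylov space reached
            m = n + 1
            break
        V.append(w / H[n + 1, n])
    Vn = np.column_stack(V[:m])
    Hbar = H[:m + 1, :m]

    # coordinate least squares problem: min || e_1 beta - Hbar k ||_2
    e1beta = np.zeros(m + 1); e1beta[0] = beta
    k_n = np.linalg.lstsq(Hbar, e1beta, rcond=None)[0]

    # choose m_n so that the residual has no component in Z:
    C_n = Zt.conj().T @ (A @ Vn)
    m_n = np.linalg.solve(E, Zt.conj().T @ r0 - C_n @ k_n)
    return x0 + Vn @ k_n + U @ m_n

# Hypothetical usage with random data; in practice the columns of Zt would be
# approximate left eigenvectors of A.
rng = np.random.default_rng(4)
N, k = 8, 2
A = rng.standard_normal((N, N)) + 3 * np.eye(N)   # shifted to keep A nonsingular
U = rng.standard_normal((N, k))
Zt = rng.standard_normal((N, k))
b = rng.standard_normal(N)
x = truly_deflated_gmres(A, b, U, Zt, np.zeros(N), nmax=N - k)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

After at most N - k steps the projected residual vanishes (the Krylov space of [??] lives in the (N - k)-dimensional space [[??].sup.[perpendicular to]]), and the Z-component is zeroed exactly by [m.sub.n], so x solves Ax = b up to roundoff.
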

In practice, this algorithm will be applied with restarts, and the matrices Z and [??] with the approximate right and left eigenvectors may be updated at each restart.

Truly deflated GMRes can break down in the same way as deflated GMRes. Here is the adaptation of Theorem 2, which only requires very small changes.

Theorem 9. If [[??].sub.o] [not member of] N([??]), then as long as N([??]) [intersection] [[??].sub.n] = {o}, the truly deflated GMRes method defined by (6.4)-(6.5), (6.9), and (6.12) yields in the nth step the approximate solution [x.sub.n] [member of] [x.sub.o] + [[??].sub.n] + U whose quasi-residual [[q.bar].sub.n] defined by (6.9) has minimal 2-norm.

However, if N([??]) [intersection] [[??].sup.[perpendicular to]] [not equal to] {o} and if [x.sub.o] is chosen such that [[??].sub.o] [member of] N([??]), then (and only then) truly deflated GMRes breaks down in the first step where n = 1. Moreover, at step n > 1, if (and only if) N([??]) [intersection] [[??].sub.n] [not equal to] {o}, the method breaks down when attempting to construct [v.sub.n]. In case of a breakdown, the search space [x.sub.o] + [[??].sub.n] + U does not contain the exact solution [x.sub.*].

If [[??].sup.[perpendicular to]] is A-invariant, breakdowns cannot happen, [C.sub.n] = O, and the Arnoldi relation (6.4) can be replaced by

(6.15) [AV.sub.n] = [V.sub.n+1][[H.bar].sub.n].

Proof. Essentially we just have to replace in the proof of Theorem 2 every occurrence of [Z.sup.[perpendicular to]] by [[??].sup.[perpendicular to]]. This applies also to the last sentence, including (6.15). In that proof we only made use of Z and [Z.sup.[perpendicular to]] being complementary subspaces, but not of their orthogonality.

Corollaries 3 and 4 can also be adapted easily.

7. Deflation by oblique projection: the adjoint Krylov space. Some very efficient alternatives to GMRes that reduce both computing time and memory are based on the nonsymmetric Lanczos biorthogonalization process. Our aim in the next two sections is to adapt the approach of the previous two sections to these alternatives, in particular to the quasi-minimal residual (QMR) method of Freund and Nachtigal [28], which is fully analogous to GMRes. To this end, we first need to look at the adjoints of the projections Q and P of (5.1) and the adjoint of our restricted operator [??] :[equivalent to] PAP.

The adjoint projections are defined by

(7.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

from which we see that the properties (5.2) of Q and P are supplemented as follows:

(7.2a) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(7.2b) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

So, [Q.sup.H] is the oblique projection onto [??] along [Z.sup.[perpendicular to]], while [P.sup.H] is the oblique projection onto [Z.sup.[perpendicular to]] along [??]. In particular,

(7.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For the adjoint operator [[??].sup.H] = [P.sup.H] [A.sup.H] [P.sup.H] this means that

(7.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We define the dual Krylov subspaces (sometimes called the shadow spaces) started from [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] by

(7.5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Methods based on implicitly or explicitly constructing for each n a pair of biorthogonal bases should choose the right and left bases, respectively, such that

(7.6a) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(7.6b) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

In the rest of this section let us again consider the case where Z is A-invariant, which led to Theorem 7 and motivated using deflated solvers in the first place. Theorem 7 translates to the adjoint operator as follows.

Theorem 10. Under the assumptions of Theorem 7, [??] and [Z.sup.[perpendicular to]] are [A.sup.H] invariant, and the restrictions of [A.sup.H], [[??].sup.H], and O to [??] and [Z.sup.[perpendicular to]] satisfy

(7.7) $\widehat{A}^{H}\big|_{\widetilde{\mathcal{Z}}} = O\big|_{\widetilde{\mathcal{Z}}}, \qquad \widehat{A}^{H}\big|_{\mathcal{Z}^{\perp}} = A^{H}\big|_{\mathcal{Z}^{\perp}}.$

Proof. We take Z and [[??].sup.[perpendicular to]] as given by the Jordan decomposition (5.7), and choose [??] and [Z.sup.[perpendicular to]] as towards the end of the proof of Theorem 7, such that E = [I.sub.k] and [E.sup.[perpendicular to]] = [I.sub.N-k]. Then, (5.9) simplifies to

(7.8) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

while (5.10) becomes

(7.9) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

From the proof of Theorem 7 we know already that the matrices [??] and [Z.sup.[perpendicular to]] contain in their columns bases of the spaces [??] and [Z.sup.[perpendicular to]], respectively; so these two spaces are [A.sup.H]-invariant. Finally, in analogy to (5.11) we have

(7.10) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

from which, by comparison with (7.9), we find the result (7.7).

8. Deflation by oblique projection: deflated QMR. Now we are ready to introduce a deflated QMR method that is analogous to our truly deflated GMRes, but replaces the Arnoldi process by the nonsymmetric Lanczos process. The latter has the important feature that it can provide approximations of both right and left eigenvectors. For details about the QMR method, see Freund and Nachtigal [28]; for a presentation in the notation used here, see [32]. Deflated QMR is started with the pair

(8.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(8.2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [[??].sub.o] must be chosen such that [[??].sub.o] [member of] [Z.sup.[perpendicular to]] and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. The Arnoldi relation (6.4) is then replaced by a pair of Lanczos relations

(8.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where we may enforce that all columns of [V.sub.n+1] and [[??].sub.n+1] have 2-norm one, and where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

is nonsingular diagonal or, if look-ahead steps [26] are needed, block-diagonal. With this choice (7.6a) and (7.6b) hold.
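A minimal sketch of the underlying Lanczos biorthogonalization (no look-ahead; our own simplified implementation, which assumes the inner products [w.sup.H.sub.n][v.sub.n] stay safely nonzero) illustrates the biorthogonality (7.6a)-(7.6b) and the tridiagonality of the projected matrix:

```python
import numpy as np

def lanczos_biorth(A, v0, w0, m):
    # Nonsymmetric Lanczos biorthogonalization without look-ahead: builds
    # V = [v_0..v_{m-1}] and W = [w_0..w_{m-1}] with W^H V diagonal and
    # W^H A V tridiagonal.  Assumes no (near-)breakdown: w_n^H v_n != 0.
    N = A.shape[0]
    V = np.zeros((N, m)); W = np.zeros((N, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    W[:, 0] = w0 / np.linalg.norm(w0)
    for n in range(m - 1):
        dn = W[:, n] @ V[:, n]               # must stay away from 0
        Av = A @ V[:, n]; Aw = A.T @ W[:, n]
        alpha = (W[:, n] @ Av) / dn
        v = Av - alpha * V[:, n]
        w = Aw - alpha * W[:, n]
        if n > 0:                            # three-term recurrence
            dp = W[:, n - 1] @ V[:, n - 1]
            v -= V[:, n - 1] * ((W[:, n - 1] @ Av) / dp)
            w -= W[:, n - 1] * ((V[:, n - 1] @ Aw) / dp)
        V[:, n + 1] = v / np.linalg.norm(v)  # both sequences normalized to
        W[:, n + 1] = w / np.linalg.norm(w)  # 2-norm one, as in the text
    return V, W

# Mildly nonsymmetric test matrix, so the recurrence stays well conditioned:
rng = np.random.default_rng(5)
N, m = 7, 4
B = rng.standard_normal((N, N))
A = (B + B.T) / 2 + 0.1 * (B - B.T) / 2
v0 = rng.standard_normal(N)
V, W = lanczos_biorth(A, v0, v0.copy(), m)

D = W.T @ V            # diagonal up to roundoff
T = W.T @ A @ V        # tridiagonal up to roundoff
```
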

So, if we start again from the ansatz (6.5) for the approximate solutions [x.sub.n], which implies the representation (6.7) for the residuals, and if we again QR-decompose AU = Z = [Z.sub.o] [R.sub.QR] as in (6.6), we obtain exactly as in (6.8)

(8.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

(8.5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

is now the deflated QMR quasi-residual. Note that formally the only change is the replacement of the extended Hessenberg matrix [[H.bar].sub.n] by an extended tridiagonal matrix [[T.bar].sub.n] (or a block tridiagonal one if look-ahead steps are needed). This means short recurrences (except for the very unlikely special situation of a long look-ahead step) and thus no need to store the columns of [V.sub.n] and [[??].sub.n] since, in fact, the component [V.sub.n] [k.sub.n] of the approximate solutions [x.sub.n] can be updated step by step, as in MinRes.

Since we have chosen to QR-decompose Z--assuming that the number k of its columns is small--we still have [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] as in (6.11). However, the other essential change is that the columns of [V.sub.n+1] are no longer orthogonal, so, in general, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], unlike in (6.11). And, since [V.sub.n] has changed, so has [C.sub.n] :[equivalent to] [[??].sup.H][AV.sub.n].

Nevertheless, as in QMR, we may choose to minimize [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] instead of [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and as in Section 2 this amounts to solving first an (n + 1) x n least squares problem with the extended tridiagonal matrix [[T.bar].sub.n] for minimizing [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and for finding [k.sub.n]. Next, [m.sub.n] is chosen such that [q.sup.o.sub.n] = o:

(8.6) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

As in Section 6, the QR decomposition of Z is seen to be unnecessary. Updating the least squares problem (8.6) by updating the QR decomposition of [[T.bar].sub.n] is done as in MinRes and QMR.

Deflated QMR, too, can break down in the same way as deflated GMRes. The corresponding adaptation of the first part of Theorem 2 again requires only minor changes. But additionally, QMR may break down due to a serious breakdown of the nonsymmetric Lanczos process; see, e.g., [26, 32] for a discussion of these breakdowns. They can nearly always be circumvented by look-ahead.

Theorem 11. If [[??].sub.o] [not member of] N([??]), then as long as N([??]) [intersection] [[??].sub.n] = {o} and as long as there are no serious Lanczos breakdowns, the deflated QMR method defined by (6.5) and (8.3)-(8.6) yields in the nth step the approximate solution [x.sub.n] [member of] [x.sub.o] + [[??].sub.n] + U whose quasi-residual [[q.bar].sub.n] defined by (8.5) has minimal 2-norm.

However, apart from Lanczos breakdowns, if N ([??]) [intersection] [[??].sup.[perpendicular to]] [not equal to] {o} and if [x.sub.o] is chosen such that [[??].sub.o] [member of] N ([??]), then (and only then) deflated QMR breaks down in the first step where n = 1. Moreover, at step n > 1, if (and only if) N ([??]) [intersection] [[??].sub.n] [not equal to] {o}, the method breaks down when attempting to construct [v.sub.n]. In case of these two latter types of breakdown, the search space [x.sub.o] + [[??].sub.n] + U does not contain the exact solution.

Proof. Here, we have to replace in the proof of Theorem 2 not only every occurrence of [Z.sup.[perpendicular to]] by [[??].sup.[perpendicular to]], but also [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], 'orthogonality to [[??].sub.n]' by 'orthogonality to [[??].sub.n]', and 'Arnoldi' by 'Lanczos'. Then the arguments remain the same as in the proof of Theorem 9.

9. Deflation by oblique projection: deflated simplified QMR. If A is Hermitian and the Lanczos biorthogonalization algorithm is started with [[??].sub.o] = [v.sub.o], then it simplifies to the symmetric Lanczos algorithm since [[??].sub.n] = [V.sub.n] and the two Lanczos relations in (8.3) coincide. Consequently, QMR just simplifies to MinRes, where, in particular, only one matrix-vector product is needed per step. As pointed out by Freund [25], there are other situations where one can profit from a similar simplification. In fact, Rutishauser [50] made the point that, in theory, the matrix-vector product by [A.sup.H] in the nonsymmetric Lanczos algorithm can be avoided since for every square matrix A there exists a nonsingular matrix S such that [A.sup.T] = [SAS.sup.-1], that is, [A.sup.T] is always similar to A; see, e.g., [35, p. 134] for a proof of this result. Choosing [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] yields then [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] for n > 0; therefore, the multiplication by [A.sup.H] can be replaced by a multiplication by S followed by complex conjugation. The vectors [[??].sub.n] are temporarily needed to compute the recursion coefficients stored in [T.sub.n].

However, in general, the spectral decomposition of A is needed to construct S, and this makes the idea normally infeasible. But there are some interesting situations where the matrix S is known and simple to multiply with. Freund [25] lists several classes of S-symmetric and S-Hermitian matrices, satisfying by definition [A.sup.T]S = SA with S = [S.sup.T] and [A.sup.H]S = SA with S = [S.sup.H], respectively. But we note that the symmetry conditions S = [S.sup.T] or S = [S.sup.H] are not needed for the simplification.
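For instance, for a complex symmetric A (the case S = I of S-symmetry), the product with [A.sup.H] reduces to a product with A plus complex conjugations. A two-line check with toy data of our own:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 5
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = B + B.T                            # complex symmetric: A^T = A, so S = I
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# A^H x can be formed without A^H: conjugate, multiply by A, conjugate back.
y_direct = A.conj().T @ x
y_trick = np.conj(A @ np.conj(x))
assert np.allclose(y_direct, y_trick)
```
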

In one popular application of deflated Krylov space methods, the Wilson formulation of the lattice Dirac operator in lattice Quantum Chromodynamics (QCD), the Wilson matrix A has the form A = I - [kappa]W, where [kappa] [member of] R and W is S-Hermitian for a diagonal matrix S with diagonal elements [+ or -]1. See [7, 10, 29] for early contributions making use of this feature and [2, 1, 46, 57] for some samples of the many publications that make use of deflation in lattice QCD.
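This structure can be mimicked with toy data. The construction below (W = SH with H Hermitian is merely an illustrative way to produce an S-Hermitian W, not the Wilson operator itself) checks that A = I - [kappa]W then satisfies [A.sup.H]S = SA for real [kappa]:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 6
S = np.diag(np.where(rng.random(N) < 0.5, 1.0, -1.0))   # diagonal, entries +-1

# Illustrative construction of an S-Hermitian W: take W = S H with H Hermitian;
# then W^H S = H S S = H = S S H = S W.
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / 2
W = S @ H
assert np.allclose(W.conj().T @ S, S @ W)

# A = I - kappa*W inherits S-Hermiticity for real kappa:
kappa = 0.3
A = np.eye(N) - kappa * W
assert np.allclose(A.conj().T @ S, S @ A)
```
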

So, compared to QMR, simplified QMR reduces the cost in both time and memory. Regarding modifications for the deflated version, there is not much change before one gets to the details of an implementation. In particular, (8.4)-(8.6) remain unchanged.

10. An alternative interpretation of the augmentation component. We have seen that in each of the deflated Krylov space methods presented here and based on the ansatz [x.sub.n] = [x.sub.o] + [V.sub.n][k.sub.n] + [Um.sub.n], the solution can be found in two steps: first, an (n + 1) x n least squares problem with an extended Hessenberg or tridiagonal matrix is solved for [k.sub.n]; then an explicit formula for [m.sub.n] is evaluated in order to determine the augmentation component [Um.sub.n] of the approximate solution and the corresponding augmentation component -[Zm.sub.n] of the residual. As mentioned, the first part can be viewed as applying the corresponding standard Krylov space method to the singular linear system [??][??] = [[??].sub.o]. For example, in deflated GMRes, checking the derivation of the least squares problem in (2.12),

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

we readily see that it is the coordinate space equivalent of the least squares problem

(10.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

in the space [Z.sup.[perpendicular to]]. On the other hand, [m.sub.n] :[equivalent to] [Z.sup.H][r.sub.o] - [C.sub.n][k.sub.n] yields in residual space

(10.2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

a formula relating three vectors in Z. The corresponding correction for the iterates is

(10.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Now, let us define, with the optimal [k.sub.n],

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

so that [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Then (10.1)-(10.3) take the form

(10.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(10.5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(10.6) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(10.7) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This clarifies for deflated GMRes the relationship between the problems in coordinate space and those in the Krylov subspace [[??].sub.n] [subset] [Z.sup.[perpendicular to]], in the affine space [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and in the augmented space [x.sub.o] + [[??].sub.n] + U.
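The two-step procedure described above (a small least squares problem for [k.sub.n], then an explicit formula for [m.sub.n]) can be illustrated numerically. The following Python sketch uses a randomly generated hypothetical test matrix and deflation space (all variable names are ours, not from the text); it implements the orthogonal-projection variant of Section 2 and checks that the resulting residual norm matches a direct least squares solve over the whole augmented affine space:

```python
import numpy as np

def arnoldi(op, v1, n):
    """Arnoldi process for the operator op; returns V (N x (n+1)) and the
    extended (n+1) x n Hessenberg matrix Hbar with op(V_n) = V_{n+1} Hbar."""
    N = v1.size
    V = np.zeros((N, n + 1))
    Hbar = np.zeros((n + 1, n))
    V[:, 0] = v1
    for j in range(n):
        w = op(V[:, j])
        for i in range(j + 1):
            Hbar[i, j] = V[:, i] @ w
            w = w - Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

rng = np.random.default_rng(1)
N, k, n = 30, 3, 8
A = rng.standard_normal((N, N)) + 6 * np.eye(N)   # hypothetical test matrix
b = rng.standard_normal(N)
x0 = np.zeros(N)
r0 = b - A @ x0

U0 = rng.standard_normal((N, k))
Z, R = np.linalg.qr(A @ U0)        # orthonormal basis Z of the space A*R(U)
U = U0 @ np.linalg.inv(R)          # rescale the preimage so that A U = Z
P = np.eye(N) - Z @ Z.T            # orthogonal projection with nullspace R(Z)

# step 1: least squares problem in coordinate space for k_n
rhat0 = P @ r0
beta = np.linalg.norm(rhat0)
V, Hbar = arnoldi(lambda v: P @ (A @ (P @ v)), rhat0 / beta, n)
e1beta = np.zeros(n + 1)
e1beta[0] = beta
kn = np.linalg.lstsq(Hbar, e1beta, rcond=None)[0]

# step 2: explicit formula for the augmentation coordinates m_n
Cn = Z.T @ (A @ V[:, :n])
mn = Z.T @ r0 - Cn @ kn

xn = x0 + V[:, :n] @ kn + U @ mn
rn = b - A @ xn

# reference: direct least squares over the whole augmented space
Wmat = np.hstack([V[:, :n], U])
copt = np.linalg.lstsq(A @ Wmat, r0, rcond=None)[0]
ropt = r0 - (A @ Wmat) @ copt
```

The assertion that the two residual norms agree is exactly the coordinate space equivalence discussed above: the small problem for [k.sub.n] plus the explicit [m.sub.n] reproduce the global minimizer.
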

Likewise, with differently defined matrices [V.sub.n+1], [[??].sub.n], Q, P, [C.sub.n], and the new matrix [??], and thus also with different [??], [[??].sub.o], and [[??].sub.n], the least squares problem of truly deflated GMRes in (6.12) corresponds to one in [[??].sup.[perpendicular to]] that is formally identical with (10.1) and can be recast as (10.4) or (10.5). Moreover, the formula [m.sub.n] :[equivalent to] [[??].sup.H][r.sub.o] - [C.sub.n][k.sub.n] yields in the residual space still (10.6), while in the search space of the approximants we get analogously to (10.7)

(10.8) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The property that (10.4) and (10.5) remain valid can be understood from the fact that in (6.11) the term [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] vanishes for the optimal choice of [x.sub.n], while for the other term [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] the coordinate map is still isometric because the basis of [[??].sub.n+1], which consists of the columns of [V.sub.n+1], is orthonormal. But, in general, even if [[??].sup.[perpendicular to]] is A-invariant, [r.sub.n] is no longer the minimal residual from [r.sub.o] + A[[??].sub.n] + Z, since Z and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] need not be orthogonal to each other.

For deflated QMR, the restricted minimal norm properties (10.4)-(10.5) are no longer valid, but the derivations of (10.6) and (10.8) remain unchanged, although the matrices [V.sub.n+1], [[??].sub.n], and [C.sub.n] again have new meanings.

Yet another interpretation of the augmentation component [Um.sub.n] is found as follows. Let us consider the oblique projection framework of Sections 5-8 first, with E :[equivalent to] [[??].sup.H] Z = [I.sub.k] as in our presentation of truly deflated GMRes and deflated QMR. We further define

(10.9) [M.sub.A] :[equivalent to] U[[??].sup.H], [Q.sub.A] :[equivalent to] I - [M.sub.A]A = I - U[[??].sup.H]A,

noting that both [M.sub.A]A and [Q.sub.A] are projections. Inserting them into (10.8) we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and we end up with

(10.10) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This formula holds for truly deflated GMRes and for deflated QMR. An analogous formula holds in the situation of Sections 2-4, that is, for GMRes and MinRes deflated with orthogonal projections. We have to replace [??] by Z and the pair [M.sub.A], [Q.sub.A] by

(10.11) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

to obtain likewise

(10.12) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The last formula is the 'correction formula' of Theorem 2.2 in [31] for the case where B = A there and our normalization E = [I.sub.k] holds. Both (10.10) and (10.12) relate the approximate solutions [x.sub.n] of the augmented and deflated method to the approximate solutions [[??].sub.n] of a deflated but not augmented method: [x.sub.n] [member of] [[??].sub.n] + U. The term [Um.sub.n] = [M.sub.A](b - A[[??].sub.n]) or [Um.sub.n] = [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], respectively, is the 'correction' due to augmentation.
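A small numerical sketch (with hypothetical random data of our own choosing) can confirm the two facts used here: after rescaling [??] so that E = [[??].sup.H]AU = [I.sub.k], both [M.sub.A]A and [Q.sub.A] = I - [M.sub.A]A are projections, and the correction x = [??] + [M.sub.A](b - A[??]) removes the [??]-component of the residual for an arbitrary approximation [??]:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 12, 3
A = rng.standard_normal((N, N)) + 4 * np.eye(N)   # hypothetical nonsingular A
U = rng.standard_normal((N, k))
Zt = rng.standard_normal((N, k))                  # stand-in for Z-tilde
# rescale Zt so that E = Zt^H A U = I_k, the normalization assumed in the text
Zt = Zt @ np.linalg.inv(Zt.T @ A @ U).T
MA = U @ Zt.T                                     # M_A := U Zt^H
QA = np.eye(N) - MA @ A                           # Q_A := I - M_A A
# correction step: for an arbitrary approximation xt, add the augmentation term
b = rng.standard_normal(N)
xt = rng.standard_normal(N)
x = xt + MA @ (b - A @ xt)
```
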

11. Other projections used in augmentation and deflation methods. Many publications on particular augmentation and deflation methods apply projections that are different from the projections P that are the basis of our approach. In this section we introduce two parameter-dependent projections [P.sub.B] and [Q.sub.B] that cover many of the published proposals, the parameter B being a nonsingular matrix of the same size as A. The most relevant choices for B are

1. B = I for deflated CG, BiCG, and FOM [51],

2. B = [A.sup.H] for deflated CR, GCR [17], MinRes, and GMRes,

3. B = A for deflated BiCR [56].

We start here from a setting suitable for deflated BiCG and BiCR that will be treated fully in [30]. Then we specialize it to the setting for CG, FOM, CR, GCR, MinRes, and GMRes considered in [31], which covers most of the published approaches. Similar to the situation in our Sections 5-8 we let

U :[equivalent to] R(U), Z :[equivalent to] AU, Z :[equivalent to] R(Z), [??] :[equivalent to] R([??]), [??] :[equivalent to] [A.sup.H][??], [??] :[equivalent to] R([??]),

but now we replace E by a more general [E.sub.B] [member of] [C.sup.kxk] and introduce a matrix M [member of] [C.sup.NxN] that replaces our Q:

[E.sub.B] :[equivalent to] [[??].sup.H]BAU, M :[equivalent to] U[E.sup.-1.sub.B][[??].sup.H].

Of course, we assume that [E.sub.B] is nonsingular. Finally, we introduce two projections [P.sub.B] and [Q.sub.B] as well as a corresponding projection [[??].sub.B] of A, all defined in Table 11.1, which also lists the kernels and ranges of these three operators. In the case where B = I these operators have been used by Erlangga and Nabben [20].
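Table 11.1 itself is not reproduced in this text, but the algebra behind such projections is easy to check numerically. In the following sketch (random hypothetical A, B, U, and [??], with our own variable names), the product MBA with M = U[E.sup.-1.sub.B][[??].sup.H] is idempotent, i.e., a projection onto R(U) that fixes U, and hence I - MBA is a projection as well:

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 10, 2
A = rng.standard_normal((N, N)) + 4 * np.eye(N)  # hypothetical nonsingular A
B = rng.standard_normal((N, N)) + 4 * np.eye(N)  # hypothetical nonsingular parameter B
U = rng.standard_normal((N, k))
Zt = rng.standard_normal((N, k))                 # stand-in for Z-tilde
EB = Zt.T @ B @ A @ U                            # E_B := Zt^H B A U
M = U @ np.linalg.inv(EB) @ Zt.T                 # M := U E_B^{-1} Zt^H
T = M @ B @ A                                    # idempotent: a projection onto R(U)
```
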

In contrast, by comparing [E.sub.B] with E we see that in Section 5 the choice was B = A. In this case we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Note that [Q.sub.A] is the same as in (10.9) if E = [I.sub.k] since [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. However, [M.sub.A] = [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] in general. But the following holds:

Theorem 12. For the projected operators [??] of Sections 5-8 and [[??].sub.B] of Table 11.1 with B = A there holds

(11.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Moreover, under the assumptions of Theorem 7, where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(11.2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and therefore [??] = [[??].sub.A] on [C.sup.N].

Proof. By definition, [??] = PAP, where P is a projection with N(P) = Z and R(P) = [[??].sup.[perpendicular to]]. Consequently, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Moreover, if Z is A-invariant,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Finally, under the assumptions of Theorem 7, also [[??].sup.[perpendicular to]] is A-invariant and, by (5.6),

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Altogether, we obtain (11.2) and, since Z [direct sum] [Z.sup.[perpendicular to]] = [C.sup.N] under these assumptions, there holds [??] = [[??].sub.A] on [C.sup.N].

An analogous result holds in the situation of Sections 2-4. There is no dual space there, so we redefine

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

[P.sub.B], [Q.sub.B], and [[??].sub.B] can be defined as before, but their ranges differ slightly; see Table 11.2. This is the situation considered in [31]. (But note that our B is defined differently and equals [B.sup.H] in the notation of [31].) The case where B = I covers deflated CG [48, 13, 41, 61, 19, 54] and is also a topic of study in [21, 47, 60] and related work.

Comparing [E.sub.B] with E of Section 2 we see that B = [A.sup.H] here. Then we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Now [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is the same as in (10.11) if E = [I.sub.k] since [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. The following analog of Theorem 12 holds:

Theorem 13. For the projected operators [??] of Sections 2-4 and [[??].sub.B] of Table 11.2 with B = [A.sup.H] there holds

(11.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Moreover, if Z and [Z.sup.[perpendicular to]] are A-invariant, then

(11.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and therefore [??] = [[??].sub.[A.sup.H]] on [C.sup.N].

Proof. The proof is fully analogous to that of Theorem 12 and is omitted here.

In summary, the two slightly different projections P used here in Sections 2-4 and in Sections 5-8 coincide with the projections [P.sub.[A.sup.H]] and [P.sub.A] defined in Table 11.2 (for B = [A.sup.H]) and Table 11.1 (for B = A), respectively, but they differ from the projections [P.sub.I] defined there when B = I. The latter projections are those used in deflated CG [48, 13, 41] and deflated BiCG [30]. Moreover, even when [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], our deflated operator [??] = PAP differs in general from the deflated operators [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and [[??].sub.A], respectively, unless Z and [Z.sup.[perpendicular to]] or [[??].sup.[perpendicular to]] are exactly right and left A-invariant subspaces.

12. Deflated quasi-(bi)orthogonal residual methods. The GMRes algorithm of Saad and Schultz [53] is just one instance of a so-called minimal residual (MR) method: a Krylov space solver whose iterates and residuals, restricted by

(12.1) [x.sub.n] [member of] [x.sub.o] + [K.sub.n](A, [r.sub.o]), [r.sub.n] [member of] [r.sub.o] + A[K.sub.n](A, [r.sub.o])

have the minimal norm property [[parallel][r.sub.n][parallel].sub.2] = min!, which is equivalent to the Galerkin condition

(12.2) [r.sub.n] [perpendicular] A[K.sub.n](A, [r.sub.o]).

Other methods with the same mathematical properties are the Generalized Conjugate Residual (GCR) method [17], the MinRes algorithm of Paige and Saunders [49] for Hermitian matrices, and the Conjugate Residual (CR) method of Stiefel [59] for Hpd matrices. While MinRes and GMRes transplant the problem into coordinate space, CR and GCR directly use recursions for [x.sub.n] and [r.sub.n].

There is an analogous family of so-called orthogonal residual (OR) methods, where (12.2) is replaced by another Galerkin condition,

(12.3) [r.sub.n] [perpendicular] [K.sub.n](A, [r.sub.o]),

which implies that the residuals are mutually orthogonal. This family includes the ubiquitous conjugate gradient (CG) method of Hestenes and Stiefel [34] for Hpd matrices, which has the property that the residuals have minimal [A.sup.-1]-norm, or, equivalently, the error vectors have minimal A-norm. Another one is the Full Orthogonalization Method (FOM) of Saad [51]. Of course, if A is not Hpd, there is no [A.sup.-1]-norm, and therefore no minimal norm property. Moreover, for some n an iterate characterized by (12.1) and (12.3) need not exist. Therefore there is little interest in this method.

Of much greater importance is the biconjugate gradient (BiCG) method of Lanczos [40] and Fletcher [23], where the Galerkin condition (12.3) is replaced by the Petrov-Galerkin condition

(12.4) [r.sub.n] [perpendicular] [K.sub.n] ([A.sup.H], [[??].sub.o]),

with a freely selectable [[??].sub.o]. There is still the drawback that iterates may not exist, and further breakdown problems lurk (see, e.g., [32]), but this is balanced by the enormous advantage of short recurrences for iterates and residuals. Eq. (12.4) implies that the residuals [r.sub.n] and the so-called shadow residuals [[??].sub.n] of the fictitious linear system [A.sup.H][??] = [[??].sub.o] (with initial approximation [[??].sub.o] :[equivalent to] o) are mutually biorthogonal.

If we consider a transplantation of an OR method to coordinate space, it follows immediately that [r.sub.n] = [r.sub.o] - A[V.sub.n][k.sub.n] is a scalar multiple of [v.sub.n], the (n + 1)th basis vector generated by the Arnoldi or the nonsymmetric Lanczos process, respectively. Moreover, inserting the Arnoldi relation A[V.sub.n] = [V.sub.n+1][[??].sub.n] or the Lanczos relation A[V.sub.n] = [V.sub.n+1][[??].sub.n] we see that the coordinate vector [k.sub.n] satisfies

(12.5) [H.sub.n][k.sub.n] = [e.sub.1][beta] or [T.sub.n][k.sub.n] = [e.sub.1][beta],

respectively, with the n x n matrices [H.sub.n] and [T.sub.n] that are the 'upper parts' of the matrices [[??].sub.n] and [[??].sub.n] used in the coordinate space based MR methods. Solving these linear systems recursively by LR or QR factorization we obtain coordinate space based OR methods. In the case of the tridiagonal matrices [T.sub.n] it is possible to derive short recurrences for the iterates and residuals, but this means essentially that we apply a CG-like or BiCG-like algorithm.
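As a minimal illustration of a coordinate space based OR method, the following Python sketch (random hypothetical data, our own names) builds the Arnoldi basis, solves the square Hessenberg system [H.sub.n][k.sub.n] = [e.sub.1][beta] of FOM type, and verifies the Galerkin condition (12.3) as well as the fact that [r.sub.n] is a scalar multiple of the last generated basis vector:

```python
import numpy as np

def arnoldi(A, v1, n):
    """Arnoldi process; returns V (N x (n+1)) and the extended Hessenberg Hbar."""
    N = v1.size
    V = np.zeros((N, n + 1))
    Hbar = np.zeros((n + 1, n))
    V[:, 0] = v1
    for j in range(n):
        w = A @ V[:, j]
        for i in range(j + 1):
            Hbar[i, j] = V[:, i] @ w
            w = w - Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

rng = np.random.default_rng(4)
N, n = 20, 6
A = rng.standard_normal((N, N)) + 5 * np.eye(N)  # hypothetical test matrix
b = rng.standard_normal(N)
beta = np.linalg.norm(b)                         # x0 = 0, so r0 = b
V, Hbar = arnoldi(A, b / beta, n)

Hn = Hbar[:n, :n]          # 'upper part' of the extended Hessenberg matrix
rhs = np.zeros(n)
rhs[0] = beta
kn = np.linalg.solve(Hn, rhs)   # square OR system instead of the MR least squares
xn = V[:, :n] @ kn
rn = b - A @ xn
```
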

In this section we want to point out that we can define augmented and deflated methods that are not quite (bi)orthogonal residual methods, but might be called deflated quasi-(bi)orthogonal residual methods and have the property that they turn into deflated (bi)orthogonal residual methods if Z is A-invariant. We start again from

(12.6) [x.sub.n] = [x.sub.o] + [V.sub.n][k.sub.n] + [Um.sub.n], [r.sub.n] = [r.sub.o] - A[V.sub.n][k.sub.n] - [Zm.sub.n],

and a representation of [r.sub.n] in terms of the basis of [K.sub.n+1] [direct sum] Z given by [[V.sub.n+1] Z]. Deflated CG [48, 13, 41, 61, 19, 54] and deflated FOM are normally characterized by

(12.7) [r.sub.n] [perpendicular to] [[??].sub.n] [direct sum] U.

For CG, i.e., for Hpd A, it has been implicitly shown in various ways [13, 36, 48] (see also [19, Thm. 4.1] and [54, Thm. 4.2]) that this implies the following optimality result, for which we provide the sketch of a straightforward proof.

Theorem 14. Assume A is Hpd, define [K.sub.n] and U as in Section 2, and let again [x.sub.*] :[equivalent to] [A.sup.-1]b. Then the condition (12.7) implies that [x.sub.n] is optimal in the sense that [[parallel][x.sub.n] - [x.sub.*][parallel].sub.A] is minimal under the restriction [x.sub.n] [member of] [x.sub.o] + [K.sub.n] [direct sum] U.

Proof. Assume [x.sub.n] and [r.sub.n] are represented as in (12.6), and let

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Then straightforward differentiation shows that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Any stationary point is characterized by vanishing gradients, that is, by [r.sub.n] [perpendicular to] R([V.sub.n]) = [K.sub.n] and [r.sub.n] [perpendicular to] R(U) = U. Moreover, it is a minimum since [V.sup.H.sub.n]A[V.sub.n] and [U.sup.H]AU are Hpd.

The deflated CG algorithms of [48, 13, 41, 61, 19, 54] fulfill condition (12.7) and thus maintain global optimality. For deflation they implicitly or explicitly apply oblique projections, namely [P.sub.I] or [Q.sub.I] of Table 11.2 (with B = I and [A.sup.T] = A, so that [P.sub.I] = [Q.sup.T.sub.I]). Dostál [13] calls MA a conjugate projection. Moreover, these algorithms are all based on recurrences for iterates and residuals, so they are not coordinate space based. But unless Z is exactly A-invariant, the approach promoted in this paper, which leads to the decomposition [[??].sub.n] [direct sum] Z, is in conflict with a global optimality criterion valid for [K.sub.n] [direct sum] U. To obtain simple coordinate space based methods we may drop global optimality and replace (12.7) by

(12.8) [r.sub.n] [perpendicular to] [[??].sub.n] [direct sum] Z.

We will call a method with this property a deflated quasi-orthogonal residual (DQOR) method. For such a method we have the following trivial corollary.

Corollary 15. Under the assumptions of Theorem 14, if Z is A-invariant, the condition (12.8) implies that [x.sub.n] is optimal in the sense that [[parallel][x.sub.n] - [x.sub.*][parallel].sub.A] is minimal under the restriction (12.1).

Proof. If Z is A-invariant, U = [A.sup.-1]Z = Z. So, (12.8) implies (12.7) here.

With the quasi-residual [[??].sub.n] of (2.10), the condition (12.8) transforms into

(12.9) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

if we consider [C.sup.k+n] as the subspace of [C.sup.k+n+1] characterized by a zero last component. This means that the first k + n components of [[??].sub.n] must be zero, that is,

(12.10) [[I.sub.k] [C.sub.n]; O [H.sub.n]] [[m.sub.n]; [k.sub.n]] = [[Z.sup.H][r.sub.o]; [e.sub.1][beta]].

This system is upper block triangular with a unit (1,1) block, and therefore it reduces to a linear system with the (2,2) block for computing [k.sub.n] and an explicit formula for [m.sub.n], in complete analogy to the least squares problem (2.11) that we solved before:

(12.11) [H.sub.n][k.sub.n] = [e.sub.1][beta], [m.sub.n] :[equivalent to] [Z.sup.H][r.sub.o] - [C.sub.n][k.sub.n].

In the setting of deflated GMRes of Section 2 these two formulas define a corresponding particular DQOR method. If A is Hermitian, we can replace [H.sub.n] by the tridiagonal [T.sub.n] and profit from short recurrences for updating [x.sub.n].
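The DQOR recipe just described is easy to check numerically. The sketch below (random hypothetical data, our own variable names) solves the square system (12.11) in the orthogonal-projection setting of Section 2 and verifies the defining condition (12.8), i.e., that [r.sub.n] is orthogonal to both the deflated Krylov basis and the space spanned by Z:

```python
import numpy as np

def arnoldi(op, v1, n):
    """Arnoldi process; returns V (N x (n+1)) and the extended Hessenberg Hbar."""
    N = v1.size
    V = np.zeros((N, n + 1))
    Hbar = np.zeros((n + 1, n))
    V[:, 0] = v1
    for j in range(n):
        w = op(V[:, j])
        for i in range(j + 1):
            Hbar[i, j] = V[:, i] @ w
            w = w - Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

rng = np.random.default_rng(5)
N, k, n = 25, 2, 6
A = rng.standard_normal((N, N)) + 6 * np.eye(N)  # hypothetical test matrix
b = rng.standard_normal(N)
x0 = np.zeros(N)
r0 = b - A @ x0

U0 = rng.standard_normal((N, k))
Z, R = np.linalg.qr(A @ U0)        # orthonormal Z with A U = Z
U = U0 @ np.linalg.inv(R)
P = np.eye(N) - Z @ Z.T            # orthogonal projection with nullspace R(Z)

rhat0 = P @ r0
beta = np.linalg.norm(rhat0)
V, Hbar = arnoldi(lambda v: P @ (A @ (P @ v)), rhat0 / beta, n)

# (12.11): square Hessenberg system for k_n, explicit formula for m_n
Hn = Hbar[:n, :n]
rhs = np.zeros(n)
rhs[0] = beta
kn = np.linalg.solve(Hn, rhs)
Cn = Z.T @ (A @ V[:, :n])
mn = Z.T @ r0 - Cn @ kn

xn = x0 + V[:, :n] @ kn + U @ mn
rn = b - A @ xn
```
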

In the setting of truly deflated GMRes of Section 6, where [[??].sub.n] is defined by (6.9), the conditions (12.8) and (12.9) are no longer equivalent. For simplicity we may just fulfill the latter, which yields (12.10), except that [Z.sup.H] is replaced by [[??].sup.H], so that (12.11) turns into

(12.12) [H.sub.n][k.sub.n] = [e.sub.1][beta], [m.sub.n] :[equivalent to] [[??].sup.H][r.sub.o] - [C.sub.n][k.sub.n].

This defines another particular DQOR method.

Finally, in the setting of deflated QMR of Section 8 condition (12.9) leads to

(12.13) [T.sub.n][k.sub.n] = [e.sub.1][beta], [m.sub.n] :[equivalent to] [[??].sup.H][r.sub.o] - [C.sub.n][k.sub.n].

As can be readily verified, in this setting condition (12.9) is equivalent to

(12.14) [r.sub.n] [perpendicular to] [[??].sub.n] [direct sum] [??],

which characterizes a deflated quasi-biorthogonal residual (DQBiOR) method. The Recycling BiCG (RBiCG) method of Ahuja [4, 5] seems to be of this type.

DQOR and DQBiOR methods are in general not optimal. But we think that this is a minor disadvantage. It is shared by the class of orthogonal residual methods, whose residual norms depend in a well-known way, discovered by Paige and Saunders [49], on those of the corresponding MR method; see, e.g., [16] and [33].

Conclusions. We have described several augmented and deflated Krylov space methods for solving Ax = b that all fit into a common theoretical framework. They are coordinate space based in the sense that we generate recursively bases for the augmented search spaces [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] for the iterates [x.sub.n] and the corresponding residuals [r.sub.n], respectively, and determine the coordinates of [x.sub.n]. Here, Z = AU. The typical examples are deflated MinRes, GMRes, and QMR. Details differ from the proposals in the literature: for MinRes a little, for GMRes much more.

We assume that a basis for U is given and that typically, but not necessarily, this subspace is close to an A-invariant subspace belonging to eigenvalues of small absolute value. Deflation replaces these by zero. We point out that the deflated operator [??] :[equivalent to] PAP and the corresponding Krylov subspaces [[??].sub.n] :[equivalent to] [K.sub.n]([??], [[??].sub.o]) generated from [[??].sub.o] :[equivalent to] P[r.sub.o] can be chosen in different ways. For deflated MinRes an orthogonal projection P on [Z.sup.[perpendicular to]] is appropriate. The same projection is also the standard for deflated GMRes. We suggest for non-Hermitian A another choice: an oblique projection onto [[??].sup.[perpendicular to]] along [??]. Here [??] is an approximately left A-invariant subspace corresponding to the same eigenvalues as U and Z. This choice has the major advantage that in the case of exact A-invariance these eigenspaces are really deflated, in the sense that the kernel of [??] contains U = Z, while on [[??].sup.[perpendicular to]] the operators [??] and A coincide. The methods deflated in this way are based on the nonorthogonal decomposition [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], which needs to be complemented by an analogous nonorthogonal decomposition [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] for the shadow residual search space if the nonsymmetric Lanczos algorithm is applied to generate the bases. These decompositions lead to truly deflated GMRes and deflated QMR.

As further alternatives we suggest deflated quasi-orthogonal residual (DQOR) methods and deflated quasi-biorthogonal residual (DQBiOR) methods that are simple analogs of the deflated MR and QMR methods discussed before.

While the deflated operators [??] we promote are defined differently from those in most of the literature (except for the one in, e.g., [62], which coincides in the symmetric case), we can show that in the case where Z is exactly A-invariant our deflated operators are equivalent to those (for Hermitian and non-Hermitian problems, respectively) that are discussed in two companion papers [31, 30] and have the widely used standard form, but are geared towards different Petrov-Galerkin conditions.

Acknowledgment. The author is indebted to Jörg Liesen and Reinhard Nabben of the TU Berlin for bringing up this topic and for sharing their insight with him. He would also like to thank André Gaul, Daniel Kressner, and Jens-Peter Zemke for discussing specific questions related to this paper. Finally, the referees' fair and useful comments were much appreciated.

Appendix: An example where deflated MinRes and GMRes break down after any given number of steps. Let us consider examples of size N x N that are of the following form:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where M is a symmetric nonsingular (N - 2) x (N - 2) matrix whose minimal polynomial is of degree [kappa], where 1 [less than or equal to] [kappa] [less than or equal to] N - 2. Clearly, A is real symmetric and nonsingular too. We obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

so that in the notation of Section 2 we have in particular

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We can choose b and [x.sub.o] such that [r.sub.o] = [[??].sub.o] = [Pr.sub.o] = [[ 1 0 [w.sup.T]].sup.T], where w satisfies

(12.15) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with [[beta].sub.[kappa]] [not equal to] 0. Here, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is a comonic representation of the minimal polynomial of M. Relation (12.15) is achieved by choosing w in general position with respect to the eigenvectors of M. For example, we could choose

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and w as a vector of ones.

The first [kappa] + 1 Krylov vectors [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

They are linearly independent, hence a basis of [[??].sub.[kappa]+1]. In view of (12.15) they satisfy

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Consequently, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], whence according to Theorems 2 and 5 deflated GMRes and deflated MinRes (and thus also RMinRes of [62]) break down when attempting to construct [v.sub.[kappa]+1], while, obviously, they do not break down before. To understand this better consider the image of the Krylov basis under the mapping [??], which spans [??][[??].sub.[kappa]+1]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Due to (12.15) these [kappa] + 1 vectors are linearly dependent, so dim [??][[??].sub.[kappa]+1] = [kappa] only, which shows that we have Case (i) of Lemma 1, namely a breakdown during step [kappa] + 1 of the Arnoldi process. Here, 2 [less than or equal to] [kappa] + 1 < N.

For a breakdown in the first step we could, for example, consider the same type of A with an arbitrary M combined with P = [e.sub.1][e.sup.T.sub.1] and an arbitrary w. Then [??] = O, and the method will fail for any initial [[??].sub.o] [not equal to] o.

However, as we mentioned at the beginning of Section 3, a breakdown is very unlikely if Z is chosen such that an approximately invariant subspace is deflated and the deflated eigenvalues are well separated from the undeflated ones. In our example AZ = span {[Ae.sub.2]} = span {[e.sub.1]}, so Z is not at all approximately invariant.
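Since the explicit block form of A is only a placeholder in this transcription, the following Python sketch uses one hypothetical instance consistent with the surrounding description: a symmetric A with [Ae.sub.1] = [e.sub.2], [Ae.sub.2] = [e.sub.1], and trailing block M = diag(1, 2, 3), so that [kappa] = 3 and N = 5, with w a vector of ones. Running the Arnoldi process with [??] = PAP and P = I - [e.sub.2][e.sup.T.sub.2], the subdiagonal entry vanishes exactly in step [kappa] + 1:

```python
import numpy as np

# hypothetical instance: A symmetric, A e1 = e2, A e2 = e1, trailing block M
M = np.diag([1.0, 2.0, 3.0])     # minimal polynomial of degree kappa = 3
kappa = 3
N = M.shape[0] + 2               # N = 5
A = np.zeros((N, N))
A[0, 1] = A[1, 0] = 1.0
A[2:, 2:] = M
w = np.ones(M.shape[0])          # in general position w.r.t. the eigenvectors of M
r0 = np.concatenate(([1.0, 0.0], w))

e2 = np.zeros(N)
e2[1] = 1.0
P = np.eye(N) - np.outer(e2, e2) # orthogonal projection with nullspace span{e2}
op = lambda v: P @ (A @ (P @ v))

# Arnoldi with Ahat = P A P; record the subdiagonal entries h_{j+1,j}
v = P @ r0
V = [v / np.linalg.norm(v)]
sub = []
for j in range(kappa + 1):
    u = op(V[j])
    for q in V:                  # orthogonalize against the previous basis
        u = u - (q @ u) * q
    h = np.linalg.norm(u)
    sub.append(h)
    if h < 1e-12:
        break                    # breakdown during step j + 1
    V.append(u / h)
```

The first [kappa] subdiagonal entries are safely nonzero, while the one computed in step [kappa] + 1 vanishes, matching the breakdown analysis above.
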

REFERENCES

[1] A. M. ABDEL-REHIM, R. B. MORGAN, D. A. NICELY, AND W. WILCOX, Deflated and restarted symmetric Lanczos methods for eigenvalues and linear equations with multiple right-hand sides, SIAM J. Sci. Comput., 32 (2010), pp. 129-149.

[2] A. M. ABDEL-REHIM, R. B. MORGAN, AND W. WILCOX, Deflated BiCGStab for linear equations in QCD problems, arXiv:0710.1988, (2007).

[3] A. M. ABDEL-REHIM, A. STATHOPOULOS, AND K. ORGINOS, Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides, Technical Report WM-CS-209-06, Department of Computer Science, College of William & Mary, Williamsburg, VA, USA, July 2009.

[4] K. AHUJA, Recycling bi-Lanczos algorithms: BiCG, CGS, and BiCGSTAB, Master Thesis etd-08252009161256, Department of Mathematics, Virginia Polytechnic Institute and State University, VA, USA, August 2009.

[5] K. AHUJA, E. DE STURLER, S. GUGERCIN, AND E. CHANG, Recycling BiCG with an application to model reduction, SIAM J. Sci. Comput., (2011). Accepted for publication; see arXiv:1010.0762v4.

[6] J. BAGLAMA, D. CALVETTI, G. H. GOLUB, AND L. REICHEL, Adaptively preconditioned GMRES algorithms, SIAM J. Sci. Comput., 20 (1998), pp. 243-269.

[7] A. BORICI, Krylov Subspace Methods in Lattice QCD, Diss. ETH no. 11689, ETH Zurich, 1996.

[8] P. N. BROWN AND H. F. WALKER, GMRES on (nearly) singular systems, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 37-51.

[9] A. CHAPMAN AND Y. SAAD, Deflated and augmented Krylov subspace techniques, Numer. Linear Algebra Appl., 4 (1997), pp. 43-66.

[10] P. DE FORCRAND, Progress on lattice QCD algorithms, Nucl. Phys. B (Proc. Suppl.), 47 (1996), pp. 228-235.

[11] E. DE STURLER, Nested Krylov methods based on GCR, J. Comput. Appl. Math., 67 (1996), pp. 15-41.

[12] --, Truncation strategies for optimal Krylov subspace methods, SIAM J. Numer. Anal., 36 (1999), pp. 864-889.

[13] Z. DOSTÁL, Conjugate gradient method with preconditioning by projector, Int. J. Comput. Math., 23 (1988), pp. 315-323.

[14] --, Projector preconditioning and domain decomposition methods, Appl. Math. Comput., 37 (1990), pp. 75-81.

[15] --, Conjugate projector preconditioning for the solution of contact problems, Int. J. Numer. Methods Engrg., 34 (1992), pp. 271-277.

[16] M. EIERMANN, O. G. ERNST, AND O. SCHNEIDER, Analysis of acceleration strategies for restarted minimal residual methods, J. Comput. Appl. Math., 123 (2000), pp. 261-292.

[17] S. C. EISENSTAT, H. C. ELMAN, AND M. H. SCHULTZ, Variational iterative methods for nonsymmetric systems of linear equations, SIAM J. Numer. Anal., 20 (1983), pp. 345-357.

[18] J. ERHEL, K. BURRAGE, AND B. POHL, Restarted GMRES preconditioned by deflation, J. Comput. Appl. Math., 69 (1996), pp. 303-318.

[19] J. ERHEL AND F. GUYOMARC'H, An augmented conjugate gradient method for solving consecutive symmetric positive definite linear systems, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1279-1299.

[20] Y. A. ERLANGGA AND R. NABBEN, Deflation and balancing preconditioners for Krylov subspace methods applied to nonsymmetric matrices, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 684-699.

[21] --, Algebraic multilevel Krylov methods, SIAM J. Sci. Comput., 31 (2009), pp. 3417-3437.

[22] P. F. FISCHER, An overlapping Schwarz method for spectral element solution of the incompressible Navier Stokes equations, J. Comput. Phys., 133 (1997), pp. 84-101.

[23] R. FLETCHER, Conjugate gradient methods for indefinite systems, in Numerical Analysis, Dundee, 1975, G. A. Watson, ed., vol. 506 of Lecture Notes in Mathematics, Springer, Berlin, 1976, pp. 73-89.

[24] J. FRANK AND C. Vuik, On the construction of deflation-based preconditioners, SIAM J. Sci. Comput., 23 (2001), pp. 442-462.

[25] R. W. FREUND, Lanczos-type algorithms for structured non-Hermitian eigenvalue problems, in Proceedings of the Cornelius Lanczos International Centenary Conference, J. D. Brown, M. T. Chu, D. C. Ellison, and R. J. Plemmons, eds., SIAM, Philadelphia, PA, 1994, pp. 243-245.

[26] R. W. FREUND, M. H. GUTKNECHT, AND N. M. NACHTIGAL, An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, SIAM J. Sci. Comput., 14 (1993), pp. 137-158.

[27] R. W. FREUND AND M. HOCHBRUCK, On the use of two QMR algorithms for solving singular systems and applications in Markov chain modeling, Numer. Linear Algebra Appl., 1 (1994), pp. 403-420.

[28] R. W. FREUND AND N. M. NACHTIGAL, QMR: a quasi-minimal residual method for non-Hermitian linear systems, Numer. Math., 60 (1991), pp. 315-339.

[29] A. FROMMER AND B. MEDEKE, Exploiting structure in Krylov subspace methods for the Wilson fermion matrix, in 15th IMACS World Congress on Scientific Computation, Modelling and Applied Mathematics, Vol. 3, Computational Physics, Chemistry and Biology, A. Sydow, ed., Wissenschaft und Technik Verlag, 1997, pp. 485-490.

[30] A. GAUL, M. H. GUTKNECHT, J. Liesen, AND R. NABBEN, Deflated and augmented Krylov subspace methods: A framework for deflated BiCG and related solvers. In preparation.

[31] --, Deflated and augmented Krylov subspace methods: Basic facts and a breakdown-free deflated MINRES, MATHEON Preprint 759, Technical University Berlin, January 2011.

[32] M. H. GUTKNECHT, Lanczos-type solvers for nonsymmetric linear systems of equations, Acta Numerica, 6 (1997), pp. 271-397.

[33] M. H. GUTKNECHT AND M. ROZLOŽNÍK, By how much can residual minimization accelerate the convergence of orthogonal residual methods?, Numerical Algorithms, 27 (2001), pp. 189-213.

[34] M. R. HESTENES AND E. STIEFEL, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49 (1952), pp. 409-435.

[35] R. A. HORN AND C. R. JOHNSON, Matrix Analysis, Cambridge University Press, New York, 1985.

[36] W. D. JOUBERT AND T. A. MANTEUFFEL, Iterative methods for nonsymmetric linear systems, in Iterative Methods for Large Linear Systems, D. R. Kincaid and L. J. Hayes, eds., Academic Press, San Diego, CA, USA, 1990, ch. 10, pp. 149-171.

[37] S. A. KHARCHENKO AND A. Y. YEREMIN, Eigenvalue translation based preconditioners for the GMRES(k) method, Numer. Linear Algebra Appl., 2 (1995), pp. 51-77.

[38] L. Y. KOLOTILINA, Preconditioning of systems of linear algebraic equations by means of twofold deflation. I. Theory (in Russian), Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI), 229 (1995), pp. 95-152, 323.

[39] --, Twofold deflation preconditioning of linear algebraic systems. I. Theory, Journal of Mathematical Sciences, 89 (1998), pp. 1652-1689. Translation of [38].

[40] C. LANCZOS, Solution of systems of linear equations by minimized iterations, J. Res. Natl. Bur. Stand., 49 (1952), pp. 33-53.

[41] L. MANSFIELD, On the use of deflation to improve the convergence of conjugate gradient iteration, Communications in Applied Numerical Methods, 4 (1988), pp. 151-156.

[42] R. B. MORGAN, Computing interior eigenvalues of large matrices, Linear Algebra Appl., 154/156 (1991), pp. 289-309.

[43] --,A restarted GMRES method augmented with eigenvectors, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1154-1171.

[44] --, Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1112-1135.

[45] --, GMRES with deflated restarting, SIAM J. Sci. Comput., 24 (2002), pp. 20-37.

[46] R. B. MORGAN AND W. WILCOX, Deflated iterative methods for linear equations with multiple right-hand sides, arXiv:math-ph/0405053v2, (2004).

[47] R. NABBEN AND C. VUIK, A comparison of abstract versions of deflation, balancing and additive coarse grid correction preconditioners, Numer. Linear Algebra Appl., 15 (2008), pp. 355-372.

[48] R. A. NICOLAIDES, Deflation of conjugate gradients with applications to boundary value problems, SIAM J. Numer. Anal., 24 (1987), pp. 355-365.

[49] C. C. PAIGE AND M. A. SAUNDERS, Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12 (1975), pp. 617-629.

[50] H. RUTISHAUSER, Beiträge zur Kenntnis des Biorthogonalisierungs-Algorithmus von Lanczos, Z. Angew. Math. Phys., 4 (1953), pp. 35-56.

[51] Y. SAAD, Krylov subspace methods for solving large unsymmetric systems, Math. Comp., 37 (1981), pp. 105-126.

[52] --, Analysis of augmented Krylov subspace methods, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 435-449.

[53] Y. SAAD AND M. H. SCHULTZ, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869.

[54] Y. SAAD, M. YEUNG, J. ERHEL, AND F. GUYOMARC'H, A deflated version of the conjugate gradient algorithm, SIAM J. Sci. Comput., 21 (2000), pp. 1909-1926.

[55] V. SIMONCINI AND D. B. SZYLD, Recent computational developments in Krylov subspace methods for linear systems, Numer. Linear Algebra Appl., 14 (2007), pp. 1-59.

[56] T. SOGABE, M. SUGIHARA, AND S.-L. ZHANG, An extension of the conjugate residual method to nonsymmetric linear systems, J. Comput. Appl. Math., 226 (2009), pp. 103-113.

[57] A. STATHOPOULOS AND K. ORGINOS, Computing and deflating eigenvalues while solving multiple right hand side linear systems with an application to quantum chromodynamics, SIAM J. Sci. Comput., 32 (2010), pp. 439-462.

[58] G. W. STEWART AND J. G. SUN, Matrix Perturbation Theory, Computer Science and Scientific Computing, Academic Press, Boston, 1990.

[59] E. STIEFEL, Relaxationsmethoden bester Strategie zur Lösung linearer Gleichungssysteme, Comment. Math. Helv., 29 (1955), pp. 157-179.

[60] J. M. TANG, R. NABBEN, C. VUIK, AND Y. A. ERLANGGA, Comparison of two-level preconditioners derived from deflation, domain decomposition and multigrid methods, J. Sci. Comput., 39 (2009), pp. 340-370.

[61] C. VUIK, A. SEGAL, AND J. A. MEIJERINK, An efficient preconditioned CG method for the solution of a class of layered problems with extreme contrasts in the coefficients, J. Comput. Phys., 152 (1999), pp. 385-403.

[62] S. WANG, E. DE STURLER, AND G. H. PAULINO, Large-scale topology optimization using preconditioned Krylov subspace methods with recycling, Int. J. Numer. Methods Engrg., 69 (2007), pp. 2441-2468.

[63] M. C. YEUNG, J. M. TANG, AND C. VUIK, On the convergence of GMRES with invariant-subspace deflation, Report 10-14, Delft Institute of Applied Mathematics, Delft University of Technology, 2010.

(1) To change to the notation of [62] substitute, in particular, Z → C and C_n → B_n.

(2) In both [9] and [43] a cycle of deflated GMRES consists of first applying a fixed number m of GMRES steps with A starting from x_0 (instead of using [??] and x_0), and then adding k orthogonalization steps to the vectors Au_j. This yields at the end an (m + k + 1) × (m + k) least squares problem. So the orthogonal projection P is only applied at the end of each cycle. For an alternative interpretation and realization of Morgan's method see [16, §4.3] and [44].

(3) Note, however, that [??] is defined differently in [31].

(4) Except that in [32] V_k and [??]_k were called [??]_k and [??]_k, respectively.

MARTIN H. GUTKNECHT (†)

* Received March 21, 2012. Accepted for publication April 10, 2012. Published online May 10, 2012. Recommended by L. Reichel.

(†) Seminar for Applied Mathematics, ETH Zurich, CH-8092 Zurich, Switzerland (mhg@math.ethz.ch). This work was started while the author was visiting the TU Berlin, supported by the DFG Forschungszentrum MATHEON and the Mercator Visiting Professorship Program of the DFG.

TABLE 11.1
The projections P_B and Q_B and the projected operator [??]_B for a generalization of the situation of Sections 5-8.

         | definition                | null space | range            | range if B = A
  P_B    | I - AMB                   | [??]       | (B^H U)^⊥        | [??]^⊥
  Q_B    | I - MBA                   | u          | (A^H B^H [??])^⊥ | (A^H [??])^⊥
  [??]_B | P_B A = P_B A Q_B = A Q_B | u          | (B^H [??])^⊥     | [??]^⊥

TABLE 11.2
The projections P_B and Q_B and the projected operator [??]_B for a generalization of the situation of Sections 2-4.

         | definition                | null space | range            | range if B = A^H
  P_B    | I - AMB                   | [??]       | (B^H u)^⊥        | [??]^⊥
  Q_B    | I - MBA                   | u          | (A^H B^H u)^⊥    | (A^H Z)^⊥
  [??]_B | P_B A = P_B A Q_B = A Q_B | u          | (B^H u)^⊥        | [??]^⊥
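The projection identities underlying Tables 11.1 and 11.2 (P_B and Q_B are projections, P_B A = A Q_B, and Z lies in the null space of P_B) can be checked numerically. The following sketch is illustrative only: the choices B := Z^H and M := U(Z^H A U)^{-1} are assumptions naming one admissible instance of the general setting, not the only choice the tables cover.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3

A = rng.standard_normal((n, n))   # system matrix
U = rng.standard_normal((n, k))   # preimage of the deflation basis
Z = A @ U                         # deflation-space basis, Z = AU

# Illustrative (assumed) choices: B := Z^H and M := U E^{-1},
# where E := B A U = Z^H A U is assumed nonsingular.
B = Z.conj().T                    # k x n
E = B @ A @ U                     # k x k
M = U @ np.linalg.inv(E)          # n x k

P_B = np.eye(n) - A @ M @ B       # left projection
Q_B = np.eye(n) - M @ B @ A       # right projection

# Identities from the tables:
assert np.allclose(P_B @ P_B, P_B)        # P_B is a projection
assert np.allclose(Q_B @ Q_B, Q_B)        # Q_B is a projection
assert np.allclose(P_B @ A, A @ Q_B)      # P_B A = A Q_B
assert np.allclose(P_B @ Z, 0, atol=1e-8) # Z is in the null space of P_B
assert np.allclose(Q_B @ U, 0, atol=1e-8) # U is in the null space of Q_B
```

With this choice of M, P_B Z = Z - AU E^{-1}(Z^H A U) = Z - AU = 0 holds exactly in exact arithmetic, which is why the deflation space is annihilated by the left projection.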

Published in: Electronic Transactions on Numerical Analysis, 2012.