
Preconditioned recycling Krylov subspace methods for self-adjoint problems.

1. Introduction. Sequences of linear algebraic systems frequently occur in the numerical solution of various kinds of problems. Most notable are implicit time-stepping schemes and Newton's method for solving systems of nonlinear equations. It is often the case that the operators in subsequent linear systems have similar spectral properties or are in fact equal. To exploit this, a common approach is to factorize the operator once and reuse the factorization in the subsequent steps. However, this strategy typically has high memory requirements and is thus hardly applicable to problems with many unknowns. It is also inapplicable if subsequent linear operators differ even slightly from each other.

The authors use the idea of an alternative approach that carries over spectral information from one linear system to the next by extracting approximations of eigenvectors and using them in a deflation framework [2, 29, 30, 41]. For a more detailed overview of the background of such methods, see [14]. The method is designed for Krylov subspace methods in general and is worked out in this paper for the MINRES method [37] in particular.

The idea of recycling spectral information in Krylov subspace methods is not new. Notably, Kilmer and de Sturler [24] adapted the GCRO method [3] for recycling in the setting of a sequence of linear systems. Essentially, this strategy consists of applying the MINRES method to a projected linear system where the projection is built from approximate eigenvectors for the first matrix of the sequence. Wang, de Sturler, and Paulino proposed the RMINRES method [52] that also includes the extraction of approximate eigenvectors. In contrast to Kilmer and de Sturler, the RMINRES method is a modification of the MINRES method that explicitly includes these vectors in the search space for the following linear systems (augmentation). Similar recycling techniques based on GCRO have also been used by Parks et al. [38], Mello et al. [28], Feng, Benner, and Korvink [12], and Soodhalter, Szyld, and Xue [49]. A different approach has been proposed by Giraud, Gratton, and Martin [19], where a preconditioner is updated with approximate spectral information for use in a GMRES variant.

GCRO-based methods with augmentation of the search space, including RMINRES, are mathematically equivalent to the standard GMRES method (or MINRES for the symmetric case) applied to a projected linear system [14]. Krylov subspace methods that are applied to projected linear systems are often called deflated methods. In the literature, both augmented and deflated methods have been used in a variety of settings; we refer to Eiermann, Ernst, and Schneider [9] and the review article by Simoncini and Szyld [45] for a comprehensive overview.

In general, Krylov subspace methods are only feasible in combination with a preconditioner. In [52], only factorized preconditioners of the form $A \approx CC^{\mathsf T}$ can be used, such that instead of $Ax = b$ the preconditioned system $C^{-1}AC^{-\mathsf T}y = C^{-1}b$ is solved. In this case the system matrix remains symmetric. While preconditioners like (incomplete) Cholesky factorizations have this form, other important classes like (algebraic) multigrid do not. A major difference of the method presented here from RMINRES is that it allows for a greater variety of preconditioners. The only restrictions on the preconditioner $M^{-1}$ are that it has to be self-adjoint and positive-definite and that its inverse has to be known; see the discussion at the end of Section 2.3 for more details. While this excludes the popular class of multigrid preconditioners with a fixed number of cycles, full multigrid preconditioners are admissible. To the best knowledge of the authors, no such method has been considered before. Note that the requirement of a self-adjoint and positive-definite preconditioner $M^{-1}$ is common in the context of methods for self-adjoint problems (e.g., CG and MINRES) because it allows the inner product to be changed implicitly. With such a preconditioner, the inertia of $A$ is preserved in $M^{-1}A$. Deflation is able to remedy the problems caused by indefiniteness to a certain extent, e.g., if $A$ has only a few negative eigenvalues.

Moreover, general inner products are considered, which facilitates the incorporation of arbitrary preconditioners and allows the self-adjointness of a problem to be exploited algorithmically when its natural inner product is used. This leads to an efficient three-term recurrence with the MINRES method instead of a costly full orthogonalization in GMRES. One important example of problems that are self-adjoint with respect to a non-Euclidean inner product are nonlinear Schrödinger equations, presented in more detail in Section 3. General inner products have been considered before; see, e.g., Freund, Golub, and Nachtigal [13] or Eiermann, Ernst, and Schneider [9]. Naturally, problems which are Hermitian (with respect to the Euclidean inner product) also benefit from the results in this work.

In many of the previous approaches, the underlying Krylov subspace method itself has to be modified to include deflation; see, e.g., the modified MINRES method of Wang, de Sturler, and Paulino [52, Algorithm 1]. In contrast, the work in the present paper separates the deflation methodology from the Krylov subspace method. Deflation can thus be implemented algorithmically as a wrapper around any existing MINRES code, including optimized high-performance implementations. The notes on the implementation in Sections 2.2 and 2.3 discuss efficient realizations thereof.

For the sake of clarity, restarting--often used to mitigate memory constraints--is not explicitly discussed in the present paper. However, as noted in Section 2.3, it can be added easily to the algorithm without affecting the presented advantages of the method. Note that the method in [52] does not require restarting because it computes Ritz vectors from a fixed number of Lanczos vectors (cycling); cf. Section 2.3. Since the non-restarted method maintains global optimality over the entire Krylov subspace (in exact arithmetic), it may exhibit a more favorable convergence behavior than restarted methods.

In addition to the deflation of computed Ritz vectors, other vectors can be included that carry explicit information about the problem in question. For example, approximations to eigenvectors corresponding to critical eigenvalues are readily available from analytic considerations. Applications for this are plentiful, e.g., flow in porous media considered by Tang et al. [51] and nonlinear Schrodinger equations; see Section 3.

The deflation framework with the properties presented in this paper is applied in the numerical solution of nonlinear Schrödinger equations. Nonlinear Schrödinger equations and their variations are used to describe a wide variety of physical systems, most notably in superconductivity, quantum condensates, nonlinear acoustics [48], nonlinear optics [17], and hydrodynamics [34]. For the solution of nonlinear Schrödinger equations with Newton's method, a linear system has to be solved with the Jacobian operator for each Newton update. The Jacobian operator is self-adjoint with respect to a non-Euclidean inner product and indefinite. In order to be applicable in practice, the MINRES method can be combined with an AMG-type preconditioner that is able to limit the number of MINRES iterations to a feasible extent [43]. Due to the special structure of the nonlinear Schrödinger equation, the Jacobian operator exhibits one eigenvalue that moves to zero when the Newton iterate converges to a nontrivial solution and is exactly zero at a solution. Because this situation only occurs in the last step, no linear system has to be solved with an exactly singular Jacobian operator, but the severely ill-conditioned Jacobian operators in the final Newton steps lead to convergence slowdown or stagnation in the MINRES method even when a preconditioner is applied. For the numerical experiments, we consider the Ginzburg-Landau equation, an important instance of nonlinear Schrödinger equations, that models phenomena of certain superconductors. We use the proposed recycling MINRES method and show how it can help to improve the convergence of the MINRES method. All enhancements of the deflated MINRES method, i.e., arbitrary inner products and preconditioners, are required for this application. As a result, the overall time consumption of Newton's method for the Ginzburg-Landau equation is reduced by roughly 40%.

The deflated Krylov subspace methods described in this paper are implemented in the Python package KryPy [15]; solvers for nonlinear Schrodinger problems are available from PyNosh [16]. Both packages are free and open-source software. All results from this paper can be reproduced with the help of these packages.

The paper is organized as follows: Section 2 gives a brief overview of the preconditioned MINRES method for an arbitrary nonsingular linear operator that is self-adjoint with respect to an arbitrary inner product. The deflated MINRES method is described in Section 2.2, while Section 2.3 presents the computation of Ritz vectors and explains their use in the overall algorithm for the solution of a sequence of self-adjoint linear systems. In Section 3 this algorithm is applied to the Ginzburg-Landau equation. Sections 3.1 and 3.2 deal with the numerical treatment of nonlinear Schrödinger equations in general and the Ginzburg-Landau equation in particular. In Section 3.3 numerical results for typical two- and three-dimensional setups are presented.

2. The MINRES method.

2.1. Preconditioned MINRES with arbitrary inner product. This section presents well-known properties of the preconditioned MINRES method. As opposed to ordinary textbook presentations, this section incorporates a general Hilbert space. For $\mathbb{K} \in \{\mathbb{R}, \mathbb{C}\}$ let $H$ be a $\mathbb{K}$-Hilbert space with inner product $\langle\cdot,\cdot\rangle_H$ and induced norm $\|\cdot\|_H$. Throughout this paper, the inner product $\langle\cdot,\cdot\rangle_H$ is anti-linear in the first and linear in the second argument, and we define $\mathcal{L}(H) := \{L : H \to H \mid L \text{ is linear and bounded}\}$. The vector space of $k$-by-$l$ matrices is denoted by $\mathbb{K}^{k,l}$. We wish to obtain $x \in H$ from

(2.1) Ax = b,

where $A \in \mathcal{L}(H)$ is $\langle\cdot,\cdot\rangle_H$-self-adjoint and invertible and $b \in H$. The self-adjointness implies that the spectrum $\sigma(A)$ is real. However, we do not assume that $A$ is definite.

Given an initial guess $x_0 \in H$, we can approximate $x$ by the iterates

(2.2) $x_n = x_0 + y_n$ with $y_n \in \mathcal{K}_n(A, r_0)$,

where $r_0 = b - Ax_0$ is the initial residual and $\mathcal{K}_n(A, r_0) = \operatorname{span}\{r_0, Ar_0, \dots, A^{n-1}r_0\}$ is the $n$th Krylov subspace generated by $A$ and $r_0$. We concentrate on minimal residual methods here, i.e., methods that construct iterates of the form (2.2) such that the residual $r_n := b - Ax_n$ has minimal $\|\cdot\|_H$-norm, that is,

(2.3) $\|r_n\|_H = \min_{y \in \mathcal{K}_n(A, r_0)} \|b - A(x_0 + y)\|_H = \min_{p \in \Pi_n^0} \|p(A)r_0\|_H,$

where $\Pi_n^0$ is the set of polynomials of degree at most $n$ with $p(0) = 1$. For a general invertible linear operator $A$, the minimization problem in (2.3) can be solved by the GMRES method [40], which is mathematically equivalent to the MINRES method [37] if $A$ is self-adjoint [26, Section 2.5.5].
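
To make the discussion concrete, the following minimal sketch applies MINRES in the Euclidean setting ($H = \mathbb{R}^N$ with $\langle\cdot,\cdot\rangle_2$) to a symmetric indefinite matrix. It uses SciPy's MINRES implementation; the test operator and all parameters are illustrative only.

    import numpy as np
    from scipy.sparse.linalg import minres

    # Symmetric indefinite test operator: MINRES only requires self-adjointness,
    # whereas CG would additionally require positive definiteness.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
    eigenvalues = np.concatenate([-np.linspace(1, 2, 5), np.linspace(1, 10, 95)])
    A = Q @ np.diag(eigenvalues) @ Q.T

    b = rng.standard_normal(100)
    x, info = minres(A, b)                      # info == 0 signals convergence
    print(info, np.linalg.norm(b - A @ x))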

To facilitate subsequent definitions and statements for general Hilbert spaces, we use a block notation for inner products that generalizes the common block notation for matrices:

Definition 2.1. For $k, l \in \mathbb{N}$ and two tuples of vectors $X = [x_1, \dots, x_k] \in H^k$ and $Y = [y_1, \dots, y_l] \in H^l$, the product $\langle\cdot,\cdot\rangle_H : H^k \times H^l \to \mathbb{K}^{k,l}$ is defined by

$$\langle X, Y\rangle_H := \big[\langle x_i, y_j\rangle_H\big]_{i=1,\dots,k;\ j=1,\dots,l}.$$

For the Euclidean inner product and two matrices $X \in \mathbb{C}^{N,k}$ and $Y \in \mathbb{C}^{N,l}$, the product takes the form $\langle X, Y\rangle_2 = X^{\mathsf H}Y$.

A block $X \in H^k$ can be right-multiplied with a matrix just as in the plain matrix case:

Definition 2.2. For $X \in H^k$ and $Z = [z_{ij}] \in \mathbb{K}^{k,l}$, right multiplication is defined by

$$XZ := \left[\sum_{i=1}^{k} z_{ij} x_i\right]_{j=1,\dots,l} \in H^l.$$
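
In the Euclidean case, both definitions reduce to ordinary matrix products, as the following small sketch illustrates (all data are random and purely illustrative):

    import numpy as np

    # For H = C^N with the Euclidean inner product, a block X in H^k is an
    # N-by-k array whose columns are the tuple's elements.
    rng = np.random.default_rng(1)
    N, k, l = 50, 3, 2
    X = rng.standard_normal((N, k)) + 1j * rng.standard_normal((N, k))
    Y = rng.standard_normal((N, l)) + 1j * rng.standard_normal((N, l))

    inner = X.conj().T @ Y      # <X, Y>_2 in K^{k,l}, cf. Definition 2.1
    Z = rng.standard_normal((k, l))
    XZ = X @ Z                  # right multiplication, cf. Definition 2.2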

Because the MINRES method and the underlying Lanczos algorithm are often described for Hermitian matrices only (i.e., for the Euclidean inner product), we recall very briefly some properties of the Lanczos algorithm for a linear operator that is self-adjoint with respect to an arbitrary inner product $\langle\cdot,\cdot\rangle_H$ [8]. If the Lanczos algorithm with inner product $\langle\cdot,\cdot\rangle_H$ applied to $A$ and the initial vector $v_1 = r_0/\|r_0\|_H$ has completed the $n$th iteration, then the Lanczos relation

$$AV_n = V_{n+1}\underline{T}_n$$

holds, where the elements of $V_{n+1} = [v_1, \dots, v_{n+1}] \in H^{n+1}$ form a $\langle\cdot,\cdot\rangle_H$-orthonormal basis of $\mathcal{K}_{n+1}(A, r_0)$, i.e., $\operatorname{span}\{v_1, \dots, v_{n+1}\} = \mathcal{K}_{n+1}(A, r_0)$ and $\langle V_{n+1}, V_{n+1}\rangle_H = I_{n+1}$. Note that the orthonormality implies $\|V_{n+1}z\|_H = \|z\|_2$ for all $z \in \mathbb{K}^{n+1}$. The matrix $\underline{T}_n \in \mathbb{R}^{n+1,n}$ is real-valued (even if $\mathbb{K} = \mathbb{C}$), symmetric, and tridiagonal with

$$\underline{T}_n = \begin{bmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{n-1} \\ & & \beta_{n-1} & \alpha_n \\ & & & \beta_n \end{bmatrix}.$$

The nth approximation of the solution of the linear system (2.1) generated with the MINRES method and the corresponding residual norm, cf. (2.2) and (2.3), can then be expressed as

$$x_n = x_0 + V_n y_n \quad\text{with}\quad y_n = \operatorname*{argmin}_{y \in \mathbb{K}^n} \big\|\, \|r_0\|_H e_1 - \underline{T}_n y \big\|_2 \quad\text{and}\quad \|r_n\|_H = \min_{y \in \mathbb{K}^n} \big\|\, \|r_0\|_H e_1 - \underline{T}_n y \big\|_2.$$

By recursively computing a QR decomposition of $\underline{T}_n$, the minimization problem in (2.3) can be solved without storing the entire matrix $\underline{T}_n$ and, more importantly, the full Lanczos basis $V_n$.

Let $N := \dim H < \infty$, and let the elements of $W \in H^N$ form a $\langle\cdot,\cdot\rangle_H$-orthonormal basis of $H$ consisting of eigenvectors of $A$. Then $AW = WD$ for the diagonal matrix $D = \operatorname{diag}(\lambda_1, \dots, \lambda_N)$ with $A$'s eigenvalues $\lambda_1, \dots, \lambda_N \in \mathbb{R}$ on the diagonal. Let $r_0^W \in \mathbb{K}^N$ be the representation of $r_0$ in the basis $W$, i.e., $r_0 = Wr_0^W$. According to (2.3), the residual norm of the $n$th approximation obtained with MINRES can be expressed as

(2.4) $\|r_n\|_H = \min_{p \in \Pi_n^0} \|p(A)r_0\|_H = \min_{p \in \Pi_n^0} \|Wp(D)r_0^W\|_H = \min_{p \in \Pi_n^0} \|p(D)r_0^W\|_2.$

From $\|r_0^W\|_2 = \|Wr_0^W\|_H = \|r_0\|_H$ and $\|p(D)\|_2 = \max_{i \in \{1,\dots,N\}} |p(\lambda_i)|$, we obtain the well-known MINRES worst-case bound for the relative residual norm [20, 27]

(2.5) $\frac{\|r_n\|_H}{\|r_0\|_H} \le \min_{p \in \Pi_n^0} \max_{i \in \{1,\dots,N\}} |p(\lambda_i)|.$

This can be estimated even further upon letting the eigenvalues of $A$ be sorted such that $\lambda_1 \le \dots \le \lambda_s < 0 < \lambda_{s+1} \le \dots \le \lambda_N$ for an $s \in \mathbb{N}_0$. By replacing the discrete set of eigenvalues in (2.5) by the union of the two intervals $I^- := [\lambda_1, \lambda_s]$ and $I^+ := [\lambda_{s+1}, \lambda_N]$, one gets

(2.6) $\frac{\|r_n\|_H}{\|r_0\|_H} \le \min_{p \in \Pi_n^0} \max_{\lambda \in I^- \cup I^+} |p(\lambda)| \le 2\left(\frac{\sqrt{|\lambda_1\lambda_N|} - \sqrt{|\lambda_s\lambda_{s+1}|}}{\sqrt{|\lambda_1\lambda_N|} + \sqrt{|\lambda_s\lambda_{s+1}|}}\right)^{\lfloor n/2 \rfloor},$

where $\lfloor n/2 \rfloor$ is the integer part of $n/2$; cf. [20, 27]. Note that this estimate does not take into account the actual distribution of the eigenvalues in the intervals $I^-$ and $I^+$. In practice a better convergence behavior than the one suggested by the estimate above can often be observed.
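
For reference, the right-hand side of (2.6) is cheap to evaluate; the following sketch computes it for given interval endpoints (the function name and the sample spectrum are illustrative):

    import numpy as np

    def minres_interval_bound(l1, ls, ls1, lN, n):
        """Evaluate the rightmost bound in (2.6) for I^- = [l1, ls]
        and I^+ = [ls1, lN] after n MINRES iterations."""
        a = np.sqrt(abs(l1 * lN))      # sqrt(|lambda_1 * lambda_N|)
        c = np.sqrt(abs(ls * ls1))     # sqrt(|lambda_s * lambda_{s+1}|)
        return 2 * ((a - c) / (a + c)) ** (n // 2)

    # Example: spectrum contained in [-1, -0.1] U [0.5, 10].
    print(minres_interval_bound(-1.0, -0.1, 0.5, 10.0, n=50))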

In most applications, the MINRES method is only feasible when it is applied with a preconditioner. Consider the preconditioned system

(2.7) $M^{-1}Ax = M^{-1}b,$

where $M \in \mathcal{L}(H)$ is a $\langle\cdot,\cdot\rangle_H$-self-adjoint, invertible, and positive-definite linear operator. Note that $M^{-1}A$ is in general not $\langle\cdot,\cdot\rangle_H$-self-adjoint but is self-adjoint with respect to the inner product $\langle\cdot,\cdot\rangle_M$ defined by $\langle x, y\rangle_M := \langle Mx, y\rangle_H = \langle x, My\rangle_H$. The MINRES method is then applied to (2.7) with the inner product $\langle\cdot,\cdot\rangle_M$ and thus minimizes $\|M^{-1}(b - Ax)\|_M$. From an algorithmic point of view it is worthwhile to note that only the application of $M^{-1}$ is needed; the applications of $M$ in the inner products can be carried out implicitly; cf. [11, Chapter 6]. Analogously to (2.6), the convergence bound for the residuals $\hat{r}_n$ produced by the preconditioned MINRES method is
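
The following sketch of the $M$-inner-product Lanczos iteration for $M^{-1}A$ illustrates this point: by keeping the pairs $w$ and $v = M^{-1}w$, every $\langle\cdot,\cdot\rangle_M$ inner product reduces to a Euclidean one, and $M$ itself is never applied. The code assumes the Euclidean setting with a callable Minv for $M^{-1}$; it is a sketch of the Lanczos kernel, not a full MINRES implementation.

    import numpy as np

    def preconditioned_lanczos(A, Minv, r0, num_steps):
        # Lanczos for M^{-1}A in <.,.>_M; keeps pairs (w, v) with v = M^{-1}w,
        # so that <v_i, v_j>_M = w_i^H v_j. Returns the tridiagonal entries.
        w = r0.astype(complex)
        v = Minv(w)
        beta = np.sqrt(np.real(np.vdot(w, v)))     # ||v||_M; M positive-definite
        w, v = w / beta, v / beta
        w_old, beta_old = np.zeros_like(w), 0.0
        alphas, betas = [], []
        for _ in range(num_steps):
            u = A @ v                              # A v, lives in the "w-space"
            alpha = np.real(np.vdot(v, u))         # <M^{-1}A v, v>_M = v^H (A v)
            u = u - alpha * w - beta_old * w_old   # three-term recurrence
            v_new = Minv(u)
            beta_new = np.sqrt(np.real(np.vdot(u, v_new)))
            # beta_new == 0 signals an invariant subspace; not handled here.
            alphas.append(alpha)
            betas.append(beta_new)
            w_old, beta_old = w, beta_new
            w, v = u / beta_new, v_new / beta_new
        return np.array(alphas), np.array(betas)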

$$\frac{\|\hat{r}_n\|_M}{\|\hat{r}_0\|_M} \le \min_{p \in \Pi_n^0} \max_{\lambda \in \hat{I}^- \cup \hat{I}^+} |p(\lambda)|,$$ where $\hat{I}^-$ and $\hat{I}^+$ are intervals containing the negative and positive eigenvalues of $M^{-1}A$, respectively.

Thus the goal of preconditioning is to achieve a more favorable spectrum of $M^{-1}A$ with an appropriate $M^{-1}$.

2.2. Deflated MINRES. In many applications even with the aid of a preconditioner, the convergence of MINRES is hampered--often due to the presence of one or a few eigenvalues close to zero that are isolated from the remaining spectrum. This case has recently been studied by Simoncini and Szyld [46]. Their analysis and numerical experiments show that an isolated simple eigenvalue can cause stagnation of the residual norm until a harmonic Ritz value approximates the outlying eigenvalue well.

Two strategies are well-known in the literature to circumvent the stagnation or slowdown in the convergence of preconditioned Krylov subspace methods described above: augmentation and deflation. In augmented methods, the Krylov subspace is enlarged by a suitable subspace that contains useful information about the problem. In deflation techniques, the operator is modified with a suitably chosen projection in order to "eliminate" components that hamper convergence; e.g., eigenvalues close to the origin. For an extensive overview of these techniques we refer to Eiermann, Ernst, and Schneider [9] and the survey article by Simoncini and Szyld [45]. Both techniques are closely intertwined and even turn out to be equivalent in some cases [14]. Here, we concentrate on deflated methods and first give a brief description of the recycling MINRES (RMINRES) method introduced by Wang, de Sturler, and Paulino [52] before presenting a slightly different approach.

The RMINRES method by Wang, de Sturler, and Paulino [52] is mathematically equivalent [14] to the application of the MINRES method to the "deflated" equation

(2.8) $P_1 A\hat{x} = P_1 b,$

where for a given $d$-tuple $U \in H^d$ of linearly independent vectors (which constitute a basis of the recycling space) and $C := AU$, the linear operator $P_1 \in \mathcal{L}(H)$ is defined by $P_1 x := x - C\langle C, C\rangle_H^{-1}\langle C, x\rangle_H$. Note that, although $P_1$ is a $\langle\cdot,\cdot\rangle_H$-self-adjoint projection (and thus an orthogonal projection), $P_1 A$ in general is not. However, as outlined in [52, Section 4], an orthonormal basis of the Krylov subspace can still be generated with MINRES' short recurrences and the operator $P_1 A$ because $\mathcal{K}_n(P_1 A, P_1 r_0) = \mathcal{K}_n(P_1 A P_1^*, P_1 r_0)$. Solutions of equation (2.8) are not unique for $d > 0$, and thus $x$ was replaced by $\hat{x}$. To obtain an approximation $x_n$ of the original solution $x$ from the approximation $\hat{x}_n$ generated with MINRES applied to (2.8), an additional correction has to be applied:

$$x_n = \hat{P}_1\hat{x}_n + U\langle C, C\rangle_H^{-1}\langle C, b\rangle_H,$$

where $\hat{P}_1 \in \mathcal{L}(H)$ is defined by $\hat{P}_1 x := x - U\langle C, C\rangle_H^{-1}\langle C, Ax\rangle_H$.

Let us now turn to a slightly different deflation technique for MINRES which we formulate with preconditioning directly. We will use a projection which has been developed in the context of the CG method for Hermitian and positive-definite operators [4, 33, 51]. Under a mild assumption, this projection is also well-defined in the indefinite case. In contrast to the orthogonal projection [P.sub.1] used in RMINRES, it is not self-adjoint but instead renders the projected operator self-adjoint. This is a natural fit for an integration with the MINRES method.

Our goal is to use approximations to eigenvectors corresponding to eigenvalues that hamper convergence in order to modify the operator with a projection. Consider the preconditioned equation (2.7) and assume for a moment that the elements of $U = [u_1, \dots, u_d] \in H^d$ form a $\langle\cdot,\cdot\rangle_M$-orthonormal basis consisting of eigenvectors of $M^{-1}A$, i.e., $M^{-1}AU = UD$ with a diagonal matrix $D = \operatorname{diag}(\lambda_1, \dots, \lambda_d) \in \mathbb{R}^{d,d}$. Then $\langle U, M^{-1}AU\rangle_M = \langle U, U\rangle_M D = D$ is nonsingular because we assumed that $A$ is invertible. This motivates the following definition:

DEFINITION 2.3. Let $M, A \in \mathcal{L}(H)$ be invertible and $\langle\cdot,\cdot\rangle_H$-self-adjoint operators and let $M$ be positive-definite. Let $U \in H^d$ be such that $\langle U, M^{-1}AU\rangle_M = \langle U, AU\rangle_H$ is nonsingular. We define the projections $P_M, P \in \mathcal{L}(H)$ by

(2.9) $P_M x := x - M^{-1}AU\langle U, M^{-1}AU\rangle_M^{-1}\langle U, x\rangle_M \quad\text{and}\quad Px := x - AU\langle U, AU\rangle_H^{-1}\langle U, x\rangle_H.$

The projection $P_M$ is the projection onto $\operatorname{range}(U)^{\perp_M}$ along $\operatorname{range}(M^{-1}AU)$, whereas $P$ is the projection onto $\operatorname{range}(U)^{\perp_H}$ along $\operatorname{range}(AU)$.

The assumption in Definition 2.3 that $\langle U, M^{-1}AU\rangle_M$ is nonsingular holds if and only if $\operatorname{range}(M^{-1}AU) \cap \operatorname{range}(U)^{\perp_M} = \{0\}$ or, equivalently, if $\operatorname{range}(AU) \cap \operatorname{range}(U)^{\perp_H} = \{0\}$. As stated above, this condition is fulfilled if $U$ contains a basis of eigenvectors of $M^{-1}A$, and it also holds for good-enough approximations thereof; see, e.g., the monograph of Stewart and Sun [50] for a thorough analysis of perturbations of invariant subspaces. Applying the projection $P_M$ to the preconditioned equation (2.7) yields the deflated equation

(2.10) $P_M M^{-1}A\hat{x} = P_M M^{-1}b.$

The following lemma states some important properties of the operator $P_M M^{-1}A$.

LEMMA 2.4. Let the assumptions in Definition 2.3 hold. Then

1. $P_M M^{-1} = M^{-1}P$.

2. $PA = AP^*$, where $P^*$ is the adjoint operator of $P$ with respect to $\langle\cdot,\cdot\rangle_H$, defined by $P^*x = x - U\langle U, AU\rangle_H^{-1}\langle AU, x\rangle_H$.

3. $P_M M^{-1}A = M^{-1}PA = M^{-1}AP^*$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_M$.

4. For each initial guess $\hat{x}_0 \in H$, the MINRES method with inner product $\langle\cdot,\cdot\rangle_M$ applied to equation (2.10) is well defined at each iteration until it terminates with a solution of the system.

5. If $\hat{x}_n$ is the $n$th approximation and $P_M M^{-1}b - P_M M^{-1}A\hat{x}_n$ the corresponding residual generated by the MINRES method with inner product $\langle\cdot,\cdot\rangle_M$ applied to (2.10) with initial guess $\hat{x}_0 \in H$, then the corrected approximation

(2.11) $x_n := P^*\hat{x}_n + U\langle U, AU\rangle_H^{-1}\langle U, b\rangle_H$

fulfills

(2.12) $M^{-1}b - M^{-1}Ax_n = P_M M^{-1}b - P_M M^{-1}A\hat{x}_n.$

(Note that (2.12) also holds for n = 0.)

Proof. Statements 1, 2, and the equation in 3 follow from elementary calculations. Because

$$\langle P_M M^{-1}Ax, y\rangle_M = \langle PAx, y\rangle_H = \langle AP^*x, y\rangle_H = \langle P^*x, Ay\rangle_H = \langle x, PAy\rangle_H = \langle x, P_M M^{-1}Ay\rangle_M$$

holds for all $x, y \in H$, the operator $P_M M^{-1}A$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_M$.

Statement 4 immediately follows from [14, Theorem 5.1] and the self-adjointness of $P_M M^{-1}A$. Note that the referenced theorem is stated for the Euclidean inner product, but it can easily be generalized to arbitrary inner products. Moreover, GMRES is mathematically equivalent to MINRES in our case, again due to the self-adjointness.

Statement 5 follows from 1 and 3 by direct calculations:

$$M^{-1}b - M^{-1}Ax_n = M^{-1}b - M^{-1}AP^*\hat{x}_n - M^{-1}AU\langle U, AU\rangle_H^{-1}\langle U, b\rangle_H = P_M M^{-1}b - P_M M^{-1}A\hat{x}_n.$$
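
The statements of Lemma 2.4 are also easy to check numerically; the following spot-check does so for items 1-3 in the Euclidean real case with randomly generated $A$, $M$, and $U$, all of which are illustrative only.

    import numpy as np

    rng = np.random.default_rng(3)
    N, d = 40, 3
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
    A = Q @ np.diag(np.concatenate([-np.ones(4), np.linspace(1, 5, N - 4)])) @ Q.T
    R = rng.standard_normal((N, N))
    M = R @ R.T + N * np.eye(N)                     # symmetric positive-definite
    Minv = np.linalg.inv(M)
    U = rng.standard_normal((N, d))

    E = U.T @ A @ U                                 # <U, AU>_H, assumed nonsingular
    Einv = np.linalg.inv(E)
    P = np.eye(N) - A @ U @ Einv @ U.T              # P from (2.9)
    Pstar = np.eye(N) - U @ Einv @ U.T @ A          # P* from Lemma 2.4, item 2
    PM = np.eye(N) - Minv @ A @ U @ Einv @ U.T @ M  # P_M from (2.9)

    print(np.allclose(PM @ Minv, Minv @ P))         # item 1
    print(np.allclose(P @ A, A @ Pstar))            # item 2
    S = M @ (PM @ Minv @ A)                         # item 3: self-adjointness in
    print(np.allclose(S, S.T))                      # <.,.>_M means M B symmetric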

Now that we know that MINRES is well-defined when applied to the deflated and preconditioned equation (2.10), we want to investigate the convergence behavior in comparison with the original preconditioned equation (2.7). The following result is well-known for the positive-definite case; see, e.g., Saad, Yeung, Erhel, and Guyomarc'h [41]. The proof is quite canonical and given here for convenience of the reader.

LEMMA 2.5. Let the assumptions in Definition 2.3 hold and let $N := \dim H < \infty$. If the spectrum of the preconditioned operator $M^{-1}A$ is $\sigma(M^{-1}A) = \{\lambda_1, \dots, \lambda_N\}$ and for $d > 0$ the elements of $U \in H^d$ form a basis of the $M^{-1}A$-invariant subspace corresponding to the eigenvalues $\lambda_1, \dots, \lambda_d$, then the following holds:

1. The spectrum of the deflated operator $P_M M^{-1}A$ is

$$\sigma(P_M M^{-1}A) = \{0\} \cup \{\lambda_{d+1}, \dots, \lambda_N\}.$$

2. For $n \ge 0$, let $x_n$ be the $n$th corrected approximation (cf. statement 5 of Lemma 2.4) of MINRES applied to (2.10) with inner product $\langle\cdot,\cdot\rangle_M$ and initial guess $\hat{x}_0$. The residuals $r_n := M^{-1}b - M^{-1}Ax_n$ then fulfill

$$\frac{\|r_n\|_M}{\|r_0\|_M} \le \min_{p \in \Pi_n^0} \max_{i \in \{d+1,\dots,N\}} |p(\lambda_i)|.$$

Proof. From the definition of $P_M$ in Definition 2.3, we obtain $P_M M^{-1}AU = 0$ and thus know that $0$ is an eigenvalue of $P_M M^{-1}A$ with multiplicity at least $d$. Let the elements of $V \in H^{N-d}$ be orthonormal and such that $M^{-1}AV = VD_2$ with $D_2 = \operatorname{diag}(\lambda_{d+1}, \dots, \lambda_N)$. Then $\langle U, V\rangle_M = 0$ because $M^{-1}A$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_M$. Thus $P_M V = V$, and the statement follows from $P_M M^{-1}AV = VD_2$.

Because the residual corresponding to the corrected initial guess is

$$r_0 = P_M M^{-1}(b - A\hat{x}_0) \in \operatorname{range}(U)^{\perp_M} = \operatorname{range}(V),$$

where $V$ is defined as above, we have $r_0 = Vr_0^V$ for an $r_0^V \in \mathbb{K}^{N-d}$. Then with $D_2$ as above, we obtain by using the orthonormality of $V$, similarly to (2.4):

$$\|r_n\|_M = \min_{p \in \Pi_n^0} \|p(P_M M^{-1}A)r_0\|_M = \min_{p \in \Pi_n^0} \|p(D_2)r_0^V\|_2 \le \|r_0\|_M \min_{p \in \Pi_n^0} \max_{i \in \{d+1,\dots,N\}} |p(\lambda_i)|.$$

2.2.1. Notes on the implementation. By item 1 of Lemma 2.4, $P_M M^{-1} = M^{-1}P$, so the MINRES method can be applied to the linear system

(2.13) $M^{-1}PA\hat{x} = M^{-1}Pb$

instead of (2.10). When an approximate solution $\hat{x}_n$ of (2.13) is satisfactory, the correction (2.11) has to be applied to obtain an approximate solution of the original system (2.1). Note that neither $M$ nor its inverse $M^{-1}$ shows up in the definition of the operator $P$ or its adjoint $P^*$, which is used in the correction. Thus the preconditioner $M^{-1}$ does not have to be applied to additional vectors if deflation is used. This can be a major advantage since the application of the preconditioner $M^{-1}$ is the most expensive part in many applications.

The deflation operator $P$ as defined in Definition 2.3 with $U \in H^d$ needs to store $2d$ vectors because, aside from $U$, also $C := AU$ should be precomputed and stored. Furthermore, the matrix $E := \langle U, C\rangle_H \in \mathbb{K}^{d,d}$ or its inverse has to be stored. The adjoint operator $P^*$ needs exactly the same data, so no more storage is required. The construction of $C$ needs $d$ applications of the operator $A$ but--as stated above--no application of the preconditioning operator $M^{-1}$. Because $E$ is Hermitian, $d(d+1)/2$ inner products have to be computed. One application of $P$ or $P^*$ requires $d$ inner products, the solution of a linear system with the Hermitian $d$-by-$d$ matrix $E$, and $d$ vector updates. We gather this information in Table 2.1.
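
A compact sketch of this solve-and-correct procedure in the Euclidean, unpreconditioned case ($M = I$, $A$ real symmetric) reads as follows; it stores exactly the data discussed above ($U$, $C = AU$, and $E$) and delegates the iteration to SciPy's MINRES. It is meant as an illustration of the wrapper idea, not as the optimized implementation from KryPy.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, minres

    def deflated_minres(A, b, U):
        # MINRES applied to (2.13) followed by the correction (2.11);
        # Euclidean case M = I with A real symmetric, U of shape (N, d).
        N, d = U.shape
        C = A @ U                    # d operator applications, stored once
        E = U.T @ C                  # E = <U, AU>, symmetric d-by-d
        Einv = np.linalg.inv(E)      # small; factorize instead for larger d

        P = lambda x: x - C @ (Einv @ (U.T @ x))      # P from Definition 2.3
        Pstar = lambda x: x - U @ (Einv @ (C.T @ x))  # adjoint P*

        # PA = A P* is symmetric (Lemma 2.4), so MINRES is applicable.
        PA = LinearOperator((N, N), matvec=lambda x: P(A @ x))
        xhat, info = minres(PA, P(b))
        x = Pstar(xhat) + U @ (Einv @ (U.T @ b))      # correction (2.11)
        return x, info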

Instead of correcting the last approximation $\hat{x}_n$, it is also possible to start with the corrected initial guess

(2.14) $x_0 = P^*\hat{x}_0 + U\langle U, AU\rangle_H^{-1}\langle U, b\rangle_H$

and to use $P^*$ as a right "preconditioner" (note that $P^*$ is singular in general). The difference is mainly of algorithmic nature and will be described very briefly.

For an invertible linear operator $B \in \mathcal{L}(H)$, the right-preconditioned system $ABy = b$ can be solved for $y$, and then the original solution can be obtained from $x = By$. Instead of $x_0$, the initial guess $y_0 := B^{-1}x_0$ is used, and the initial residual $r_0 = b - ABy_0 = b - Ax_0$ equals the residual of the unpreconditioned system. Then the iterates

$$y_n = y_0 + z_n \quad\text{with}\quad z_n \in \mathcal{K}_n(AB, r_0)$$

and $x_n := By_n = x_0 + Bz_n$ are constructed such that the residual $r_n = b - ABy_n = b - Ax_n$ is minimal with respect to $\|\cdot\|_H$. If the operator $AB$ is self-adjoint, the MINRES method can again be used to solve this minimization problem. Note that $y_0$ is not needed and will never be computed explicitly. The right preconditioning can of course be combined with a positive-definite preconditioner as described in the introduction of Section 2.

We now take a closer look at the case $B = P^*$, which differs from the above description because $P^*$ is not invertible in general. However, even if the right-preconditioned system is not consistent (i.e., $b \notin \operatorname{range}(AP^*)$), the above strategy can be used to solve the original linear system. With $x_0$ from equation (2.14), let us construct the iterates

(2.15) $x_n = x_0 + P^*y_n \quad\text{with}\quad y_n \in \mathcal{K}_n(M^{-1}AP^*, r_0)$ such that the residual

(2.16) $r_n = M^{-1}b - M^{-1}Ax_n$

has minimal $\|\cdot\|_M$-norm. Inserting (2.15) and the definition of $x_0$ into equation (2.16) yields $r_n = M^{-1}Pb - M^{-1}PAy_n$ with $y_n \in \mathcal{K}_n(M^{-1}AP^*, r_0) = \mathcal{K}_n(M^{-1}PA, r_0)$. The minimization problem is thus the same as in the case where MINRES is applied to the linear system (2.13), and because both the operators and initial vectors coincide, the same Lanczos relation holds. Consequently, the MINRES method can be applied to the right-preconditioned system

$$M^{-1}AP^*y = M^{-1}b, \qquad x = P^*y,$$

with the corrected initial guess $x_0$ from equation (2.14). The key issue here is that the initial guess is treated as in (2.15). A deflated and preconditioned MINRES implementation following these ideas only needs the operator $P^*$ and the corrected initial guess $x_0$. A correction step at the end is then unnecessary.

2.3. Ritz vector computation. So far we have considered a single linear system and assumed that a basis for the construction of the projection used in the deflated system is given (e.g., some eigenvectors are given). We now turn to a sequence of preconditioned linear systems

(2.17) $M_{(k)}^{-1}A_{(k)}x^{(k)} = M_{(k)}^{-1}b^{(k)},$

where $M_{(k)}, A_{(k)} \in \mathcal{L}(H)$ are invertible and self-adjoint with respect to $\langle\cdot,\cdot\rangle_H$, $M_{(k)}$ is positive-definite, and $x^{(k)}, b^{(k)} \in H$ for $k \in \{1, \dots, M\}$. To improve readability, we use subscript indices for operators and superscript indices for elements or tuples of the Hilbert space $H$. Such a sequence may arise from a time-dependent problem or a nonlinear equation whose solutions are approximated using Newton's method (cf. Section 3). We now assume that the operator $M_{(k+1)}^{-1}A_{(k+1)}$ differs only slightly from the previous operator $M_{(k)}^{-1}A_{(k)}$. Then it may be worthwhile to extract some eigenvector approximations from the Krylov subspace and the deflation subspace used in the solution of the $k$th system in order to accelerate the convergence for the next system by deflating these extracted approximate eigenvectors.

To explain the strategy in more detail, we omit the sequence index for a moment and always refer to the $k$th linear system if not specified otherwise. Assume that we used a tuple $U \in H^d$ whose elements form a $\langle\cdot,\cdot\rangle_M$-orthonormal basis to set up the projection $P_M$ (cf. Definition 2.3) for the $k$th linear system (2.17). We then assume that the deflated and preconditioned MINRES method, with inner product $\langle\cdot,\cdot\rangle_M$ and initial guess $x_0$, has computed a satisfactory approximate solution after $n$ steps. The MINRES method then constructs a basis of the Krylov subspace $\mathcal{K}_n(P_M M^{-1}A, r_0)$, where the initial residual is $r_0 = P_M M^{-1}(b - Ax_0)$. Due to the definition of the projection we know that $\mathcal{K}_n(P_M M^{-1}A, r_0) \perp_M \operatorname{range}(U)$, and we now wish to compute approximate eigenvectors of $M^{-1}A$ in the subspace $S := \mathcal{K}_n(P_M M^{-1}A, r_0) \oplus \operatorname{range}(U)$. We can then pick some approximate eigenvectors, according to the corresponding approximate eigenvalues and the approximation quality, in order to construct a projection for the deflation of the $(k+1)$st linear system.

Let us recall the definition of Ritz pairs [36]:

DEFINITION 2.6. Let $S \subseteq H$ be a finite-dimensional subspace and let $B \in \mathcal{L}(H)$ be a linear operator. $(w, \mu) \in S \times \mathbb{C}$ is called a Ritz pair of $B$ with respect to $S$ and the inner product $\langle\cdot,\cdot\rangle$ if

$$Bw - \mu w \perp_{\langle\cdot,\cdot\rangle} S.$$

The following lemma gives insight into how the Ritz pairs of the operator $M^{-1}A$ with respect to the Krylov subspace $\mathcal{K}_n(P_M M^{-1}A, r_0)$ and the deflation subspace $\operatorname{range}(U)$ can be obtained from data that are available once the MINRES method has found a satisfactory approximate solution of the last linear system.

LEMMA 2.7. Let the following assumptions hold:

* Let $M, A, U, P_M$ be defined as in Definition 2.3 and let $\langle U, U\rangle_M = I_d$.

* The Lanczos algorithm with inner product $\langle\cdot,\cdot\rangle_M$ applied to the operator $P_M M^{-1}A$ and an initial vector $v \in \operatorname{range}(U)^{\perp_M}$ proceeds to the $n$th iteration. The Lanczos relation is

(2.18) $P_M M^{-1}AV_n = V_{n+1}\underline{T}_n$

with

$$\underline{T}_n = \begin{bmatrix} T_n \\ s_n e_n^{\mathsf T} \end{bmatrix},$$

where $s_n \in \mathbb{R}$ is positive and $T_n \in \mathbb{R}^{n,n}$ is tridiagonal, symmetric, and real-valued.

* Let $S := \mathcal{K}_n(P_M M^{-1}A, v) \oplus \operatorname{range}(U)$ and $w := [V_n, U]\tilde{w} \in S$ for a $\tilde{w} \in \mathbb{K}^{n+d}$. Then $(w, \mu) \in S \times \mathbb{R}$ is a Ritz pair of $M^{-1}A$ with respect to $S$ and the inner product $\langle\cdot,\cdot\rangle_M$ if and only if

$$\begin{bmatrix} T_n + BE^{-1}B^{\mathsf H} & B \\ B^{\mathsf H} & E \end{bmatrix}\tilde{w} = \mu\tilde{w},$$

where $B := \langle V_n, AU\rangle_H$ and $E := \langle U, AU\rangle_H$.

Furthermore, the squared $\|\cdot\|_M$-norm of the Ritz residual $M^{-1}Aw - \mu w$ is

$$\|M^{-1}Aw - \mu w\|_M^2 = \|t\|_2^2 + 2\,\Re\big(t^{\mathsf H}\underline{B}c\big) + c^{\mathsf H}Gc - 2\mu\,\Re\big(c^{\mathsf H}Eu\big) + \mu^2\|u\|_2^2,$$

where

$$t := \underline{T}_n y - \mu\begin{bmatrix} y \\ 0 \end{bmatrix}, \qquad c := E^{-1}B^{\mathsf H}y + u, \qquad \underline{B} := \langle V_{n+1}, AU\rangle_H, \qquad G := \langle AU, M^{-1}AU\rangle_H,$$ with $\tilde{w} = \begin{bmatrix} y \\ u \end{bmatrix}$, $y \in \mathbb{K}^n$, and $u \in \mathbb{K}^d$.

Proof. $(w, \mu)$ is a Ritz pair of $M^{-1}A$ with respect to $S = \operatorname{range}([V_n, U])$ and the inner product $\langle\cdot,\cdot\rangle_M$ if and only if

$$\langle [V_n, U],\, M^{-1}Aw - \mu w\rangle_M = 0 \iff \langle [V_n, U],\, M^{-1}A[V_n, U]\rangle_M\,\tilde{w} = \mu\tilde{w},$$

where the last equivalence follows from the orthonormality of $U$ and $V_n$ and the fact that $\operatorname{range}(U) \perp_M \mathcal{K}_n(P_M M^{-1}A, v) = \operatorname{range}(V_n)$. We decompose the left-hand side as

$$\langle [V_n, U],\, M^{-1}A[V_n, U]\rangle_M = \begin{bmatrix} \langle V_n, M^{-1}AV_n\rangle_M & \langle V_n, M^{-1}AU\rangle_M \\ \langle U, M^{-1}AV_n\rangle_M & \langle U, M^{-1}AU\rangle_M \end{bmatrix}.$$

The Lanczos relation (2.18) is equivalent to

(2.19) $M^{-1}AV_n = V_{n+1}\underline{T}_n + M^{-1}AU\langle U, AU\rangle_H^{-1}\langle AU, V_n\rangle_H,$

from which we can conclude with the $\langle\cdot,\cdot\rangle_M$-orthonormality of $[V_{n+1}, U]$ that

$$\langle V_n, M^{-1}AV_n\rangle_M = T_n + BE^{-1}B^{\mathsf H} \qquad\text{and}\qquad \langle U, M^{-1}AV_n\rangle_M = B^{\mathsf H}.$$

The characterization of Ritz pairs is complete by recognizing that

$$\langle V_n, M^{-1}AU\rangle_M = \langle V_n, AU\rangle_H = B \qquad\text{and}\qquad \langle U, M^{-1}AU\rangle_M = \langle U, AU\rangle_H = E.$$

Only the equation for the residual norm remains to be shown. To this end, we compute with (2.19)

$$M^{-1}Aw - \mu w = M^{-1}A[V_n, U]\tilde{w} - \mu[V_n, U]\tilde{w} = V_{n+1}\left(\underline{T}_n y - \mu\begin{bmatrix} y \\ 0 \end{bmatrix}\right) + M^{-1}AU\left(E^{-1}B^{\mathsf H}y + u\right) - \mu Uu = V_{n+1}t + M^{-1}AU\,c - \mu Uu.$$

The squared residual $\|\cdot\|_M$-norm thus is

$$\|M^{-1}Aw - \mu w\|_M^2 = \|t\|_2^2 + 2\,\Re\big(t^{\mathsf H}\underline{B}c\big) + c^{\mathsf H}Gc - 2\mu\,\Re\big(c^{\mathsf H}Eu\big) + \mu^2\|u\|_2^2,$$

where $\underline{B} = \langle V_{n+1}, AU\rangle_H = \langle V_{n+1}, M^{-1}AU\rangle_M$ and $G = \langle AU, M^{-1}AU\rangle_H = \langle M^{-1}AU, M^{-1}AU\rangle_M$ can be shown with the same techniques as above.

Remark 2.8. Lemma 2.7 also holds for the (rare) case that $\mathcal{K}_n(P_M M^{-1}A, v)$ is an invariant subspace of $P_M M^{-1}A$, which we excluded for readability reasons. The Lanczos relation (2.18) in this case is $P_M M^{-1}AV_n = V_nT_n$, which does not change the result.

REMARK 2.9. Instead of using Ritz vectors for deflation, alternative approximations to eigenvectors are possible. An obvious choice are harmonic Ritz pairs $(w, \mu) \in S \times \mathbb{C}$ such that

(2.20) $Bw - \mu w \perp_{\langle\cdot,\cdot\rangle} BS;$

see [31, 36, 52]. However, in numerical experiments no significant difference between regular and harmonic Ritz pairs could be observed; see Remark 3.6 in Section 3.3.

Lemma 2.7 shows how a Lanczos relation for the operator $P_M M^{-1}A$ (which can be generated implicitly within the deflated and preconditioned MINRES algorithm; cf. the end of Section 2.2) can be used to obtain Ritz pairs of the "undeflated" operator $M^{-1}A$. An algorithm for the solution of the sequence of linear systems (2.17) as described in the beginning of this section is given in Algorithm 2.1. In addition to the Ritz vectors, this algorithm can include auxiliary deflation vectors $Y^{(k)}$.

Algorithm 2.1 Algorithm for the solution of the sequence of linear systems (2.17).

Input: For $k \in \{1, \dots, M\}$ we have:
    * $M_{(k)} \in \mathcal{L}(H)$ is $\langle\cdot,\cdot\rangle_H$-self-adjoint and positive-definite.   ▷ preconditioner
    * $A_{(k)} \in \mathcal{L}(H)$ is $\langle\cdot,\cdot\rangle_H$-self-adjoint.   ▷ operator
    * $b^{(k)}, x_0^{(k)} \in H$.   ▷ right-hand side and initial guess
    * $Y^{(k)} \in H^{d_k}$.   ▷ auxiliary deflation vectors (may be empty)

1: $W = [\;] \in H^0$   ▷ no Ritz vectors available in the first step
2: for $k = 1 \to M$ do
3:     $U$ = orthonormal basis of $\operatorname{span}[W, Y^{(k)}]$ with respect to $\langle\cdot,\cdot\rangle_{M_{(k)}}$
4:     $C = A_{(k)}U$, $E = \langle U, C\rangle_H$   ▷ defines $P^*$ as in Lemma 2.4
5:     $x_0 = P^*x_0^{(k)} + UE^{-1}\langle U, b^{(k)}\rangle_H$   ▷ corrected initial guess
6:     $x_n^{(k)}, V_{n+1}, \underline{T}_n, B$ = MINRES($A_{(k)}$, $b^{(k)}$, $M_{(k)}^{-1}$, $P^*$, $x_0$, $\varepsilon$)
       ▷ MINRES is applied to $M_{(k)}^{-1}A_{(k)}x^{(k)} = M_{(k)}^{-1}b^{(k)}$ with inner product
         $\langle\cdot,\cdot\rangle_{M_{(k)}}$, right preconditioner $P^*$, initial guess $x_0$, and
         tolerance $\varepsilon > 0$; cf. Section 2.2. Then:
           * the approximation $x_n^{(k)}$ fulfills $\|M_{(k)}^{-1}(b^{(k)} - A_{(k)}x_n^{(k)})\|_{M_{(k)}} \le \varepsilon$,
           * the Lanczos relation $M_{(k)}^{-1}A_{(k)}P^*V_n = V_{n+1}\underline{T}_n$ holds,
           * $B = \langle V_n, C\rangle_H$ is generated as a byproduct of the applications of $P^*$.
7:     $w_1, \dots, w_m$, $\mu_1, \dots, \mu_m$, $\rho_1, \dots, \rho_m$ = Ritz($U$, $V_{n+1}$, $\underline{T}_n$, $B$, $C$, $E$, $M_{(k)}^{-1}$)
       ▷ Ritz(...) computes the Ritz pairs $(w_j, \mu_j)$ for $j \in \{1, \dots, m\}$ of
         $M_{(k)}^{-1}A_{(k)}$ with respect to $\operatorname{span}[U, V_n]$ and the inner product
         $\langle\cdot,\cdot\rangle_{M_{(k)}}$; cf. Lemma 2.7. Then:
           * $w_1, \dots, w_m$ form a $\langle\cdot,\cdot\rangle_{M_{(k)}}$-orthonormal basis of $\operatorname{span}[U, V_n]$,
           * the Ritz residual norms $\rho_j = \|M_{(k)}^{-1}A_{(k)}w_j - \mu_j w_j\|_{M_{(k)}}$ are also returned.
8:     $W = [w_{i_1}, \dots, w_{i_d}]$ for pairwise distinct $i_1, \dots, i_d \in \{1, \dots, m\}$
       ▷ pick $d$ Ritz vectors according to Ritz value and residual norm
9: end for


2.3.1. Selection of Ritz vectors. In step 8 of Algorithm 2.1, up to m Ritz vectors can be chosen for deflation in the next linear system. It is unclear which choice leads to optimal convergence. The convergence of MINRES is determined by the spectrum of the operator and the initial residual in an intricate way. In most applications one can only use rough convergence bounds of the type (2.6) which form the basis for certain heuristics. Popular choices include Ritz vectors corresponding to smallest- or largest-magnitude Ritz values or smallest Ritz residual norms. No general recipe can be expected.
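
One such heuristic, picking the $d$ Ritz vectors with smallest-magnitude Ritz values and breaking ties by the Ritz residual norm, might look as follows (a sketch; the arrays mu and rho stand for the Ritz values and residual norms returned in step 7):

    import numpy as np

    def select_ritz(mu, rho, d):
        # Primary key: |mu| (small magnitude first); secondary key: rho.
        order = np.lexsort((rho, np.abs(mu)))
        return order[:d]

    mu = np.array([-3.1, 0.02, 0.5, 4.0, 0.02])
    rho = np.array([1e-3, 1e-6, 1e-4, 1e-2, 1e-8])
    print(select_ritz(mu, rho, d=2))   # indices of the two selected vectors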

2.3.2. Notes on the implementation. We now comment on the implementation side of the determination and utilization of Ritz pairs while solving a sequence of linear systems (cf. Algorithm 2.1). The solution of a single linear system with the deflated and preconditioned MINRES method was discussed in Section 2.2. Although the MINRES method is based on short recurrences due to the underlying Lanczos algorithm--and thus only needs storage for a few vectors--we still have to store the full Lanczos basis $V_{n+1}$ for the determination of Ritz vectors as well as the Lanczos matrix $\underline{T}_n \in \mathbb{R}^{n+1,n}$. The storage requirements of the tridiagonal Lanczos matrix are negligible, while storing all Lanczos vectors may be costly. As is customary for GMRES, this difficulty can be overcome by restarting the MINRES method after a fixed number of iterations. This could be added trivially to Algorithm 2.1 as well by iterating lines 3 to 8 with a fixed maximum number of MINRES iterations for the same linear system and the last iterate as initial guess. In this case, the number $n$ is interpreted not as the total number of MINRES iterations but as the number of MINRES iterations in a restart phase. As an alternative to restarting, Wang et al. [52] suggest computing the Ritz vectors in cycles of fixed length $s$. At the end of each cycle, new Ritz vectors are computed from the previous Ritz vectors and the $s$ Lanczos vectors from the current cycle. All but the last two Lanczos vectors are then dropped since they are not required for continuing the MINRES iteration. Therefore, the method in [52] is able to maintain global optimality of the approximate solution with respect to the entire Krylov subspace (in exact arithmetic), which may lead to faster convergence compared to restarted methods. Note that a revised RMINRES implementation with performance optimizations has been published in [28]. Both restarting and cycling thus provide a way to limit the memory requirements. However, due to the loss of information, the quality of the computed Ritz vectors and thus their performance as recycling vectors typically deteriorates with both strategies. For example, this has been observed experimentally by Wang et al. [52], where recycling vectors from shorter cycles were less effective.

In the experiments in this manuscript, neither restarting nor cycling is necessary since the preconditioner sufficiently limits the number of iterations (cf. Section 3.3). Deflation can then be used to further improve convergence by directly addressing parts of the preconditioned operator's spectrum. An annotated version of the algorithm can be found in Algorithm 2.1. Note that the inner product matrix $B$ is computed implicitly row-wise in each iteration of MINRES by the application of $P^*$ to the last Lanczos vector $v_n$ because this involves the computation of $\langle AU, v_n\rangle_H = B^{\mathsf H}e_n$.
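
A sketch of the Ritz(...) step in the Euclidean case may clarify the structure: the Ritz pairs are obtained from a small Hermitian eigenproblem assembled from matrices that are available after the MINRES run ($T_n$ denotes the leading $n$-by-$n$ block of $\underline{T}_n$). The projected-operator matrix below follows Lemma 2.7; all names are illustrative.

    import numpy as np

    def ritz_pairs(Vn, U, Tn, B, E):
        # Projected operator <[V_n, U], M^{-1}A [V_n, U]>_M, cf. Lemma 2.7;
        # Euclidean case M = I. Vn: (N, n), U: (N, d), B = <V_n, AU>, E = <U, AU>.
        Einv = np.linalg.inv(E)
        S = np.block([[Tn + B @ Einv @ B.conj().T, B],
                      [B.conj().T,                 E]])
        mu, Wtilde = np.linalg.eigh(S)       # small (n+d)-by-(n+d) eigenproblem
        W = np.hstack([Vn, U]) @ Wtilde      # Ritz vectors w = [V_n, U] w~
        return W, mu

The Ritz residual norms can then be evaluated from $\underline{T}_n$, $\underline{B}$, $E$, and $G$ as in Lemma 2.7, without any further application of $A$.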

2.3.3. Overall computational cost. An overview of the computational cost of one iteration of Algorithm 2.1 is given in Table 2.2. The computation of one iteration of Algorithm 2.1 with $n$ MINRES steps and $d$ deflation vectors involves $n + d + 1$ applications of the preconditioner $M^{-1}$ and the operator $A$. These steps are typically very costly and dominate the overall computation time. This is true for all variants of recycling Krylov subspace methods. With this in mind, we would like to take a closer look at the cost induced by the other elements of the algorithm. If the inner products are assumed to be Euclidean, their computation accounts for a total of $2N(d^2 + nd + 3d + 2n)$ FLOPs. If the selection strategy of Ritz vectors for recycling requires knowledge of the respective Ritz residuals, an additional $2Nd^2$ FLOPs must be invested. The vector updates require $2N(\frac{3}{2}d^2 + 2nd + \frac{5}{2}d + 7n)$ FLOPs, so in total, without the computation of Ritz residuals, $2N(\frac{5}{2}d^2 + 3nd + \frac{11}{2}d + 9n)$ FLOPs are required for one iteration of Algorithm 2.1 in addition to the operator applications.

Comparing the computational cost of the presented method with restarted or cycled methods is hardly possible. If the cycle length $s$ in [52] equals the overall number of iterations $n$, then that method requires $2N(6d^2 + 3nd + 3d + 2)$ FLOPs for updating the recycling space. In practice, the methods show a different convergence behavior because $s \ll n$ and the involved projections differ; cf. Section 2.2.

Note that the orthonormalization in line 3 is redundant in exact arithmetic if only Ritz vectors are used and the preconditioner does not change. Further note that the orthogonalization requires the application of the operator $M$, i.e., the inverse of the preconditioner. This operator is not known in certain cases, e.g., if only a few cycles of an (algebraic) multigrid preconditioner are applied. Orthogonalizing the columns of $U$ with an inaccurate approximation of $M$ (e.g., the original operator $A$) will then make the columns of $U$ formally orthonormal with respect to a different inner product. This may lead to wrong results in the Ritz value computation. A workaround in the popular case of (algebraic) multigrid preconditioners is to use so many cycles that $M \approx A$ is fulfilled with high accuracy. However, this typically leads to a substantial increase in computational cost and, depending on the application, may defeat the original purpose of speeding up the Krylov convergence by recycling.

Similarly, round-off errors may lead to a loss of orthogonality in the Lanczos vectors and thus to inaccuracies in the computed Ritz pairs. Details on this are given in Remark 3.8.

3. Application to nonlinear Schrödinger problems. Given an open domain $\Omega \subseteq \mathbb{R}^{\{2,3\}}$, nonlinear Schrödinger operators are typically derived from the minimization of the Gibbs energy in a corresponding physical system and have the form

(3.1) $S : X \to Y, \qquad S(\psi) := (\mathcal{K} + V + g|\psi|^2)\psi \quad\text{in } \Omega,$

with $X \subseteq L^2(\Omega)$ being the natural energy space of the problem and $Y \subseteq L^2(\Omega)$. If the domain is bounded, then the space $X$ may incorporate boundary conditions appropriate to the physical setting. The linear operator $\mathcal{K}$ is assumed to be self-adjoint and positive-semidefinite with respect to $\langle\cdot,\cdot\rangle_{L^2(\Omega)}$, $V : \Omega \to \mathbb{R}$ is a given scalar potential, and $g > 0$ is a given nonlinearity parameter. A state $\hat\psi : \Omega \to \mathbb{C}$ is called a solution of the nonlinear Schrödinger equation if

(3.2) $S(\hat\psi) = 0.$

Generally, one is only interested in nontrivial solutions $\hat\psi \not\equiv 0$. The function $\hat\psi$ is often referred to as the order parameter, and its magnitude $|\hat\psi|^2$ typically describes a particle density or, more generally, a probability distribution. Note that, because of

(3.3) $S(\exp(\mathrm{i}\chi)\psi) = \exp(\mathrm{i}\chi)S(\psi),$

a solution $\hat\psi \in X$ is really just a representative of the physically equivalent solutions $\{\exp(\mathrm{i}\chi)\hat\psi : \chi \in \mathbb{R}\}$.

For the numerical solution of (3.2), Newton's method is popular for its fast convergence in a neighborhood of a solution: given a good-enough initial guess $\psi_0$, the Newton process generates a sequence of iterates $\psi_k$ which converges superlinearly towards a solution $\hat\psi$ of (3.2). In each step $k$ of Newton's method, a linear system with the Jacobian

(3.4) $J(\psi) : X \to Y, \qquad J(\psi)\phi := (\mathcal{K} + V + 2g|\psi|^2)\phi + g\psi^2\bar\phi$

of $S$ at $\psi_k$ needs to be solved. Despite the fact that states $\psi$ are generally complex-valued, $J(\psi)$ is linear only if $X$ and $Y$ are defined as vector spaces over the field $\mathbb{R}$ with the corresponding inner product

(3.5) $\langle\phi, \psi\rangle_{\mathrm{R}} := \Re\,\langle\phi, \psi\rangle_{L^2(\Omega)}.$

This matches the notion that the specific complex argument of the order parameter is of no physical relevance since $|\exp(\mathrm{i}\alpha)\psi|^2 = |\psi|^2$ for all $\alpha \in \mathbb{R}$, $\psi \in X$ (compare with (3.3)).

Moreover, the work in [42] gives a representation of adjoints of operators of the form (3.4), from which one can derive the following assertion:

Corollary 3.1. For any given $\psi \in X$, the Jacobian operator $J(\psi)$ from (3.4) is self-adjoint with respect to the inner product (3.5).

An important consequence of the independence of states of the complex argument (3.3) is the fact that solutions of equation (3.2) form a smooth manifold in $X$. Therefore, the linearization (3.4) at solutions always has a nontrivial kernel. Indeed, for any $\psi \in X$,

(3.6) $J(\psi)(\mathrm{i}\psi) = (\mathcal{K} + V + 2g|\psi|^2)(\mathrm{i}\psi) - g\mathrm{i}\psi^2\bar\psi = \mathrm{i}(\mathcal{K} + V + g|\psi|^2)\psi = \mathrm{i}S(\psi),$

so for nontrivial solutions $\hat\psi \in X$, $\hat\psi \not\equiv 0$, $S(\hat\psi) = 0$, the dimensionality of the kernel of $J(\hat\psi)$ is at least 1.

Besides the fact that there is always a zero eigenvalue at a solution $\hat\psi$ and that all eigenvalues are real, not much more can be said about the spectrum; in general, $J(\psi)$ is indefinite. The definiteness depends entirely on the state $\psi$. A solution $\psi$ of (3.2) is said to be stable or unstable depending on whether or not $J(\psi)$ has negative eigenvalues. Typically, solutions with low Gibbs energies tend to be stable whereas highly energetic solutions tend to be unstable. For physical systems in practice, it is uncommon to see more than ten negative eigenvalues for a given solution state.

3.1. Principal problems for the numerical solution. While the numerical solution of nonlinear systems itself is challenging, the presence of a singularity in a solution as in (3.6) adds two major obstacles for using Newton's method.

* Newton's method is guaranteed to converge towards a solution $\hat\psi$ Q-superlinearly in the area of attraction only if $\hat\psi$ is nondegenerate, i.e., the Jacobian at $\hat\psi$ is regular. If the Jacobian operator does have a singularity, then only linear convergence can be guaranteed.

* While no linear system has to be solved with the exactly singular $J(\hat\psi)$, the Jacobian operator close to the solution, $J(\hat\psi + \delta\psi)$, will have at least one eigenvalue of small magnitude, i.e., the Jacobian system becomes ill-conditioned when approaching a solution.

Several approaches have been suggested to deal with this situation; for a concise survey of the matter, see [21]. One of the most used strategies is bordering, which extends the original problem $S(\psi) = 0$ by a so-called phase condition to pin down the redundancy [1],

(3.7) $\tilde{S}(\psi, \lambda) := \begin{pmatrix} S(\psi) + \lambda y \\ \langle p, \psi\rangle \end{pmatrix} = 0.$

If $y$ and $p$ are chosen according to some well-understood criteria [23], the Jacobian systems can be shown to be well-conditioned throughout the Newton process. Moreover, the bordering can be chosen in such a way that the linearization of the extended system is self-adjoint in the extended scalar product if the linearization of the original problem is also self-adjoint. This method has been applied to the specialization of the Ginzburg-Landau equations (3.10) before [42] and naturally generalizes to nonlinear Schrödinger equations in the same way. One major disadvantage of the bordering approach, however, is that it is not clear how to precondition the extended system even if a good preconditioner for the original problem is known.

In the particular case of nonlinear Schrödinger equations, the loss of convergence speed is less severe than in more general settings. Note that there would be no slowdown at all if the Newton update $\delta\psi$, given by

(3.8) $J(\psi)\,\delta\psi = -S(\psi),$

were consistently orthogonal to the null space close to a solution $\hat\psi$. While this is not generally true, one is at least in the situation that the Newton update can never be an exact multiple of the approximate null space direction $\mathrm{i}\psi$. This is because

$$J(\psi)(\alpha\mathrm{i}\psi) = -S(\psi), \qquad \alpha \in \mathbb{R},$$

together with (3.6), is equivalent to

$$\alpha\mathrm{i}S(\psi) = -S(\psi),$$

which can only be fulfilled if $S(\psi) = 0$, i.e., if $\psi$ is already a solution.

Consequently, loss of Q-superlinear convergence is hardly ever observed in numerical experiments. Figure 3.1, for example, shows the Newton residual for the two- and three-dimensional test setups, both with the standard formulation and with the bordering (3.7) as proposed in [42]. Of course, the Newton iterates follow different trajectories, but the important thing to note is that in both plain and bordered formulation, the speed of convergence close to the solution is comparable.

The more severe restriction lies in the numerical difficulty of solving the Jacobian systems in each Newton step due to the increasing ill-conditioning of the problem as described above. However, although the Jacobian has a nontrivial near-null space close to a solution, the problem is well-defined at all times. This is because, by self-adjointness, its left near-null space coincides with the right near-null space, $\operatorname{span}\{\mathrm{i}\psi\}$, and the right-hand side in (3.8), $-S(\psi)$, is orthogonal to $\mathrm{i}\psi$ for any $\psi$:

(3.9) $\langle \mathrm{i}\psi, S(\psi)\rangle_{\mathrm{R}} = \Re\big({-\mathrm{i}}\,\langle \psi, (\mathcal{K} + V + g|\psi|^2)\psi\rangle_{L^2(\Omega)}\big) = 0,$ since $\langle \psi, (\mathcal{K} + V + g|\psi|^2)\psi\rangle_{L^2(\Omega)}$ is real by self-adjointness.

The numerical problem is hence caused only by the fact that one eigenvalue approaches the origin as the Newton iterates approach a solution. The authors propose to handle this difficulty at the level of the linear solves for the Newton updates using the deflation framework developed in Section 2.

3.2. The Ginzburg-Landau equation. One important instance of nonlinear Schrödinger equations (3.1) is the Ginzburg-Landau equation, which models the supercurrent density for extreme-type-II superconductors. Given an open, bounded domain $\Omega \subseteq \mathbb{R}^{\{2,3\}}$, the equations are

(3.10) $0 = \mathcal{K}\psi - \psi(1 - |\psi|^2) \quad\text{in } \Omega, \qquad 0 = \langle n, (-\mathrm{i}\nabla - A)\psi\rangle \quad\text{on } \partial\Omega.$

The operator $\mathcal{K}$ is defined as

$$\mathcal{K} : X \to Y, \qquad \mathcal{K}\phi := (-\mathrm{i}\nabla - A)^2\phi,$$

with a given magnetic vector potential $A$ [7]. The operator $\mathcal{K}$ describes the energy of a charged particle under the influence of a magnetic field $B = \nabla \times A$ and can be shown to be Hermitian and positive-semidefinite; the eigenvalue 0 is attained only for $A = 0$ [43]. Solutions $\hat\psi$ of (3.10) describe the density $|\hat\psi|^2$ of electric charge carriers and fulfill the inequalities $0 \le |\hat\psi|^2 \le 1$ pointwise [7]. For two-dimensional domains, they typically exhibit isolated zeros referred to as vortices; in three dimensions, lines of zeros are the typical solution pattern; see Figure 3.2.

3.2.1. Discretization. For the numerical experiments in this paper, a finite-volume-type discretization is employed [6, 43]. Let $\Omega^{(h)}$ be a discretization of $\Omega$ with a triangulation $\{T_i\}_{i=1}^m$, $\bigcup_{i=1}^m T_i = \Omega^{(h)}$, and the node-centered Voronoi tessellation $\{\Omega_k\}_{k=1}^n$, $\bigcup_{k=1}^n \Omega_k = \Omega^{(h)}$. Let further $e_{i,j}$ denote the edge between two nodes $i$ and $j$. The discretized problem is then to find $\psi^{(h)} \in \mathbb{C}^n$ such that

$\forall k \in \{1,\dots,n\}: \quad 0 = \bigl(S^{(h)}(\psi^{(h)})\bigr)_k := \bigl(K^{(h)}\psi^{(h)}\bigr)_k - \psi^{(h)}_k \bigl(1 - |\psi^{(h)}_k|^2\bigr),$

where the discrete kinetic energy operator $K^{(h)}$ is defined by

$\bigl(K^{(h)}\psi^{(h)}\bigr)_k := \frac{1}{|\Omega_k|} \sum_{e_{k,j}} \alpha_{k,j} \bigl(\psi^{(h)}_k - U_{k,j}\,\psi^{(h)}_j\bigr),$

the sum running over all edges $e_{k,j}$ incident to the node $k$,

with the discrete inner product

$\langle\psi^{(h)}, \phi^{(h)}\rangle := \sum_{k=1}^{n} |\Omega_k|\, \overline{\psi^{(h)}_k}\, \phi^{(h)}_k$

and edge coefficients $\alpha_{i,j} \in \mathbb{R}$ [43]. The magnetic vector potential $A$ is incorporated in the so-called link variables

$U_{i,j} := \exp\Bigl(-i \int_{e_{i,j}} A \cdot \mathrm{d}s\Bigr)$

along the edges $e_{i,j}$ of the triangulation.
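For illustration, the assembly of $K^{(h)}$ from an edge list might look as follows in Python; the data layout (edge list, edge coefficients, line integrals of $A$ along the edges, Voronoi cell volumes) and the sign convention of the link variables are assumptions of this sketch rather than part of the published code.

    import numpy as np
    import scipy.sparse as sp

    def assemble_K(edges, alpha, A_edge_integrals, cell_volumes):
        # Assemble K^(h) = D^{-1} K_hat (cf. Remark 3.2 below).
        n = len(cell_volumes)
        K_hat = sp.lil_matrix((n, n), dtype=complex)
        for (i, j), a, beta in zip(edges, alpha, A_edge_integrals):
            u = np.exp(-1j * beta)          # link variable U_{i,j}
            K_hat[i, i] += a
            K_hat[j, j] += a
            K_hat[i, j] -= a * u            # the two off-diagonal entries are
            K_hat[j, i] -= a * np.conj(u)   # complex conjugates: K_hat is Hermitian
        D_inv = sp.diags(1.0 / np.asarray(cell_volumes))
        return D_inv @ K_hat.tocsr()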

Remark 3.2. In matrix form, the operator $K^{(h)}$ is represented as the product $K^{(h)} = D^{-1}\hat{K}$ of the inverse of the diagonal matrix $D$, $D_{i,i} = |\Omega_i|$, and a Hermitian matrix $\hat{K}$.

This discretization preserves a number of invariants of the problem, e.g., gauge invariance of the type $\tilde\psi := \exp(i\chi)\psi$, $\tilde{A} := A + \nabla\chi$ with a given $\chi \in C^1(\Omega)$. Moreover, the discretized energy operator $K^{(h)}$ is Hermitian and positive-definite [43]. Analogous to (3.4), the discretized Jacobian operator at $\psi^{(h)}$ is defined by

$J^{(h)}(\psi^{(h)}): \mathbb{C}^n \to \mathbb{C}^n, \quad J^{(h)}(\psi^{(h)})\,\phi^{(h)} := \bigl(K^{(h)} - 1 + 2|\psi^{(h)}|^2\bigr)\phi^{(h)} + (\psi^{(h)})^2\,\overline{\phi^{(h)}},$

where the vector-vector products are interpreted entry-wise. The discrete Jacobian is self-adjoint with respect to the discrete inner product

(3.11) $\langle\psi^{(h)}, \phi^{(h)}\rangle_{\mathbb{R}} := \Re\Bigl(\sum_{k=1}^{n} |\Omega_k|\, \overline{\psi^{(h)}_k}\, \phi^{(h)}_k\Bigr),$

and the statements (3.6), (3.9) about the null space carry over from the continuous formulation.
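As a small sanity check, the inner product (3.11) and the action of $J^{(h)}$ can be written down in a few lines, and the self-adjointness can be verified numerically on random data; all names below are local to this illustration.

    import numpy as np

    def ip_R(psi, phi, cv):
        # Discrete real inner product (3.11); cv holds the |Omega_k|.
        return np.real(np.sum(cv * np.conj(psi) * phi))

    def jacobian(K, psi):
        # Real-linear action phi -> J^(h)(psi) phi.
        def J(phi):
            return K @ phi - phi + 2.0 * np.abs(psi)**2 * phi \
                   + psi**2 * np.conj(phi)
        return J

    n = 50
    rng = np.random.default_rng(0)
    cv = rng.random(n) + 0.1                      # mock cell volumes
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    K = np.diag(1.0 / cv) @ (B + B.conj().T)      # K = D^{-1} (Hermitian)
    psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    chi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    J = jacobian(K, psi)
    assert np.isclose(ip_R(phi, J(chi), cv), ip_R(J(phi), chi, cv))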

REMARK 3.3 (Real-valued formulation). There is a vector space isomorphism between $\mathbb{R}^{2n}$ and $\mathbb{C}^n$ as vector spaces over $\mathbb{R}$, given by the basis mapping $\alpha: \mathbb{C}^n \to \mathbb{R}^{2n}$,

$\alpha(e^{(n)}_j) = e^{(2n)}_j, \quad \alpha(i e^{(n)}_j) = e^{(2n)}_{n+j}.$

In particular, note that the dimension of $\mathbb{C}^n_{\mathbb{R}}$ is $2n$. The isomorphism $\alpha$ is also isometric with respect to the natural inner product $\langle\cdot,\cdot\rangle_{\mathbb{R}}$ of $\mathbb{C}^n_{\mathbb{R}}$ since for any given pair $\phi, \psi \in \mathbb{C}^n$ one has

$\langle\psi, \phi\rangle_{\mathbb{R}} = \Re\sum_{k=1}^{n} \overline{\psi_k}\,\phi_k = \sum_{k=1}^{n} \bigl(\Re\psi_k\,\Re\phi_k + \Im\psi_k\,\Im\phi_k\bigr) = \langle\alpha\psi, \alpha\phi\rangle.$

Moreover, linear operators over $\mathbb{C}^n_{\mathbb{R}}$ generally have the form $L\psi = A\psi + B\bar\psi$ with some $A, B \in \mathbb{C}^{n\times n}$, and because of

$Lw = \lambda w \iff (\alpha L \alpha^{-1})\,\alpha w = \lambda\,\alpha w,$

the eigenvalues also carry over exactly to the real-valued image $\alpha L \alpha^{-1}$.

This equivalence is relevant in practice since quite commonly the original complex-valued problem in $\mathbb{C}^n$ is implemented in terms of $\mathbb{R}^{2n}$. Using the natural inner product in this space then yields the expected results without requiring particular care with the inner product.
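The following sketch spells out the mapping $\alpha$ and the real-valued image of an operator $L\psi = A\psi + B\bar\psi$; the block structure is obtained by splitting $A$ and $B$ into real and imaginary parts and is included here purely for illustration.

    import numpy as np

    def alpha(psi):
        # Basis mapping alpha: C^n -> R^{2n} from Remark 3.3.
        return np.concatenate([psi.real, psi.imag])

    def real_image(A, B):
        # Real 2n x 2n matrix representing L psi = A psi + B conj(psi).
        Ar, Ai, Br, Bi = A.real, A.imag, B.real, B.imag
        return np.block([[Ar + Br, -Ai + Bi],
                         [Ai + Bi,  Ar - Br]])

    n = 4
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.allclose(alpha(A @ psi + B @ np.conj(psi)),
                       real_image(A, B) @ alpha(psi))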

3.3. Numerical experiments. The numerical experiments are performed with the following two setups.

Test setup 1 (2D). The circle $\Omega_{2D} := \{x \in \mathbb{R}^2 : \|x\|_2 < 5\}$ and the magnetic vector potential $A(x) := m \times (x - x_0)/\|x - x_0\|^3$ with $m := (0,0,1)^T$ and $x_0 := (0,0,5)^T$, corresponding to the magnetic field generated by a dipole at $x_0$ with orientation $m$. A Delaunay triangulation of this domain with 3299 nodes was created using Triangle [44]. With the discrete equivalent of $\psi_0(x) = \cos(\pi y)$ as initial guess, the Newton process converges after 27 iterations with a residual of less than $10^{-10}$ in the discretized norm; see Figure 3.1. The final state is illustrated in Figure 3.2.

Test setup 2 (3D). The three-dimensional L-shape

$\Omega_{3D} := \{x \in \mathbb{R}^3 : \|x\|_\infty < 5\} \setminus \mathbb{R}^3_+,$

discretized using Gmsh [18] with 72166 points. The chosen magnetic field is constant, $B_{3D}(x) := 3^{-1/2}(1,1,1)^T$, represented by the vector potential $A_{3D}(x) := \frac{1}{2} B_{3D} \times x$. With the discrete equivalent of $\psi_0(x) = 1$, the Newton process converges after 22 iterations with a residual of less than $10^{-10}$ in the discretized norm; see Figure 3.1. The final state is illustrated in Figure 3.2.

All experimental results presented in this section can be reproduced from the data published with the free and open-source Python packages KryPy [15] and PyNosh [16]. KryPy contains an implementation of deflated Krylov subspace methods, e.g., Algorithm 2.1. PyNosh provides solvers for nonlinear Schrödinger equations, including the above test cases.
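For orientation, a deflated solve with KryPy might be set up as sketched below. The class and argument names (LinearSystem, DeflatedMinres, ip_B, U) follow the package as published with this paper, but the exact signatures should be treated as assumptions of this sketch; the objects J, rhs, ip, and U are assumed to be given.

    from krypy.linsys import LinearSystem
    from krypy.deflation import DeflatedMinres

    # J: Jacobian operator, rhs = -S(psi), ip: matrix realizing the inner
    # product (3.11), U: deflation vectors, e.g., U = 1j * psi.
    ls = LinearSystem(J, rhs, self_adjoint=True, ip_B=ip)
    solver = DeflatedMinres(ls, U=U, tol=1e-10)
    x = solver.xk  # approximate Newton update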

For both setups, Newton's method was used and the linear systems (3.8) were solved using MINRES to exploit the self-adjointness of $J^{(h)}$. Note that it is critical here to use the natural inner product (3.11) of the system. All of the numerical experiments incorporate the preconditioner proposed in [43], which is shown to bound the number of Krylov iterations needed to reach a certain relative residual by a constant independent of the number $n$ of unknowns in the system.

Remark 3.4. Neither of the above test problems has an initial guess that sits in the basin of attraction of the solution it eventually converges to. As is typical for local nonlinear solvers, the iterations that do not directly correspond to the final convergence are sensitive to effects introduced by the discretization or round-off errors. It will hence be difficult to reproduce the shown solutions precisely without exact information about the point coordinates of the discretization mesh. However, the same general convergence patterns were observed for numerous meshes and initial states; the presented solutions serve as examples thereof.

Figure 3.3 displays the relative residuals for all Newton steps in both the two- and the three-dimensional setup. Note that the residual curves late in the Newton process (dark gray) exhibit plateaus of stagnation which are caused by the low-magnitude eigenvalue associated with the near-null space vector $i\psi^{(h)}$.

Figure 3.3b incorporates the deflation of this vector via Algorithm 2.1 with $U^{(k)} = i\psi^{(k,h)}$, where $\psi^{(k,h)}$ is the discrete Newton approximation in the $k$th step. The use of the preconditioner and of the customized inner product (3.11) is crucial here. Clearly, the stagnation effects are remedied and a significantly lower number of iterations is necessary to reduce the residual norm to $10^{-10}$. While this comes with extra computational cost per step (cf. Table 2.1), that cost is negligible compared to the considerable convergence speedup.

REMARK 3.5. Note that the initial guess $\tilde{x}_0$ is adapted according to (2.14) before the beginning of the iteration. Because of that, the initial relative residual $\|b - Ax_0\| / \|b - A\tilde{x}_0\|$ cannot in general be expected to equal $1$, even if $\tilde{x}_0 = 0$. In the particular case of $U = i\psi$, however, we have

$x_0 = P^*\tilde{x}_0 + U\,\langle U, J(\psi)U\rangle_{\mathbb{R}}^{-1}\,\langle U, -S(\psi)\rangle_{\mathbb{R}} = P^*\tilde{x}_0$

since $\langle i\psi, S(\psi)\rangle_{\mathbb{R}} = 0$ by (3.9), and the initial relative residual does equal $1$ if $\tilde{x}_0 = 0$ (cf. Figure 3.3b). Note that this is no longer true when more deflation vectors are added (cf. Figure 3.3c).

Towards the end of the Newton process, a sequence of very similar linear systems needs to be solved. We can hence use the deflated MINRES approach described in Algorithm 2.1, where spectral information is extracted from the previous MINRES iteration and used for deflation in the present one. For the experiments, those 12 Ritz vectors from the MINRES iteration in Newton step $k$ that belong to the Ritz values of smallest magnitude were added for deflation in Newton step $k+1$. As displayed in Figure 3.3c, the number of necessary Krylov iterations is decreased further, roughly by a factor of 2. Note also that, in particular, the characteristic plateaus corresponding to the low-magnitude eigenvalue no longer occur. This is particularly interesting since no information about the approximate null space was explicitly specified but was automatically extracted from previous Newton steps.
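In code, the recycling loop has roughly the following shape. The helper deflated_minres is a placeholder for Algorithm 2.1 that is assumed to return, besides the update, the Lanczos basis V and the tridiagonal matrix T of its run; the cost of the extraction step is itemized in Table 2.2.

    import numpy as np

    def smallest_ritz_vectors(V, T, d):
        # Ritz pairs from the Lanczos relation; keep the d Ritz vectors
        # belonging to the Ritz values of smallest magnitude.
        theta, W = np.linalg.eigh(T)
        idx = np.argsort(np.abs(theta))[:d]
        return V @ W[:, idx]

    U = (1j * psi)[:, None]               # start with the near-null vector
    for k in range(max_newton_steps):     # psi, J, S, ... assumed given
        dx, V, T = deflated_minres(J(psi), -S(psi), U=U, tol=1e-10)
        psi = psi + dx
        U = smallest_ritz_vectors(V, T, d=12)   # recycle into step k + 1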

As outlined at the end of Section 2.3, it is a priori unclear which choice of Ritz vectors leads to optimal convergence. Of the choices mentioned in Section 2.3, the smallest-magnitude strategy performed best in the present application.

Technically, one could go ahead and extract even more Ritz vectors for deflation in the next step. At some point, however, the extra cost associated with the extraction of the Ritz vectors (Table 2.2) and the application of the projection operator (Table 2.1) no longer justifies a further increase of the deflation space. The efficiency threshold depends strongly on the cost of the preconditioner. Moreover, it is in most situations impossible to predict how the deflation of a particular set of vectors influences the residual behavior of a Krylov process. For this reason, one has to resort to numerical experiments to estimate the optimal dimension of the deflation space. Figure 3.4 shows, again for all Newton steps in both setups, the wall-clock time of the Krylov iterations as in Figure 3.3, relative to the solution time without deflation. The experiments show that deflation in the first few Newton steps does not accelerate the computation. This is due to the fact that the Newton updates are still large, so the subsequent linear systems are too different from each other to profit from carrying over spectral information. As the Newton process advances and the updates become smaller, subsequent linear systems become more alike and the deflation of a number of vectors becomes profitable. Note, however, that there is a point at which the computational cost of extracting and applying the projection exceeds the gain in Krylov iterations. For the two-dimensional setup, the optimum lies at around 12 deflated Ritz vectors, while in the three-dimensional case the minimum roughly stretches from 10 to 20. In both cases, a reduction of the effective computation time by 40% could be achieved.

REMARK 3.6. Other types of deflation vectors can be considered, e.g., harmonic Ritz vectors; see equation (2.20). In numerical experiments with the above test problems we observed that harmonic Ritz vectors resulted in a MINRES convergence behavior similar to regular Ritz vectors. This is in accordance with Paige, Parlett, and van der Vorst [36].

REMARK 3.7. Note that throughout the numerical experiments performed in this paper, the linear systems were solved up to a relative residual of $10^{-10}$. In practice, however, one would employ a relaxation scheme as given in, e.g., [10, 39]. Those schemes commonly advocate a relaxed relative tolerance $\eta_k$ in regions of slow convergence and a more stringent condition when the speed of convergence increases toward a solution, e.g.,

$\eta_k = \gamma \Bigl(\frac{\|F_k\|}{\|F_{k-1}\|}\Bigr)^{\alpha}$

with some $\gamma > 0$, $\alpha > 1$. In the specific case of nonlinear Schrödinger equations, this means that the deflation of the near-null vector $i\psi^{(k)}$ (cf. Figure 3.3b) becomes ineffective if $\eta_k$ is larger than the stagnation plateau. The speedup achieved by deflating a number of Ritz vectors (cf. Figure 3.3c), however, is effective throughout the Krylov iteration and would hence not be affected by an early termination of the process.
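A forcing-term update in this spirit takes only a few lines; the safeguard bounds and the default values of $\gamma$ and $\alpha$ below are assumptions of this sketch, not taken from [10, 39].

    def forcing_term(norm_Fk, norm_Fk_prev, gamma=0.9, alpha=2.0,
                     eta_min=1e-10, eta_max=0.5):
        # eta_k = gamma * (||F_k|| / ||F_{k-1}||)^alpha, clipped to
        # safeguard bounds to avoid over- and undersolving.
        eta = gamma * (norm_Fk / norm_Fk_prev) ** alpha
        return min(max(eta, eta_min), eta_max)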

Remark 3.8. The numerical experiments in this paper were unavoidably affected by round-off errors. The MINRES method used here is based on short recurrences, and its sensitivity to round-off errors can be tremendous; therefore, a brief discussion is provided in this remark. A detailed treatment and historical overview of the effects of finite precision computations on Krylov subspace methods can be found in the book of Liesen and Strakoš [26, Sections 5.8-5.10]. The consequences of round-off errors are manifold and have already been observed and studied in early works on Krylov subspace methods for linear algebraic systems, most notably by Lanczos [25] and Hestenes and Stiefel [22]. A breakthrough was the PhD thesis of Paige [35], where it was shown that the loss of orthogonality of the Lanczos basis coincides with the convergence of certain Ritz values. Convergence may be delayed, and the maximal attainable accuracy, e.g., the smallest attainable residual norm, may be far above machine precision and above the user-specified tolerance. Both effects heavily depend on the actual algorithm that is used. In [47], the impact of certain round-off errors on the relative residual was analyzed for an unpreconditioned MINRES variant with the Euclidean inner product. An upper bound on the difference between the exact arithmetic residual $r_n$ and the finite precision residual $\tilde{r}_n$ was given [47, Formula (26)]:

$\frac{\|r_n - \tilde{r}_n\|}{\|b\|} \le c_n\, \kappa_2(A)^2\, \epsilon,$

where $\epsilon$ denotes the machine epsilon and $c_n$ is a factor that grows only modestly with the iteration number. The corresponding bound for GMRES [47, Formula (17)] only involves a factor of $\kappa_2(A)$ instead of its square. The numerical results in [47] also indicate that the maximal attainable accuracy of MINRES is worse than that of GMRES. Thus, if very high accuracy is required, the GMRES method should be used. An analysis of the stability of several GMRES algorithms can be found in [5]. In order to keep the finite precision Lanczos basis almost orthogonal, a new Lanczos vector can be reorthogonalized against all previous Lanczos vectors. The numerical results presented in this paper were computed without reorthogonalization, i.e., with the standard MINRES method. However, all experiments were also conducted with reorthogonalization in order to verify that the observed convergence behavior, e.g., the stagnation phases in Figure 3.3a, is not caused by loss of orthogonality.
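The reorthogonalization mentioned above amounts to one extra Gram-Schmidt sweep of each new Lanczos vector against the stored basis. A minimal sketch for a real symmetric matrix and the Euclidean inner product (preconditioning omitted) reads:

    import numpy as np

    def lanczos_step_reorth(A, V, beta_prev):
        # One Lanczos step with full reorthogonalization against the
        # basis V (N x j, orthonormal columns); breakdown (beta == 0)
        # is not handled in this sketch.
        v = V[:, -1]
        w = A @ v - (beta_prev * V[:, -2] if V.shape[1] > 1 else 0.0)
        a = np.dot(v, w)
        w = w - a * v
        for i in range(V.shape[1]):        # extra sweep: cost grows with j
            w = w - np.dot(V[:, i], w) * V[:, i]
        beta = np.linalg.norm(w)
        return a, beta, w / beta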

4. Conclusions. For the solution of sequences of self-adjoint linear systems, such as those occurring in the Newton process for a large class of nonlinear problems, the authors propose a MINRES scheme that takes into account spectral information from previous linear systems. Central to the approach are the cheap extraction of Ritz vectors (Section 2.3) from a MINRES iteration and the application of the projection (2.9).

In contrast to similar recycling methods suggested previously [52], the projected operator is self-adjoint and is formulated for inner products other than the $\ell^2$ inner product. This allows for the incorporation of a wider range of preconditioners than was previously possible. One important remaining restriction is the fact that, for the orthogonalization of the recycling vectors, the inverse of the preconditioner needs to be known. Unfortunately, this is not the case for some important classes of preconditioners, e.g., multigrid preconditioners with a fixed number of cycles. While this prevents the deflation framework from being universally applicable, the present work extends the range of treatable problems.

One motivating example for this is given by nonlinear Schrödinger equations (Section 3): the occurring linearization is self-adjoint with respect to a non-Euclidean inner product (3.11), and the computation in a three-dimensional setting is made possible by an AMG preconditioner. The authors showed that, for the particular case of the Ginzburg-Landau equations, the deflation strategy reduces the effective run time of a linear solve by up to 40% (cf. Figure 3.4). Moreover, the deflation strategy was shown to automatically handle the singularity of the problem that otherwise leads to numerical instabilities.

It is expected that the strategy performs similarly for other nonlinear problems. While adding vectors to the deflation space typically leads to a smaller number of Krylov iterations (and thus fewer applications of the operator and the preconditioner), it comes at the extra computational cost of extracting the Ritz vectors and applying the projection operator; Table 2.2 gives a detailed overview of the quantities that need to be balanced. The optimal number of deflated Ritz vectors is highly problem-dependent, in particular dependent on the computational cost of the preconditioner, and can thus hardly be determined a priori.

The proposed strategy naturally extends to problems that are not self-adjoint by choosing, e.g., GMRES as the hosting Krylov method. For non-self-adjoint problems, however, the effect of altered spectra on the Krylov convergence is far more involved than in the self-adjoint case [32]. This also makes the choice of Ritz vectors for deflation difficult. Nevertheless, several heuristics for recycling strategies have been successfully applied to non-self-adjoint problems, e.g., by Parks et al. [38], Giraud, Gratton, and Martin [19], Feng, Benner, and Korvink [12], as well as Soodhalter, Szyld, and Xue [49].

Acknowledgments. The authors wish to thank Jörg Liesen for his valuable feedback, Alexander Schlote for providing experimental results with harmonic Ritz vectors, and the anonymous referees for their helpful remarks.

REFERENCES

[1] A. R. CHAMPNEYS AND B. SANDSTEDE, Numerical computation of coherent structures, in Numerical Continuation Methods for Dynamical Systems, B. Krauskopf, H. M. Osinga, and J. Galán-Vioque, eds., Underst. Complex Syst., Springer, Dordrecht, 2007, pp. 331-358.

[2] A. CHAPMAN AND Y. SAAD, Deflated and augmented Krylov subspace techniques, Numer. Linear Algebra Appl., 4 (1997), pp. 43-66.

[3] E. DE STURLER, Nested Krylov methods based on GCR, J. Comput. Appl. Math., 67 (1996), pp. 15-41.

[4] Z. DOSTAL, Conjugate gradient method with preconditioning by projector, Int. J. Comput. Math., 23 (1988), pp. 315-323.

[5] J. DRKOŠOVÁ, A. GREENBAUM, M. ROZLOŽNÍK, AND Z. STRAKOŠ, Numerical stability of GMRES, BIT, 35 (1995), pp. 309-330.

[6] Q. DU, Numerical approximations of the Ginzburg-Landau models for superconductivity, J. Math. Phys., 46 (2005), 095109 (22 pages).

[7] Q. DU, M. D. GUNZBURGER, AND J. S. PETERSON, Modeling and analysis of a periodic Ginzburg-Landau model for type-II superconductors, SIAM J. Appl. Math., 53 (1993), pp. 689-717.

[8] M. EIERMANN AND O. G. ERNST, Geometric aspects of the theory of Krylov subspace methods, Acta Numer., 10 (2001), pp. 251-312.

[9] M. EIERMANN, O. G. ERNST, AND O. SCHNEIDER, Analysis of acceleration strategies for restarted minimal residual methods, J. Comput. Appl. Math., 123 (2000), pp. 261-292.

[10] S. C. EISENSTAT AND H. F. WALKER, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput., 17 (1996), pp. 16-32.

[11] H. C. ELMAN, D. J. SILVESTER, AND A. J. WATHEN, Finite Elements and Fast Iterative Solvers: With Applications in Incompressible Fluid Dynamics, Oxford University Press, New York, 2005.

[12] L. FENG, P. BENNER, AND J. G. KORVINK, Subspace recycling accelerates the parametric macro-modeling of MEMS, Internat. J. Numer. Methods Engrg., 94 (2013), pp. 84-110.

[13] R. W. FREUND, G. H. GOLUB, AND N. M. NACHTIGAL, Iterative solution of linear systems, Acta Numer., 1 (1992), pp. 57-100.

[14] A. GAUL, M. H. GUTKNECHT, J. LIESEN, AND R. NABBEN, A framework for deflated and augmented Krylov subspace methods, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 495-518.

[15] A. GAUL AND N. SCHLÖMER, KryPy: Krylov subspace methods package for Python. https://github.com/andrenarchy/krypy, August 2013.

[16] A. GAUL AND N. SCHLÖMER, PyNosh: Python framework for nonlinear Schrödinger equations. https://github.com/nschloe/pynosh, August 2013.

[17] M. GEDALIN, T. SCOTT, AND Y. BAND, Optical solitary waves in the higher order nonlinear Schrödinger equation, Phys. Rev. Lett., 78 (1997), pp. 448-451.

[18] C. GEUZAINE AND J.-F. REMACLE, Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities, Internat. J. Numer. Methods Engrg., 79 (2009), pp. 1309-1331.

[19] L. GIRAUD, S. GRATTON, AND E. MARTIN, Incremental spectral preconditioners for sequences of linear systems, Appl. Numer. Math., 57 (2007), pp. 1164-1180.

[20] A. GREENBAUM, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.

[21] A. GRIEWANK, On solving nonlinear equations with simple singularities or nearly singular solutions, SIAM Rev., 27 (1985), pp. 537-563.

[22] M. R. HESTENES AND E. STIEFEL, Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards, 49 (1952), pp. 409-436.

[23] H. B. KELLER, The bordering algorithm and path following near singular points of higher nullity, SIAM J. Sci. Statist. Comput., 4 (1983), pp. 573-582.

[24] M. E. KILMER AND E. DE STURLER, Recycling subspace information for diffuse optical tomography, SIAM J. Sci. Comput., 27 (2006), pp. 2140-2166.

[25] C. LANCZOS, Solution of systems of linear equations by minimized iterations, J. Research Nat. Bur. Standards, 49 (1952), pp. 33-53.

[26] J. LIESEN AND Z. STRAKOŠ, Krylov Subspace Methods: Principles and Analysis, Oxford University Press, Oxford, 2013.

[27] J. LIESEN AND P. TICHY, Convergence analysis of Krylov subspace methods, GAMM Mitt. Ges. Angew. Math. Mech., 27 (2004), pp. 153-173.

[28] L. A. M. MELLO, E. DE STURLER, G. H. PAULINO, AND E. C. N. SILVA, Recycling Krylov subspaces for efficient large-scale electrical impedance tomography, Comput. Methods Appl. Mech. Engrg., 199 (2010), pp. 3101-3110.

[29] R. B. MORGAN, A restarted GMRES method augmented with eigenvectors, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1154-1171.

[30] R. B. MORGAN, Restarted block-GMRES with deflation of eigenvalues, Appl. Numer. Math., 54 (2005), pp. 222-236.

[31] R. B. MORGAN AND M. ZENG, Harmonic projection methods for large non-symmetric eigenvalue problems, Numer. Linear Algebra Appl., 5 (1998), pp. 33-55.

[32] M. F. MURPHY, G. H. GOLUB, AND A. J. WATHEN, A note on preconditioning for indefinite linear systems, SIAM J. Sci. Comput., 21 (2000), pp. 1969-1972.

[33] R. A. NICOLAIDES, Deflation of conjugate gradients with applications to boundary value problems, SIAM J. Numer. Anal., 24 (1987), pp. 355-365.

[34] C. NORE, M. E. BRACHET, AND S. FAUVE, Numerical study of hydrodynamics using the nonlinear Schrödinger equation, Phys. D, 65 (1993), pp. 154-162.

[35] C. C. PAIGE, The computation of eigenvalues and eigenvectors of very large sparse matrices, PhD thesis, Institute of Computer Science, University of London, London, 1971.

[36] C. C. PAIGE, B. N. PARLETT, AND H. A. VAN DER VORST, Approximate solutions and eigenvalue bounds from Krylov subspaces, Numer. Linear Algebra Appl., 2 (1995), pp. 115-133.

[37] C. C. PAIGE AND M. A. SAUNDERS, Solutions of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12 (1975), pp. 617-629.

[38] M. L. PARKS, E. DE STURLER, G. MACKEY, D. D. JOHNSON, AND S. MAITI, Recycling Krylov subspaces for sequences of linear systems, SIAM J. Sci. Comput., 28 (2006), pp. 1651-1674.

[39] M. PERNICE AND H. F. WALKER, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput., 19 (1998), pp. 302-318.

[40] Y. SAAD AND M. H. SCHULTZ, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869.

[41] Y. SAAD, M. YEUNG, J. ERHEL, AND F. GUYOMARC'H, A deflated version of the conjugate gradient algorithm, SIAM J. Sci. Comput., 21 (2000), pp. 1909-1926.

[42] N. SCHLÖMER, D. AVITABILE, AND W. VANROOSE, Numerical bifurcation study of superconducting patterns on a square, SIAM J. Appl. Dyn. Syst., 11 (2012), pp. 447-477.

[43] N. SCHLÖMER AND W. VANROOSE, An optimal linear solver for the Jacobian system of the extreme type-II Ginzburg-Landau problem, J. Comput. Phys., 234 (2013), pp. 560-572.

[44] J. R. SHEWCHUK, Delaunay refinement algorithms for triangular mesh generation, Comput. Geom., 22 (2002), pp. 21-74.

[45] V. SIMONCINI AND D. B. SZYLD, Recent computational developments in Krylov subspace methods for linear systems, Numer. Linear Algebra Appl., 14 (2007), pp. 1-59.

[46] V. SIMONCINI AND D. B. SZYLD, On the superlinear convergence of MINRES, in Numerical Mathematics and Advanced Applications 2011, A. Cangiani, R. L. Davidchack, E. Georgoulis, A. N. Gorban, J. Levesley, and M. V. Tretyakov, eds., Springer, Heidelberg, 2013, pp. 733-740.

[47] G. L. G. SLEIJPEN, H. A. VAN DER VORST, AND J. MODERSITZKI, Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems, SIAM J. Matrix Anal. Appl., 22 (2000), pp. 726-751.

[48] B. K. SOM, M. R. GUPTA, AND B. DASGUPTA, Coupled nonlinear Schrödinger equation for Langmuir and dispersive ion acoustic waves, Phys. Lett. A, 72 (1979), pp. 111-114.

[49] K. M. SOODHALTER, D. B. SZYLD, AND F. XUE, Krylov subspace recycling for sequences of shifted linear systems, Appl. Numer. Math., 81 (2014), pp. 105-118.

[50] G. W. STEWART AND J. G. SUN, Matrix Perturbation Theory, Academic Press, Boston, 1990.

[51] J. M. TANG, R. NABBEN, C. VUIK, AND Y. A. ERLANGGA, Comparison of two-level preconditioners derived from deflation, domain decomposition and multigrid methods, J. Sci. Comput., 39 (2009), pp. 340-370.

[52] S. WANG, E. DE STURLER, AND G. H. PAULINO, Large-scale topology optimization using preconditioned Krylov subspace methods with recycling, Internat. J. Numer. Methods Engrg., 69 (2007), pp. 2441-2468.

* Received September 19, 2013. Accepted May 19, 2015. Recommended by D. Szyld. Published online on October 7, 2015. The work of André Gaul was supported by the DFG Forschungszentrum MATHEON. The work of Nico Schlömer was supported by the Research Foundation Flanders (FWO).

ANDRÉ GAUL ([dagger]) AND NICO SCHLÖMER ([double dagger])

([dagger]) Institut für Mathematik, Technische Universität Berlin, Straße des 17. Juni, D-10623 Berlin, Germany (gaul@math.tu-berlin.de).

([double dagger]) Departement Wiskunde en Informatica, Universiteit Antwerpen, Middelheimlaan 1, B-2020 Antwerpen, Belgium (nico.schloemer@ua.ac.be).

Table 2.1
Storage requirements and computational cost of the projection operators P and P* (cf. Definition 2.3 and Lemma 2.4). All vectors are of length N, i.e., the number of degrees of freedom of the underlying problem. Typically, N >> d.

(a) Storage requirements

                                 vectors    other
  U                              d          -
  C = AU                         d          -
  E = <U, C>_H or E^{-1}         -          d^2

(b) Computational cost

                                 applications of    inner       solves    vector
                                 A      M^{-1}      products    with E    updates
  Construction of C and E        d      -           d(d+1)/2    -         -
  Application of P or P*         -      -           d           1         d
  Application of correction      -      -           d           1         d

Table 2.2
Computational cost for one iteration of Algorithm 2.1 (lines 3-8) with n MINRES iterations and d deflation vectors. The number of computed Ritz vectors is also d. Operations that do not depend on the dimension N := dim H are neglected.

                                 applications of           inner         vector
                                 A      M^{-1}    M        products      updates
  Orthogonalization              -      -         d        d(d+1)/2      d(d+1)/2
  Setup of P* and x_0            d      -         -        d(d+3)/2      d
  n MINRES iterations            n+1    n+1       -        n(d+2)+d      n(d+7)+d
  Comp. of Ritz vectors          -      d         -        -             d(d+n)
  (Comp. of Ritz res. norms)     -      -         -        d^2           -