
On a non-stagnation condition for GMRES and application to saddle point matrices.

1. Introduction. A real n x n matrix A is said to be positive definite (or positive real) if [x.sup.T] Ax > 0 for any real nonzero vector x of length n, where [x.sup.T] is the transpose of x. A similar definition holds for negative definite matrices. Large nonnormal real linear systems of the form Ax = b are known to be particularly difficult to solve by iterative Krylov subspace methods whenever A is not definite, that is when the quantity [x.sup.T] Ax changes sign depending on x; in fact, full stagnation is possible for as many as n - 1 iterations [1, 20]. On the other hand, it was shown by H. Elman in 1982 [12] (see also [11]) that a minimal residual iteration method such as GMRES [24] cannot stagnate, as long as A is positive definite. Therefore, indefiniteness appears to be a tricky property. A new non-stagnation condition was recently proposed in [25]: under certain hypotheses on the matrix A, there are no two or more consecutive steps of stagnation in the iterative process also for indefinite problems. A few examples were proposed in [25] to support the new result. The original result by Elman in [12] has been extensively used in the context of domain decomposition: by proving that the involved quantities are, say, mesh independent, researchers have deduced that the convergence of the method, in terms of number of iterations, does not depend on the mesh either. We refer to [25] for a more detailed discussion and for pointers to the literature.
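
Positive realness is equivalent to positive definiteness of the symmetric part H = (A + [A.sup.T])/2, which gives a simple computational check. The following numpy sketch (illustrative only, with made-up matrices) tests it:

```python
import numpy as np

def is_positive_real(A, tol=0.0):
    """x^T A x > 0 for all real nonzero x  <=>  the symmetric part
    H = (A + A^T)/2 has only positive eigenvalues."""
    H = (A + A.T) / 2
    return bool(np.linalg.eigvalsh(H).min() > tol)

# a nonsymmetric matrix can be positive real ...
A = np.array([[2.0, -3.0],
              [1.0,  2.0]])
# ... while an indefinite one changes the sign of x^T A x with x
B = np.array([[1.0,  0.0],
              [0.0, -1.0]])
print(is_positive_real(A), is_positive_real(B))
```

Note that only the symmetric part enters the definition: the skew-symmetric part of A contributes nothing to [x.sup.T] Ax.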

Starting from [25], in this paper we propose an enhanced strategy leading to a more general condition, which allows us to expand the set of matrices for which long-term stagnation of GMRES cannot occur. We also show that the analysis provides a good setting to derive new asymptotic convergence rate estimates for indefinite problems. Our discussion is based on GMRES, but the results hold for any other minimal residual method such as GCR [2, 24, 27]. The new condition and convergence analysis are then explored in the context of saddle point matrices, when these are preconditioned in a way so as to lead to non-symmetric and indefinite problems. We show that for these sets of matrices, our condition is readily verified when the preconditioner is effective for the problem. These matrices may thus represent an insightful training set for understanding the interaction between indefiniteness and stagnation.

When it comes to nonsymmetric problems, the normal equation is a classical alternative. However, solving the normal equation, whose coefficient matrix is symmetric and positive definite, by a Krylov subspace method may be very inefficient. Indeed, good spectral properties of the preconditioned matrix do not necessarily imply well-conditioning of the same matrix [19, sec. 7.1]. In our experimental analysis we shall also report on such a situation.
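
The conditioning issue can be seen on a tiny example of my own choosing (not from the paper): a matrix may have perfectly clustered eigenvalues while its condition number, and a fortiori that of the normal equation matrix, is huge:

```python
import numpy as np

# both eigenvalues equal 1 (perfect "spectral properties"), yet the
# condition number is large, and the normal equation squares it
A = np.array([[1.0, 1e3],
              [0.0, 1.0]])
eigs = np.linalg.eigvals(A)
kA = np.linalg.cond(A)          # ~ 1e6
kN = np.linalg.cond(A.T @ A)    # ~ kA**2 ~ 1e12
print(eigs, kA, kN)
```

This is exactly the situation of [19, sec. 7.1]: eigenvalue clustering alone says nothing about the behavior of CG on the normal equation.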

Throughout the paper [x.sup.T] and [x.sup.*] indicate the transpose and the conjugate transpose of a vector x, respectively. The Euclidean norm for vectors and the induced matrix 2-norm will be used.

2. Non-stagnation condition. In this section we describe a generalization of the sufficient non-stagnation condition given in [25], together with a discussion on its sharpness. We first recall an important result of Grcar for a general degree [kappa] polynomial, which is the basis for further developments. Here and in the following, [x.sub.0] is an initial approximate solution, and [r.sub.0] = b - A[x.sub.0] is the associated residual. Subsequent iterates are denoted as [x.sub.k] and [r.sub.k], respectively.

THEOREM 2.1. [18] Let [[phi].sub.k] be a polynomial of degree at most k, with [[phi].sub.k](0) = 0, and such that [[phi].sub.k](A) is positive or negative definite. Then for every [x.sub.0], the affine space [x.sub.0] + span{[r.sub.0], A[r.sub.0], ..., [A.sup.k-1][r.sub.0]} contains a vector [??] for which [parallel]b - A[??][parallel] [less than or equal to] [eta][parallel][r.sub.0][parallel], where

[eta] = [(1 - [[mu].sup.2]/[[parallel][[phi].sub.k](A)[parallel].sup.2]).sup.1/2], [mu] = [min.sub.[parallel]x[parallel]=1] [absolute value of [x.sup.T][[phi].sub.k](A)x].

The result states that if it is possible to determine [[phi].sub.k] such that [[phi].sub.k] (A) is positive definite, then the residual norm after k iterations must decrease with respect to the initial one. Elman's condition in [12] was obtained for [[phi].sub.k]([lambda]) = [lambda] (k = 1), thus requiring that A itself be positive definite. In [25] the case [[phi].sub.2]([lambda]) = [[lambda].sup.2] was discussed. Therefore, as opposed to the problem statement in [18], in the two references above the polynomial was fixed, and a corresponding condition on A was derived. Note however that Elman's condition was stated before Grcar's result.
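
For k = 1 this reduces to Elman's bound, which is easy to verify numerically. The sketch below (toy data, assuming a generic positive definite A) performs one minimal residual step by hand and checks the bound:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
A = rng.standard_normal((n, n)) + n * np.eye(n)   # positive real: (A+A^T)/2 > 0
H = (A + A.T) / 2
r0 = rng.standard_normal(n)

# one minimal-residual (GMRES(1)-type) step: minimize ||r0 - beta * A r0||
beta = (r0 @ (A @ r0)) / np.linalg.norm(A @ r0) ** 2
r1 = r0 - beta * (A @ r0)

# Elman's bound: ||r1|| <= (1 - lambda_min(H)^2 / ||A||^2)^{1/2} ||r0||
eta = np.sqrt(1.0 - np.linalg.eigvalsh(H).min() ** 2 / np.linalg.norm(A, 2) ** 2)
assert 0.0 < eta < 1.0
assert np.linalg.norm(r1) <= eta * np.linalg.norm(r0) + 1e-12
```

Since [eta] < 1 strictly whenever the symmetric part is definite, the residual must decrease at every step: this is the non-stagnation mechanism used throughout the paper.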

In the following we shall expand on Theorem 2.1 using again polynomials of degree 2, but with a problem-dependent coefficient. Higher degree polynomials seem to be infeasible, since testing the condition becomes computationally demanding and in most cases unrealistic; see the detailed discussion in [25].

Let H = (A + [A.sup.T])/2, S = (A - [A.sup.T])/2 and [[phi].sub.2]([lambda]) = [lambda]([lambda] - [alpha]) for some non-negative [alpha] [member of] R. Note that using a non-monic polynomial would not change the result of our analysis.

LEMMA 2.2. Let [[phi].sub.2]([lambda]) = [lambda]([lambda] - [alpha]), [alpha] [member of] R, [alpha] [greater than or equal to] 0. The matrix [[phi].sub.2](A) > 0, that is [[phi].sub.2](A) is positive real, if and only if the matrix (-[S.sup.T] S + [[phi].sub.2](H)) is also positive real.

Proof. For 0 [not equal to] x [member of] [R.sup.n] it holds

[x.sup.T][[phi].sub.2](A)x = [x.sup.T]([A.sup.2] - [alpha]A)x = [x.sup.T]([H.sup.2] + [S.sup.2] - [alpha]H)x = [x.sup.T](-[S.sup.T]S + [[phi].sub.2](H))x,

since [x.sup.T]Sx = 0 and [x.sup.T](HS + SH)x = 0 (both S and HS + SH are skew-symmetric), and [S.sup.2] = -[S.sup.T]S.

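The identity in the proof says that the symmetric part of [[phi].sub.2](A) is exactly [[phi].sub.2](H) - [S.sup.T]S, which can be confirmed numerically (a quick sketch with a random matrix of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 6, 0.7                 # arbitrary size and shift
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                 # symmetric part
S = (A - A.T) / 2                 # skew-symmetric part

def phi2(M):
    return M @ M - alpha * M      # phi_2(lambda) = lambda(lambda - alpha)

def sym(M):
    return (M + M.T) / 2

# the quadratic form of phi_2(A) is carried entirely by its symmetric
# part, which the proof shows equals phi_2(H) - S^T S
assert np.allclose(sym(phi2(A)), phi2(H) - S.T @ S)
```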
PROPOSITION 2.3. Assume that [[phi].sub.2](H) > 0. Then [[phi].sub.2] (A) > 0 if and only if [parallel]S[[phi].sub.2][(H).sup.-1/2][parallel] < 1.

Proof. Using Lemma 2.2, the proof follows the same lines as that of [25, Theorem 3.2].

If [[phi].sub.2](H) is indefinite and S is nonsingular, then a corresponding result may be stated: [[phi].sub.2](A) < 0 if and only if [parallel][([S.sup.T] S).sup.-1][[phi].sub.2](H)[parallel] < 1.

In [25], the relation of Proposition 2.3 was proved for [[phi].sub.2]([lambda]) = [[lambda].sup.2], that is for [alpha] = 0. However, better choices of [[phi].sub.2] may be possible, although in general the optimal [[phi].sub.2] is hard to find. In the following we propose a strategy for determining [alpha] [greater than or equal to] 0 such that [[phi].sub.2](H) > 0 and the smallest eigenvalue of [[phi].sub.2](H) is as large as possible, in particular larger than that of [H.sup.2], so as to improve on the choice [[phi].sub.2]([lambda]) = [[lambda].sup.2]. In general, fulfilling these requirements for [alpha] does not ensure that the condition [parallel]S[[phi].sub.2][(H).sup.-1/2][parallel] < 1 is satisfied; however, the proposed framework may significantly enlarge the set of matrices for which the non-stagnation condition holds.
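
The resulting test can be sketched as follows, on toy matrices of my own choosing. The pencil formulation below is the one used in the experiments of Section 4: the largest eigenvalue of the pencil ([S.sup.T]S, [[phi].sub.2](H)) being below one is equivalent to [parallel]S[[phi].sub.2][(H).sup.-1/2][parallel] < 1. In this example the test fails for [alpha] = 0 but passes for the computed [alpha]:

```python
import numpy as np
from scipy.linalg import eigh

# toy data: H has its positive spectrum farther from the origin than
# its negative part, as in the saddle point applications of Section 4
theta = np.array([-0.2, 1.0, 1.5, 2.0])
H = np.diag(theta)
S = np.array([[ 0.0,  0.3,  0.0,  0.0],
              [-0.3,  0.0,  0.0,  0.0],
              [ 0.0,  0.0,  0.0,  0.2],
              [ 0.0,  0.0, -0.2,  0.0]])    # skew-symmetric part

lam_minus = theta[theta < 0].max()          # negative eigenvalue closest to 0
lam_plus = theta[theta > 0].min()           # positive eigenvalue closest to 0
alpha = max(0.0, lam_plus + lam_minus)      # = 0.8 here

phi2H = H @ H - alpha * H                   # phi_2(H); must be positive definite
assert np.linalg.eigvalsh(phi2H).min() > 0

# largest eigenvalue of the pencil (S^T S, phi_2(H)) below one
# is equivalent to ||S phi_2(H)^{-1/2}|| < 1
mu = eigh(S.T @ S, phi2H, eigvals_only=True).max()
mu0 = eigh(S.T @ S, H @ H, eigvals_only=True).max()
print(mu, mu0)   # the test passes with alpha = 0.8, fails with alpha = 0
```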

We stress that our analysis is based on Theorem 2.1, therefore it provides a sufficient but not necessary non-stagnation condition. Moreover, it is difficult to find general a-priori sufficient conditions so as to avoid the test on [parallel]S[[phi].sub.2][(H).sup.-1/2][parallel]. Indeed, let S = [X.sub.S](i[SIGMA])[X.sup.*.sub.S], H = [X.sub.H][THETA][X.sup.*.sub.H], with [SIGMA] = diag([[sigma].sub.1], ..., [[sigma].sub.n]) and [THETA] = diag([[theta].sub.1], ..., [[theta].sub.n]), be the eigendecompositions of the normal matrices S and H, respectively. Then, setting w = [X.sup.*.sub.S]x for real x,

[x.sup.T][[phi].sub.2](A)x = -[x.sup.T][S.sup.T]Sx + [x.sup.T][[phi].sub.2](H)x = -[w.sup.*][[SIGMA].sup.2]w + [x.sup.T][[phi].sub.2](H)x [greater than or equal to] (-[max.sub.i][[sigma].sup.2.sub.i] + [min.sub.j][[phi].sub.2]([[theta].sub.j]))[[parallel]x[parallel].sup.2].

Since [X.sub.S], [X.sub.H] are unitary, the following simple condition holds:

(2.1) if [max.sub.i][[sigma].sup.2.sub.i] < [min.sub.j][[phi].sub.2]([[theta].sub.j]), then [[phi].sub.2](A) > 0,

and this has inspired us to develop the strategy for the computation of a more general polynomial [[phi].sub.2]. Undoubtedly, the statement (2.1) yields an unnecessarily strict condition. Indeed, in many applications S is highly singular. If it is known a-priori that Range(S) has no projection onto the invariant subspace of H associated with the smallest values of [[phi].sub.2]([theta]), then the condition in (2.1) may be required to hold only on the remaining eigenvalues of H.

We are left with the selection of [[phi].sub.2] such that [[phi].sub.2](H) > 0. To derive [alpha] in [[phi].sub.2]([lambda]) = [lambda]([lambda] - [alpha]) we start by noticing that for any [alpha] > 0 it holds

[[phi].sub.2]([theta]) = [[theta].sup.2] - [alpha][theta] > [[theta].sup.2] for [theta] < 0, and [[phi].sub.2]([theta]) < [[theta].sup.2] for [theta] > 0.

In other words, while a nonzero [alpha] does a better job than [alpha] = 0 at mapping the negative eigenvalues of H away from zero and to the right, this is not so for the positive eigenvalues of H. To balance the two effects, we thus compute [alpha] [greater than or equal to] 0 so that [[phi].sub.2]([[lambda].sub.-]) = [[phi].sub.2]([[lambda].sub.+]), where [[lambda].sub.-] and [[lambda].sub.+] are the negative and positive eigenvalues of H closest to zero; solving [[lambda].sup.2.sub.-] - [alpha][[lambda].sub.-] = [[lambda].sup.2.sub.+] - [alpha][[lambda].sub.+] gives [alpha] = [[lambda].sub.+] + [[lambda].sub.-]. Taking into account the positivity constraint we thus set

[alpha] := max{0, [[lambda].sub.+] + [[lambda].sub.-]}.

With this choice, a polynomial of degree two different from the simple second power is determined whenever the positive part of the spectrum of H is farther away from the origin than the negative part. It turns out that this special structure is rather natural in certain preconditioned matrices stemming from algebraic saddle point problems.

Finally, we stress that this analysis requires the knowledge of both [[lambda].sub.-] and [[lambda].sub.+], although their accurate computation is not necessary. This may be obtained by an approximate spectral computation either on the given problem, or under certain conditions, on a smaller dimensional version of the same problem.
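
A sketch of such an approximate computation follows (a toy diagonal H of my own choosing stands in for the true symmetric part). A shift-invert sparse eigensolver targets precisely the eigenvalues of H nearest zero, which is all that is needed:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# only rough values of lambda_- and lambda_+ are needed, so we ask a
# shift-invert Lanczos solver for the two eigenvalues of H nearest 0
H = diags(np.array([-0.21, 0.95, 1.5, 2.1, 3.0, 4.2])).tocsc()
near_zero = eigsh(H, k=2, sigma=0.0, which='LM', return_eigenvectors=False)
lam_minus, lam_plus = near_zero.min(), near_zero.max()
alpha = max(0.0, lam_plus + lam_minus)
print(lam_minus, lam_plus, alpha)
```

On a smaller dimensional version of the problem, as suggested above, the same call is essentially free.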

3. Beyond non-stagnation. If [[phi].sub.2](A) is positive definite, then we can improve our understanding of the convergence rate of GMRES by adapting known bounds.

Let [P.sub.[kappa]] be the set of polynomials of degree less than or equal to [kappa]. Assume A is diagonalizable and let A = X[LAMBDA][X.sup.-1], so that [[phi].sub.2](A) = X[[phi].sub.2]([LAMBDA])[X.sup.-1]. Setting [parallel][r.sub.0][parallel] = 1, the following bound for the GMRES residual norm after k iterations is well established:

(3.1) [parallel][r.sub.k][parallel] [less than or equal to] [kappa](X) [min.sub.p[member of][P.sub.k], p(0)=1] [max.sub.[lambda][member of]spec(A)] [absolute value of p([lambda])],

where [kappa](X) = [parallel]X[parallel] [parallel][X.sup.-1][parallel] is the condition number of X in the spectral 2-norm. The bound can be generalized to the case of non-diagonalizable matrices [17]. The estimate in (3.1) is attractive when [kappa](X) is moderate, and all eigenvalues have positive real part, since it is otherwise "impossible to have a polynomial that is one at the origin and less than one everywhere on some closed curve around the origin" [19, p. 55].

Since in our framework the origin is surrounded by eigenvalues, the classical bound above needs to be modified. We next derive a result that has been used for similar purposes for A symmetric and indefinite [16], [19, p. 53]; see also the experimental evidence for A nonsymmetric in [13]. We also refer to [15] for a thorough analysis of polynomial mappings to analyze the spectral properties of preconditioned Krylov subspace methods.

PROPOSITION 3.1. The GMRES residual satisfies

(3.2) [parallel][r.sub.2[kappa]][parallel] [less than or equal to] [kappa](X) [min.sub.q[member of][P.sub.[kappa]], q(0)=1] [max.sub.[lambda][member of]spec(A)] [absolute value of q([[phi].sub.2]([lambda]))],

for any polynomial [[phi].sub.2] of second degree satisfying [[phi].sub.2](0) = 0.

Proof. We have

[parallel][r.sub.2[kappa]][parallel] = [min.sub.p[member of][P.sub.2[kappa]], p(0)=1] [parallel]p(A)[r.sub.0][parallel] [less than or equal to] [min.sub.q[member of][P.sub.[kappa]], q(0)=1] [parallel]q([[phi].sub.2](A))[r.sub.0][parallel] [less than or equal to] [kappa](X) [min.sub.q[member of][P.sub.[kappa]], q(0)=1] [max.sub.[lambda][member of]spec(A)] [absolute value of q([[phi].sub.2]([lambda]))],

where the first inequality holds because each q([[phi].sub.2]([lambda])) is a polynomial of degree at most 2[kappa] with value one at zero, since [[phi].sub.2](0) = 0.

The bound of Proposition 3.1 shows that if [[phi].sub.2](A) > 0, we expect the norm of the residual after 2[kappa] iterations to be effectively bounded by the solution to the min-max problem associated with a polynomial q of degree [kappa]. Therefore, because of the indefiniteness of the problem the performance of GMRES seems to be penalized, as only even numbered iterations contribute to decreasing the residual norm, whereas the residual may stagnate at every other iteration. On the other hand, stagnation does occur in practice in the indefinite case, and therefore the result cannot be improved in this respect. In the left plot of Figure 3.1 we report the convergence history of GMRES for the problem in Example 4.2 with A = M[P.sup.-1.sub.t,aug], with M and [P.sub.t,aug] as described in Section 4. The right plot displays the spectrum of the preconditioned matrix, where the two clusters are clearly visible. Although convergence is fast, the curve in the left plot shows a staircase behavior, corresponding to single iterations with (almost) full stagnation.
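
The mechanism in Proposition 3.1 is plain polynomial composition: any q of degree [kappa] with q(0) = 1 yields the admissible residual polynomial q([[phi].sub.2]([lambda])) of degree 2[kappa]. A small sketch with numpy's polynomial class, with coefficients chosen arbitrarily:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

alpha = 0.8
phi2 = P([0.0, -alpha, 1.0])      # phi_2(lambda) = lambda^2 - alpha*lambda
q = P([1.0, -0.9, 0.2])           # any q with q(0) = 1, here of degree kappa = 2

p = q(phi2)                       # composition: a residual polynomial of degree 2*kappa
assert p.degree() == 2 * q.degree()
assert abs(p(0.0) - 1.0) < 1e-12  # admissible for GMRES since phi_2(0) = 0
```

Only half of the 2[kappa] degrees of freedom are exploited, which is why odd iterations may contribute nothing.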


Proposition 3.1 indicates that it is possible to use the results available for the convergence of GMRES when the field of values of the matrix [[phi].sub.2](A) is sharply contained in some particular domain of [C.sup.+]. For instance, if the field of values of [[phi].sub.2](A) is contained in an ellipse, then the solution to the min-max matrix problem in the proof of Proposition 3.1 may be bounded from above by a scaled Chebyshev polynomial of degree [kappa] [19].

More specialized inclusion sets allow us to sharpen this result, and these will turn out to be particularly appropriate for the problems discussed in the next section. Using [8, Proposition 4.1] or [24], we can directly derive the following result.

PROPOSITION 3.2. Assume that all eigenvalues of [[phi].sub.2](A) are enclosed in the disk centered at one with radius [rho] < 1. Then

(3.3) [parallel][r.sub.2k][parallel] [less than or equal to] C [[rho].sup.k][parallel][r.sub.0][parallel],

where the constant C is independent of k but depends either on [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] or on [kappa](X).

The estimate in (3.3) can be generalized to other disks. The result is mostly useful when [rho] is small, which is usually achieved with an effective preconditioning strategy. Note that the positive definiteness of [[phi].sub.2](A) ensures that all its eigenvalues have positive real part. In fact, if the result were stated only in terms of eigenvalues, it would be sufficient that all eigenvalues satisfy [absolute value of R([lambda])] > [absolute value of T([lambda])], where R and T denote the real and imaginary parts, to obtain R([[lambda].sup.2]) > 0. It is not clear whether checking this condition would be inexpensive.
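
The quantity [[rho].sup.k] in (3.3) is just the size of the candidate polynomial [(1 - z).sup.k] on the disk, as the following sketch checks by sampling the disk boundary (where, by the maximum principle, the maximum is attained):

```python
import numpy as np

# if the spectrum of phi_2(A) lies in |z - 1| <= rho, the polynomial
# q(z) = (1 - z)^k equals one at the origin and has modulus at most
# rho^k on the whole disk
rho, k = 0.3, 5
angles = np.linspace(0.0, 2.0 * np.pi, 400)
boundary = 1.0 + rho * np.exp(1j * angles)    # worst case: disk boundary
qmax = np.abs((1.0 - boundary) ** k).max()
assert qmax <= rho ** k + 1e-12
```

Combined with Proposition 3.1, this gives the linear convergence factor [rho] per pair of iterations used below.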

Starting from [parallel]q([[phi].sub.2](A))[r.sub.0][parallel], a result similar to Proposition 3.2 may be stated in terms of the field of values of [[phi].sub.2](A), in which case the constant C is moderate but the radius may be much larger [22].

4. Application to saddle point matrices. Structured linear systems of the form

Mz = b, M = [F, [B.sup.T]; B, -[beta]C],

arise in a large variety of applications, commonly stemming from constrained optimization problems associated with differential operators. Here F [member of] [R.sup.nxn] is in general non-normal (although we shall also consider the specialized symmetric case), B [member of] [R.sup.mxn] with m [less than or equal to] n is rectangular and C is symmetric and positive semidefinite.

Due to the unfavorable spectral properties of the matrix, preconditioning is usually mandatory. Preconditioners that try to reproduce the structure of the coefficient matrix while maintaining affordable computational costs are often very effective. We refer to [6] for a thorough discussion on various alternative structured preconditioners. Here we focus on some specific examples, which give rise to indefinite and nonsymmetric preconditioned matrices M [P.sup.-1], with P having the possible forms

(4.1) [P.sub.d] = [[??], 0; 0, [+ or -]W], [P.sub.t] = [[??], [B.sup.T]; 0, [+ or -]W],

where [??] and W are nonsingular properly chosen matrices, with W usually symmetric and positive definite. Note that M[P.sup.-1.sub.t] is nonsymmetric even when F is symmetric. When F is nonsingular, the blocks are often taken so that [??] [approximately equal to] F and W [approximately equal to] [BF.sup.-1][B.sup.T] + [beta]C [6, 14]. On the other hand, in certain applications where F is singular or indefinite the so-called augmentation block preconditioner is particularly appealing. In this setting, the matrix W is chosen so that [??] = F + [B.sup.T][W.sup.-1]B is positive definite; see, e.g., [3, 5, 21]. In some cases the choice W = [gamma]I for some positive value of [gamma] suffices. Moreover, in practice the exact matrix F + [B.sup.T][W.sup.-1]B is replaced by some approximation, and depending on the application this preconditioner is actually applied to the augmentation formulation of the problem; see [6] and also Example 4.4. We shall use the notation [P.sub.d,aug] and [P.sub.t,aug] when we wish to emphasize the use of the augmentation version of the preconditioner. It was shown in [21] that the exact augmentation block diagonal preconditioner [P.sub.d,aug] (that is, with [??] = F + [B.sup.T][W.sup.-1]B) with +W as (2,2) block has eigenvalues either at one, or in the neighborhood of minus one. In addition, it was recently shown in [9] that the augmentation block triangular preconditioner [P.sub.t,aug] (with [??] = F + [B.sup.T][W.sup.-1]B) with +W has eigenvalues either at one, or in some negative neighborhood. On the other hand, both preconditioned problems with the choice -W have positive definite coefficient matrices; see, e.g., [14, sec. 8.1].
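
The clustering described above is easy to reproduce on a toy saddle point problem (random data of my own choosing, C = 0, and exact blocks [??] = F and W = [BF.sup.-1][B.sup.T]): the preconditioned matrix has only the two eigenvalues [+ or -]1, so it is strongly clustered yet indefinite:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
F = rng.standard_normal((n, n))
F = F @ F.T + n * np.eye(n)                        # s.p.d. (1,1) block
B = rng.standard_normal((m, n))
M = np.block([[F, B.T], [B, np.zeros((m, m))]])    # saddle point matrix, C = 0

W = B @ np.linalg.solve(F, B.T)                    # exact Schur complement
Pt = np.block([[F, B.T], [np.zeros((m, n)), W]])   # block triangular, +W

A = M @ np.linalg.inv(Pt)
eigs = np.sort(np.linalg.eigvals(A).real)
# m eigenvalues at -1 and n eigenvalues at +1: clustered but indefinite
print(eigs[:m], eigs[m:])
```

With inexact blocks the two clusters spread into neighborhoods of [+ or -]1, which is the situation analyzed in the rest of the section.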

The occurrence of an indefinite spectrum is however not restricted to the use of augmented blocks, as an indefinite preconditioned matrix always arises when [P.sub.t], [P.sub.d] are used with +W in the (2,2) block; see, e.g., [14, sec. 8.1]. Unless explicitly stated, in the rest of the paper we discuss the case where the (2,2) block has the plus sign, which leads to an indefinite preconditioned matrix.

The spectral properties described above, and in particular the indefiniteness of these preconditioned matrices, do not seem to be appealing for Krylov subspace methods, especially considering that simply changing the sign in the (2,2) block completely solves the indefiniteness problem. On the other hand, practical experience has demonstrated that well selected preconditioning blocks may lead to very fast convergence in spite of the indefiniteness. In addition, the mere fact of being positive definite does not necessarily make a preconditioner effective. These considerations seem to justify a deeper analysis of the performance of the indefinite problem.

The discussed clustering may be exploited for evaluating the non-stagnation condition. Using the results of the previous sections applied to A = M [P.sup.-1], we show that for several examples stemming from preconditioned saddle point matrices with [P.sub.t] our non-stagnation condition is satisfied. On the other hand, we should remark that our non-stagnation condition was not satisfied when using [P.sub.d] and no augmentation. This choice usually leads to three clusters, which may be viewed as perturbations of the three multiple eigenvalues obtained by the optimal preconditioner [P.sub.d] = blkdiag(F, [BF.sup.-1][B.sup.T] + [beta]C) [14, sec. 8.1]. This different clustering may influence the test; we plan to further investigate the problem.

In case the test is passed, GMRES will not stagnate for more than one step. Moreover, it is possible to derive an estimate of the asymptotic convergence rate if spectral information of [[phi].sub.2](A) is available.

All reported data were obtained with the IFISS software [14]; the test problems are described in [14, Chapters 5, 7], and all runs were performed in MATLAB [23].

Example 4.1. We consider the Stokes problem stemming from the simulation of a steady horizontal flow in a channel driven by a pressure difference between the two ends [14, Example 5.1.1]. All parameters except the discretization grid size were given default values (i.e., natural outflow boundary, Q1-P0 elements, stabilization parameter 1/4, uniform streamlines); the resulting matrix M is symmetric and indefinite. Table 4.1 reports all relevant parameters for our analysis for B of size 64 x 162 and for various choices of the blocks [??] and W in the block triangular preconditioner [P.sub.t]. In particular, we used [??] = F or [??] = cholinc(F, 0.1) or an Algebraic Multigrid preconditioner [F.sub.amg] [7], whereas for the (2,2) block we used W = Q where Q is the mass matrix for the pressure space, W = [B[??].sup.-1][B.sup.T] or W = 10 * [B[??].sup.-1][B.sup.T]. The remaining columns in the table show the smallest (negative) eigenvalue of H; the largest eigenvalues of the pencils ([S.sup.T]S, [H.sup.2]) and ([S.sup.T]S, [H.sup.2] - [alpha]H), which are used to check the non-stagnation condition; and the computed value of [alpha]. For completeness and to test the quality of the chosen preconditioners we also report the number of iterations for the GMRES residual norm to fall below [10.sup.-8].

We observe that in many cases the non-stagnation condition is satisfied, and in particular this is so whenever the preconditioner is effective, as shown by the low number of iterations to converge. We also observe that in several cases (underlined values) the choice of [alpha] [not equal to] 0 allowed the test to be passed whereas [alpha] = 0 failed. It is interesting that the choice W = [B[??].sup.-1][B.sup.T] (adding the term [beta]C would make it an approximation to the Schur complement) is not good without a proper scaling, at least for the non-stagnation condition, showing that W has an important role in the size of the symmetric part of the matrix.

We next experimentally verify that if the preconditioner is optimal, then the quantities computed to test our non-stagnation condition are constant as the problem size increases. Therefore, the non-stagnation condition may be tested inexpensively on the smallest size problem. Alternatively, by using the independence of [alpha] with respect to the mesh parameter, we could provide an upper bound of the convergence rate which does not depend on the problem size. Table 4.2 displays the relevant values as the problem dimension increases, for the preconditioner [P.sub.t] with [??] = F and W = Q.

Example 4.2. We consider the data stemming from the discretization of a linearized Navier-Stokes problem, simulating a so-called flow over a step, a flow expanding in an L-shaped domain [14, Example 7.1.2]. All problem parameters were given default values, i.e., horizontal dimension L = 5, uniform outflow, Q1-P0 elements, viscosity v = 1/50, hybrid linearization with 2 and 4 Picard and Newton iterations respectively, nonlinear tolerance [10.sup.-5] and stabilization parameter [beta] = 1/4. The resulting matrix B has size 176 x 418 and F is nonsymmetric. The relevant quantities for our non-stagnation test are reported in Table 4.3 for both [P.sub.d,aug] and [P.sub.t,aug], using the exact augmented (1,1) block and various choices of W: W(tol) = [B[??].sup.-1][B.sup.T] + [beta]C, with either [??] = F or its incomplete LU decomposition with dropping tolerance tol, or W = Q. The choice W([10.sup.-2]) was used to create the data of the plots in Figure 3.1. We also report the performance of [P.sub.t] when [??] is the incomplete LU decomposition of F (with drop tolerance [10.sup.-2]) and W is either the approximate Schur complement [W.sub.1] = [B[??].sup.-1][B.sup.T] + [beta]C or [W.sub.2] = [B[??].sup.-1][B.sup.T].

We see that in most cases the non-stagnation condition is satisfied. As a confirmation of the analysis of Section 3, we also compute an estimate of the asymptotic convergence rate of the method with [P.sub.t] and W([10.sup.-2]). The left plot of Figure 4.1 shows the spectrum of the squared preconditioned matrix [(M[P.sup.-1.sub.t]).sup.2] (circles), and the eigenvalues (crosses) of the matrix M[P.sup.-1.sub.t,-], where now [P.sub.t,-] has -W([10.sup.-2]) (negative sign) as (2,2) block. Although squared, the former eigenvalues are more clustered. The right plot of Figure 4.1 shows the convergence history of GMRES when using both preconditioners [P.sub.t] (dashed curve) and [P.sub.t,-] (solid curve). Once again, the dashed curve clearly indicates a staircase-like behavior. In addition, it is remarkable that in spite of the stagnation steps, the convergence rate is identical to that of the positive definite problem. Also reported (dash-dotted line) is the estimated convergence rate for M[P.sup.-1.sub.t], using [[rho].sup.[kappa]/2] in (3.3) with [rho] = 0.04, visually detected from the left plot of Figure 4.1. We note that the initial convergence phase for both preconditioned problems is well captured by the estimate, before adaptation to the spectrum takes place. Below, we linger over this issue and propose an explanation.

Table 4.4 displays the quantities computed to test our non-stagnation condition as the problem size increases for [P.sub.t] and W = [beta]C + [BF.sup.-1][B.sup.T], confirming the independence of these quantities with respect to the problem dimension. In all cases [alpha] = 0, that is, the polynomial [[phi].sub.2]([lambda]) = [[lambda].sup.2] was successfully used for the test.

We conclude this example by noticing that we could also have considered using the normal equation associated with the problem M[P.sup.-1][??] = b and then solved with, e.g., the Conjugate Gradient method (CG), since the eigenvalues of the symmetric matrix [(M[P.sup.-1]).sup.T] M[P.sup.-1] are all positive. However, performance is usually poor. For instance, for these data (with m = 176, n = 418) and [P.sub.t] with [??] = luinc(F, [10.sup.-2]) and W = [beta]C + [B[??].sup.-1][B.sup.T], CG applied to the normal equation requires 498 iterations to achieve a relative residual norm below [10.sup.-9], whereas GMRES applied to M[P.sup.-1][??] = b takes 11 iterations. According to a well-known analysis [19, sec. 7.1], the obtained solutions are also quantitatively different, since the error norm for the CG solution is more than two orders of magnitude larger than for GMRES.

We next address the question of whether we expect the two preconditioners [P.sub.t,+] and [P.sub.t,-] to behave similarly, where the subscript + or - refers to the use of the plus or minus sign in the (2,2) block in (4.1). For both preconditioners we set W = [B[??].sup.-1][B.sup.T] + [beta]C and we obtain

M[P.sup.-1.sub.t,[+ or -]] = [I + E, [- or +]E[B.sup.T][W.sup.-1]; B[[??].sup.-1], [- or +]I],

where E = (F - [??])[[??].sup.-1]. Therefore, both preconditioned matrices are [parallel]E[parallel] perturbations of block triangular matrices having at most two distinct eigenvalues. In addition, we have

[(M[P.sup.-1.sub.t,+]).sup.2] = I + [DELTA], with [parallel][DELTA][parallel] = O([parallel]E[parallel]).

That is, the squared matrix [(M[P.sup.-1.sub.t,+]).sup.2] is a nonsymmetric perturbation of the identity matrix. The following proposition can thus be stated, where we denote by [lambda](G) an eigenvalue of the matrix G.
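
This can be observed numerically: with exact blocks the squared preconditioned matrix is the identity, while with an inexact (1,1) block (here a simple shift F + [epsilon]I of my own choosing, C = 0) the deviation from the identity scales with the inexactness:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
F = rng.standard_normal((n, n))
F = F @ F.T + n * np.eye(n)                           # s.p.d. (1,1) block
B = rng.standard_normal((m, n))
M = np.block([[F, B.T], [B, np.zeros((m, m))]])       # C = 0 for simplicity

def delta_norm(eps):
    """|| (M Pt+^{-1})^2 - I || for an inexact (1,1) block Ft = F + eps*I."""
    Ft = F + eps * np.eye(n)                          # toy inexactness model
    W = B @ np.linalg.solve(Ft, B.T)                  # Schur complement of Ft
    Pt = np.block([[Ft, B.T], [np.zeros((m, n)), W]])
    A = M @ np.linalg.inv(Pt)
    return np.linalg.norm(A @ A - np.eye(n + m), 2)

print(delta_norm(0.0), delta_norm(1e-2), delta_norm(1e-1))
```

Here E = (F - [??])[[??].sup.-1] shrinks linearly with [epsilon], and so does the computed deviation [parallel][DELTA][parallel].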

PROPOSITION 4.3. With the previous notation, let T(E) = (E - [E.sup.T])/(2i). For [parallel]E[parallel] sufficiently small it holds

[absolute value of [lambda]([(M[P.sup.-1.sub.t,+]).sup.2]) - 1] [??] c[parallel][DELTA][parallel], [absolute value of [lambda](M[P.sup.-1.sub.t,-]) - 1] [??] c[[parallel]E[parallel].sup.1/2],

where with [??] we omit higher order terms.

Proof. The first set of inequalities follows from [26, Th. IV.5.1]. For the second set of inequalities, we note that M[P.sup.-1.sub.t,-] is not diagonalizable and has Jordan blocks of size two. The estimate thus follows, e.g., from [10, Th. 4.3.6].

The result shows that the perturbation induced by [P.sub.t,-] may be much larger than that induced by [P.sub.t,+]: if the eigenvalues of [(M[P.sup.-1.sub.t,+]).sup.2] can be found in a disk centered at one with radius [rho] < 1, then the eigenvalues of M[P.sup.-1.sub.t,-] may only be guaranteed to lie in a disk centered at one with radius approximately [[rho].sup.1/2]. The estimates of Proposition 4.3 qualitatively explain the different spectral distributions of the two preconditioned problems in the left plot of Figure 4.1. On the other hand, we should keep in mind that these are usually pessimistic estimates, and practical spectra are less perturbed than the theory predicts. As a consequence, it is possible that [P.sub.t,-] behaves better than the worst case scenario.

Finally, in the case of a disk, the asymptotic rate of convergence for M[P.sup.-1.sub.t,+] is expected to be [[rho].sup.1/2] per iteration (cf. Proposition 3.2), which is the same as the rate obtained for the definite preconditioned matrix. This justifies the similar convergence behavior observed in the right plot of Figure 4.1.

Example 4.4. Finally we consider the model of a boundary layer flow over a flat plate [14, Example 7.1.4]. Parameters were given default values: unit grid stretch factor, and the other values as in the previous example. The matrix B has size 64 x 162 and F is nonsymmetric. The relevant parameters for our non-stagnation test are reported in Table 4.5 for the preconditioners [P.sub.d,aug] and [P.sub.t,aug], using the exact augmented (1,1) block and various choices of W, and for [P.sub.t]. The results confirm our previous findings, although in this case the condition fails a little more often.

Although the (1,1) block is definite, we also consider solving the fully augmented formulation of the algebraic problem, which consists of solving the following equivalent linear system (for C = 0):

[F + [B.sup.T][W.sup.-1]B, [B.sup.T]; B, 0] [u; p] = [f + [B.sup.T][W.sup.-1]g; g];

see, e.g., [4] and references therein. To obtain this setting we used Q2-Q1 stable elements, for which C = 0. In Table 4.6 we report the results obtained by applying the preconditioner [P.sub.t,aug] for various choices of the diagonal blocks. Several other experiments not reported here were performed, with various problem dependent scalings of the blocks, but results did not differ much. It is also important to realize that some of the analyzed preconditioning blocks do not scale well with dimensions, that is, the performance of GMRES on the preconditioned problem is in general not mesh independent; therefore results may vary significantly for larger dimensions.

5. Conclusions. In this paper we have proposed an enhanced strategy to compute a second degree polynomial to test a non-stagnation condition of GMRES and other minimum residual methods for solving real nonsymmetric linear systems. The new setting may also be used to estimate the asymptotic convergence rate of GMRES. We have also discussed a new class of problems whose spectral properties are suitable for the condition to be fulfilled. Numerical experiments on preconditioned saddle point matrices stemming from the finite element discretization of classical Stokes and Navier-Stokes problems were reported to test our findings. In a future analysis we plan to increase our understanding of the non-stagnation condition, so as to provide a-priori criteria for when the condition may be satisfied on general problems.

* Received September 2, 2009. Accepted for publication February 4, 2010. Published online June 30, 2010. Recommended by A. Klawonn.


[1] M. ARIOLI, V. PTAK, AND Z. STRAKOS, Krylov sequences of maximal length and convergence of GMRES, BIT, 38 (1998), pp. 636-643.

[2] O. AXELSSON, Iterative Solution Methods, Cambridge University Press, New York, 1994.

[3] M. BENZI AND J. LIU, Block preconditioning for saddle point systems with indefinite (1,1) block, Int. J. Comput. Math., 84 (2007), pp. 1117-1129.

[4] M. BENZI AND M. A. OLSHANSKII, An augmented Lagrangian-based approach to the Oseen problem, SIAM J. Sci. Comput., 28 (2006), pp. 2095-2113.

[5] M. BENZI AND M. A. OLSHANSKII, An augmented Lagrangian approach to linearized problems in hydrodynamic stability, SIAM J. Sci. Comput., 30 (2008), pp. 1459-1473.

[6] M. BENZI, G. H. GOLUB, AND J. LIESEN, Numerical solution of saddle point problems, Acta Numer., 14 (2005), pp. 1-137.

[7] J. BOYLE, M. D. MIHAJLOVIC, AND J. A. SCOTT, HSL_MI20: an efficient AMG preconditioner, Internat. J. Numer. Methods Engrg., to appear, 2009.

[8] S. L. CAMPBELL, I. C. F. IPSEN, C. T. KELLEY, AND C. D. MEYER, GMRES and the minimal polynomial, BIT, 36 (1996), pp. 664-675.

[9] Z.-H. CAO, Augmentation block preconditioners for saddle point-type matrices with singular (1,1) blocks, Numer. Linear Algebra Appl., 15 (2008), pp. 515-533.

[10] F. CHATELIN, Valeurs Propres de Matrices, Masson, Paris, 1988.

[11] S. C. EISENSTAT, H. C. ELMAN, AND M. H. SCHULTZ, Variational iterative methods for nonsymmetric systems of linear equations, SIAM J. Numer. Anal., 20 (1983), pp. 345-357.

[12] H. C. ELMAN, Iterative methods for large sparse nonsymmetric systems of linear equations, PhD thesis, Department of Computer Science, Yale University, New Haven, CT, 1982.

[13] H. C. ELMAN AND D. J. SILVESTER, Fast nonsymmetric iterations and preconditioning for Navier-Stokes equations, SIAM J. Sci. Comput., 17 (1996), pp. 33-46.

[14] H. C. ELMAN, D. J. SILVESTER, AND A. J. WATHEN, Finite Elements and Fast Iterative Solvers, with Applications in Incompressible Fluid Dynamics, Numerical Mathematics and Scientific Computation, Vol. 21, Oxford University Press, 2005.

[15] B. FISCHER AND F. PEHERSTORFER, Chebyshev approximation via polynomial mappings and the convergence behavior of Krylov subspace methods, Electron. Trans. Numer. Anal., 12 (2001), pp. 205-215.

[16] B. FISCHER, A. RAMAGE, D. J. SILVESTER, AND A. J. WATHEN, Minimum residual methods for augmented systems, BIT, 38 (1998), pp. 527-543.

[17] R. W. FREUND, Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations, in Numerical Methods in Approximation Theory, Vol. 9, D. Braess and L. L. Schumaker, eds., Birkhauser, Basel, 1992, pp 77-95.

[18] J. F. GRCAR, Operator coefficient methods for linear equations, Technical Report SAND89-8691, Sandia National Laboratories, Livermore, CA, 1989.

[19] A. GREENBAUM, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.

[20] A. GREENBAUM, V. PtAk, and Z. StrakoS, Any nonincreasing convergence curve is possiblefor GMRES, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 95-118.

[21] C. GREIF AND D. SCHOTZAU, Preconditioners for saddle point linear systems with highly singular (1,1) blocks, Electron. Trans. Numer. Anal., 22 (2006), pp. 114-121.

[22] L. KNIZHNERMAN AND V. SIMONCINI, A new investigation of the extended Krylov subspace method for matrix function evaluations, Numer. Linear Algebra Appl., 20 (2009), pp. 1-17.

[23] THE MATHWORKS, INC., MATLAB 7, September 2004.

[24] Y. SAAD AND M. H. SCHULTZ, GMRES: A generalized minimal residual algorithm forsolvingnonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869.

[25] V. SIMONCINI AND D. B. SZYLD, New conditions for non-stagnation of minimal residual methods, Numer. Math., 109 (2008), pp. 477-487.

[26] G. W. STEWART AND J-G. SUN, Matrix Perturbation Theory, Academic Press, New York, 1990.

[27] H. A. VAN DER VORST, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, New York, and Melbourne, 2003.


(†) Dipartimento di Matematica, Università di Bologna, Piazza di Porta S. Donato 5, I-40127 Bologna, Italy, and CIRSA, Ravenna, Italy (

Example 4.1. Stokes problem. L: incomplete Cholesky factor; [F.sub.amg]: algebraic multigrid preconditioner; [W.sub.1] = B[[??].sup.-1][B.sup.T]. Underlined are the cases where a nonzero [alpha] passes the test, unlike [alpha] = 0, to obtain a positive definite [[PHI].sub.2](A), A = M[].

[??]          W              [[lambda].sub.min](H)   [[lambda].sub.max]([S.sup.T]S, [H.sup.2])   [[lambda].sub.max]([S.sup.T]S, [[PHI].sub.2](H))   [alpha]   # its

F             Q              -1.9106   0.4733   0.3783   0.833   17
F             10 [W.sub.1]   -8.3223   2.8159   0.4585   0.903   16
F             [W.sub.1]      -83.223   1.8672   1.7771   0.074   15
[LL.sup.T]    Q              -1.9240   0.6660   0.3886   0.418   45
[LL.sup.T]    10 [W.sub.1]   -8.2948   2.5103   0.6301   0.457   45
[LL.sup.T]    [W.sub.1]      -82.949   1.9645   1.9645   0       41
[F.sub.amg]   Q              -1.9098   0.3771   0.2634   0.780   29
[F.sub.amg]   10 [W.sub.1]   -8.3232   2.6057   0.4378   0.847   30
[F.sub.amg]   [W.sub.1]      -83.232   1.8628   1.8299   0.026   29

Example 4.1. Quantities for testing the non-stagnation condition as the problem dimension increases.

n      m      [[lambda].sub.min](H)   [[lambda].sub.max]([S.sup.T]S, [[PHI].sub.2](H))   [alpha]   # its

 162     64   -1.9106   0.37834   0.83307   17
 578    256   -1.9352   0.35917   0.83762   18
2178   1024   -1.9413   0.35040   0.83963   18

Quantities appearing in the non-stagnation condition for Example 4.2. Underlined are the cases where a nonzero [alpha] passes the test, unlike [alpha] = 0. For the augmentation methods: W(tol) = B[[??].sup.-1][B.sup.T] + [beta]C, where [??] = luinc(F, tol). For []: [W.sub.1] = B[[??].sup.-1][B.sup.T] + [beta]C, [W.sub.2] = B[[??].sup.-1][B.sup.T], [??] = luinc(F, [10.sup.-2]).

Prec             [[lambda].sub.min](H)   [[lambda].sub.max]([S.sup.T]S, [H.sup.2])   [[lambda].sub.max]([S.sup.T]S, [[PHI].sub.2](H))   [alpha]   # its

W(0)             -3.5512   1.1241   0.9906   0.3951   16
W([10.sup.-1])   -2.7567   1.2134   0.9724   0.4252   19
Q                -4.2339   1.5090   1.5620   0.3558   29

W(0)             -3.8091   0.9672   0.9672   0        14
W([10.sup.-1])   -3.0814   1.1087   1.1063   0.0216   21
W([10.sup.-2])   -3.7450   0.9709   0.9709   0        16

[W.sub.1]        -7.3000   0.9923   0.9923   0        11
[W.sub.2]        -13.818   0.9924   0.9924   0        17
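The augmentation weights in the caption, W(tol) = B[[??].sup.-1][B.sup.T] + [beta]C with [??] = luinc(F, tol), can be mimicked in Python. SciPy's `scipy.sparse.linalg.spilu` plays the role of MATLAB's `luinc` here (the drop strategies differ, so this is an analogue rather than a reimplementation), and the function name is ours:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

def augmented_schur_weight(F, B, C, beta, drop_tol):
    """Assemble W(tol) = B * Fhat^{-1} * B^T + beta*C, with Fhat an
    incomplete LU factorization of F (spilu standing in for luinc)."""
    ilu = spilu(sp.csc_matrix(F), drop_tol=drop_tol)
    # apply Fhat^{-1} to the columns of B^T (dense here; adequate for
    # small illustrative problems)
    Finv_Bt = ilu.solve(B.toarray().T)
    return B @ Finv_Bt + beta * C.toarray()
```

With drop_tol = 0 the incomplete factors are (numerically) exact and W coincides with the true augmented Schur complement; larger tolerances trade accuracy of W for sparser factors, which is the trade-off the table explores.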

Example 4.2. Quantities for testing the non-stagnation condition as the problem dimension increases.

n      m      [[lambda].sub.min](H)   [[lambda].sub.max]   [alpha]   # its

 418    176   -3.8091   0.9672   0   14
1538    704   -3.7057   0.9662   0   15
5890   2816   -3.6710   0.9660   0   13

Quantities appearing in the non-stagnation condition for Example 4.4. Underlined are the cases where a nonzero [alpha] passes the test, unlike [alpha] = 0. For the augmentation methods: W(tol) = B[[??].sup.-1][B.sup.T] + [beta]C, where [??] = luinc(F, tol). For []: [W.sub.1] = B[[??].sup.-1][B.sup.T] + [beta]C, [W.sub.2] = B[[??].sup.-1][B.sup.T], [??] = luinc(F, [10.sup.-2]).

Prec             [[lambda].sub.min](H)   [[lambda].sub.max]([S.sup.T]S, [H.sup.2])   [[lambda].sub.max]([S.sup.T]S, [[PHI].sub.2](H))   [alpha]    # its

W(0)             -3.1958   1.1265   0.9705   0.45036   13
W([10.sup.-1])   -3.1796   1.1562   0.9684   0.46829   16
Q                -4.0580   3.9971   2.7796   0.58865   27

W(0)             -3.4537   0.9741   0.9741   0         11
W([10.sup.-1])   -3.4387   1.0109   1.0090   0.0317    16
W([10.sup.-2])   -3.4443   0.9761   0.9770   0.0046    14
Q                -4.5438   5.6706   3.8576   0.3083    33

[W.sub.1]        -6.8478   0.9798   0.9798   0          8
[W.sub.2]        -6.8487   1.0291   1.0291   0         14


Quantities appearing in the non-stagnation condition for Example 4.4. Fully augmented formulation, stable Q2-Q1 elements. [W.sub.1](tol) = luinc(B[[??].sup.-1][B.sup.T], tol), [??](tol) = luinc(F + [B.sup.T][W.sup.-1]B, tol), [W.sub.2] = 0.1 B diag[(F).sup.-1]

W, [??]                 [[lambda].sub.min](H)   [[lambda].sub.max]([S.sup.T]S, [H.sup.2])   [[lambda].sub.max]([S.sup.T]S, [[PHI].sub.2](H))   [alpha]   # its

[??](5 * [10.sup.3]):
[W.sub.1]([10.sup.2])   -3.0800   1.1577   0.9782   0.4810   13
Q                       -1.0710   5.7989   2.2705   0.4143   29
[W.sub.1]([10.sup.2])   -3.0509   1.1728   1.0118   0.4120   15
[W.sub.2]               -1.0941   0.6191   0.4980   0.1754   23
COPYRIGHT 2010 Institute of Computational Mathematics
Author: Simoncini, Valeria
Publication: Electronic Transactions on Numerical Analysis
Date: Jan 1, 2010