# On algebraic multilevel methods for non-symmetric systems: convergence results

AMS subject classifications. 65F10, 65F50, 65N22

1. Introduction. In many recent papers, algebraic multigrid or multilevel methods have been designed to solve large sparse linear systems by using only information on the matrix structure and the matrix entries. Among these algebraic methods, the algebraic multigrid method (AMG) and the multilevel approximate block factorization are the best known. The pioneering work on algebraic multilevel methods was done by Brandt, McCormick, and Ruge [12] and by Ruge and Stüben [32, 34], who introduced the AMG method in the eighties; see also [35].

Recently, a theoretical comparison of different algebraic multigrid methods applied to symmetric positive definite systems was given by Notay in [29]. However, not many theoretical results are known for algebraic multigrid methods applied to non-symmetric matrices.

Here we analyze algebraic multilevel methods applied to non-symmetric M-matrices. Algebraic multilevel methods are often used as preconditioners for Krylov subspace methods. In this paper, we focus on the convergence of these methods used as solvers.

M-matrices occur in various fields of applied mathematics such as numerical analysis, probability, economics, and operations research [9]. Moreover, Markov chain modeling has become relevant in several applications from computer science, such as information retrieval [20]. Iterative solvers, such as algebraic multigrid methods, are used to compute the steady state solution of a Markov chain, i.e., algebraic multigrid methods are used to find the solution of a system with non-symmetric M-matrix structure, a non-trivial task given, for example, the size of the Google matrix; see, e.g., [20].

Most of the algebraic multilevel methods start with a partitioning of the unknowns into fine and coarse grid unknowns. Related to this ordering, the n x n system matrix A can be permuted into a block 2 x 2 form

$$A = \begin{bmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{bmatrix}.$$

Here F denotes the set of fine grid unknowns, and C denotes the set of coarse grid unknowns with [absolute value of F] = [n.sub.F] and [absolute value of C] = [n.sub.C]. This process is called coarsening. There are many different algorithms and strategies to perform the above partitioning; see, e.g., [32, 22, 11, 39, 30]. The choice of the partitioning has a major influence on the convergence behavior of AMG methods. Here we assume that the coarsening has already been done in some way. Thus, we assume that A is given by

(1.1) $A = \begin{bmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{bmatrix}.$

If the submatrix [A.sub.FF] is nonsingular, then A can be factorized as

$$A = \begin{bmatrix} I_F & 0 \\ A_{CF}A_{FF}^{-1} & I_C \end{bmatrix} \begin{bmatrix} A_{FF} & A_{FC} \\ 0 & S \end{bmatrix},$$

where

S := (A/[A.sub.FF]) := [A.sub.CC] - [A.sub.CF][A.sup.-1.sub.FF][A.sub.FC]

is the Schur complement. If we now use an approximation $\hat A_{FF}$ of [A.sub.FF] and an approximation $\hat S$ of S, or approximations of the inverses of these matrices, we obtain the matrix M with

(1.2) $M = \begin{bmatrix} I_F & 0 \\ A_{CF}\hat A_{FF}^{-1} & I_C \end{bmatrix} \begin{bmatrix} \hat A_{FF} & A_{FC} \\ 0 & \hat S \end{bmatrix}.$

This factorization is known as an approximate two-level block factorization [29]. Various multilevel methods use this two-level block approximate factorization as a major tool; see, e.g., [4, 5, 2, 27, 7, 33] and references in [29]. One of these methods is the AMLI method introduced by Axelsson and Vassilevski in [4, 5]. With the use of the matrix M in (1.2) one can define the stationary iteration given by the iteration matrix

(1.3) [T.sub.AMLI] = I - [M.sup.-1]A.

Here the subscript AMLI is chosen with respect to the first use of this approximate block factorization by Axelsson and Vassilevski in the AMLI method. We point out that the AMLI method itself contains further ingredients, e.g., polynomial acceleration. Moreover, the AMLI method was originally designed in the framework of hierarchical bases to precondition the CG method.

For the iteration matrix [T.sub.AMLI] in (1.3), we obtain

$$T_{AMLI} = I - M^{-1}A, \qquad M^{-1} = \begin{bmatrix} \hat A_{FF}^{-1} + \hat A_{FF}^{-1}A_{FC}\hat S^{-1}A_{CF}\hat A_{FF}^{-1} & -\hat A_{FF}^{-1}A_{FC}\hat S^{-1} \\ -\hat S^{-1}A_{CF}\hat A_{FF}^{-1} & \hat S^{-1} \end{bmatrix}.$$

Using the restriction and prolongation operators

(1.4) $\hat R := \begin{bmatrix} -A_{CF}\hat A_{FF}^{-1} & I_C \end{bmatrix}, \quad \hat R^T := \begin{bmatrix} -\hat A_{FF}^{-1}A_{FC} \\ I_C \end{bmatrix}, \quad \hat r := \begin{bmatrix} I_F & 0 \end{bmatrix}, \quad \hat r^T := \begin{bmatrix} I_F \\ 0 \end{bmatrix}$

(note that, since A is non-symmetric, $\hat R^T$ is not the transpose of $\hat R$ but a separately defined prolongation),

and the matrices

(1.5) $P_1 := \hat R^T \hat S^{-1} \hat R A$ and $P_2 := \hat r^T \hat A_{FF}^{-1} \hat r A$,

one obtains

(1.6) $T_{AMLI} = I - \hat R^T \hat S^{-1} \hat R A - \hat r^T \hat A_{FF}^{-1} \hat r A = I - P_1 - P_2.$

Thus, [T.sub.AMLI] can be written as an additive Schwarz method with inexact local solves; see [41, 19, 36] for details about Schwarz methods.
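As a concrete numerical sketch of this additive Schwarz reading, the pieces above can be assembled and cross-checked with NumPy. The test matrix, the Jacobi-type choice of $\hat A_{FF}$, and the choice $\hat S = (A/\hat A_{FF})$ are our own illustrative assumptions, and the operators follow our reading of the reconstructed formulas (1.4)-(1.6):

```python
import numpy as np

# Illustrative setup (our own choice): a small tridiagonal M-matrix,
# partitioned into n_F = 3 fine and n_C = 3 coarse unknowns.
n, nF = 6, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
AFF, AFC = A[:nF, :nF], A[:nF, nF:]
ACF, ACC = A[nF:, :nF], A[nF:, nF:]

AFFh = np.diag(np.diag(AFF))                   # Jacobi-type approximation of A_FF
AFFh_inv = np.linalg.inv(AFFh)
Sh = ACC - ACF @ AFFh_inv @ AFC                # S-hat := (A / A-hat_FF)

I, IF, IC = np.eye(n), np.eye(nF), np.eye(n - nF)

# Operators as in our reading of (1.4); for non-symmetric A the
# "transposed" operators are defined separately, not by transposition.
R  = np.hstack([-ACF @ AFFh_inv, IC])          # restriction
RT = np.vstack([-AFFh_inv @ AFC, IC])          # prolongation
r  = np.hstack([IF, np.zeros((nF, n - nF))])
rT = r.T

P1 = RT @ np.linalg.solve(Sh, R @ A)           # coarse grid correction part
P2 = rT @ AFFh_inv @ r @ A                     # fine grid (smoothing) part
T_amli = I - P1 - P2                           # additive combination (1.6)

# The same iteration matrix arises from the factorization (1.2), M = L U:
L = np.block([[IF, np.zeros((nF, n - nF))], [ACF @ AFFh_inv, IC]])
U = np.block([[AFFh, AFC], [np.zeros((n - nF, nF)), Sh]])
assert np.allclose(T_amli, I - np.linalg.solve(L @ U, A))

# Convergence as a stationary solver (cf. Section 4): spectral radius below one.
assert max(abs(np.linalg.eigvals(T_amli))) < 1.0
```

The first assertion confirms that the additive combination I - [P.sub.1] - [P.sub.2] coincides with I - [M.sup.-1]A for the block factorization, for this particular choice of approximations.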

Hence, the matrices [T.sub.AMLI] and M in (1.2) are constructed by an inexact block factorization, i.e., by using a product of matrices. The iteration given by [T.sub.AMLI] can therefore be regarded as a multiplicative method [4, 5]. Moreover, there are also some other versions of the original AMLI method. These methods do not use the inexact block factorization in (1.2) and are based on a block-diagonal preconditioner. Therefore, these AMLI versions are called additive methods [1, 3].

On the other hand, [T.sub.AMLI] can be written as an additive Schwarz method. To avoid confusion, we will use the terminology AMLI approach for the inexact block factorization in (1.2) and the resulting iterative method. Moreover, we consider this AMLI approach as an additive Schwarz approach.

For some problems it is known that the multiplicative Schwarz method converges faster than the additive Schwarz method [17, 26]. So it is natural to consider a related multiplicative approach. The multiplicative version, which we call the MAMLI method or MAMLI approach [24], is then given by

(1.7) [T.sub.MAMLI] = (I - [P.sub.1]) (I - [P.sub.2]).

There are two other methods that are closely related to the multiplicative version. The first one is the reverse MAMLI method (RMAMLI). The other one is the symmetrized MAMLI method (SMAMLI) [24]. These variants are defined in terms of their iteration matrices given by

(1.8) [T.sub.RMAMLI] = (I - [P.sub.2]) (I - [P.sub.1])

and

(1.9) [T.sub.SMAMLI] = (I - [P.sub.2]) (I - [P.sub.1]) (I - [P.sub.2]).

The multiplicative versions are closely related to certain geometric and algebraic multigrid methods. The factor I - [P.sub.2] in (1.7) can be seen as a relaxation or smoothing step, while I - [P.sub.1] in (1.7) is a coarse grid correction. In particular, the MAMLI method can be viewed as a two-level V(1,0) cycle, the RMAMLI method as a two-level V(0,1) cycle, and the SMAMLI method as a two-level V(1,1) cycle with fine-grid-only relaxation and special restriction and coarse grid operators. In the AMLI approach, the smoother and the coarse grid correction operator are combined in an additive way, while in the MAMLI approach, this is done in a multiplicative way. Hence, the MAMLI approach is closer to the original multigrid schemes. Nevertheless, general additive multigrid methods have been successfully developed in the last decades [10, 18, 37].
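Read as a V(1,0) cycle, one MAMLI sweep is a fine-grid relaxation followed by a coarse-grid correction, so the error propagates with (I - [P.sub.1])(I - [P.sub.2]). The following sketch uses our own illustrative matrix, a Jacobi-type $\hat A_{FF}$, and $\hat S = \hat R A \hat R^T$ (one of the admissible choices discussed in Section 3):

```python
import numpy as np

# One MAMLI sweep for Ax = b: fine-grid relaxation, then coarse-grid
# correction. Matrix and approximations are our own illustrative choices.
n, nF = 6, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
AFF, AFC = A[:nF, :nF], A[:nF, nF:]
ACF, ACC = A[nF:, :nF], A[nF:, nF:]

AFFh_inv = np.linalg.inv(np.diag(np.diag(AFF)))     # Jacobi-type smoother on F
IF, IC = np.eye(nF), np.eye(n - nF)
R  = np.hstack([-ACF @ AFFh_inv, IC])               # restriction, cf. (1.4)
RT = np.vstack([-AFFh_inv @ AFC, IC])               # prolongation
Sh = R @ A @ RT                                     # S-hat = R-hat A R-hat^T

b = np.ones(n)
x = np.zeros(n)
for _ in range(1000):
    # (i) smoothing step: relax the fine-grid equations only (error -> (I - P2) e)
    x[:nF] += AFFh_inv @ (b - A @ x)[:nF]
    # (ii) coarse-grid correction with the approximate Schur complement
    #      (error -> (I - P1) e, so one sweep applies (I - P1)(I - P2))
    x += RT @ np.linalg.solve(Sh, R @ (b - A @ x))

assert np.allclose(x, np.linalg.solve(A, b), atol=1e-6)
```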

There is also a close relationship between the MAMLI techniques and the AMGr method introduced by MacLachlan, Manteuffel, and McCormick in [21]. AMGr uses fine-grid only relaxation, similar restriction operators, and Schur complements also. In [21] convergence results and bounds for the rate of convergence are established for the AMGr method applied to symmetric positive definite matrices.

There exist many theoretical results for the AMLI method used as a solver or as a preconditioner for the CG method [4, 5, 27, 2, 6, 13, 14]. However, these theoretical results apply to symmetric positive definite matrices only. Notay gives in [28] some results for the multilevel approximate block factorization applied to some special non-symmetric M-matrices which arise from a specific discretization of a certain PDE. However, a general convergence analysis of the AMLI approach for a wide class of non-symmetric matrices is still missing. In this paper we give convergence results for the above-mentioned AMLI approach applied to arbitrary non-symmetric M-matrices. Moreover, we establish a detailed convergence theory for the MAMLI, RMAMLI, and SMAMLI methods applied to non-symmetric M-matrices. By a recursive use of the two-level techniques described above, one can easily construct the corresponding multilevel methods. Convergence results for these multilevel methods are also established in this paper.

Of course, the choice of the coarsening algorithm used to obtain the partitioning in (1.1) has a major influence on the convergence behavior. In this paper, we assume that this partitioning has already been done in an arbitrary way. We want to focus on the convergence behavior for general partitionings and will compare different multilevel methods starting with the same partitioning. Our convergence results are independent of the coarsening technique that is used.

Recently, additive and multiplicative Schwarz methods for non-symmetric matrices were analyzed in [16, 8, 26]. In these papers an algebraic convergence theory for the additive and multiplicative Schwarz methods was introduced. This theory yields several convergence results and comparison results. However, it includes only the special restriction and prolongation operators that are used in domain decomposition methods, and it cannot be applied to multilevel or multigrid methods. In contrast to [16, 8, 26], we use and analyze more general restriction and prolongation operators here.

The paper is organized as follows. In the next section we give some notation and recall some well-known results. Section 3 describes the properties of the approximations that we are using in the block factorization. In Section 4 we establish convergence and comparison results for two-level methods, while in Section 5 these results are extended to multilevel methods.

2. Notation and well-known results. A matrix B is nonnegative (positive), denoted B [greater than or equal to] 0 (B > 0), if its entries are nonnegative (positive). We say that B [greater than or equal to] C if B - C [greater than or equal to] 0, and similarly with the strict inequality. These definitions carry over to vectors. A matrix A is a Z-matrix if its off-diagonal elements are non-positive. A Z-matrix A is called a nonsingular M-matrix if it is monotone, i.e., [A.sup.-1] [greater than or equal to] 0. It follows that if A and B are nonsingular M-matrices and A [greater than or equal to] B, then [A.sup.-1] [less than or equal to] [B.sup.-1] [9, 38]. By [rho](B) we denote the spectral radius of the matrix B.

We say that (M, N) is a splitting of A if A = M - N and M is nonsingular. A splitting is regular if [M.sup.-1] [greater than or equal to] 0 and N [greater than or equal to] 0; it is weak regular of the first type if [M.sup.-1] [greater than or equal to] 0 and [M.sup.-1]N [greater than or equal to] 0 [9, 38, 40].

Here, we consider stationary iterative methods to solve Ax = b. These methods start with a vector [x.sup.(0)] and build a sequence of vectors [x.sup.(i+1)] such that

(2.1) [x.sup.(i+1)] = T[x.sup.(i)] + c for i = 0, 1, 2, ....

The matrix T is called the iteration matrix. If [rho](T) < 1, then there exists a unique splitting (M, N) of A such that T = [M.sup.-1]N. This splitting is given by M = A[(I - T).sup.-1] and N = M - A. We say that T is induced by this splitting (M, N).
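The induced splitting can be checked directly for any T with [rho](T) < 1; the 2 x 2 matrices below are our own toy data:

```python
import numpy as np

# For any T with rho(T) < 1, the unique splitting inducing T is
# M = A (I - T)^{-1}, N = M - A, so that T = M^{-1} N (toy data, our choice).
A = np.array([[4.0, -1.0], [-2.0, 5.0]])
T = np.array([[0.2, 0.1], [0.0, 0.3]])   # rho(T) = 0.3 < 1

I = np.eye(2)
M = A @ np.linalg.inv(I - T)
N = M - A
assert np.allclose(A, M - N)                  # (M, N) is a splitting of A
assert np.allclose(T, np.linalg.solve(M, N))  # and it induces T
```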

Related to the partitioning of A in (1.1), we denote by I the n x n identity matrix and by [I.sub.F] and [I.sub.C] the [n.sub.F] x [n.sub.F] and [n.sub.C] x [n.sub.C] identity matrices, respectively.

For A = [[a.sub.i,j]] [member of] [R.sup.nxn] we define the matrices diag(A), triu(A) and tril(A) [member of] [R.sup.nxn] by

$$\operatorname{diag}(A)_{ij} := \begin{cases} a_{ij}, & i = j,\\ 0, & i \ne j,\end{cases} \qquad \operatorname{triu}(A)_{ij} := \begin{cases} a_{ij}, & i \le j,\\ 0, & i > j,\end{cases} \qquad \operatorname{tril}(A)_{ij} := \begin{cases} a_{ij}, & i \ge j,\\ 0, & i < j.\end{cases}$$

Next we recall the definition of the weighted max-norm. Given a positive vector w [member of] [R.sup.n], denoted w > 0, the weighted max-norm is defined for any $x \in \mathbb{R}^n$ by $\|x\|_w := \max_{1 \le i \le n} |x_i|/w_i$. The corresponding matrix norm is defined as $\|A\|_w := \max_{\|x\|_w = 1} \|Ax\|_w$, and the following lemma holds.

LEMMA 2.1. Let A [member of] [R.sup.nxn] be nonnegative, w [member of] [R.sup.n], w > 0, and [gamma] > 0 such that

(2.2) Aw [less than or equal to] [gamma]w.

Then, [parallel]A[[parallel].sub.w] [less than or equal to] [gamma]. If the inequality in (2.2) is strict, then the bound on the norm is also strict. Moreover,

$$\rho(A) \le \|A\|_w.$$

Proof. See, e.g., [15].

Most of our estimates hold for all positive vectors w of the form w = [A.sup.-1] e, where e is any positive vector, i.e., for any positive vector w such that Aw is positive. In particular, this would hold for an M-matrix A and e = [(1, ..., 1).sup.T], i.e., with w = [A.sup.-1] e being the row sums of [A.sup.-1].
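A minimal numerical sketch of Lemma 2.1 and of the choice w = [A.sup.-1]e, using the Jacobi iteration matrix of a small M-matrix (our own example):

```python
import numpy as np

# Weighted max-norm with w = A^{-1} e for a small M-matrix (our own example);
# B is the Jacobi iteration matrix, which is nonnegative here.
n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
e = np.ones(n)
w = np.linalg.solve(A, e)                # w = A^{-1} e, the row sums of A^{-1}
assert np.all(w > 0)

D = np.diag(np.diag(A))
B = np.linalg.solve(D, D - A)            # Jacobi iteration matrix I - D^{-1} A
assert np.all(B >= 0)

# For B >= 0, ||B||_w = max_i (B w)_i / w_i, and Lemma 2.1 yields
# rho(B) <= ||B||_w; since B w = w - D^{-1} e < w, the norm is below one.
norm_w = np.max(B @ w / w)
rho = np.max(np.abs(np.linalg.eigvals(B)))
assert rho <= norm_w + 1e-12
assert norm_w < 1.0
```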

Moreover, we need the following well-known properties of M-matrices.

THEOREM 2.2. Let A [member of] [R.sup.nxn] be a nonsingular Z-matrix. A is a nonsingular M-matrix if and only if either of the following conditions holds:

* There exist two nonsingular monotone matrices [B.sub.1], [B.sub.2] such that [B.sub.1] [less than or equal to] A [less than or equal to] [B.sub.2].

* Each principal submatrix of A is a nonsingular M-matrix.

Proof. See, e.g., [9].

THEOREM 2.3. Let A [member of] [R.sup.nxn] be a nonsingular M-matrix partitioned as

$$A = \begin{bmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{bmatrix},$$

where [A.sub.FF] [member of] [R.sup.s x s] and [A.sub.CC] [member of] [R.sup.(n-s) x (n-s)] for some s [member of] {1, ..., n - 1}. Then the Schur complement

(A/[A.sub.FF]) = [A.sub.CC] - [A.sub.CF][A.sup.-1.sub.FF][A.sub.FC]

is a nonsingular M-matrix.

Proof. See, e.g., [9].

LEMMA 2.4. Let A be a nonsingular M-matrix and let (M, N) be a weak regular splitting of first type. Then

[M.sup.-1] [less than or equal to] [A.sup.-1].

Proof. See, e.g., [40, Theorem 3.2].

LEMMA 2.5. Let A be a nonsingular M-matrix. Let M be a Z-matrix such that M [greater than or equal to] A. Then, (M, M - A) is a regular splitting, and therefore a weak regular splitting, of A.

Proof. Since M is a Z-matrix and M [greater than or equal to] A, M is an M-matrix; see [9]. The statement then follows immediately from the definitions of weak regular and regular splittings.

3. The approximations. Of course, the quality of the approximations $\hat A_{FF}$ of [A.sub.FF] and $\hat S$ of S used in the inexact multilevel block factorizations will be important for the convergence behavior. Here we use the following properties of the approximations to prove convergence of the AMLI approach.

ASSUMPTION 3.1. Let A be a nonsingular (non-symmetric) M-matrix and let A be partitioned in the following 2 x 2 block structure

$$A = \begin{bmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{bmatrix}.$$

Furthermore, let $\hat A_{FF}$ and $\hat S$ be chosen such that the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - (A/\hat A_{FF}))$ are weak regular of the first type, i.e.,

(3.1) $\hat A_{FF}^{-1} \ge 0$ and $I_F - \hat A_{FF}^{-1} A_{FF} \ge 0$,

and

(3.2) $\hat S^{-1} \ge 0$ and $I_C - \hat S^{-1}(A/\hat A_{FF}) \ge 0$.

Here $(A/\hat A_{FF})$ is defined by $(A/\hat A_{FF}) := A_{CC} - A_{CF}\hat A_{FF}^{-1} A_{FC}$.

For the multiplicative versions we use a slightly modified set of approximations.

ASSUMPTION 3.2. Let A be a nonsingular (non-symmetric) M-matrix and let A be partitioned in the following 2 x 2 block structure

$$A = \begin{bmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{bmatrix}.$$

Furthermore, let $\hat A_{FF}$ and $\hat S$ be chosen such that the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - \hat R A \hat R^T)$ are weak regular of the first type, i.e.,

(3.3) $\hat A_{FF}^{-1} \ge 0$ and $I_F - \hat A_{FF}^{-1} A_{FF} \ge 0$,

and

(3.4) $\hat S^{-1} \ge 0$ and $I_C - \hat S^{-1} \hat R A \hat R^T \ge 0$,

where $\hat R$ and $\hat R^T$ are given as in (1.4).

Here we point out that these assumptions, or properties, which we require for the approximations are very weak. We will give examples of such approximations at the end of this section. Note that no special coarsening or partitioning of the matrix A is needed to find these kinds of approximations.

If we compare Assumptions 3.1 and 3.2, we see that the only difference is the condition on the approximation $\hat S$.

Using the relation

$$\hat R A \hat R^T = A_{CC} - A_{CF}\bigl(2\hat A_{FF}^{-1} - \hat A_{FF}^{-1} A_{FF} \hat A_{FF}^{-1}\bigr) A_{FC},$$

we will see in the following proposition that Assumption 3.2 is weaker than Assumption 3.1.

PROPOSITION 3.3. Assumption 3.1 implies Assumption 3.2. In other words, if Assumption 3.1 holds, then Assumption 3.2 is fulfilled also.

Proof. We only have to prove that equation (3.1) together with equation (3.2) implies equation (3.4). Hence, let the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - (A/\hat A_{FF}))$ be weak regular of the first type. Since

$$\hat R A \hat R^T = A_{CC} - A_{CF}\bigl(2\hat A_{FF}^{-1} - \hat A_{FF}^{-1} A_{FF} \hat A_{FF}^{-1}\bigr) A_{FC},$$

we obtain

$$I_C - \hat S^{-1} \hat R A \hat R^T = \bigl(I_C - \hat S^{-1}(A/\hat A_{FF})\bigr) + \hat S^{-1} A_{CF} \bigl(I_F - \hat A_{FF}^{-1} A_{FF}\bigr) \hat A_{FF}^{-1} A_{FC}.$$

Using the splitting properties of $\hat A_{FF}$ and $\hat S$ and the sign pattern of the M-matrix A, we obtain

$$I_C - \hat S^{-1} \hat R A \hat R^T \ge 0.$$

Hence, the splitting $(\hat S, \hat S - \hat R A \hat R^T)$ is weak regular of the first type.

Note that in both assumptions there is a coupling between the approximations $\hat A_{FF}$ and $\hat S$, but it is very mild. Indeed, starting with an M-matrix A, the approximations given by the Jacobi and the Gauss-Seidel methods and by the incomplete LU factorization are admissible, for example. To see this, one has to use Lemma 2.5. Thus, admissible choices for $\hat A_{FF}$ are

$\hat A_{FF} = \operatorname{diag}(A_{FF})$,

$\hat A_{FF} = \operatorname{tril}(A_{FF})$,

$\hat A_{FF} = \operatorname{triu}(A_{FF})$,

$\hat A_{FF} = \hat L \hat U$.

Here, $\hat L$ and $\hat U$ are the factors of an incomplete LU factorization of [A.sub.FF] [23]. We will show in Theorem 5.2 that

$$\hat R A \hat R^T$$

is a nonsingular M-matrix if $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ is a weak regular splitting, i.e., if one of the above choices is used as an approximation of [A.sub.FF]. Hence, the following approximations $\hat S$ fulfill our Assumption 3.2,

$\hat S = \hat R A \hat R^T$,

$\hat S = \operatorname{diag}(\hat R A \hat R^T)$,

$\hat S = \operatorname{tril}(\hat R A \hat R^T)$,

$\hat S = \hat L \hat U$,

where $\hat L$ and $\hat U$ are the factors of an incomplete LU factorization of $\hat R A \hat R^T$. Moreover (see Theorem 5.2), $\hat S$ can be chosen as

$\hat S = (A/\hat A_{FF})$,

$\hat S = A_{CC}$,

$\hat S = \operatorname{diag}(A_{CC})$,

to fulfill Assumptions 3.1 and 3.2.
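The conditions (3.1) and (3.2) are easy to verify mechanically for some of the listed choices; a sketch on a small M-matrix of our own choosing:

```python
import numpy as np

# Mechanical check of (3.1) and (3.2) for some of the listed choices,
# on a small M-matrix of our own choosing.
n, nF = 6, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
AFF, AFC = A[:nF, :nF], A[:nF, nF:]
ACF, ACC = A[nF:, :nF], A[nF:, nF:]
IF, IC = np.eye(nF), np.eye(n - nF)
tol = 1e-12

# (3.1): A-hat_FF^{-1} >= 0 and I_F - A-hat_FF^{-1} A_FF >= 0
for AFFh in (np.diag(np.diag(AFF)), np.tril(AFF), np.triu(AFF)):
    Minv = np.linalg.inv(AFFh)
    assert np.all(Minv >= -tol)
    assert np.all(IF - Minv @ AFF >= -tol)

# (3.2) for S-hat = diag(A_CC), with (A / A-hat_FF) built from A-hat_FF = diag(A_FF)
AFFh_inv = np.linalg.inv(np.diag(np.diag(AFF)))
Schur = ACC - ACF @ AFFh_inv @ AFC
Sh_inv = np.linalg.inv(np.diag(np.diag(ACC)))
assert np.all(Sh_inv >= -tol)
assert np.all(IC - Sh_inv @ Schur >= -tol)
```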

Hence, Assumptions 3.1 and 3.2 allow a wide variety of approximations, and all kinds of approximations used in practice seem to be included. We will show that all of these approximations and splittings result in convergent methods. However, the resulting convergence bounds may depend on the properties of the specific splitting.

Next, let us compare our approximations with those used in the theoretical analysis of approximate multilevel block factorization applied to symmetric positive definite systems. A frequently used assumption is that

(3.5) $\hat A_{FF} - A_{FF}$ is symmetric positive semidefinite;

see, e.g., [4, 2, 29]. This assumption can also be expressed with the help of splittings, as we did with our assumptions above. Equation (3.5) implies that the splitting $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ is a P-regular splitting of [A.sub.FF]. P-regular splittings were introduced by Ortega in [31]; see also [25]. A splitting A = M - N is called P-regular if [M.sup.T] + N = M + [M.sup.T] - A is positive definite. Note that a splitting of a symmetric positive definite matrix A is P-regular if and only if [parallel]I - [M.sup.-1]A[[parallel].sub.A] < 1; see [42].

Hence, the usual assumption for symmetric positive definite systems can be written in terms of P-regular splittings. Note, however, that there is in general no link between P-regular splittings and weak regular splittings.

4. Two-level convergence results. We start this section with a fundamental proposition, which is the main tool in our convergence analysis.

PROPOSITION 4.1. Let A [member of] [R.sup.n x n] be a nonsingular M-matrix. If

* C [member of] [R.sup.n x n] is nonnegative,

* I - CA is nonnegative,

* C has no zero row,

then C is nonsingular and the splitting ([C.sup.-1], [C.sup.-1] - A) of A is weak regular of first type. Moreover,

[rho](I - CA) [less than or equal to] [parallel]I - CA[[parallel].sub.w] < 1,

where w := [A.sup.-1]e for an arbitrary positive vector e [member of] [R.sup.n].

Proof. Let e [member of] [R.sup.n] be an arbitrary positive vector. Then Ce is also positive. Since A is a nonsingular M-matrix, the inverse [A.sup.-1] is nonnegative and has no zero row. Therefore the vector w := [A.sup.-1]e is positive as well. Using these properties, we get

0 [less than or equal to] (I - CA)w = w - CA[A.sup.-1]e = w - Ce < w.

Due to Lemma 2.1, this leads to

(4.1) [rho](I - CA) [less than or equal to] [parallel]I - CA[[parallel].sub.w] < 1.

Now assume that C is singular. Then there exists a nonzero vector z with Cz = 0. Let y := [A.sup.-1]z. Then

(I - CA) y = y - Cz = y.

But this contradicts (4.1). Hence, C is nonsingular. The splitting properties then follow directly from the assumptions.

In the following, we consider [T.sub.AMLI] as given in (1.6).

LEMMA 4.2. Let Assumption 3.1 be satisfied, i.e., A is a nonsingular M-matrix partitioned as in (1.1) and the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - (A/\hat A_{FF}))$ are weak regular of the first type. Then

[T.sub.AMLI] [greater than or equal to] 0.

Proof. A computation leads to

$$T_{AMLI} = \begin{bmatrix} \bigl(I_F + \hat A_{FF}^{-1}A_{FC}\hat S^{-1}A_{CF}\bigr)\bigl(I_F - \hat A_{FF}^{-1}A_{FF}\bigr) & -\hat A_{FF}^{-1}A_{FC}\bigl(I_C - \hat S^{-1}(A/\hat A_{FF})\bigr) \\ -\hat S^{-1}A_{CF}\bigl(I_F - \hat A_{FF}^{-1}A_{FF}\bigr) & I_C - \hat S^{-1}(A/\hat A_{FF}) \end{bmatrix}.$$

Using the assumptions on the approximations $\hat A_{FF}$ and $\hat S$ and the sign pattern of the M-matrix A, we get that [T.sub.AMLI] is nonnegative.

Lemma 4.2 provides sufficient conditions for the non-negativity of the AMLI iteration matrix. This powerful property will be used in the convergence analysis below.

THEOREM 4.3. Let Assumption 3.1 be satisfied, i.e., A is a nonsingular M-matrix partitioned as in (1.1) and the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - (A/\hat A_{FF}))$ are weak regular of the first type. Then

[rho]([T.sub.AMLI]) [less than or equal to] [parallel][T.sub.AMLI][[parallel].sub.w] < 1,

where w = [A.sup.-1]e for an arbitrary positive vector e. Moreover, [T.sub.AMLI] is induced by a weak regular splitting of the first type, i.e.,

[T.sub.AMLI] = I - [C.sub.AMLI]A,

where [C.sub.AMLI] is nonsingular and ([C.sup.-1.sub.AMLI], [C.sup.-1.sub.AMLI] - A) is a weak regular splitting of the first type of A.

Proof. In order to use Proposition 4.1 we first write [T.sub.AMLI] as [T.sub.AMLI] = I - [C.sub.AMLI] A. We then establish that I - [C.sub.AMLI]A satisfies the assumptions of Proposition 4.1.

By Lemma 4.2, [T.sub.AMLI] is nonnegative. Thus it suffices to show that [C.sub.AMLI] is nonnegative and [C.sub.AMLI] has no zero row.

Since

$T_{AMLI} = I - \hat R^T \hat S^{-1} \hat R A - \hat r^T \hat A_{FF}^{-1} \hat r A = I - P_1 - P_2,$

the matrix [C.sub.AMLI] is given by $\hat R^T \hat S^{-1} \hat R + \hat r^T \hat A_{FF}^{-1} \hat r$. With

(4.2) $M_S := \hat r^T \hat A_{FF}^{-1} \hat r = \begin{bmatrix} \hat A_{FF}^{-1} & 0 \\ 0 & 0 \end{bmatrix},$

(4.3) $M_{CG} := \hat R^T \hat S^{-1} \hat R = \begin{bmatrix} \hat A_{FF}^{-1}A_{FC}\hat S^{-1}A_{CF}\hat A_{FF}^{-1} & -\hat A_{FF}^{-1}A_{FC}\hat S^{-1} \\ -\hat S^{-1}A_{CF}\hat A_{FF}^{-1} & \hat S^{-1} \end{bmatrix},$

we obtain

(4.4) [T.sub.AMLI] = I - ([M.sub.S] + [M.sub.CG]) A = I - [C.sub.AMLI]A,

where [C.sub.AMLI] := [M.sub.S] + [M.sub.CG].

Using the M-matrix and splitting properties, we see that both matrices [M.sub.S] and [M.sub.CG] are nonnegative. As a sum of two nonnegative matrices, [C.sub.AMLI] is also nonnegative.

Since $\hat A_{FF}^{-1}$ and $\hat S^{-1}$ are the inverses of nonsingular matrices, they do not have zero rows. Therefore, the first [n.sub.F] rows of

$$M_S = \begin{bmatrix} \hat A_{FF}^{-1} & 0 \\ 0 & 0 \end{bmatrix}$$

are not zero rows. Moreover, the last [n.sub.C] rows of

$$M_{CG} = \begin{bmatrix} \hat A_{FF}^{-1}A_{FC}\hat S^{-1}A_{CF}\hat A_{FF}^{-1} & -\hat A_{FF}^{-1}A_{FC}\hat S^{-1} \\ -\hat S^{-1}A_{CF}\hat A_{FF}^{-1} & \hat S^{-1} \end{bmatrix}$$

are not zero rows. Since [M.sub.CG] and [M.sub.S] are nonnegative and [C.sub.AMLI] = [M.sub.CG] + [M.sub.S], the matrix [C.sub.AMLI] has no zero row.

With Lemma 4.2 and Proposition 4.1, we obtain

[rho]([T.sub.AMLI]) [less than or equal to] [parallel][T.sub.AMLI][[parallel].sub.w] < 1.

Starting with an M-matrix A and approximations as in (3.1) and (3.2), we proved convergence of the AMLI approach for a wide class of non-symmetric matrices. In the convergence proof the non-negativity of the iteration matrix [T.sub.AMLI] was the major tool.

Next, we analyze the MAMLI iteration matrix as given in (1.7). Before we consider the product (I - [P.sub.1])(I - [P.sub.2]), we consider each factor separately. It is worth mentioning that, in general, not both factors (I - [P.sub.1]) and (I - [P.sub.2]) are nonnegative. This is a major difference from the convergence analysis for special Schwarz methods given in [16, 8, 26]. While the factor I - [P.sub.2] is nonnegative, the other factor I - [P.sub.1] need not be nonnegative; see Example 4.5.

PROPOSITION 4.4. Let A be a nonsingular M-matrix partitioned as in (1.1). If $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ is a weak regular splitting of [A.sub.FF] of the first type, then

I - [P.sub.2] [greater than or equal to] 0 and [parallel]I - [P.sub.2][[parallel].sub.w] = 1,

where w = [A.sup.-1] e for an arbitrary positive vector e.

Proof. Using the splitting properties and the M-matrix sign pattern, we get that

$$I - P_2 = \begin{bmatrix} I_F - \hat A_{FF}^{-1}A_{FF} & -\hat A_{FF}^{-1}A_{FC} \\ 0 & I_C \end{bmatrix}$$

is nonnegative.

Let e be an arbitrary positive vector. Since A is a nonsingular M-matrix, w = [A.sup.-1]e is also positive. With $\hat A_{FF}^{-1} \ge 0$, we obtain

$$(I - P_2)w = w - \hat r^T \hat A_{FF}^{-1} \hat r A w = w - \begin{bmatrix} \hat A_{FF}^{-1} e_F \\ 0 \end{bmatrix} \le w,$$

where $e_F$ denotes the first [n.sub.F] components of e. Using Lemma 2.1, we get [parallel]I - [P.sub.2][[parallel].sub.w] [less than or equal to] 1. However, since the last [n.sub.C] components of (I - [P.sub.2])w and w are the same, the bound is attained. This leads to

[parallel]I - [P.sub.2][[parallel].sub.w] = 1.

EXAMPLE 4.5. Consider the matrix

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and partition it as indicated. We use the approximations

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

These approximations fulfill (3.1) and (3.2). Then I - [P.sub.1] is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Although (I - [P.sub.1]) is not nonnegative in general, we are able to establish the nonnegativity of the MAMLI iteration matrix.

LEMMA 4.6. Let Assumption 3.2 be satisfied, i.e., A is a nonsingular M-matrix partitioned as in (1.1) and the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - \hat R A \hat R^T)$ are weak regular of the first type. Then [T.sub.MAMLI] [greater than or equal to] 0.

Proof. A computation leads to

$$T_{MAMLI} = \begin{bmatrix} \bigl(I_F + \hat A_{FF}^{-1}A_{FC}\hat S^{-1}A_{CF}(I_F - \hat A_{FF}^{-1}A_{FF})\bigr)\bigl(I_F - \hat A_{FF}^{-1}A_{FF}\bigr) & -\hat A_{FF}^{-1}A_{FC}\bigl(I_C - \hat S^{-1}\hat R A\hat R^T\bigr) \\ -\hat S^{-1}A_{CF}\bigl(I_F - \hat A_{FF}^{-1}A_{FF}\bigr)^2 & I_C - \hat S^{-1}\hat R A\hat R^T \end{bmatrix}.$$

Using the assumptions on the approximations $\hat A_{FF}$ and $\hat S$ and the sign pattern of the M-matrix A, we get that [T.sub.MAMLI] is nonnegative.

Due to Proposition 3.3, Assumption 3.1 also leads to a nonnegative matrix [T.sub.MAMLI]. Next we prove the convergence of the MAMLI method.

THEOREM 4.7. Let Assumption 3.2 be satisfied, i.e., A is a nonsingular M-matrix partitioned as in (1.1) and the splittings $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ and $(\hat S, \hat S - \hat R A \hat R^T)$ are weak regular of the first type. Then

[rho]([T.sub.MAMLI]) [less than or equal to] [parallel][T.sub.MAMLI][[parallel].sub.w] < 1,

where w = [A.sup.-1]e for an arbitrary positive vector e. Moreover, [T.sub.MAMLI] is induced by a weak regular splitting of the first type, i.e.,

[T.sub.MAMLI] = I - [C.sub.MAMLI]A,

where [C.sub.MAMLI] is nonsingular and ([C.sup.-1.sub.MAMLI], [C.sup.-1.sub.MAMLI] - A) is a weak regular splitting of the first type of A.

Proof. As in the proof of Theorem 4.3, we use Proposition 4.1. We first write [T.sub.MAMLI] as [T.sub.MAMLI] = I - [C.sub.MAMLI]A. Then we establish that I - [C.sub.MAMLI]A satisfies the assumptions of Proposition 4.1. Since, by Lemma 4.6, [T.sub.MAMLI] is nonnegative, it suffices to show that [C.sub.MAMLI] is nonnegative and has no zero row. With

[T.sub.MAMLI] = (I - [P.sub.1])(I - [P.sub.2]),

where

$P_1 = \hat R^T \hat S^{-1} \hat R A$ and $P_2 = \hat r^T \hat A_{FF}^{-1} \hat r A$,

we easily obtain

[T.sub.MAMLI] = I - [P.sub.1] - [P.sub.2] + [P.sub.1][P.sub.2]

(4.5) = I - ([M.sub.CG] + [M.sub.S] - [M.sub.CG]A[M.sub.S])A

= I - [C.sub.MAMLI]A,

where

[C.sub.MAMLI] := [M.sub.CG] + [M.sub.S] - [M.sub.CG]A[M.sub.S]

and [M.sub.S] and [M.sub.CG] are as in (4.2) and (4.3).

Next we show that [C.sub.MAMLI] is a nonnegative matrix. As seen in the proof of Theorem 4.3, both matrices [M.sub.S] and [M.sub.CG] are nonnegative. For -[M.sub.CG]A[M.sub.S], we obtain

(4.6) $-M_{CG}AM_S = \begin{bmatrix} \hat A_{FF}^{-1}A_{FC}\hat S^{-1}A_{CF}\bigl(I_F - \hat A_{FF}^{-1}A_{FF}\bigr)\hat A_{FF}^{-1} & 0 \\ -\hat S^{-1}A_{CF}\bigl(I_F - \hat A_{FF}^{-1}A_{FF}\bigr)\hat A_{FF}^{-1} & 0 \end{bmatrix}.$

Using the M-matrix and splitting properties, we obtain that -[M.sub.CG]A[M.sub.S] is nonnegative. Consequently,

(4.7) [C.sub.MAMLI] = [M.sub.CG] + [M.sub.S] + (-[M.sub.CG]A[M.sub.S])

is also nonnegative.

Next, we prove that [C.sub.MAMLI] has no zero row. We already established that all three terms of [C.sub.MAMLI] in (4.7) are nonnegative, so it suffices to prove that the term [M.sub.CG] + [M.sub.S] has no zero row. But this was already done in the proof of Theorem 4.3. Now, using Proposition 4.1, we get that

[rho]([T.sub.MAMLI]) [less than or equal to] [parallel][T.sub.MAMLI][[parallel].sub.w] < 1.

Starting with an M-matrix A and approximations as in (3.1) and (3.2) or (3.3) and (3.4), respectively, we proved convergence of the AMLI and the MAMLI method. As far as we know, these are the first convergence results for these methods for a wide class of non-symmetric matrices.

Next, we consider the RMAMLI iteration matrix as given in (1.8)

[T.sub.RMAMLI] = (I - [P.sub.2]) (I - [P.sub.1]).

Since

[T.sub.MAMLI] = (I - [P.sub.1]) (I - [P.sub.2]) and [T.sub.RMAMLI] = (I - [P.sub.2]) (I - [P.sub.1]),

we immediately obtain, since the spectral radius of a product of two matrices does not depend on the order of the factors,

(4.8) [rho]([T.sub.RMAMLI]) = [rho]([T.sub.MAMLI]).

Thus, we have the following result.

COROLLARY 4.8. If Assumption 3.2 is satisfied, then

[rho]([T.sub.RMAMLI]) < 1.

Although the spectral radii of the iteration matrices of the RMAMLI and the MAMLI method are the same, there are significant differences in the structure of the iteration matrices. The iteration matrix of the RMAMLI method is not nonnegative in general as shown in the next example.
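The equality (4.8) of the spectral radii, together with the convergence of both methods, can be observed numerically; the matrix and approximations below are our own illustrative choices, with the operators following our reading of (1.4)-(1.5):

```python
import numpy as np

# rho(T_RMAMLI) = rho(T_MAMLI), since the two products only swap their
# factors. Matrix and approximations are our own illustrative choices.
n, nF = 6, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
AFF, AFC = A[:nF, :nF], A[:nF, nF:]
ACF, ACC = A[nF:, :nF], A[nF:, nF:]
I, IF, IC = np.eye(n), np.eye(nF), np.eye(n - nF)

AFFh_inv = np.linalg.inv(np.diag(np.diag(AFF)))
R  = np.hstack([-ACF @ AFFh_inv, IC])
RT = np.vstack([-AFFh_inv @ AFC, IC])
r  = np.hstack([IF, np.zeros((nF, n - nF))])
Sh = R @ A @ RT

P1 = RT @ np.linalg.solve(Sh, R @ A)
P2 = r.T @ AFFh_inv @ r @ A

rho = lambda X: np.max(np.abs(np.linalg.eigvals(X)))
T_m  = (I - P1) @ (I - P2)     # MAMLI
T_rm = (I - P2) @ (I - P1)     # RMAMLI
assert np.isclose(rho(T_m), rho(T_rm))
assert rho(T_m) < 1.0
```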

EXAMPLE 4.9. Consider the matrix used in Example 4.5

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Again we use the approximations

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Now we will analyze the SMAMLI iteration matrix as given in (1.9),

[T.sub.SMAMLI] = (I - [P.sub.2])(I - [P.sub.1])(I - [P.sub.2]).

THEOREM 4.10. Let Assumption 3.2 be satisfied. Then

[T.sub.SMAMLI] [greater than or equal to] 0, and [rho]([T.sub.SMAMLI])[less than or equal to][parallel][T.sub.SMAMLI][[parallel].sub.w] < 1,

where w = [A.sup.-1]e for an arbitrary positive vector e.

Proof. We easily obtain, using Lemma 4.6 and Proposition 4.4, that

(4.9) [T.sub.SMAMLI] = (I - [P.sub.2]) (I - [P.sub.1]) (I - [P.sub.2]) = (I - [P.sub.2])[T.sub.MAMLI] [greater than or equal to] 0.

With Theorem 4.7 and Proposition 4.4, we get

(4.10) $\rho(T_{SMAMLI}) \le \|T_{SMAMLI}\|_w \le \|I - P_2\|_w \, \|T_{MAMLI}\|_w = \|T_{MAMLI}\|_w < 1.$

In the remainder of this section, we compare the SMAMLI, MAMLI, and AMLI methods with respect to the weighted maximum norm of their iteration matrices.

THEOREM 4.11. Let Assumption 3.1 be satisfied. Then

$$\|T_{SMAMLI}\|_w \le \|T_{MAMLI}\|_w \le \|T_{AMLI}\|_w < 1,$$

where w = [A.sup.-1] e for an arbitrary positive vector e.

Proof. The inequality $\|T_{\mathrm{AMLI}}\|_w < 1$ was proved in Theorem 4.3. As shown in (4.4) and (4.5), we have

$T_{\mathrm{AMLI}} = I - (M_S + M_{CG})A,$

$T_{\mathrm{MAMLI}} = I - (M_S + M_{CG} - M_{CG} A M_S)A = T_{\mathrm{AMLI}} + M_{CG} A M_S A,$

where $M_S$ and $M_{CG}$ are defined in (4.2) and (4.3).

Due to Lemmas 4.2 and 4.6, both iteration matrices are nonnegative. Since $A$ is an M-matrix and $e$ is a positive vector, $w = A^{-1}e$ is also positive. Therefore, both vectors $T_{\mathrm{AMLI}}w$ and $T_{\mathrm{MAMLI}}w$ are positive.

As seen in (4.6), the term $-M_{CG} A M_S$ is nonnegative, so the vector $M_{CG} A M_S e$ is non-positive. Since $Aw = e$, we therefore obtain

$T_{\mathrm{MAMLI}}w = T_{\mathrm{AMLI}}w + M_{CG} A M_S e \leq T_{\mathrm{AMLI}}w,$ and hence $\|T_{\mathrm{MAMLI}}\|_w \leq \|T_{\mathrm{AMLI}}\|_w.$

The remaining inequality $\|T_{\mathrm{SMAMLI}}\|_w \leq \|T_{\mathrm{MAMLI}}\|_w$ follows directly from inequality (4.10).

In the following, we establish a result for the AMLI, MAMLI, RMAMLI, and SMAMLI methods. All these methods coincide if the block $A_{FF}$ is inverted exactly, i.e., if $\tilde A_{FF} = A_{FF}$, independent of the quality of the approximation $\tilde S$ of the Schur complement. This result holds for all nonsingular system matrices $A$, symmetric or non-symmetric, not only for M-matrices.

THEOREM 4.12. Let $A$ be a nonsingular matrix, partitioned in the following $2 \times 2$ block structure

$A = \begin{pmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{pmatrix}.$

If $A_{FF}$ is nonsingular and $\tilde A_{FF} = A_{FF}$, then

$T_{\mathrm{AMLI}} = T_{\mathrm{MAMLI}} = T_{\mathrm{RMAMLI}} = T_{\mathrm{SMAMLI}}$

holds for any approximation $\tilde S$ of $(A/A_{FF})$.

Proof. We have, with (4.4) and (4.5),

$T_{\mathrm{AMLI}} = I - (M_S + M_{CG})A,$

$T_{\mathrm{MAMLI}} = I - (M_S + M_{CG} - M_{CG} A M_S)A.$

Similarly, we obtain for the RMAMLI and the SMAMLI method

$T_{\mathrm{RMAMLI}} = I - (M_S + M_{CG} - M_S A M_{CG})A,$

$T_{\mathrm{SMAMLI}} = I - (2M_S + M_{CG} - M_S A M_S - M_{CG} A M_S - M_S A M_{CG} + M_S A M_{CG} A M_S)A.$

Computing the products shows that

$M_{CG} A M_S = M_S A M_{CG} = 0$

and

$M_S A M_S = M_S.$

Hence,

$T_{\mathrm{AMLI}} = T_{\mathrm{MAMLI}} = T_{\mathrm{RMAMLI}} = T_{\mathrm{SMAMLI}}.$
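Theorem 4.12 can be checked numerically. The sketch below assumes the standard forms of the smoothing and coarse-grid correction operators, $M_S = \begin{pmatrix}\tilde A_{FF}^{-1} & 0 \\ 0 & 0\end{pmatrix}$ and $M_{CG}$ built from the usual restriction/prolongation pair; these forms are an assumption here, not quoted from the paper's (4.2) and (4.3). The test matrix is an arbitrary nonsingular matrix (no M-matrix structure is needed for this theorem), and the Schur-complement approximation is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(1)
nf, nc = 4, 3
n = nf + nc
# A hypothetical nonsingular test matrix, partitioned into blocks.
A = rng.standard_normal((n, n)) + n * np.eye(n)
AFF, AFC = A[:nf, :nf], A[:nf, nf:]
ACF, ACC = A[nf:, :nf], A[nf:, nf:]

# Exact fine-block solve (the hypothesis of Theorem 4.12) and a crude
# Schur-complement approximation S_tilde (here just diag(A_CC); the
# theorem says its quality does not matter).
AFF_inv = np.linalg.inv(AFF)
S_tilde_inv = np.linalg.inv(np.diag(np.diag(ACC)))

# Assumed standard smoothing and coarse-grid correction operators:
MS = np.zeros((n, n)); MS[:nf, :nf] = AFF_inv
R = np.hstack([-ACF @ AFF_inv, np.eye(nc)])      # restriction
P = np.vstack([-AFF_inv @ AFC, np.eye(nc)])      # prolongation
MCG = P @ S_tilde_inv @ R

I = np.eye(n)
P1, P2 = MCG @ A, MS @ A
T_AMLI   = I - (MS + MCG) @ A
T_MAMLI  = (I - P1) @ (I - P2)
T_RMAMLI = (I - P2) @ (I - P1)
T_SMAMLI = (I - P2) @ (I - P1) @ (I - P2)

# With exact A_FF inversion, all four iteration matrices coincide.
assert np.allclose(T_MAMLI, T_AMLI)
assert np.allclose(T_RMAMLI, T_AMLI)
assert np.allclose(T_SMAMLI, T_AMLI)
```

With these operator forms, $M_{CG} A M_S = M_S A M_{CG} = 0$ and $M_S A M_S = M_S$ hold exactly once $\tilde A_{FF} = A_{FF}$, which is precisely the mechanism of the proof above.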

5. Multilevel convergence results. In Section 4, we proved convergence for two-level methods. Here we extend these convergence results to multilevel methods, i.e., the two-level AMLI or MAMLI technique is applied recursively on different levels, that is, on successively coarser grids. The so-called coarse grid system, the system involving the Schur complement, is then again solved or approximated by the AMLI or MAMLI approach.

To prove convergence for the multilevel methods, we need the following preliminary results.

LEMMA 5.1. Let $(\tilde A_{FF}, \tilde A_{FF} - A_{FF})$ be a weak regular splitting of first type of $A_{FF}$. Then the matrix $\hat A_{FF}$ defined by

$\hat A_{FF}^{-1} := 2\tilde A_{FF}^{-1} - \tilde A_{FF}^{-1} A_{FF} \tilde A_{FF}^{-1}$

is nonsingular, and the splitting

$(\hat A_{FF}, \hat A_{FF} - A_{FF})$

of $A_{FF}$ is weak regular of first type.

Proof. We have

$\hat A_{FF}^{-1} = \tilde A_{FF}^{-1} + (I - \tilde A_{FF}^{-1} A_{FF})\,\tilde A_{FF}^{-1}.$

Thus, $\hat A_{FF}^{-1}$ can be written as a sum of nonnegative terms. Since $\tilde A_{FF}^{-1}$ is nonsingular, $\tilde A_{FF}^{-1}$ has no zero row. Hence, $\hat A_{FF}^{-1}$ also has no zero row. Moreover,

$I - \hat A_{FF}^{-1} A_{FF} = (I - \tilde A_{FF}^{-1} A_{FF})^2 \geq 0.$

Thus, we have shown that

* $\hat A_{FF}^{-1}$ is nonnegative,

* $I - \hat A_{FF}^{-1} A_{FF}$ is nonnegative,

* $\hat A_{FF}^{-1}$ has no zero row.

Hence, with Proposition 4.1, the matrix $\hat A_{FF}^{-1}$ is nonsingular and $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ is a weak regular splitting of first type.
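The two identities used in the proof of Lemma 5.1 can be verified on a small example. The sketch below uses a hypothetical $3 \times 3$ M-matrix with its Jacobi splitting as the weak regular splitting of first type; these concrete choices are assumptions for illustration only.

```python
import numpy as np

# A small M-matrix (assumed example) and its Jacobi splitting (X, X - A_FF),
# which is weak regular of first type: X^{-1} >= 0 and I - X^{-1} A_FF >= 0.
A_FF = np.array([[ 4., -1., -1.],
                 [-1.,  4., -1.],
                 [-1., -1.,  4.]])
X_inv = np.diag(1.0 / np.diag(A_FF))
I = np.eye(3)

# Lemma 5.1's operator: Y^{-1} = 2 X^{-1} - X^{-1} A_FF X^{-1}.
Y_inv = 2 * X_inv - X_inv @ A_FF @ X_inv

# Y^{-1} = X^{-1} + (I - X^{-1} A_FF) X^{-1} is a sum of nonnegative terms,
# and I - Y^{-1} A_FF = (I - X^{-1} A_FF)^2, so the new splitting is again
# weak regular of first type.
assert np.all(Y_inv >= -1e-12)
assert np.all(I - Y_inv @ A_FF >= -1e-12)
assert np.allclose(I - Y_inv @ A_FF, (I - X_inv @ A_FF) @ (I - X_inv @ A_FF))
```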

THEOREM 5.2. Let $A$ be a nonsingular M-matrix partitioned as

$A = \begin{pmatrix} A_{FF} & A_{FC} \\ A_{CF} & A_{CC} \end{pmatrix}.$

Let $(\tilde A_{FF}, \tilde A_{FF} - A_{FF})$ be a weak regular splitting of first type of $A_{FF}$. Then

$A_{CC}$, $(A/\tilde A_{FF})$, and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

are nonsingular M-matrices. Moreover,

(5.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof. With Theorem 2.2, we have that $A_{CC}$ is a nonsingular M-matrix.

Next, we consider $(A/\tilde A_{FF})$. Since $(\tilde A_{FF}, \tilde A_{FF} - A_{FF})$ is a weak regular splitting, we obtain with Lemma 2.4 that

$0 \leq \tilde A_{FF}^{-1} \leq A_{FF}^{-1}.$

Using the M-matrix structure of $A$, we get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Similarly,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus, we get the inequalities

(5.2) $A_{CC} \geq (A/\tilde A_{FF}) \geq (A/A_{FF}).$

By Theorem 2.2, $A_{CC}$ is a nonsingular M-matrix, and with Theorem 2.3, $(A/A_{FF})$ is a nonsingular M-matrix as well. Hence, by Theorem 2.2, we obtain the desired property of $(A/\tilde A_{FF})$.

Next, consider [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. We obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where $\hat A_{FF}^{-1} := 2\tilde A_{FF}^{-1} - \tilde A_{FF}^{-1} A_{FF} \tilde A_{FF}^{-1}$.

With Lemma 5.1, $(\hat A_{FF}, \hat A_{FF} - A_{FF})$ is a weak regular splitting of first type. Hence, replacing $\tilde A_{FF}$ by $\hat A_{FF}$, we can follow the first part of the proof. Therefore, we obtain that [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is a nonsingular M-matrix. Thus, we get the inequalities

(5.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Next, we prove (5.1). We have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Since A is an M-matrix, the blocks [A.sub.CF] and [A.sub.FC] are non-positive. Therefore

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Together with (5.2) and (5.3), we then obtain (5.1)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
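The entrywise inequalities (5.2) are easy to observe numerically. The sketch below uses a hypothetical $4 \times 4$ M-matrix with $n_f = n_c = 2$ and the Jacobi approximation of $A_{FF}$ as the weak regular splitting; both choices are assumptions for illustration.

```python
import numpy as np

# A hypothetical 4x4 M-matrix split into 2x2 blocks (nf = nc = 2).
A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
AFF, AFC, ACF, ACC = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]

AFF_t_inv = np.diag(1.0 / np.diag(AFF))          # Jacobi approximation of A_FF^{-1}
S_approx = ACC - ACF @ AFF_t_inv @ AFC           # (A / Ã_FF)
S_exact  = ACC - ACF @ np.linalg.inv(AFF) @ AFC  # (A / A_FF)

# Entrywise inequalities (5.2): A_CC >= (A/Ã_FF) >= (A/A_FF).
assert np.all(ACC - S_approx >= -1e-12)
assert np.all(S_approx - S_exact >= -1e-12)
```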

Now, we turn to the multilevel methods. As mentioned at the beginning of this section, the coarse grid system is solved recursively with an iterative method, namely the same method that is used for the original system. To describe the multilevel methods, we also need a hierarchy of matrices $A^{(1)}, \dots, A^{(L)}$, where $A^{(1)} = A$, $A^{(2)}$ is an $n_c \times n_c$ matrix, etc.
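The recursive structure can be sketched in a few lines. The following is a simplified, assumed variant (not the paper's exact AMLI definition): the fine block is approximated by its diagonal, the approximate Schur complement serves as the next-level matrix as in (5.4), the coarsest system is solved directly, and the resulting multilevel operator is applied inside a stationary iteration. The contiguous fine/coarse partition and the Jacobi approximation are illustration choices only.

```python
import numpy as np

def amli_apply(A, b, levels):
    """Apply one (hypothetical) AMLI-type approximate block-factorization
    solve: approximate A_FF by its diagonal, take the approximate Schur
    complement as the next-level matrix (cf. (5.4)), and recurse on it.
    The first half of the unknowns is treated as the fine grid."""
    n = A.shape[0]
    if levels == 1 or n <= 2:
        return np.linalg.solve(A, b)          # direct solve on the coarsest level
    nf = n // 2
    AFF, AFC = A[:nf, :nf], A[:nf, nf:]
    ACF, ACC = A[nf:, :nf], A[nf:, nf:]
    AFF_t_inv = np.diag(1.0 / np.diag(AFF))   # Jacobi approximation: a weak regular splitting
    S = ACC - ACF @ AFF_t_inv @ AFC           # A^{(l+1)} = (A^{(l)} / Ã_FF^{(l)})
    yF = AFF_t_inv @ b[:nf]                   # forward substitution
    xC = amli_apply(S, b[nf:] - ACF @ yF, levels - 1)
    xF = yF - AFF_t_inv @ (AFC @ xC)          # backward substitution
    return np.concatenate([xF, xC])

# Stationary iteration x <- x + C(b - Ax) on a small M-matrix.
n = 16
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # a diagonally dominant M-matrix
b = np.ones(n)
x = np.zeros(n)
for _ in range(100):
    x = x + amli_apply(A, b - A @ x, levels=3)
# the residual norm shrinks toward zero, as the multilevel theory predicts
```

Used as a solver in this way, the method converges because each level's splitting is weak regular of first type and each coarse matrix is again an M-matrix by Theorem 5.2.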

First we consider the AMLI approach. Similarly to Assumption 3.1 in the two-level case, we need assumptions for the approximations in the multilevel case.

ASSUMPTION 5.3. Let $A^{(1)} \in \mathbb{R}^{n \times n}$ be a nonsingular M-matrix. For $l = 1, \dots, L-1$, assume that

* $A^{(l)}$ is partitioned in the $2 \times 2$ block structure $A^{(l)} = \begin{pmatrix} A^{(l)}_{FF} & A^{(l)}_{FC} \\ A^{(l)}_{CF} & A^{(l)}_{CC} \end{pmatrix},$

* $(\tilde A^{(l)}_{FF}, \tilde A^{(l)}_{FF} - A^{(l)}_{FF})$ is a weak regular splitting of first type,

* $A^{(l+1)}$ is given by

(5.4) $A^{(l+1)} = (A^{(l)}/\tilde A^{(l)}_{FF}).$

Moreover, assume that $(\tilde A^{(L)}, \tilde A^{(L)} - A^{(L)})$ is a weak regular splitting of first type.

Note that for L = 2, Assumption 5.3 is nothing other than Assumption 3.1.

We immediately obtain the following lemma.

LEMMA 5.4. If Assumption 5.3 holds, then all matrices $A^{(l)}$, $l = 1, \dots, L$, are nonsingular M-matrices.

Proof. The lemma can be proved by induction and Theorem 5.2.

The multilevel AMLI iteration matrix $T^{M}_{\mathrm{AMLI}}$ is then given by

$T^{M}_{\mathrm{AMLI}} = I - C^{(1)}_{\mathrm{AMLI}} A^{(1)},$

where for $l = 1, \dots, L-1$,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We obtain the following result.

THEOREM 5.5. Let Assumption 5.3 be satisfied. Then

$\rho(T^{M}_{\mathrm{AMLI}}) \leq \|I - C^{(1)}_{\mathrm{AMLI}} A^{(1)}\|_w < 1.$

Proof. For level $L-1$, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Since $(\tilde A^{(L)}, \tilde A^{(L)} - A^{(L)})$ is a weak regular splitting of first type and $A^{(L-1)}$ is an M-matrix (see Lemma 5.4), the assumptions of Theorem 4.3 are satisfied. Thus, by Theorem 4.3, $C^{(L-1)}_{\mathrm{AMLI}}$ is nonsingular and ([MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]) is a weak regular splitting of first type. Repeating this procedure for level $L-2$ and all other levels, we obtain

$\rho(T^{M}_{\mathrm{AMLI}}) \leq \|I - C^{(1)}_{\mathrm{AMLI}} A^{(1)}\|_w < 1.$

Next, we consider the MAMLI method. Similarly to Section 4, we can use a weaker assumption for the approximations used in the MAMLI method.

ASSUMPTION 5.6. Let $A^{(1)} \in \mathbb{R}^{n \times n}$ be a nonsingular M-matrix. For $l = 1, \dots, L-1$, assume that

* $A^{(l)}$ is partitioned in the $2 \times 2$ block structure

$A^{(l)} = \begin{pmatrix} A^{(l)}_{FF} & A^{(l)}_{FC} \\ A^{(l)}_{CF} & A^{(l)}_{CC} \end{pmatrix},$

* $(\tilde A^{(l)}_{FF}, \tilde A^{(l)}_{FF} - A^{(l)}_{FF})$ is a weak regular splitting of first type,

* $A^{(l+1)}$ is given by

(5.5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Moreover, assume that $(\tilde A^{(L)}, \tilde A^{(L)} - A^{(L)})$ is a weak regular splitting of first type.

Again, note that for $L = 2$, Assumption 5.6 is nothing other than Assumption 3.2. We immediately obtain the following lemma.

LEMMA 5.7. If Assumption 5.6 holds, then all matrices $A^{(l)}$, $l = 1, \dots, L$, are nonsingular M-matrices.

Proof. The lemma can be proved by induction and Theorem 5.2.

The multilevel MAMLI method is then given by the iteration matrix

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where, for [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We are now able to prove convergence of the multilevel MAMLI method.

THEOREM 5.8. Let Assumption 5.6 be satisfied. Then

$\rho(T^{M}_{\mathrm{MAMLI}}) \leq \|I - C^{(1)}_{\mathrm{MAMLI}} A^{(1)}\|_w < 1.$

Proof. The proof is similar to that of Theorem 5.5. Note that, by Theorem 5.2, all $A^{(l+1)}$ are M-matrices.

In our two-level convergence theorems, the assumptions on the approximations of the coarse grid system, i.e., of the Schur complement, are expressed in terms of the original matrix $A$. In detail, we used $(A/\tilde A_{FF})$ for the AMLI method and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] for the MAMLI method; see (3.2) in Assumption 3.1 and (3.4) in Assumption 3.2.

Thus, for the multilevel convergence theorems above, we used the corresponding formulation for building the coarse matrices on each level, i.e.,

$A^{(l+1)} = (A^{(l)}/\tilde A^{(l)}_{FF})$

for the AMLI method and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for the MAMLI method.

However, with similar arguments as in the proofs of the two-level and the multilevel convergence theorems, one can prove the following results.

THEOREM 5.9. Let Assumption 5.3 be satisfied, except that (5.4) is replaced by

$A^{(l+1)} = A^{(l)}_{CC}.$

Then

$\rho(T^{M}_{\mathrm{AMLI}}) \leq \|I - C^{(1)}_{\mathrm{AMLI}} A^{(1)}\|_w < 1.$

Proof. Obviously, all matrices $A^{(l+1)}$ are nonsingular M-matrices.

The proof for the multilevel method is based on the proof for the two-level method. Thus, for the two-level method we assume that $\tilde A_{FF}$ and $\tilde S$ are chosen such that the splitting $(\tilde A_{FF}, \tilde A_{FF} - A_{FF})$ is weak regular of first type and that the splitting

(5.6) $(\tilde S, \tilde S - A_{CC})$

is weak regular of first type. We omit the level indices here.

Using $I - \tilde S^{-1} A_{CC} \geq 0$, we get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus, $(\tilde S, \tilde S - (A/\tilde A_{FF}))$ is a weak regular splitting. Therefore, the assumptions of Theorems 4.2 and 4.3 are fulfilled, and we can follow their proofs. Hence, the new iteration matrix $\tilde T_{\mathrm{AMLI}}$, built using the approximation (5.6), is nonnegative. Moreover, we also have $\rho(\tilde T_{\mathrm{AMLI}}) < 1$, and $\tilde T_{\mathrm{AMLI}}$ is induced by a weak regular splitting [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We can then follow the proof of Theorem 5.5 to establish the convergence of the multilevel method.

For the MAMLI method, we have the following analogous result.

THEOREM 5.10. Let Assumption 5.6 be satisfied, except that (5.5) is replaced by

$A^{(l+1)} = A^{(l)}_{CC}$

or

$A^{(l+1)} = (A^{(l)}/\tilde A^{(l)}_{FF}).$

Then

$\rho(T^{M}_{\mathrm{MAMLI}}) \leq \|I - C^{(1)}_{\mathrm{MAMLI}} A^{(1)}\|_w < 1.$

Proof. The proof is similar to the proof of Theorem 5.9.

6. Conclusion. In this paper, we analyzed algebraic multilevel methods applied to non-symmetric M-matrices. We considered two types of AMG methods, the AMLI approach and the MAMLI method, and established convergence results for these methods used as solvers for non-symmetric M-matrix systems. Moreover, we derived algebraic properties of these methods and of the corresponding iteration matrices and splittings.

Acknowledgment. We would like to thank the referees for their helpful comments.

* Received June 21, 2007. Accepted for publication July 9, 2008. Published online on December 8, 2008. Recommended by Y. Saad.

REFERENCES

[1] O. AXELSSON, Stabilization of algebraic multilevel iteration methods, Numer. Algorithms, 21 (1999), pp. 23-47.

[2] O. AXELSSON AND M. NEYTCHEVA, Algebraic multilevel iteration method for Stieltjes matrices, Numer. Linear Algebra Appl., 1 (1994), pp. 213-236.

[3] O. AXELSSON AND A. PADIY, On the additive version of the algebraic multilevel iteration method for anisotropic elliptic problems, SIAM J. Sci. Comput., 20 (1999), pp. 1807-1830.

[4] O. AXELSSON AND P. VASSILEVSKI, Algebraic Multilevel Preconditioning Methods, I, Numer. Math., 56 (1989), pp. 157-177.

[5] --, Algebraic Multilevel Preconditioning Methods, II, SIAM J. Numer. Anal., 27 (1990), pp. 1569-1590.

[6] --, Variable-step multilevel preconditioning methods, I. self-adjoint and positive definite elliptic problems, Numer. Linear Algebra Appl., 1 (1994), pp. 75-101.

[7] R. BANK AND C. WAGNER, Multilevel ILU decomposition, Numer. Math., 82 (1999), pp. 543-576.

[8] M. BENZI, A. FROMMER, R. NABBEN, AND D. SZYLD, Algebraic theory of multiplicative Schwarz methods, Numer. Math., 89 (2001), pp. 605-639.

[9] A. BERMAN AND R. J. PLEMMONS, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.

[10] J. BRAMBLE AND J. PASCIAK, New estimates for multilevel algorithms including the V-cycle, Math. Comp., 60 (1993), pp. 447-471.

[11] A. BRANDT, General highly accurate algebraic coarsening, Electron. Trans. Numer. Anal., 10 (2000), pp. 1-20, http://etna.math.kent.edu/vol.10.2000/pp1-20.dir/pp1-20.html.

[12] A. BRANDT, S. MCCORMICK, AND J. RUGE, Algebraic multigrid (AMG) for sparse matrix equations, in Sparsity and Its Applications (Loughborough, 1983), Cambridge Univ. Press, Cambridge, 1985, pp. 257-284.

[13] R. FALGOUT AND P. VASSILEVSKI, On generalizing the AMG framework, SIAM J. Numer. Anal., 42 (2004), pp. 1669-1693.

[14] R. FALGOUT, P. VASSILEVSKI, AND L. ZIKATANOV, On two-grid convergence estimates, Numer. Linear Algebra Appl., 12 (2005), pp. 471-494.

[15] A. FROMMER, H. SCHWANDT, AND D. SZYLD, Asynchronous weighted additive Schwarz methods, Electron. Trans. Numer. Anal., 5 (1997), pp. 48-61, http://etna.math.kent.edu/vol.5.1997/pp48-61.dir/pp48-61.html.

[16] A. FROMMER AND D. SZYLD, An algebraic convergence theory for restricted additive Schwarz methods using weighted max norms, SIAM J. Numer. Anal., 39 (2001), pp. 463-479.

[17] M. GRIEBEL AND P. OSWALD, On the abstract theory of additive and multiplicative Schwarz algorithms, Numer. Math., 70 (1995), pp. 163-180.

[18] W. HACKBUSCH, Multigrid Methods and Applications, Vol. 4 of Springer Series in Computational Mathematics, Springer, Berlin, 1985.

[19] W. HACKBUSCH, Iterative Solution of Large Sparse Systems of Equations, Springer, New York, 1994.

[20] A. N. LANGVILLE AND C. D. MEYER, Google's PageRank and Beyond: the Science of Search Engine Rankings, Princeton University Press, Princeton, NJ, 2006.

[21] S. MACLACHLAN, T. MANTEUFFEL, AND S. MCCORMICK, Adaptive reduction-based AMG, Numer. Linear Algebra Appl., 13 (2006), pp. 599-620.

[22] S. MARGENOV, L. XANTHIS, AND L. ZIKATANOV, On the optimality of the semi-coarsening AMLI algorithm, in Iterative Methods in Linear Algebra II, Vol. 3 of IMACS Series in Computational and Applied Mathematics, NJ, 1995, pp. 254-269.

[23] J. MEIJERINK AND H. VAN DER VORST, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix, Math. Comp., 31 (1977), pp. 148-162.

[24] C. MENSE, Algebraic Schwarz methods using Schur complements, Diploma thesis, Fakultät für Mathematik, Universität Bielefeld, 2002.

[25] R. NABBEN, A note on comparison theorems for splittings and multisplittings of hermitian positive definite matrices, Linear Algebra Appl., 233 (1996), pp. 67-80.

[26] --, Comparisons between multiplicative and additive Schwarz iterations in domain decomposition methods, Numer. Math., 95 (2003), pp. 145-162.

[27] M. NEYTCHEVA, O. AXELSSON, AND K. GEORGIEV, An application of the AMLI method for solving convection-diffusion problems with potential velocity field, in Algebraic Multilevel Iteration Methods with Applications, Vol. I, II (Nijmegen, 1996), Katholieke Univ. Nijmegen, Dept. Math., Nijmegen, 1996, pp. 197-210.

[28] Y. NOTAY, A robust algebraic multilevel preconditioner for non-symmetric M-matrices, Numer. Linear Algebra Appl., 7 (2000), pp. 243-267.

[29] --, Algebraic multigrid and algebraic multilevel methods: a theoretical comparison, Numer. Linear Algebra Appl., 12 (2005), pp. 419-451.

[30] --, Aggregation-based algebraic multilevel preconditioning, SIAM J. Matrix Anal. Appl., 27 (2006), pp. 998-1018 (electronic).

[31] J. M. ORTEGA, Numerical Analysis - a Second Course, Academic Press, New York, 1972.

[32] J. W. RUGE AND K. STÜBEN, Algebraic multigrid, in Multigrid Methods, Vol. 3 of Frontiers Appl. Math., SIAM, Philadelphia, PA, 1987, pp. 73-130.

[33] Y. SAAD AND B. SUCHOMEL, ARMS: an algebraic recursive multilevel solver for general sparse linear systems, Numer. Linear Algebra Appl., 9 (2002), pp. 359-378.

[34] K. STÜBEN, Algebraic multigrid (AMG): experiences and comparisons, Appl. Math. Comput., 13 (1983), pp. 419-451.

[35] --, An introduction to algebraic multigrid, in Multigrid, U. Trottenberg, C. W. Oosterlee, and A. Schüller, eds., Academic Press, London, 2001, pp. 413-528.

[36] A. TOSELLI AND O. WIDLUND, Domain Decomposition Methods: Algorithms and Theory, Vol. 34 of Springer Series in Computational Mathematics, Springer, Berlin, 2005.

[37] U. TROTTENBERG, C. OOSTERLEE, AND A. SCHÜLLER, Multigrid, Academic Press, London, 2001.

[38] R. S. VARGA, Matrix Iterative Analysis, Second expanded edition, Vol. 27 of Springer Series in Computational Mathematics, Springer, Berlin, 2000.

[39] C. WAGNER, On the algebraic construction of multilevel transfer operators, Computing, 65 (2000), pp. 73-95.

[40] Z. I. WOŹNICKI, Nonnegative splitting theory, Japan J. Indust. Appl. Math., 11 (1994), pp. 289-342.

[41] J. XU, Iterative methods by space decomposition and subspace correction, SIAM Rev., 34 (1992), pp. 581-613.

[42] D. M. YOUNG, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.

CHRISTIAN MENSE ([dagger]) AND REINHARD NABBEN ([dagger])

([dagger]) Institut für Mathematik, MA 3-3, Technische Universität Berlin, Straße des 17. Juni 136, D-10623 Berlin, Germany ({mense,nabben}@math.tu-berlin.de).

Publication: Electronic Transactions on Numerical Analysis, 2008.