# A quadratically convergent Bernoulli-like algorithm for solving matrix polynomial equations in Markov chains.

Abstract. A quadratically convergent algorithm is developed for solving matrix polynomial equations arising in M/G/1 and G/M/1 type Markov chains. The algorithm is based on the computation of generalized block eigenvalues/vectors of a suitable pair of matrices by means of a Bernoulli-like method. The use of the displacement structure allows one to reduce the computational cost per step. A shifting technique speeds up the rate of convergence.

Key words. polynomial matrix equations, Markov chains, generalized eigenvalues/eigenvectors, displacement structure.

AMS subject classifications. 15A24, 60J22, 65F15.

1. Introduction. We develop a quadratically convergent algorithm for computing the component-wise minimal nonnegative solution of the matrix polynomial equation

G = \sum_{i=0}^{n} A_i G^i, \qquad (1.1)

where $A_i$, $i = 0, 1, \dots, n$, are nonnegative $m \times m$ matrices and $A = \sum_{i=0}^{n} A_i$ is irreducible and stochastic. The computation of $G$ is fundamental in the numerical solution of M/G/1 type Markov chains. Indeed, Markov chains of M/G/1 type, introduced by M.F. Neuts, are characterized by block upper Hessenberg transition probability matrices, which are "almost" block Toeplitz. Due to this structure of the transition probability matrix, the computation of the steady state vector, and of other important performance measures, ultimately reduces to the computation of the matrix $G$.

We also consider the dual problem

R = \sum_{i=0}^{n} R^i A_i, \qquad (1.2)

under the same conditions on the $A_i$. Such a problem arises in G/M/1 type Markov chains, whose transition probability matrix is in block lower Hessenberg form and is "almost" block Toeplitz. For this class of Markov chains as well, the computation of the steady state vector, and of other important performance measures, ultimately reduces to solving (1.2).

We assume that in both problems the associated Markov chain is irreducible and positive recurrent. Under this assumption there exist componentwise minimal nonnegative solutions of (1.1) and (1.2), and these solutions satisfy $\rho(G) = 1$ and $\rho(R) < 1$, respectively, where $\rho(\cdot)$ denotes the spectral radius. Our algorithm computes these minimal solutions.
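For intuition, the classical linearly convergent fixed point iteration against which faster methods are measured can be sketched as follows; the coefficient matrices below are illustrative values chosen for this sketch (not taken from the paper), with $A = A_0 + A_1 + A_2$ stochastic and the chain positive recurrent.

```python
import numpy as np

# Illustrative M/G/1-type coefficients (m = 2, n = 2).
A0 = np.array([[0.5, 0.1], [0.1, 0.5]])
A1 = np.zeros((2, 2))
A2 = np.array([[0.15, 0.25], [0.25, 0.15]])

# Natural fixed point iteration G <- A0 + A1 G + A2 G^2, started from 0;
# it converges monotonically to the minimal nonnegative solution.
G = np.zeros((2, 2))
for _ in range(500):
    G = A0 + A1 @ G + A2 @ (G @ G)

residual = np.linalg.norm(A0 + A1 @ G + A2 @ (G @ G) - G, np.inf)
print(residual < 1e-10, np.allclose(G.sum(axis=1), 1.0))  # minimal G is stochastic here
```

The quadratically convergent method developed below replaces the many linear steps of this iteration with a few doubling steps.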

In recent years, several algorithms for solving these nonlinear matrix equations have been designed. Besides the classical fixed point iterations [28, 22, 25], quadratically convergent algorithms have been developed, based on Newton's method, on cyclic reduction [24, 7, 8, 6], and on the computation of invariant subspaces. Here we propose a different approach, based on the following observation: by setting

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1.3)

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1.4)

we find that (1.1) implies

Cg = DgG. \qquad (1.5)

As pointed out in the literature, equation (1.5) means that $G$ solves (1.1) if and only if the columns of the block vector $g$ span a deflating subspace for the pair $(C, D)$. We can also say that $G$ is a block generalized eigenvalue of the pair $(C, D)$, with corresponding block eigenvector $g$.

Observe that $C$ is a block Frobenius matrix, possibly singular. The matrix $D$ may also be singular.
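The definitions (1.3) and (1.4) are not legible in this copy; as an assumed concrete instance consistent with the properties stated here ($C$ block Frobenius, $D$ singular when $A_n$ is), the sketch below uses the standard block companion linearization and checks numerically that $\det(zD - C)$ agrees with $\det(zI - \sum_i z^i A_i)$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = [rng.random((m, m)) * 0.1 for _ in range(n + 1)]  # A_0, ..., A_n

# Assumed companion pencil: C has identity blocks on the superdiagonal and
# last block row [A_0, A_1 - I, A_2, ..., A_{n-1}]; D = diag(I, ..., I, -A_n).
C = np.zeros((n * m, n * m))
for i in range(n - 1):
    C[i * m:(i + 1) * m, (i + 1) * m:(i + 2) * m] = np.eye(m)
last = (n - 1) * m
for j in range(n):
    C[last:, j * m:(j + 1) * m] = A[j] - (np.eye(m) if j == 1 else 0)
D = np.eye(n * m)
D[last:, last:] = -A[n]

# det(z D - C) equals a(z) = det(z I - sum_i z^i A_i) for every z.
z = 0.3 + 0.7j
Az = sum(z ** i * A[i] for i in range(n + 1))
lhs = np.linalg.det(z * D - C)
rhs = np.linalg.det(z * np.eye(m) - Az)
print(np.isclose(lhs, rhs))
```

With this choice, the block vector $g = (I, G, G^2, \dots, G^{n-1})^T$ satisfies $Cg = DgG$ whenever $G$ solves (1.1).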

Our algorithm provides an approximation of the generalized block eigenvalue $G$ as follows: we generate two sequences of matrices $\{C^{(k)}\}_{k \ge 0}$, $\{D^{(k)}\}_{k \ge 0}$ satisfying

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Due to the spectral properties of $G$, we show that a suitable submatrix of $C^{(k)}$ converges quadratically to zero as $k \to \infty$. This allows one to compute a finite number of matrices $C^{(k)}$, $D^{(k)}$, $k = 1, \dots, K$, for a suitable $K$, and then to recover an approximation of $G$ by solving an $m \times m$ linear system. We show that the matrices $D^{(k)}$, $k \ge 0$, are sparse, and that their computation requires only $O(m^3 n)$ arithmetic operations (ops). The structure of the matrices $C^{(k)}$, $k \ge 0$, is less evident, since they are full matrices. However, we show that the block displacement rank of $C^{(k)}$ is at most 3. Thus, the concept of displacement rank allows one to exploit the structure of $C^{(k)}$ and to represent it by means of a few block vectors. Such vectors can be computed by means of Fast Fourier Transforms, with a computational cost of $O(m^2 n \log n + m^3 n)$ ops.

The resulting algorithm is quadratically convergent, and the computational cost of each step is $O(m^2 n \log n + m^3 n)$ ops. Finally, we increase the speed of convergence by means of a shifting technique.

For the dual matrix equation (1.2) we propose a similar algorithm.

The idea of solving polynomial matrix equations by computing a block eigenvalue/eigenvector, or a deflating subspace, is not new. In [12, 14, 16, 19] a block Bernoulli iteration is applied to compute a block eigenvalue of the block Frobenius matrix associated with the matrix polynomial equation. In the Markov chains framework, the matrix $G$ has been approximated by computing the invariant subspace of a suitable block Frobenius matrix. More generally, in [19, 2] the solution of the polynomial matrix equation is expressed in terms of a generalized Schur decomposition of $C$ and $D$, and a Schur method is applied to compute this decomposition. This approach has been applied to several classes of polynomial and rational matrix equations; however, the authors of that work remark that, for polynomial matrix equations of degree greater than 2, they do not know how the structure of the block Frobenius matrix can be exploited to compute the generalized Schur decomposition.

The paper is organized as follows. In Section 2 we describe our Bernoulli-like algorithm. In Section 3 we analyze the displacement structure of the matrices $C^{(k)}$, $k \ge 0$. In Section 4 the algorithm is adapted to solve (1.2). In Section 5 we propose a shifting technique to speed up the convergence. Finally, in Appendix A we recall the concept of displacement rank and its main properties.

2. The basic algorithm for G. In the following, for a positive integer $h$, we denote by $I_h$ the $h \times h$ identity matrix. Moreover, we denote by $e_1$ the $m(n-1) \times m$ matrix formed by the first $m$ columns of $I_{m(n-1)}$, i.e.,

e_1 = [I_m, O, \dots, O]^T.

The matrix $I_m - A_1$ is a nonsingular M-matrix; thus we assume, without loss of generality, that $I_m - A_1 = I_m$, i.e., $A_1 = 0$. Indeed, in the general case, we may multiply the matrix equation (1.1) on the left by $(I_m - A_1)^{-1}$.

Let $G$ be the minimal nonnegative solution of the matrix equation (1.1), and let $C$ and $D$ be the $n \times n$ block matrices defined in (1.4), so that

Cg = DgG, \qquad (2.1)

where $g$ is the $n$-block vector defined in (1.3). Let us denote by $C_{n-1,n-1}$ the $(n-1) \times (n-1)$ block trailing principal submatrix of $C$. Since we are assuming $A_1 = 0$, it is a simple matter to verify that $C_{n-1,n-1}$ is nonsingular and that its inverse is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

In particular, by defining

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

we obtain, from (2.1), that

\hat{C} g = \hat{D} g G.

The matrices $\hat{C}$ and $\hat{D}$ are explicitly given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Set $C^{(0)} = \hat{C}$, $D^{(0)} = \hat{D}$, and partition these matrices into $2 \times 2$ block matrices as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2)

Define the sequences of matrices $\{C^{(k)}\}_{k \ge 0}$, $\{D^{(k)}\}_{k \ge 0}$,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.3)

such that

T^{(k)} = I_{m(n-1)} + d^{(k)} e_1^T,

and $d^{(k)}$, $W^{(k)}$, $V^{(k)}$, $s^{(k)}$ are defined by means of the following recursions, starting from (2.2),

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.4)

where

Y^{(k)} = I_{m(n-1)} + d^{(k)} e_1^T + e_1 A_0 s^{(k)T} = T^{(k)} + e_1 A_0 s^{(k)T}, \qquad (2.5)

provided that $Y^{(k)}$ is nonsingular for every $k \ge 0$.

THEOREM 2.1. Assume that the matrix $Y^{(k)}$ of (2.5) is nonsingular for every $k \ge 0$. Then the matrices $C^{(k)}$, $D^{(k)}$, $k \ge 1$, are well defined and satisfy the following properties:

1. $C^{(k+1)} = L^{(k)} C^{(k)}$, $D^{(k+1)} = U^{(k)} D^{(k)}$, $k \ge 0$, where $L^{(k)}$, $U^{(k)}$ are matrices with the structure

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

such that $U^{(k)} C^{(k)} = L^{(k)} D^{(k)}$;

2. $C^{(k)} g = D^{(k)} g G^{2^k}$, for $k = 0, 1, \dots$

Proof. We prove the first part of the theorem by induction. For $k = 0$, let the matrices $L^{(0)}$, $U^{(0)}$ have the structure stated in the theorem and satisfy $U^{(0)} C^{(0)} = L^{(0)} D^{(0)}$. Equating the block entries in the latter equality, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

whence we deduce that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.6)

If we define the matrices $H = L^{(0)} C^{(0)}$, $K = U^{(0)} D^{(0)}$, from the structure of $C^{(0)}$, $D^{(0)}$, $L^{(0)}$, $U^{(0)}$, we find that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

By substituting relations (2.6) into the above equations, we deduce from (2.4) that $H = C^{(1)}$, $K = D^{(1)}$. The inductive step can be proved by the same arguments. Concerning the second part of the theorem, observe that from $C^{(1)} = L^{(0)} C^{(0)}$ and $D^{(1)} = U^{(0)} D^{(0)}$ we have

C^{(1)} g = L^{(0)} C^{(0)} g = L^{(0)} D^{(0)} g G = U^{(0)} C^{(0)} g G = U^{(0)} D^{(0)} g G^2 = D^{(1)} g G^2.

Thus, by induction, it follows that $C^{(k)} g = D^{(k)} g G^{2^k}$ for $k = 0, 1, \dots$

REMARK 2.2 (Relation with cyclic reduction). The matrices $V^{(k)}$, $Y^{(k)}$ can be viewed as the blocks generated at the $k$-th step of cyclic reduction applied to a suitable block tridiagonal infinite matrix [13, 7]. For this purpose, let us define the matrices

F^{(k)} = e_1 A_0 W^{(k)} e_1^T.

Then, from (2.4) it follows that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Moreover, we can write

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and we can easily observe that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The above recurrences, together with the relation for the matrices $V^{(k)}$ in (2.4), allow one to conclude that the matrices $T^{(k)}$, $V^{(k)}$, $F^{(k)}$, $Y^{(k)}$ are the blocks obtained at the $k$-th step of cyclic reduction applied to the infinite matrix

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.7)

Consider now the relation $C^{(k)} g = D^{(k)} g G^{2^k}$, $k = 0, 1, \dots$. By deleting the first block row in this equality, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

i.e.,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.8)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

If the right-hand side of equation (2.8) converges to zero, we may approximate $G$ with the first block entry $X_1$ of the solution of the linear system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.9)

for a sufficiently large value of $k$. Moreover, due to the special structure of $T^{(k)}$, in order to compute $X_1$ in (2.9) it suffices to solve the $m \times m$ linear system

(I_m + d_1^{(k)}) X_1 = A_0, \qquad (2.10)

where $d_1^{(k)}$ denotes the first block component of the block vector $d^{(k)}$. This convergence property indeed holds, as we show in the next theorem. Before stating the convergence result, we introduce the polynomial of degree at most $mn$

a(z) = \det\Bigl(zI - \sum_{i=0}^{n} z^i A_i\Bigr). \qquad (2.11)

We may assume, without loss of generality, that 1 is the only zero of $a(z)$ having modulus one. Under this assumption, since the M/G/1 Markov chain is positive recurrent, the function $a(z)$ has $m$ zeros in the closed unit disk (including a zero equal to 1) and $m(n-1)$ zeros outside the unit disk, where zeros are set equal to infinity if $A_n$ is singular; moreover, the matrix $G$ is stochastic and its eigenvalues are the zeros of $a(z)$ in the closed unit disk (see [27, 15]).
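To make this zero structure concrete, consider the scalar case $m = 1$ with illustrative values chosen here (not from the paper); the quantity $\sigma = 1/\min\{|z| : |z| > 1,\ a(z) = 0\}$ used below is then easy to read off.

```python
import numpy as np

# Scalar (m = 1) positive recurrent case: a_0 = 0.6, a_1 = 0, a_2 = 0.4,
# with drift sum(i * a_i) = 0.8 < 1. Then a(z) = z - 0.6 - 0.4 z^2.
# np.roots takes coefficients from the highest degree down.
roots = np.roots([-0.4, 1.0, -0.6])

# Exactly m = 1 zero in the closed unit disk (z = 1) and n - 1 = 1 outside.
sigma = 1.0 / min(abs(z) for z in roots if abs(z) > 1 + 1e-8)
print(np.isclose(min(abs(z) for z in roots), 1.0))
print(np.isclose(sigma, 1 / 1.5))
```

Here the zero outside the unit disk is $z = 1.5$, so $\sigma = 2/3$ governs the quadratic convergence rate below.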

THEOREM 2.3. Let $\sigma = 1/\min\{|z| : |z| > 1,\ a(z) = 0\}$. Then, for any matrix norm and for any $\epsilon > 0$ such that $\epsilon + \sigma < 1$, one has $\|V^{(k)}\| = O((\sigma + \epsilon)^{2^k})$.

Proof. Consider the polynomial $q(z) = \det(zI - F^{(0)} - z^2 V^{(0)})$, having degree at most $2m(n-1)$. Since the rank of $F^{(0)}$ is at most $m$, it follows that $\xi = 0$ is a zero of $q(z)$ of multiplicity at least $(n-2)m$. Moreover, by the properties of block Frobenius matrices, if $\xi$ is a zero of $a(z)$, then $\xi$ is a zero of $q(z)$. Thus, $\xi$ is a zero of $q(z)$ if and only if $\xi$ is a zero of $z^{(n-2)m} a(z)$. From this property, since the M/G/1 Markov chain is positive recurrent, it follows that $q(z)$ has exactly $m(n-1)$ zeros inside the closed unit disk. Hence, the matrix equations $X = V^{(0)} + X^2 F^{(0)}$ and $X = F^{(0)} + V^{(0)} X^2$ have minimal nonnegative solutions, the first with spectral radius $\sigma$ and the second stochastic. From Remark 2.2 it follows that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

From the results on cyclic reduction applied to quadratic matrix polynomials [7, 4], the sequence $\{F^{(k)}\}_k$ is uniformly bounded and $\{V^{(k)}\}_k$ converges to zero as $O((\sigma + \epsilon)^{2^k})$, for any $\epsilon > 0$ such that $\sigma + \epsilon < 1$.

Since $G$ is a stochastic matrix, the entries of $g_1$ and of $G^{2^k}$ are nonnegative and bounded from above by a constant. Therefore, for the right-hand side of (2.8), we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

As a corollary of the above theorem, we also have that, if the norm of [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] remains bounded from above by a constant for all $k$, then the sequence $\{d^{(k)}\}_k$ converges. The resulting algorithm is the following:

ALGORITHM 2.4.

INPUT: Nonnegative matrices $A_i$, $i = 0, \dots, n$; a small real number $\epsilon > 0$ for the stopping condition; a matrix norm $\|\cdot\|$.

OUTPUT: An approximation $\tilde{G}$ of the minimal nonnegative solution of (1.1).

COMPUTATION:

1. Set $A_0 = (I - A_1)^{-1} A_0$, $A_i = (I - A_1)^{-1} A_i$, $i = 2, \dots, n$, $A_1 = 0$.

2. Set $k = 0$ and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

3. Compute

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

4. If $\|d_1^{(k+1)} - d_1^{(k)}\| > \epsilon$, where $d_1^{(k)}$ is the first block component of $d^{(k)}$, then set $k = k + 1$ and repeat step 3. Otherwise, solve the linear system (2.10) and output $\tilde{G} = X_1$.

From (2.5) it follows that the matrix $Y^{(k)}$, $k \ge 0$, is the identity matrix plus a correction in the first block row and in the first block column. In particular, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Hence the computation of $d^{(k)}$, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], $W^{(k)}$ and $s^{(k)}$ can be reduced to operations between block vectors, with a computational cost of $O(m^3 n)$ arithmetic operations. The matrix $V^{(k)}$, $k \ge 1$, has no evident structure, since it is a full matrix; for this reason its computation can cost as much as $O(m^3 n^2)$ arithmetic operations, which is very high. In Section 3 we show that the matrix $V^{(k)}$ also has a structure, which relies on the concept of displacement rank. Using this concept, we show that $V^{(k)}$ can be represented by at most 4 block vectors, and that these block vectors can be computed by means of FFTs in $O(m^3 n + m^2 n \log n)$ ops.

Concerning the nonsingularity of $Y^{(k)}$: from Remark 2.2, the matrix $Y^{(k)}$ is nonsingular if and only if the $(2^k - 1) \times (2^k - 1)$ block leading principal submatrix of the matrix in (2.7) is nonsingular. We were not able to show that these submatrices are nonsingular; in the numerical experiments we never encountered singularity. In any case, should singularity occur, a doubling strategy can be applied until nonsingularity is obtained. The same technique can be used to overcome possible ill-conditioning of the matrices $Y^{(k)}$. It can be shown that the sequence $\{Y^{(k)}\}_k$ converges to a nonsingular matrix; however, it is not clear what the conditioning of the limit matrix is.

In principle, we could apply our algorithm directly to the pair $(C, D)$ instead of the pair $(\hat{C}, \hat{D})$; in other words, we could choose $C^{(0)} = C$ and $D^{(0)} = D$ instead of $C^{(0)} = \hat{C}$ and $D^{(0)} = \hat{D}$. However, starting with $C^{(0)} = \hat{C}$ and $D^{(0)} = \hat{D}$ simplifies the structure of the matrices $\{C^{(k)}\}_{k \ge 1}$, $\{D^{(k)}\}_{k \ge 1}$ that we generate, compared to the case $C^{(0)} = C$ and $D^{(0)} = D$. Thus we have chosen to use the pair $(\hat{C}, \hat{D})$.

3. The algorithm revised with displacement properties. In this section we show that the matrices $V^{(k)}$, $k \ge 0$, have a displacement structure. In particular, we show that they have block displacement rank at most 3, and we provide recursive formulas for the generators. These formulas allow us to implement a version of our algorithm with low computational cost, which fully exploits the Toeplitz structure of the involved matrices. In Appendix A we recall the concept and properties of displacement rank, the definition of block Toeplitz-like matrices, and the fast algorithms for computing the product of a block Toeplitz matrix and a block vector.

For any $k \ge 0$, let us define the quadratic matrix polynomial

\varphi^{(k)}(z) = -V^{(k)} + z Y^{(k)} - z^2 e_1 A_0 W^{(k)} e_1^T.

The next theorem shows that, for any $z$ and for any $k \ge 0$, the matrix $\varphi^{(k)}(z)$ has displacement rank at most 3, with respect to the block displacement operator $\Delta(A) = ZA - AZ$, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

THEOREM 3.1. For any $z$ and for any $k \ge 0$ we have

\Delta(\varphi^{(k)}(z)) = (u^{(k)} + z e_1 Q^{(k)}) e_1^T \varphi^{(k)}(z) + \varphi^{(k)}(z) e_1 (r^{(k)T} + z W^{(k)} e_1^T) - e_1 \gamma^T \varphi^{(k)}(z), \qquad (3.1)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.2)

Proof. Let us prove the theorem by induction:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

It is a simple calculation to show that

-e_2 z e_1^T - e_1 e_1^T \varphi^{(0)}(z) Z = -e_1 z e_1^T \varphi^{(0)}(z) + \varphi^{(0)}(z) e_1 z e_1^T - e_1 \gamma^T \varphi^{(0)}(z).

Hence

\Delta(\varphi^{(0)}(z)) = (e_2 - z e_1) e_1^T \varphi^{(0)}(z) + \varphi^{(0)}(z) e_1 z e_1^T - e_1 \gamma^T \varphi^{(0)}(z).

Now, assume that (3.1) holds for some $k \ge 0$; we show that it holds for $k + 1$. By Remark 2.2, $\varphi^{(k+1)}(z^2) = 2 z B(z)^{-1}$, where $B(z) = (\varphi^{(k)}(z))^{-1} - (\varphi^{(k)}(-z))^{-1}$. Thus, since

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

we obtain, after some algebraic manipulations:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

From the above theorem, since $V^{(k)} = -\varphi^{(k)}(0)$, we immediately obtain the following property of the matrices $V^{(k)}$, $k \ge 0$:

THEOREM 3.2. For any $k \ge 0$ the matrices $V^{(k)}$ have the following displacement structure:

\Delta(V^{(k)}) = u^{(k)} e_1^T V^{(k)} + V^{(k)} e_1 r^{(k)T} - e_1 \gamma^T V^{(k)},

where $u^{(k)}$, $r^{(k)}$, $\gamma$ are defined in (3.2).

By the above theorem, the matrix $V^{(k)}$ is fully determined by the block vectors $u^{(k)}$, $r^{(k)}$, $e_1^T V^{(k)}$, $V^{(k)} e_1$, $\gamma^T V^{(k)}$. These vectors can be computed using the fast techniques recalled in Appendix A, which rely on the displacement structure of $V^{(k)}$. In particular, from (2.4), we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus, given $e_1^T V^{(k)}$, the vector [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is computed in $O(n m^3)$ ops, by using the structure of [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. The product $\gamma^T V^{(k)}$ is computed by exploiting the displacement structure of $V^{(k)}$, in $O(m^2 n \log n + m^3 n)$ ops.

4. Algorithm for R. Our algorithm can also be used to solve the dual problem, that is, to compute the matrix $R$ satisfying the polynomial equation (1.2).

Since the Markov chain is positive recurrent, the polynomial $a(z)$ of (2.11) has $m$ zeros inside the open unit disk and $m(n-1)$ zeros outside the open unit disk. Moreover, we may assume, without loss of generality, that $z = 1$ is the only zero of modulus one. The matrix $R$ has spectral radius less than one, and its eigenvalues are the zeros of $a(z)$ in the open unit disk.

Note that equation (1.2) is equivalent to

R^T = A_0^T + A_1^T R^T + \dots + A_n^T (R^T)^n,

and $R^T$ satisfies $C r = D r R^T$, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and $C$, $D$ are defined as in (1.4), with the $A_i$ replaced by $A_i^T$.

Thus, we may apply exactly the same algorithm. As for the algorithm for $G$, $\xi$ is a zero of $q(z) = \det(zI - F^{(0)} - z^2 V^{(0)})$ if and only if $\xi$ is a zero of $a(z) z^{m(n-2)}$. In particular, since $a(z)$ has exactly $m$ zeros inside the open unit disk, $q(z)$ has exactly $m(n-1)$ zeros inside the open unit disk.

The matrix $V^{(0)} + F^{(0)}$ is not stochastic, but a simple calculation shows that the matrix $\tilde{V}^{(0)} + \tilde{F}^{(0)}$ is stochastic, where $\tilde{V}^{(0)} = (I \otimes D(\pi))^{-1} V^{(0)} (I \otimes D(\pi))$, $\tilde{F}^{(0)} = (I \otimes D(\pi))^{-1} F^{(0)} (I \otimes D(\pi))$, $D(\pi)$ is the $m \times m$ diagonal matrix whose diagonal elements are the entries of $\pi$, and $\pi$ is the steady state vector of $A = \sum_{i=0}^{n} A_i$.

The matrix equations $X = \tilde{V}^{(0)} + X^2 \tilde{F}^{(0)}$ and $X = \tilde{F}^{(0)} + \tilde{V}^{(0)} X^2$ have minimal nonnegative solutions, the first with spectral radius $\theta = \max\{|z| : |z| < 1,\ a(z) = 0\}$ and the second with spectral radius 1. Let us define $\tilde{F}^{(k)} = (I \otimes D(\pi))^{-1} F^{(k)} (I \otimes D(\pi))$, $\tilde{V}^{(k)} = (I \otimes D(\pi))^{-1} V^{(k)} (I \otimes D(\pi))$. As for the algorithm for $G$, from Remark 2.2 and from the results of [7, 4], the sequence $\{\tilde{V}^{(k)}\}_k$ is uniformly bounded and $\{\tilde{F}^{(k)}\}_k$ converges to zero as $O((\theta + \epsilon)^{2^k})$, for any $\epsilon > 0$ such that $\theta + \epsilon < 1$. In particular, the same convergence properties hold for $\{V^{(k)}\}_k$ and $\{F^{(k)}\}_k$. Since $\rho(R) = \theta$, for the right-hand side of (2.8) we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for any matrix norm.

5. Shifting technique. The speed of convergence of our algorithm can be improved by applying the shifting technique introduced for Quasi-Birth-Death problems, expressed in [5, 10] in functional form as follows. Let $u$ be a nonnegative vector of length $m$ such that $u^T e = 1$. Define the rank-one matrix $E = e u^T$ and the matrix Laurent polynomial $E(z) = I - z^{-1} E$. Observe that, by setting

\hat{A}(z) = \hat{A}_0 + z \hat{A}_1 + z^2 \hat{A}_2 + \dots + z^n \hat{A}_n,

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (5.1)

since $\hat{A}_0 E = 0$ and $E^i = E$ for any $i > 0$, we have

zI - A(z) = (zI - \hat{A}(z)) E(z), \quad \text{where } A(z) = \sum_{i=0}^{n} z^i A_i.

Now, for simplicity, assume that $A_n$ is nonsingular (the case $\det A_n = 0$ can be treated by a continuity argument). Since $\hat{A}_n = A_n$, the polynomials $a(z) = \det(zI - A(z))$ and $\hat{a}(z) = \det(zI - \hat{A}(z))$ have the same degree and, therefore, the same number of roots.

Since the matrix $I - z^{-1} E$ is defined for $z \ne 0$ and is singular only for $z = 1$, we deduce that if $\lambda$ is a zero of $a(z)$ and $\lambda \ne 0, 1$, then it is a zero of $\hat{a}(z)$, and vice versa. Moreover, $a(1) = 0$. Thus, $\hat{a}(z)$ has the same zeros as $a(z)$, except that the zero $z = 1$ of $a(z)$ is replaced in $\hat{a}(z)$ by $z = 0$.

Let us now consider the problem of computing $G$. Since we have assumed that 1 is the only zero of $a(z)$ of modulus 1, we may define the matrix $H = G - e u^T$. It is easy to see that the eigenvalues of $H$ are those of $G$, except that the eigenvalue 1 of $G$ is replaced by 0. Moreover, $e$ is an eigenvector of $H$ corresponding to the eigenvalue 0, and hence $He = 0$. So we have

G^i = (H + e u^T)^i = H^i + e u^T H^{i-1} + \dots + e u^T H + e u^T, \qquad i = 1, 2, \dots, n.
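The expansion above, which relies only on $He = 0$ and $u^T e = 1$, can be checked numerically; the stochastic $G$ below is randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
G = rng.random((m, m))
G /= G.sum(axis=1, keepdims=True)      # random stochastic G: Ge = e
e = np.ones((m, 1))
u = np.full((m, 1), 1.0 / m)           # nonnegative u with u^T e = 1
E = e @ u.T                            # rank-one shift
H = G - E
assert np.allclose(H @ e, 0)           # e is an eigenvector of H for the eigenvalue 0

# G^i = H^i + E H^{i-1} + ... + E H + E  (checked here for i = 3)
lhs = np.linalg.matrix_power(G, 3)
rhs = np.linalg.matrix_power(H, 3) + E @ (H @ H) + E @ H + E
print(np.allclose(lhs, rhs))
```

All cross terms in the binomial expansion of $(H + E)^i$ that contain the factor $HE$ vanish, since $HE = (He)u^T = 0$.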

Replacing $G$ by $H + e u^T$ in (1.1), we obtain the shifted equation

H = B_0 + B_1 H + \dots + B_n H^n, \qquad (5.2)

where $B_i = \hat{A}_i$. As before, the solution $H$ of equation (5.2) satisfies (2.1), with the matrices $A_i$ in $C$ and $D$ replaced by $B_i$, and $G$ replaced by $H$. Thus, assuming that all the intermediate matrices $Y^{(k)}$ are nonsingular, we may apply the same algorithm. Using the same arguments as in the proof of Theorem 2.3, we deduce that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where $\gamma = \rho(H) < 1$, for any $\epsilon > 0$ such that $\gamma + \epsilon < 1$, $\sigma + \epsilon < 1$.

Similarly, we can apply a shifting technique to equation (1.2); in this case the zero equal to 1 is moved to $\infty$, instead of to 0. Let

\tilde{A}_0 = A_0,

\tilde{A}_1 = A_1 + A_0 E,

\tilde{A}_i = A_i + \Bigl(\sum_{j=0}^{i-1} A_j - I\Bigr) E = A_i - \Bigl(\sum_{j=i}^{n} A_j\Bigr) E, \qquad i = 2, \dots, n.

It turns out that $R$ is also a solution of the shifted equation

R = \tilde{A}_0 + R \tilde{A}_1 + \dots + R^n \tilde{A}_n,

as a direct consequence of the identity

A_0 e = \sum_{i=1}^{n-1} R^i \Bigl(\sum_{j=i+1}^{n} A_j e\Bigr),

and of the factorization

zI - A(z) = (zI - \tilde{A}(z))(I - zE), \quad \text{where } \tilde{A}(z) = \sum_{i=0}^{n} z^i \tilde{A}_i.
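This factorization follows directly from the definitions of the $\tilde{A}_i$, using $Ae = e$, $u^T e = 1$, and $E^2 = E$; it can be verified numerically as follows, with random nonnegative coefficients normalized so that $(\sum_i A_i) e = e$.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4
A = [rng.random((m, m)) for _ in range(n + 1)]
S = sum(A)
A = [Ai / S.sum(axis=1, keepdims=True) for Ai in A]   # now (sum_i A_i) e = e

e = np.ones((m, 1))
u = np.full((m, 1), 1.0 / m)          # u >= 0 with u^T e = 1
E = e @ u.T

# Shifted coefficients: At_0 = A_0, At_1 = A_1 + A_0 E,
# At_i = A_i - (sum_{j=i}^{n} A_j) E for i = 2, ..., n.
At = [A[0], A[1] + A[0] @ E] + [A[i] - sum(A[i:]) @ E for i in range(2, n + 1)]

z = 0.4 - 0.2j
Az = sum(z ** i * A[i] for i in range(n + 1))
Atz = sum(z ** i * At[i] for i in range(n + 1))
I = np.eye(m)
print(np.allclose(z * I - Az, (z * I - Atz) @ (I - z * E)))
```

Since the identity holds for every $z$, checking it at one generic complex point already exercises every coefficient.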

If $\mu$ is the zero of $a(z)$ of smallest modulus among the zeros lying outside the unit circle, we obtain that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for any $\epsilon > 0$ such that $\theta + \epsilon < 1$, $1/\mu + \epsilon < 1$.

6. Numerical examples. We implemented our algorithm for the computation of $G$ and $R$ in MATLAB, with and without the shifting technique. We did not use the displacement structure; thus we report only the number of iterations and the residual error, not the execution time.

6.1. Computing G. We use the residual norm

\mathrm{Res} = \|A_0 + A_1 G + \dots + A_n G^n - G\|_\infty

to check the accuracy of the computed solution. The stopping criterion for the original and the shifted algorithms is

\|d_1^{(k+1)} - d_1^{(k)}\|_\infty < 10^{-12}.

EXAMPLE 1. We construct $A_0 = W + \delta I$ and $A_1 = A_2 = W$, where $W$ is the matrix having null diagonal entries and constant off-diagonal entries, and $0 < \delta < 1$. Note that the rate $\rho = \pi^T (A_1 + 2 A_2) e = 1 - \delta$. Thus, as $\delta$ approaches zero, the problem becomes more unstable. Tables 6.1 and 6.2 report the results obtained with size $m = 16$.
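One consistent reading of this construction is sketched below; the off-diagonal constant of $W$ (my assumption, not stated explicitly in the text) is chosen so that $A = A_0 + A_1 + A_2 = 3W + \delta I$ is stochastic, and the stated rate $\rho = 1 - \delta$ is then checked.

```python
import numpy as np

m, delta = 16, 0.2
# W: zero diagonal, constant off-diagonal; the constant w is an assumed
# normalization making A = 3W + delta*I stochastic.
w = (1.0 - delta) / (3.0 * (m - 1))
W = w * (np.ones((m, m)) - np.eye(m))
A0, A1, A2 = W + delta * np.eye(m), W.copy(), W.copy()

A = A0 + A1 + A2
assert np.allclose(A.sum(axis=1), 1.0)          # A is stochastic
# A is symmetric and doubly stochastic, so pi is uniform, and the rate is
pi = np.full(m, 1.0 / m)
rho = pi @ (A1 + 2 * A2) @ np.ones(m)
print(np.isclose(rho, 1.0 - delta))
```

As $\delta \to 0$ the rate $\rho \to 1$, which is why the problem becomes harder in that regime.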

EXAMPLE 2. We solve $\sum_{i=0}^{10} A_i G^i = G$. The matrices are $A_i = D^{-1}(s_i \bar{A}_i)$, for $i = 0, 1, \dots, 10$, where the $\bar{A}_i$ are random matrices generated by the MATLAB command rand. The matrix size is $m = 10$. The scalars $s_i$ are $s_0 = 1$, $s_1 = 1$, $s_2 = 0.5$, $s_3 = 0.0025$, $s_4 = 0.125$, $s_5 = 0.001$, $s_6 = 0.0005$, $s_7 = 0.0001$, $s_8 = 0.00005$, $s_9 = 0.00001$, $s_{10} = 0.00005$. The matrix $D$ is the diagonal matrix whose entries are the row sums of $\sum_{i=0}^{n} s_i \bar{A}_i$, so that $(\sum_{i=0}^{n} A_i) e = e$. In this example $\gamma = 0.097488$ and $1/\sigma = 1.0099$. Table 6.3 reports the results.

We observe that in both examples the residual errors are very small, and that the shifting technique provides a much faster convergence rate.

6.2. Computing R. We use the norm of the residual of the equation

\mathrm{Res} = \|A_0 + R A_1 + \dots + R^n A_n - R\|_\infty

to measure the accuracy of the computed solution. The stopping criterion for the original and the shifted algorithms is

\|d_1^{(k+1)} - d_1^{(k)}\|_\infty < 10^{-12}.

EXAMPLE 3. We construct $A_0 = A_1 = W$ and $A_2 = W + \delta I$, where $W$ is the matrix having null diagonal entries and constant off-diagonal entries, and $0 < \delta < 1$. Note that the rate $\rho = \pi^T (A_1 + 2 A_2) e = 1 + \delta$. Thus, as $\delta$ approaches zero, the problem becomes more unstable. Tables 6.4 and 6.5 report the results obtained with size $m = 16$.

EXAMPLE 4. We solve $R = \sum_{i=0}^{10} R^i A_i$. Here the matrices $A_i$ are generated as in Example 2. In this example, $\rho = 1.0006$, $\theta = 0.99892$, and $\mu = 2.7410$. Table 6.6 reports the results.

Also for the computation of the matrix $R$ the residual errors are very small, and the shifting technique considerably reduces the number of iterations.

Appendix A. Displacement rank and fast Toeplitz computations. In this section we recall the concept of displacement rank, introduced by Kailath et al. and elaborated in many other papers (see, for instance, [18, 11, 21] and the references therein). This concept is fundamental for devising and analyzing algorithms related to Toeplitz matrices, and allows us to design an effective version of our algorithm that fully exploits the structure of the matrices $V^{(k)}$. Finally, we recall the fast FFT-based algorithm for computing the product of a block Toeplitz matrix and a block vector.

Define the $h \times h$ block down-shift matrix

$$Z = \begin{bmatrix} 0 & & & \\ I & 0 & & \\ & \ddots & \ddots & \\ & & I & 0 \end{bmatrix},$$

where the blocks have dimension $m$ ($I$ and $0$ denote the $m \times m$ identity and zero matrices), and consider the block displacement operator

$$\Delta(A) = ZA - AZ,$$

defined for any $h \times h$ block matrix $A$.

We say that the block matrix $A$ has block displacement rank $r$ if $r$ is the minimum integer such that there exist block vectors $u^{(i)}$, $v^{(i)}$, $i = 1, \dots, r$, satisfying the equation $\Delta(A) = \sum_{i=1}^{r} u^{(i)} v^{(i)T}$.

If $\Delta(A) = \sum_{i=1}^{r} u^{(i)} v^{(i)T}$, then the matrix $A$ can be represented in terms of block Toeplitz matrices defined by the block vectors $u^{(i)}$, $v^{(i)}$, $i = 1, \dots, r$, and by its first block column, according to the following formula (see ):

$$A = L(Ae_1) - \sum_{i=1}^{r} L(u^{(i)})\, U(v^{(i)T} Z^T),$$

where $L(w)$ (resp. $U(w^T)$) denotes the $h \times h$ block lower (resp. upper) triangular block Toeplitz matrix defined by its first block column $w$ (resp. first block row $w^T$). The block vectors $Ae_1$, $u^{(i)}$, $v^{(i)}$, $i = 1, \dots, r$, are called the generators of the block Toeplitz-like matrix $A$.
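As a small numerical illustration of these definitions (not taken from the paper), the following sketch checks that a block Toeplitz matrix has scalar displacement rank at most $2m$, i.e. block displacement rank at most 2:

```python
import numpy as np

m, h = 2, 5                      # block size and number of block rows/columns
rng = np.random.default_rng(0)

# A random h x h block Toeplitz matrix A = (A_{i-j+h-1})
blocks = [rng.standard_normal((m, m)) for _ in range(2 * h - 1)]
A = np.block([[blocks[i - j + h - 1] for j in range(h)] for i in range(h)])

# Block down-shift matrix Z: m x m identity blocks on the first block subdiagonal
Z = np.kron(np.eye(h, k=-1), np.eye(m))

# Delta(A) = ZA - AZ is nonzero only in the first block row and the last
# block column, so its rank is at most 2m
D = Z @ A - A @ Z
print(np.linalg.matrix_rank(D) <= 2 * m)    # → True
```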

From the above results, it follows that a matrix with small block displacement rank can be expressed as a sum of a few products of block lower triangular and block upper triangular block Toeplitz matrices. In particular, the product of one such matrix and a block vector can be reduced, by means of formula (3.2), to a few products of block Toeplitz matrices and block vectors. This computation can be performed by a fast algorithm based on polynomial evaluation/interpolation at the roots of unity with FFTs. More specifically, let $A = (A_{i-j+h-1})_{i,j=1,\dots,h}$ be a block Toeplitz matrix, $b$ a block vector with block entries $B_0, B_1, \dots, B_{h-1}$, and $c = Ab$, where $c$ has block entries $C_0, C_1, \dots, C_{h-1}$. Then $c$ can be efficiently computed by means of the following scheme [11, 7, 8]:

1. Evaluate the matrix polynomial $\alpha(z) = A_{h-1} + A_h z + \cdots + A_{2h-2} z^{h-1} + A_0 z^{h+1} + \cdots + A_{h-2} z^{2h-1}$ at the $2h$-th roots of unity $\omega^j$, $j = 0, \dots, 2h - 1$, where $\omega$ is a primitive $2h$-th root of unity, by means of $m^2$ DFTs of order $2h$, obtaining the matrices $\alpha(\omega^j)$, $j = 0, \dots, 2h - 1$.

2. Evaluate the matrix polynomial $\beta(z) = B_0 + B_1 z + \cdots + B_{h-1} z^{h-1}$ at the $2h$-th roots of unity $\omega^j$, $j = 0, \dots, 2h - 1$, by means of $m^2$ DFTs of order $2h$, obtaining the matrices $\beta(\omega^j)$, $j = 0, \dots, 2h - 1$.

3. Compute the products $\gamma(\omega^j) = \alpha(\omega^j)\beta(\omega^j)$, $j = 0, \dots, 2h - 1$.

4. Interpolate by means of $m^2$ IDFTs of order $2h$, obtaining the coefficients $\gamma_0, \gamma_1, \dots, \gamma_{2h-1}$ such that $\gamma(z) = \sum_{i=0}^{2h-1} \gamma_i z^i$, and set $C_i = \gamma_i$, $i = 0, \dots, h - 1$.

The computational cost of the above algorithm is $O(m^3 h + m^2 h \log h)$ arithmetic operations. Thus, the computation of the product of a block Toeplitz-like matrix and a block vector can be performed at the same cost.
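Steps 1-4 can be sketched in NumPy as follows (the function name and interface are ours; the coefficient layout of $\alpha(z)$ follows the scheme above):

```python
import numpy as np

def block_toeplitz_times_vector(blocks, B):
    """Product c = A b of the h x h block Toeplitz matrix A = (A_{i-j+h-1})
    with the block vector b = (B_0, ..., B_{h-1}), via FFTs of order 2h.
    `blocks` lists the m x m blocks A_0, ..., A_{2h-2}."""
    h = len(B)
    m = B[0].shape[0]
    # Coefficients of alpha(z): A_{h-1}, ..., A_{2h-2}, 0, A_0, ..., A_{h-2}
    a = np.zeros((2 * h, m, m), dtype=complex)
    a[0:h] = blocks[h - 1:2 * h - 1]
    a[h + 1:2 * h] = blocks[0:h - 1]
    # Coefficients of beta(z): B_0, ..., B_{h-1}, padded with zero blocks
    b = np.zeros((2 * h, m, m), dtype=complex)
    b[0:h] = B
    # Steps 1 and 2: evaluate alpha and beta at the 2h-th roots of unity
    fa = np.fft.fft(a, axis=0)
    fb = np.fft.fft(b, axis=0)
    # Step 3: pointwise matrix products gamma(w^j) = alpha(w^j) beta(w^j)
    fc = np.einsum('kij,kjl->kil', fa, fb)
    # Step 4: interpolate and keep the first h coefficient blocks C_i = gamma_i
    c = np.fft.ifft(fc, axis=0)
    return [c[i].real for i in range(h)]
```

A quick consistency check against the dense product: build $A$ with `np.block` from the same `blocks` and compare `A @ np.vstack(B)` with `np.vstack(block_toeplitz_times_vector(blocks, B))`; the two agree to machine precision.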

Acknowledgments. The authors wish to thank the anonymous referees for the helpful suggestions which improved the presentation of this paper.

REFERENCES

 N. AKAR AND K. SOHRABY, An invariant subspace approach in M/G/1 and G/M/1 type Markov chains, Comm. Statist. Stochastic Models, 13 (1997), pp. 381-416.

 P. BENNER, R. BYERS, V. MEHRMANN, AND H. XU, A unified deflating subspace approach for classes of polynomial and rational matrix equations, Preprint SFB393/00-05, Technische Universitat Chemnitz, Jan. 2000.

 D. A. BINI, L. GEMIGNANI, AND B. MEINI, Factorization of analytic functions by means of Koenig's theorem and Toeplitz computations, Numer. Math., 89 (2001), pp. 49-82.

 D. A. BINI, L. GEMIGNANI, AND B. MEINI, Computations with infinite Toeplitz matrices and polynomials, Linear Algebra Appl., 343/344 (2002), pp. 21-61. Special issue on structured and infinite systems of linear equations.

 D. A. BINI, L. GEMIGNANI, AND B. MEINI, Solving certain matrix equations by means of Toeplitz computations: algorithms and applications, in Fast algorithms for structured matrices: theory and applications, vol. 323 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2003, pp. 151-167.

 D. A. BINI, G. LATOUCHE, AND B. MEINI, Solving matrix polynomial equations arising in queueing problems, Linear Algebra Appl., 340 (2002), pp. 225-244.

 D. A. BINI AND B. MEINI, On the solution of a nonlinear matrix equation arising in queueing problems, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 906-926.

 --, Improved cyclic reduction for solving queueing problems, Numer. Algorithms, 15 (1997), pp. 57-74.

 --, Effective methods for solving banded Toeplitz systems, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 700-719.

 --, Non-skip free M/G/1-type Markov chains and Laurent matrix power series, Linear Algebra Appl., 386 (2004), pp. 187-206.

 D. A. BINI AND V. PAN, Matrix and Polynomial Computations, Vol. 1: Fundamental Algorithms, Birkhäuser, Boston, 1994.

 R. C. BUSBY AND W. FAIR, Iterative solution of spectral operator polynomial equations and a related continued fraction, J. Math. Anal. Appl., 50 (1975), pp. 113-134.

 B. L. BUZBEE, G. H. GOLUB, AND C. W. NIELSON, On direct methods for solving Poisson's equation, SIAM J. Numer. Anal., 7 (1970), pp. 627-656.

 J. E. DENNIS, JR., J. F. TRAUB, AND R. P. WEBER, Algorithms for solvents of matrix polynomials, SIAM J. Numer. Anal., 15 (1978), pp. 523-533.

 H. R. GAIL, S. L. HANTLER, AND B. A. TAYLOR, Spectral analysis of M/G/1 and G/M/1 type Markov chains, Adv. in Appl. Probab., 28 (1996), pp. 114-165.

 I. GOHBERG, P. LANCASTER, AND L. RODMAN, Matrix polynomials, Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1982. Computer Science and Applied Mathematics.

 C. HE, B. MEINI, AND N. H. RHEE, A shifted cyclic reduction algorithm for quasi-birth-death problems, SIAM J. Matrix Anal. Appl., 23 (2001/02), pp. 673-691.

 G. HEINIG AND K. ROST, Algebraic Methods for Toeplitz-like Matrices and Operators, Akademie-Verlag, Berlin, and Birkhauser, Boston, 1984.

 N. J. HIGHAM AND H.-M. KIM, Numerical analysis of a quadratic matrix equation, IMA J. Numer. Anal., 20 (2000), pp. 499-519.

 T. KAILATH, S. KUNG, AND M. MORF, Displacement ranks of matrices and linear equations, J. Math. Anal. Appl., 68 (1979), pp. 395-407.

 T. KAILATH AND A. H. SAYED, eds., Fast reliable algorithms for matrices with structure, SIAM, Philadelphia, 1999.

 G. LATOUCHE, Algorithms for infinite Markov chains with repeating columns, in Linear Algebra, Queueing Models and Markov Chains, C. Meyer and R. Plemmons, eds., Springer-Verlag, New York, 1993, pp. 231-265.

 --, Newton's iteration for non-linear equations in Markov chains, IMA J. Numer. Anal., 14 (1994), pp. 583-598.

 G. LATOUCHE AND V. RAMASWAMI, A logarithmic reduction algorithm for Quasi-Birth-Death processes, J. Appl. Probab., 30 (1993), pp. 650-674.

 B. MEINI, New convergence results on functional iteration techniques for the numerical solution of M/G/1 type Markov chains, Numer. Math., 78 (1997), pp. 39-58.

 M. F. NEUTS, Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press, Baltimore, MD, 1981.

 --, Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker, New York, 1989.

 V. RAMASWAMI, Nonlinear matrix equations in applied probability--Solution techniques and open problems, SIAM Rev., 30 (1988), pp. 256-263.

 G. W. STEWART AND J. G. SUN, Matrix perturbation theory, Computer Science and Scientific Computing, Academic Press Inc., Boston, MA, 1990.

* Received May 16, 2003. Accepted for publication March 2, 2004. Recommended by Martin Gutknecht.

C. HE ([dagger]), B. MEINI ([double dagger]), N.H. RHEE ([section]), AND K. SOHRABY ([paragraph])

([dagger]) Sprint Corporation, Network Planning and Design, 7171 West 95th Street Overland Park, KS 66212, U.S.A.

([double dagger]) Dipartimento di Matematica, Universita di Pisa, via Buonarroti 2, 56127 Pisa, Italy.

([section]) Dept. of Mathematics and Statistics, University of Missouri-Kansas City, Kansas City, MO 64110, U.S.A.

([paragraph]) Dept. of Computer Science and Telecommunications, University of Missouri-Kansas City, Kansas City, MO 64110, U.S.A. This research was supported by DARPA under Grant F0316.
```
TABLE 6.1
delta, gamma, 1/sigma, and the number of iterations for Example 1

  delta    gamma        1/sigma      Iter. Orig. Alg.   Iter. Shifted Alg.
  10^-1    0.07831112   1.33333333          8                   5
  10^-2    0.01174465   1.03030303         11                   4
  10^-3    0.02074893   1.00300300         14                   4
  10^-4    0.02164936   1.00030003         17                   4
  10^-5    0.02173941   1.00003000         21                   4
  10^-6    0.02174841   1.00000300         24                   5
  10^-7    0.02174931   1.00000030         27                   4
  10^-8    0.02174940   1.00000003         29                  15

TABLE 6.2
The residuals of the solutions in Example 1

  delta    Res. of Orig. Alg.   Res. of Shifted Alg.
  10^-1      1.6 x 10^-15          3.3 x 10^-16
  10^-2      1.0 x 10^-15          5.8 x 10^-16
  10^-3      1.3 x 10^-15          2.7 x 10^-16
  10^-4      1.4 x 10^-15          3.0 x 10^-16
  10^-5      1.3 x 10^-15          1.9 x 10^-16
  10^-6      9.9 x 10^-16          2.6 x 10^-16
  10^-7      1.3 x 10^-15          2.4 x 10^-16
  10^-8      1.3 x 10^-15          2.6 x 10^-16

TABLE 6.3
The number of iterations and residuals of the solutions in Example 2

           Orig. Alg.      Shifted Alg.
  Iter.        13                5
  Res.    2.9 x 10^-16     5.0 x 10^-16

TABLE 6.4
delta, theta, mu, and the number of iterations for Example 3

  delta    theta        mu          Iter. Orig. Alg.   Iter. Shifted Alg.
  10^-1    0.75000000   12.769578          8                   5
  10^-2    0.97058824   85.145135         11                   4
  10^-3    0.99700599   48.195253         14                   4
  10^-4    0.99970006   46.190730         17                   4
  10^-5    0.99997000   45.999410         21                   5
  10^-6    0.99999700   45.980366         24                   4
  10^-7    0.99999970   45.978462         27                   4
  10^-8    0.99999997   45.978272         29                   4

TABLE 6.5
The residuals of the solutions in Example 3

  delta    Res. of Orig. Alg.   Res. of Shifted Alg.
  10^-1      3.9 x 10^-16          2.0 x 10^-16
  10^-2      4.9 x 10^-16          2.7 x 10^-16
  10^-3      6.6 x 10^-16          2.7 x 10^-16
  10^-4      5.4 x 10^-16          4.2 x 10^-16
  10^-5      3.7 x 10^-16          3.1 x 10^-16
  10^-6      5.8 x 10^-16          1.7 x 10^-16
  10^-7      4.0 x 10^-16          2.9 x 10^-16
  10^-8      4.9 x 10^-16          3.2 x 10^-16

TABLE 6.6
The number of iterations and residuals of the solutions in Example 4

           Orig. Alg.      Shifted Alg.
  Iter.        16                6
  Res.    2.8 x 10^-16     6.3 x 10^-16
```
COPYRIGHT 2004 Institute of Computational Mathematics