Matrix exponentials and inversion of confluent Vandermonde matrices.

Abstract. For a given matrix A we compute the matrix exponential $e^{tA}$ under the assumption that the eigenvalues of A are known, but without determining the eigenvectors. The presented approach exploits the connection between matrix exponentials and confluent Vandermonde matrices V. This approach and the resulting methods are very simple and can be regarded as an alternative to Jordan canonical form methods. The discussed inversion algorithms for V, as well as the matrix representation of $V^{-1}$, are of independent interest in many other applications.

Key words. matrix exponential, Vandermonde matrix, fast algorithm, inverse.

AMS subject classifications. 34A30, 65F05, 15A09, 15A23.

1. Introduction. Given a (complex) matrix A of order n, the problem of evaluating its matrix exponential $e^{tA}$ is important in many applications, e.g., in the fields of dynamical systems or control theory.

While the matrix exponential is often represented in terms of an infinite series or by means of the Jordan canonical form, our considerations have been inspired by papers like [7] and [6], where alternative methods are discussed. In an elementary way we develop here a representation of $e^{tA}$ which involves only the eigenvalues (but not the eigenvectors), the first $n-1$ powers of A, and the inverse of a corresponding confluent Vandermonde matrix V. Such a representation was already given in [4], where an important motivation was to keep the approach and the proofs simple enough to be taught to beginning students of ordinary differential equations. We make some slight simplifications in proving such a representation, but we also concentrate our attention on the problem of developing fast recursive algorithms for the inversion of the confluent Vandermonde matrix V in the spirit of [4].

There is a large number of papers dealing with algorithms for nonconfluent and confluent Vandermonde matrices. They mainly utilize the well-known connection of Vandermonde systems with interpolation problems (see, e.g., [3] and the references therein). Moreover, in [10], [9] the displacement structure of V (called there the principle of UV-reduction) is used as a main tool.

In the present paper we want to stay within the framework of ordinary differential equations. Together with some elementary facts of linear algebra, we finally arrive at a first inversion algorithm which requires the computation of the partial fraction decomposition of $(\det(\lambda I - A))^{-1}$. This algorithm is in principle the algorithm developed, in a different way, in [11]. We present a second algorithm which can be considered an improvement of the first one, since the preprocessing of the coefficients of the partial fraction decomposition is not needed. Both algorithms are fast, which means that their computational complexity is $O(n^2)$. As far as we know, the second algorithm gives a new version for computing $V^{-1}$. For the sake of simplicity, let us roughly explain the main steps of this algorithm for the nonconfluent case $V = (\lambda_i^{j-1})_{i,j=1}^{n}$:

1. Start with the vector $h_{n-1} = (1, 1, \dots, 1)^T \in \mathbb{C}^n$ and apply a simple recursion to get vectors $h_{n-2}, \dots, h_0$; form the matrix $H = (h_0\;\ h_1\;\ \cdots\;\ h_{n-1})$.

2. Multiply the jth row of V with the jth row of H to obtain numbers $q_j$ and form the diagonal matrix $Q = \operatorname{diag}(q_j)_{j=1}^{n}$.

3. Multiply H from the left by the diagonal matrix $P = Q^{-1}$ to obtain $V^{-T}$.

In the confluent case the diagonal matrix P becomes a block-diagonal matrix with upper triangular Toeplitz blocks $(t_{i-j})$, $t_l = 0$ for $l > 0$. A sketch of the nonconfluent version is given below.
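
The following NumPy sketch (our illustration, not part of the original sources; function and variable names are ours) implements the three steps for distinct nodes:

    import numpy as np

    def invert_vandermonde(nodes):
        """Steps 1-3 for distinct nodes: returns the inverse of V = (nodes[i]**j)."""
        lam = np.asarray(nodes, dtype=complex)
        n = len(lam)
        a = np.poly(lam)[::-1]               # a[k] = coefficient of lambda^k in prod (lambda - lam_i)
        V = np.vander(lam, increasing=True)  # V[i, j] = lam_i**j
        # Step 1: h_{n-1} = (1, ..., 1)^T and the recursion h_{k-1} = diag(lam) h_k + a_k h_{n-1}
        H = np.empty((n, n), dtype=complex)
        H[:, n - 1] = 1.0
        for k in range(n - 1, 0, -1):
            H[:, k - 1] = lam * H[:, k] + a[k] * H[:, n - 1]
        # Step 2: q_j = (jth row of V) . (jth row of H)
        q = np.einsum('ij,ij->i', V, H)
        # Step 3: V^{-T} = diag(1/q_j) H
        return (H / q[:, None]).T

    lam = [1.0, 2.0, 3.0]
    print(np.allclose(invert_vandermonde(lam) @ np.vander(lam, increasing=True), np.eye(3)))  # True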

Moreover, we show how the inversion algorithms described above lead to a matrix representation of $V^{-1}$, the main factor of which is just $V^T$. The other factors are diagonal matrices, a triangular Hankel matrix $(s_{i+j})$, $s_l = 0$ for $l > n$, and a block-diagonal matrix with triangular Hankel blocks. For the nonconfluent case such a representation is well known (see [13] or, e.g., [10]) and is also a straightforward consequence of a formula proved in [5]. In [12] a generalization of the result of [5] to the confluent case is presented, which leads directly to a formula for $V^{-1}$ of the form just described. Generalizations of representations of this kind to other classes of matrices such as Cauchy-Vandermonde matrices or generalized Vandermonde matrices can be found in [2] and [8]. Fast inversion algorithms for Vandermonde-like matrices involving orthogonal polynomials are designed in [1].

The paper is organized as follows. In Section 2 we discuss the connection of $e^{tA}$ with confluent Vandermonde matrices V and prove the corresponding representation of $e^{tA}$. Section 3 is dedicated to recursive inversion algorithms for V. A matrix representation of $V^{-1}$ is presented in Section 4. Finally, in Section 5 we give some additional remarks concerning alternative possibilities for proving the results of Section 3, modified representations of $e^{tA}$ in terms of finitely many powers of A, and the determination of analytic functions of A.

2. Connection between matrix exponentials and confluent Vandermonde matrices. Let A be a given $n \times n$ complex matrix and let

$$p(\lambda) = \det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1 \lambda + a_0 \tag{2.1}$$

be its characteristic polynomial, $\lambda_1, \lambda_2, \dots, \lambda_m$ its eigenvalues with the (algebraic) multiplicities $\nu_1, \nu_2, \dots, \nu_m$, $\sum_{i=1}^{m} \nu_i = n$. In other words, we associate the polynomial

$$p(\lambda) = \prod_{i=1}^{m} (\lambda - \lambda_i)^{\nu_i} \tag{2.2}$$

with the pairs $(\lambda_i, \nu_i)$ ($i = 1, 2, \dots, m$). We are going to demonstrate in a very simple way that the computation of $e^{tA}$ can be reduced to the inversion of a corresponding confluent Vandermonde matrix. The only fact we need is well known from a basic course on ordinary differential equations: Each component of the solution $x(t) = e^{tA} v$ of the initial value problem

x' = Ax, x(0) = v (2.3)

is a linear combination of the functions

$$t^j e^{\lambda_i t}, \qquad j = 0, 1, \dots, \nu_i - 1, \quad i = 1, 2, \dots, m. \tag{2.4}$$

Note that these functions are just n fundamental solutions of the ordinary differential equation

$$y^{(n)}(t) + a_{n-1}\, y^{(n-1)}(t) + \cdots + a_0\, y(t) = 0, \tag{2.5}$$

the constant coefficients of which are given in (2.1). To be more precise, there is an $n \times n$ matrix C such that

x(t) = Ce(t), (2.6)

where

$$e(t) = \bigl( e^{\lambda_1 t},\; t e^{\lambda_1 t},\; \dots,\; t^{\nu_1 - 1} e^{\lambda_1 t},\; \dots,\; e^{\lambda_m t},\; \dots,\; t^{\nu_m - 1} e^{\lambda_m t} \bigr)^T.$$

Now we use x(t) = Ce(t) as an ansatz and determine the unknown matrix C by comparing the initial values $x^{(k)}(0) = C e^{(k)}(0)$ of the ansatz with the given initial values $x^{(k)}(0) = A^k v$, $k = 0, 1, \dots, n-1$:

$$C e^{(k)}(0) = A^k v, \qquad k = 0, 1, \dots, n-1. \tag{2.7}$$

Considering the vectors $e(0), e'(0), \dots, e^{(n-1)}(0)$ and $v, Av, \dots, A^{n-1}v$ as the columns of matrices V and $A_v$, respectively,

$$V = \bigl( e(0)\;\ e'(0)\;\ \cdots\;\ e^{(n-1)}(0) \bigr), \qquad A_v = \bigl( v\;\ Av\;\ \cdots\;\ A^{n-1}v \bigr), \tag{2.8}$$

the equalities (2.7) can be rewritten in matrix form as

$$CV = A_v. \tag{2.9}$$

We claim that V has the structure of a confluent Vandermonde matrix,

$$V = \begin{pmatrix} V_1 \\ V_2 \\ \vdots \\ V_m \end{pmatrix}, \tag{2.10}$$

where the $\nu_i \times n$ blocks $V_i$ collect the initial values of the functions $t^{j-1} e^{\lambda_i t}$ and of their derivatives,

$$V_i = \Bigl( \bigl( t^{j-1} e^{\lambda_i t} \bigr)^{(k-1)} \big|_{t=0} \Bigr)_{j=1,\dots,\nu_i}^{k=1,\dots,n} = \Bigl( \tbinom{k-1}{j-1}\, (j-1)!\; \lambda_i^{k-j} \Bigr)_{j=1,\dots,\nu_i}^{k=1,\dots,n}$$

(entries with $k < j$ vanish).

It is easy to see that V is nonsingular, and together with (2.6) and (2.9) we obtain the following.

LEMMA 2.1. The solution x(t) of the initial value problem (2.3) is given by

$$x(t) = e^{tA} v = A_v V^{-1} e(t), \tag{2.11}$$

where V, $A_v$ are defined in (2.8).

Now, taking into account that $A_v V^{-1} e(t) = \sum_{i=0}^{n-1} y_i(t)\, A^i v$, where

$$\bigl( y_i(t) \bigr)_{i=0}^{n-1} = V^{-1} e(t), \tag{2.12}$$

(2.11) leads to the following expression for $e^{tA}$.

THEOREM 2.2 ([4], (1.10) and (1.15)). The matrix exponential $e^{tA}$ can be represented as

$$e^{tA} = y_0(t)\, I + y_1(t)\, A + \cdots + y_{n-1}(t)\, A^{n-1}, \tag{2.13}$$

where $y_i(t)$, $i = 0, 1, \dots, n-1$, are defined in (2.12).

Let us define the multiplication of a row vector $(A_1, \dots, A_n)$ of matrices $A_i \in \mathbb{C}^{n \times n}$ and a column vector $v = (v_1, \dots, v_n)^T$ of scalars $v_i \in \mathbb{C}$ by

$$(A_1, A_2, \dots, A_n)\, v = v_1 A_1 + v_2 A_2 + \cdots + v_n A_n. \tag{2.14}$$

Then (2.12), (2.13) can be rewritten in the more compact form

$$e^{tA} = (I, A, A^2, \dots, A^{n-1})\, y(t), \quad \text{where} \quad y(t) = V^{-1} e(t). \tag{2.15}$$
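
As a quick numerical illustration of (2.15), here is a small sketch (ours; it assumes distinct eigenvalues, so that V is an ordinary Vandermonde matrix and $e(t) = (e^{\lambda_i t})_{i=1}^{n}$):

    import numpy as np
    from scipy.linalg import expm

    t = 0.7
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2 (distinct)
    lam = np.linalg.eigvals(A)
    V = np.vander(lam, increasing=True)        # V[i, k] = lam_i**k, the kth derivative of e(t) at 0
    y = np.linalg.solve(V, np.exp(lam * t))    # y(t) = V^{-1} e(t)
    E = sum(y[k] * np.linalg.matrix_power(A, k) for k in range(len(lam)))
    print(np.allclose(E, expm(t * A)))         # True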

Representation (2.15) is already known (see [4] and the references therein). The proof given in [4] is nice and, as promised there, it can be appreciated by students in a course on ordinary differential equations. There, the known initial values of $e^{tA}$ are compared with the initial values of the ansatz

$$e^{tA} = (C_1, C_2, \dots, C_n)\, f(t), \tag{2.16}$$

where f(t) is an arbitrary vector of n fundamental solutions of (2.5) and the $C_i$ are $n \times n$ matrices. But this requires the application of a formal vector-matrix product, defined for matrices $A_i$, $i = 1, \dots, n$, and U of order n as follows:

$$(A_1, \dots, A_n)\, U = \bigl( (A_1, \dots, A_n)\, u_1, \dots, (A_1, \dots, A_n)\, u_n \bigr), \tag{2.17}$$

where $u_i$ denotes the ith column of U.

Possibly, the derivation given above is more natural and easier to understand. In particular, the special choice f(t) = e(t) is a simplification that does not mean any loss of generality. Of course, instead of the vector e(t), one can use any other vector f(t) of n fundamental solutions of (2.5). But, denoting by $W_f$ the Wronski matrix of f, $W_f = (f\;\ f'\;\ \cdots\;\ f^{(n-1)})$, the equality (2.11) is then replaced by $e^{tA} v = A_v V_f^{-1} f(t)$, where $V_f = W_f(0)$. This leads to the same representation (2.15) of $e^{tA}$, since $V_f^{-1} f(t)$ does not depend on the choice of f (compare the proof of Lemma 3.1).

REMARK 2.3. Denoting by $e_k(t)$ the kth component of e(t) and by $V_k$ the kth column of $V^{-1}$, we have $V^{-1} e(t) = \sum_{k=1}^{n} e_k(t)\, V_k$, and we obtain, as a consequence of (2.15), $e^{tA} = (C_1, C_2, \dots, C_n)\, e(t)$, where

$$C_k = (I, A, A^2, \dots, A^{n-1})\, V_k. \tag{2.18}$$

In the sense of (2.17) this can be written as

$$(C_1, C_2, \dots, C_n) = (I, A, A^2, \dots, A^{n-1})\, V^{-1}. \tag{2.19}$$

Let us summarize: For an $n \times n$ matrix A having the eigenvalues $\lambda_i$ with the algebraic multiplicities $\nu_i$, the matrix exponential $e^{tA}$ is given by (2.15), where $V^{-1}$ is the inverse of the confluent Vandermonde matrix corresponding to the pairs $(\lambda_i, \nu_i)$, $i = 1, \dots, m$, $\sum \nu_i = n$.

The special structure of the matrix V can be used to compute its inverse $V^{-1}$ much more efficiently than, e.g., Gaussian elimination does. Thus we will proceed with designing inversion algorithms whose computational complexity is $O(n^2)$. Such fast algorithms have already been presented and discussed in a large number of papers (see, e.g., [3] and the references therein).

On one hand, we want to develop here a new version of such an algorithm; on the other hand, we intend to follow the spirit of the authors of [4], namely to be simple enough for a presentation to students in an elementary course on ordinary differential equations. Thus, we will discuss inversion algorithms exploiting elementary results of linear algebra, but remaining, as far as possible, within the framework of ordinary differential equations. Consequently, numerical aspects and criteria such as, e.g., stability are beyond the scope of this paper, but will be discussed in a forthcoming paper.

3. Recursive algorithms for the inversion of V. Hereafter, let $e_k$ be the kth unit vector and $w_{k-1}^T$ the kth row of $V^{-1}$, $w_{k-1}^T = e_k^T V^{-1}$. We start with developing a recursion which generates the rows $w_i^T$ ($i = n-2, \dots, 0$) of $V^{-1}$ from its last row $w_{n-1}^T$. As a basis we use the following fact.

LEMMA 3.1. The vector $y(t) = V^{-1} e(t)$ is the solution of the initial value problem

$$y'(t) = B\, y(t), \quad y(0) = e_1, \quad \text{where} \quad B = \begin{pmatrix} 0 & \cdots & 0 & -a_0 \\ 1 & \ddots & \vdots & -a_1 \\ & \ddots & 0 & \vdots \\ 0 & & 1 & -a_{n-1} \end{pmatrix} \tag{3.1}$$

and $a_0, \dots, a_{n-1}$ are the coefficients of the characteristic polynomial (2.1).

Proof. The matrix B is just the companion matrix of $p(\lambda)$. Moreover, since $(e_1\;\ Be_1\;\ \cdots\;\ B^{n-1}e_1) = I_n$, we conclude from (2.11) that $e^{tB} e_1 = V^{-1} e(t) = y(t)$, which completes the proof.
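
A minimal numerical check of the lemma (our sketch, with distinct nodes for simplicity): build B from the coefficients $a_k$ and verify $V y(t) = e(t)$ for $y(t) = e^{tB} e_1$.

    import numpy as np
    from scipy.linalg import expm

    lam = np.array([1.0, 2.0, 3.0]); n, t = len(lam), 0.3
    a = np.poly(lam)[::-1]                  # a[k] = coefficient of lambda^k, a[n] = 1
    B = np.zeros((n, n))
    B[1:, :-1] = np.eye(n - 1)              # ones on the subdiagonal
    B[:, -1] = -a[:n]                       # last column -(a_0, ..., a_{n-1})^T
    V = np.vander(lam, increasing=True)     # V[i, k] = lam_i**k
    y = expm(t * B)[:, 0]                   # y(t) = e^{tB} e_1
    print(np.allclose(V @ y, np.exp(lam * t)))  # True, i.e., y(t) = V^{-1} e(t)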

COROLLARY 3.2. The components of $y(t) = (y_k(t))_{k=0}^{n-1}$ can be recursively determined from the last component $y_{n-1}(t)$:

$$y_{k-1}(t) = y_k'(t) + a_k\, y_{n-1}(t), \qquad k = n-1, \dots, 1. \tag{3.2}$$

(For $k = 0$ this is also true if we set $y_{-1} = 0$.)

Let us introduce the block-diagonal matrix

$$\widehat{J} = \operatorname{diag}(\widehat{J}_1, \dots, \widehat{J}_m), \tag{3.3}$$

where $\widehat{J}_i = \lambda_i$ in case $\nu_i = 1$, and otherwise

$$\widehat{J}_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & \nu_i - 1 \\ & & & \lambda_i \end{pmatrix} \in \mathbb{C}^{\nu_i \times \nu_i},$$

i.e., $\widehat{J}_i$ has $\lambda_i$ on the diagonal and the entries $1, 2, \dots, \nu_i - 1$ on the superdiagonal.

Then, taking into account that $y_k(t) = w_k^T e(t)$ and $e'(t) = \widehat{J}^T e(t)$, we obtain the following reformulation of (3.2).

LEMMA 3.3. The rows $w_0^T, \dots, w_{n-1}^T$ of $V^{-1}$ satisfy the recursion

$$w_{k-1} = \widehat{J}\, w_k + a_k w_{n-1}, \qquad k = n-1, \dots, 0. \tag{3.4}$$

(For $k = 0$ set $w_{-1} = 0$.)

Now we are left with the problem of how to compute the last row $w_{n-1}^T$ of $V^{-1}$. To solve it we decompose $p(\lambda)^{-1}$ into partial fractions:

$$\frac{1}{p(\lambda)} = \sum_{i=1}^{m} \sum_{j=1}^{\nu_i} \frac{p_{ij}}{(\lambda - \lambda_i)^j}. \tag{3.5}$$

THEOREM 3.4. The last row $w_{n-1}^T$ of $V^{-1}$ is given in terms of the coefficients of (3.5) as follows:

$$w_{n-1} = \Bigl( p_{11},\; \frac{p_{12}}{1!},\; \dots,\; \frac{p_{1\nu_1}}{(\nu_1 - 1)!},\; \dots,\; p_{m1},\; \frac{p_{m2}}{1!},\; \dots,\; \frac{p_{m\nu_m}}{(\nu_m - 1)!} \Bigr)^T. \tag{3.6}$$

Proof. Since the function $y(t) = (y_i(t))_{i=0}^{n-1}$ satisfies $y^{(k)}(0) = B^k e_1 = e_{k+1}$ for $k = 0, 1, \dots, n-1$ (see Lemma 3.1), we obtain, in particular, that the last component $y_{n-1}(t) = w_{n-1}^T e(t)$ is the solution of the initial value problem

$$y^{(n)} + a_{n-1}\, y^{(n-1)} + \cdots + a_0\, y = 0, \tag{3.7}$$

$$y(0) = \cdots = y^{(n-2)}(0) = 0, \qquad y^{(n-1)}(0) = 1. \tag{3.8}$$

We now consider the Laplace transform of $y_{n-1}(t)$, defined by

$$(\mathcal{L} y_{n-1})(s) = \int_0^{\infty} e^{-st}\, y_{n-1}(t)\, dt.$$

In view of (3.8), $\mathcal{L}$ applied to (3.7) yields $(\mathcal{L} y_{n-1})(s) = p(s)^{-1}$, which can be decomposed into partial fractions as in (3.5). Applying the inverse transform we obtain

$$y_{n-1}(t) = \sum_{i=1}^{m} \sum_{j=1}^{\nu_i} p_{ij}\, \frac{t^{j-1}}{(j-1)!}\, e^{\lambda_i t},$$

and (3.6) is proved.
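
As a concrete example (ours): for the nodes $\lambda_1 = 1$ with $\nu_1 = 2$ and $\lambda_2 = 3$ with $\nu_2 = 1$, the confluent Vandermonde matrix can be written down by hand, and a computer algebra system supplies the coefficients of (3.5):

    import sympy as sp

    lam = sp.symbols('lam')
    p = (lam - 1)**2 * (lam - 3)
    print(sp.apart(1/p, lam))    # e.g. 1/(4*(lam - 3)) - 1/(2*(lam - 1)**2) - 1/(4*(lam - 1))
    # Hence p_11 = -1/4, p_12 = -1/2, p_21 = 1/4, and (3.6) gives w_2 = (-1/4, -1/2, 1/4)^T.
    V = sp.Matrix([[1, 1, 1], [0, 1, 2], [1, 3, 9]])  # V of (2.10) for e(t) = (e^t, t e^t, e^{3t})
    print(V.inv().row(2))        # Matrix([[-1/4, -1/2, 1/4]]), in agreement with (3.6)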

Now we are in a position to propose a first inversion algorithm for V.

Algorithm I:

1) Compute the coefficients $p_{ij}$ of the partial fraction expansion (3.5) and form the vector $w_{n-1}$ as in (3.6).

2) Compute the remaining rows of $V^{-1}$ via the recursion (3.4).

It is well known that the coefficients of the partial fraction decomposition can be computed by means of an ansatz and the solution of a corresponding linear system of equations. This can be organized in such a way that the computational complexity of Algorithm I is $O(n^2)$.
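
For distinct eigenvalues the decomposition (3.5) reduces to $p_{i1} = 1/p'(\lambda_i)$, and Algorithm I becomes the following sketch (our code; in the confluent case only the computation of $w_{n-1}$ and the matrix $\widehat{J}$ change):

    import numpy as np

    def alg1_inverse(nodes):
        """Algorithm I for distinct nodes: rows w_0, ..., w_{n-1} of V^{-1} via (3.6) and (3.4)."""
        lam = np.asarray(nodes, dtype=complex)
        n = len(lam)
        a = np.poly(lam)[::-1]              # a[k] = coefficient of lambda^k in p
        W = np.empty((n, n), dtype=complex) # row k will hold w_k^T
        # Step 1: w_{n-1} from the partial fraction coefficients 1/p'(lam_i)
        W[n - 1] = 1.0 / np.array([np.prod(lam[i] - np.delete(lam, i)) for i in range(n)])
        # Step 2: w_{k-1} = J^ w_k + a_k w_{n-1} with J^ = diag(lam), k = n-1, ..., 1
        for k in range(n - 1, 0, -1):
            W[k - 1] = lam * W[k] + a[k] * W[n - 1]
        return W                            # W = V^{-1}

    lam = [0.5, 1.0, 2.0, 4.0]
    print(np.allclose(alg1_inverse(lam) @ np.vander(lam, increasing=True), np.eye(4)))  # True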

We now propose a further possibility, the advantages of which are that the coefficients $p_{ij}$ need not be precomputed and that the recursion always starts with a "convenient" vector. To this end let us adopt some notational conventions. Introduce the following block-diagonal matrices with upper triangular Toeplitz blocks,

$$P = \operatorname{diag}(P_k)_{k=1}^{m}, \quad P_k = \begin{pmatrix} p_{k\nu_k} & p_{k,\nu_k - 1} & \cdots & p_{k1} \\ & p_{k\nu_k} & \ddots & \vdots \\ & & \ddots & p_{k,\nu_k - 1} \\ 0 & & & p_{k\nu_k} \end{pmatrix}, \qquad J = \operatorname{diag}(J_k)_{k=1}^{m}, \quad J_k = \begin{pmatrix} \lambda_k & 1 & & \\ & \lambda_k & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_k \end{pmatrix} \in \mathbb{C}^{\nu_k \times \nu_k},$$

and define the diagonal matrix

$$D = \operatorname{diag}(D_k)_{k=1}^{m} \quad \text{with} \quad D_k = \operatorname{diag}\Bigl( \frac{1}{0!}, \frac{1}{1!}, \dots, \frac{1}{(\nu_k - 1)!} \Bigr).$$

Obviously, D and P are nonsingular matrices.

It is easy to see that

$$w_{n-1} = DP\, h_{n-1}, \tag{3.9}$$

where $h_{n-1}$ is the sum of the unit vectors $e_{\nu_1}, e_{\nu_1 + \nu_2}, \dots, e_n$, i.e.,

$$h_{n-1} = \sum_{k=1}^{m} e_{\nu_1 + \cdots + \nu_k}. \tag{3.10}$$

We consider the recursion

$$h_{k-1} = J h_k + a_k h_{n-1}, \qquad k = n-1, \dots, 0. \tag{3.11}$$

LEMMA 3.5. The recursion (3.11) produces the rows $w_{k-1}^T$ of $V^{-1}$ as follows:

$$w_{k-1} = DP\, h_{k-1}, \qquad k = n, \dots, 1. \tag{3.12}$$

(Putting $w_{-1} = 0$, the equality (3.12) is also true for $k = 0$.)

Proof. Due to (3.11) and (3.9) we have

$$DP\, h_{k-1} = DPJ\, h_k + a_k w_{n-1}, \qquad k = n-1, \dots, 0. \tag{3.13}$$

Now it is easy to verify that $\widehat{J} D = DJ$ and $JP = PJ$. Hence, (3.13) shows that

$$DP\, h_{k-1} = \widehat{J}\, DP\, h_k + a_k w_{n-1}, \qquad k = n-1, \dots, 0.$$

Comparing this with (3.4) and using $DP h_{n-1} = w_{n-1}$, we conclude by induction that $DP h_{k-1} = w_{k-1}$; in particular, $DP h_{-1} = w_{-1} = 0$ implies $h_{-1} = 0$, since DP is nonsingular. This completes the proof.

The following fact has already been stated in [11].

COROLLARY 3.6. The transpose of the matrix $V^{-1}$ is given by $V^{-T} = DPH$, where $H = (h_0\;\ h_1\;\ \cdots\;\ h_{n-1})$.

From Corollary 3.6 it follows that $P^{-1} = H V^T D$. On the other hand, we have $P^{-1} = \operatorname{diag}(Q_k)_{k=1}^{m}$ with $Q_k = P_k^{-1}$. One can easily see that the inverse of a (nonsingular) upper triangular Toeplitz matrix is again an upper triangular Toeplitz matrix, i.e., that $Q_k$ has the form

$$Q_k = \begin{pmatrix} q_{k\nu_k} & q_{k,\nu_k - 1} & \cdots & q_{k1} \\ & q_{k\nu_k} & \ddots & \vdots \\ & & \ddots & q_{k,\nu_k - 1} \\ 0 & & & q_{k\nu_k} \end{pmatrix}, \tag{3.14}$$

where $q^{(k)} = (q_{k1}, \dots, q_{k\nu_k})^T$ is the last column of $P_k^{-1}$, i.e., the solution of the equation $P_k q^{(k)} = e_{\nu_k}$. Thus, the matrix $Q = P^{-1}$ is completely determined by those n elements of $H V^T = Q D^{-1}$ which stand in the last columns of the diagonal blocks of $H V^T$,

$$H V^T = \operatorname{diag}\bigl( Q_k D_k^{-1} \bigr)_{k=1}^{m}. \tag{3.15}$$

With the entries of the vector $q^{(k)}$ we form the matrix $Q_k$ and compute the solution $p^{(k)} = (p_{k1}, \dots, p_{k\nu_k})^T$ of

$$Q_k\, p^{(k)} = e_{\nu_k}, \tag{3.16}$$

which gives the matrix $P_k$.

Now we propose the following second inversion algorithm for V.

Algorithm II:

1) Compute recursively the columns $h_{j-1}$, $j = 1, \dots, n$, of H by (3.11).

2) Compute those n elements $q_{kj}\, (\nu_k - 1)!$ of the product $H V^T$ (see (3.15)) which determine the blocks $Q_k$ (see (3.14)) of the matrix $Q = \operatorname{diag}(Q_k)_{k=1}^{m}$.

3) Compute the upper triangular Toeplitz matrices $P_k$ by solving (3.16) and form the matrix $P = \operatorname{diag}(P_k)_{k=1}^{m}$.

4) Compute $V^{-1} = (DPH)^T$.

If the multiplicities $\nu_k$ are small compared with n, the computational complexity of this algorithm is again $O(n^2)$.
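
The following sketch of Algorithm II (our code; the helper names confluent_vandermonde and ut_toeplitz are ours) treats the general confluent case. For clarity it forms the full matrices J, D, P instead of exploiting their sparsity, so it does not realize the $O(n^2)$ complexity of a careful implementation:

    import numpy as np
    from math import comb, factorial
    from scipy.linalg import toeplitz

    def confluent_vandermonde(nodes, mult):
        """V of (2.10): the row for t^j e^{lam t} holds the derivatives (t^j e^{lam t})^{(k)}(0)."""
        n = sum(mult)
        V = np.zeros((n, n), dtype=complex)
        r = 0
        for lam, nu in zip(nodes, mult):
            for j in range(nu):
                V[r, j:] = [comb(k, j) * factorial(j) * lam ** (k - j) for k in range(j, n)]
                r += 1
        return V

    def ut_toeplitz(last_col):
        """Upper triangular Toeplitz matrix with prescribed last column."""
        c = np.zeros(len(last_col), dtype=complex); c[0] = last_col[-1]
        return toeplitz(c, last_col[::-1])

    def alg2_inverse(nodes, mult):
        n = sum(mult)
        a = np.poly(np.repeat(nodes, mult))[::-1]   # a[k] = coefficient of lambda^k in p
        J = np.zeros((n, n), dtype=complex)         # Jordan blocks
        h = np.zeros(n, dtype=complex)              # h_{n-1} of (3.10)
        r = 0
        for lam, nu in zip(nodes, mult):
            J[r:r + nu, r:r + nu] = lam * np.eye(nu) + np.diag(np.ones(nu - 1), 1)
            r += nu
            h[r - 1] = 1.0
        H = np.empty((n, n), dtype=complex)         # step 1: recursion (3.11)
        H[:, n - 1] = h
        for k in range(n - 1, 0, -1):
            H[:, k - 1] = J @ H[:, k] + a[k] * h
        HVt = H @ confluent_vandermonde(nodes, mult).T
        D = np.zeros((n, n), dtype=complex)
        P = np.zeros((n, n), dtype=complex)
        r = 0
        for lam, nu in zip(nodes, mult):            # steps 2 and 3, block by block
            q = HVt[r:r + nu, r + nu - 1] / factorial(nu - 1)  # q^{(k)}, last column of Q_k
            e = np.zeros(nu); e[-1] = 1.0
            P[r:r + nu, r:r + nu] = ut_toeplitz(np.linalg.solve(ut_toeplitz(q), e))  # (3.16)
            D[r:r + nu, r:r + nu] = np.diag([1.0 / factorial(j) for j in range(nu)])
            r += nu
        return (D @ P @ H).T                        # step 4

    nodes, mult = [1.0, 3.0], [2, 1]
    Vinv = alg2_inverse(nodes, mult)
    print(np.allclose(Vinv @ confluent_vandermonde(nodes, mult), np.eye(sum(mult))))  # True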

Our next aim is to develop a matrix representation of $V^{-1}$. A nice algebraic proof of such a representation was given in [12]. Our only motivation for adding a second proof is to remain self-contained within the framework of ordinary differential equations.

4. Matrix representation of $V^{-1}$. Let us start with the nonconfluent case. Since in this case $\widehat{J} = \operatorname{diag}(\lambda_i)_{i=1}^{n}$, the recursion (3.4) together with (3.6) leads directly to the representation

$$V^{-1} = U(a)\, V^T \operatorname{diag}\Bigl( \frac{1}{p'(\lambda_i)} \Bigr)_{i=1}^{n}, \qquad U(a) = (a_{i+j-1})_{i,j=1}^{n}, \quad a_n = 1, \; a_l = 0 \ \text{for } l > n.$$

(Note that in this case (3.6) reads $w_{n-1} = (1/p'(\lambda_i))_{i=1}^{n}$, since $p'(\lambda_i) = \prod_{j \neq i} (\lambda_i - \lambda_j)$.) This means that the inverse of a Vandermonde matrix V is closely related to its transpose $V^T$.
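
A numerical check of this representation (our sketch):

    import numpy as np

    lam = np.array([1.0, 2.0, 4.0, 7.0]); n = len(lam)
    V = np.vander(lam, increasing=True)                    # V[i, j] = lam_i**j
    a = np.poly(lam)[::-1]                                 # a[k] = coefficient of lambda^k, a[n] = 1
    U = np.array([[a[i + j + 1] if i + j + 1 <= n else 0.0
                   for j in range(n)] for i in range(n)])  # U(a) = (a_{i+j-1}), triangular Hankel
    dp = np.array([np.prod(lam[i] - np.delete(lam, i)) for i in range(n)])  # p'(lam_i)
    print(np.allclose(U @ V.T @ np.diag(1.0 / dp), np.linalg.inv(V)))       # True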

Such a representation is also possible in the confluent case. Recall that $p(\lambda) = \sum_{k=0}^{n} a_k \lambda^k$, where $a_n = 1$. Then the matrix H can be computed as follows.

LEMMA 4.1.

$$H = \bigl( h_{n-1}\;\ J h_{n-1}\;\ \cdots\;\ J^{n-1} h_{n-1} \bigr)\, U(a). \tag{4.1}$$

This lemma is a consequence of the observation that the columns of the right-hand side of (4.1) satisfy the recursion (3.11).

Let $Z_k$ be the $\nu_k \times \nu_k$ counteridentity matrix, defined as having ones on the antidiagonal and zeros elsewhere, and set $Z = \operatorname{diag}(Z_k)_{k=1}^{m}$. Obviously, the products $P_k Z_k$ are triangular Hankel matrices of order $\nu_k$. We obtain the following matrix representation of $V^{-1}$.

THEOREM 4.2.

$$V^{-1} = U(a)\, V^T \operatorname{diag}(G_k)_{k=1}^{m}, \tag{4.2}$$

where $G_k = D_k P_k Z_k D_k$.

Proof. From the obvious equality $Z_k P_k^T = P_k Z_k$ it follows that

$$\operatorname{diag}(G_k)_{k=1}^{m} = DPZD = (DPZD)^T.$$

Furthermore, $U(a)^T = U(a) = \bigl( a^{(0)}\;\ a^{(1)}\;\ \cdots\;\ a^{(n-1)} \bigr)^T$, where $a^{(i)} = (a_{i+1}, \dots, a_n, 0, \dots, 0)$ is regarded as a row vector. Consequently, (4.2) is equivalent to $V^{-T} = DPZD\, V\, U(a)$. In view of Corollary 3.6 and Lemma 4.1 it remains to show that

$$\bigl( h_{n-1}\;\ J h_{n-1}\;\ \cdots\;\ J^{n-1} h_{n-1} \bigr) = ZDV. \tag{4.3}$$

To this end we apply the formula for $e^{tJ}$ which is well known from the theory of ordinary differential equations. This formula yields

$$e^{tJ} h_{n-1} = Z D\, e(t). \tag{4.4}$$

If we compare (4.4) with $e^{tJ} h_{n-1} = (h_{n-1}\;\ J h_{n-1}\;\ \cdots\;\ J^{n-1} h_{n-1})\, V^{-1} e(t)$ (Lemma 2.1), then we obtain $(h_{n-1}\;\ J h_{n-1}\;\ \cdots\;\ J^{n-1} h_{n-1}) = ZDV$, and (4.3) is proved.
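
Continuing the example with nodes $\lambda_1 = 1$ ($\nu_1 = 2$) and $\lambda_2 = 3$ from Section 3, one can verify (4.2) numerically (our sketch; the block $P_1$ is built from the partial fraction coefficients $p_{11} = -1/4$, $p_{12} = -1/2$, and $G_2 = p_{21} = 1/4$):

    import numpy as np

    # p(lambda) = (lambda - 1)^2 (lambda - 3) = lambda^3 - 5 lambda^2 + 7 lambda - 3
    a = np.array([-3.0, 7.0, -5.0, 1.0]); n = 3        # a[k] = a_k, a[3] = a_n = 1
    V = np.array([[1.0, 1, 1], [0, 1, 2], [1, 3, 9]])  # confluent Vandermonde matrix (2.10)
    U = np.array([[a[i + j + 1] if i + j + 1 <= n else 0.0
                   for j in range(n)] for i in range(n)])   # U(a) = (a_{i+j-1})
    P1 = np.array([[-0.5, -0.25], [0.0, -0.5]])        # upper triangular Toeplitz, last col (p_11, p_12)
    Z1 = np.array([[0.0, 1.0], [1.0, 0.0]])            # counteridentity Z_1
    D1 = np.diag([1.0, 1.0])                           # D_1 = diag(1/0!, 1/1!)
    G = np.zeros((n, n))
    G[:2, :2] = D1 @ P1 @ Z1 @ D1                      # G_1 = D_1 P_1 Z_1 D_1
    G[2, 2] = 0.25                                     # G_2 = p_21
    print(np.allclose(U @ V.T @ G, np.linalg.inv(V)))  # True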

5. Final remarks.

REMARK 5.1. The assertion $w_{k-1} - \widehat{J} w_k = a_k w_{n-1}$ ($k = 0, \dots, n-1$) of Lemma 3.3, written in the matrix form $V^{-T} S_n - \widehat{J} V^{-T} = w_{n-1} (a_0, a_1, \dots, a_{n-1})$, where $S_n$ is the forward shift of order n, is the so-called displacement equation for $V^{-T}$ (see [9]). This equation can also be deduced from the well-known equality $B^T V^T = V^T \widehat{J}$, where B is defined in (3.1).

REMARK 5.2. To prove the assertion of Theorem 3.4 one can use, instead of the Laplace transform, the Laurent series expansion at infinity of $(\lambda - \lambda_i)^{-j}$ (see [2]).

REMARK 5.3. In Section 2 we have only used that each component of $e^{tA}$ is a solution of (2.5). This fact is a consequence of the Cayley-Hamilton theorem, $p(A) = 0$. Hence, if $q(\lambda) = \lambda^N + b_{N-1}\lambda^{N-1} + \cdots + b_1 \lambda + b_0$ is any other polynomial with $q(A) = 0$, for example the minimal polynomial of A, then the components of $e^{tA}$ are solutions of $y^{(N)}(t) + b_{N-1}\, y^{(N-1)}(t) + \cdots + b_0\, y(t) = 0$ (since the matrix-valued function $Y(t) = e^{tA}$ solves this equation). So we obtain, in the same way as in Section 2, $e^{tA} v = (v\;\ Av\;\ A^2 v\;\ \cdots\;\ A^{N-1} v)\, V_q^{-1} e_q(t)$, where $V_q$ is the confluent Vandermonde matrix corresponding to the zeros of $q(\lambda)$ and $e_q(t)$ denotes the vector of the standard fundamental solutions of the above differential equation of order N. The resulting representation of $e^{tA}$ is $e^{tA} = (I, A, A^2, \dots, A^{N-1})\, y_q(t)$, where $y_q(t) = V_q^{-1} e_q(t)$.

REMARK 5.4. Assume for the sake of simplicity that all $\nu_k$ are equal to $\nu$, where $\nu$ is small compared with n. Then, using the representation (4.2), the matrix-vector multiplication with the inverse of a (confluent) Vandermonde matrix V can be done with $O(n \log^2 n)$ computational complexity. Indeed, utilizing FFT techniques, the multiplication of an $n \times n$ triangular Hankel matrix and a vector costs $O(n \log n)$ operations. In particular, this leads to a complexity of $O(n)$ for the matrix-vector multiplication with $\operatorname{diag}(G_k)_{k=1}^{m}$. Now, the rows of the matrix V can be reordered in such a way that the first m rows form a nonconfluent $m \times n$ Vandermonde matrix, and the $(km+1)$th up to the $((k+1)m)$th rows a nonconfluent Vandermonde matrix multiplied by a diagonal matrix, $k = 1, \dots, \nu - 1$. With the ideas of [2] concerning the nonconfluent case, this leads to a complexity of $O(n \log^2 n)$ for multiplying $V^T$ with a vector.

REMARK 5.5. From Lemma 3.1 one obtains a nice formula for the exponential of the companion matrix B, namely $e^{tB} = (y(t)\;\ y'(t)\;\ \cdots\;\ y^{(n-1)}(t))$. If A is nonderogatory, i.e., if there exists a nonsingular matrix U such that $AU = UB$, then we obtain the representation $e^{tA} = A_v\, (y(t)\;\ y'(t)\;\ \cdots\;\ y^{(n-1)}(t))\, A_v^{-1}$, where $v = U e_1$ (which implies $A_v = U$). We remark that A is nonderogatory if and only if there exists a v such that $A_v$ is invertible. This follows from $A_v B = A A_v$, which can be easily proved.

REMARK 5.6. For $f(z) = e^z$ we have $f(A) = (I, A, \dots, A^{n-1})\, f(B) e_1$ (since $y(t) = e^{tB} e_1$). This is also true for other power series $f(z) = \sum_{m=0}^{\infty} \alpha_m (z - z_0)^m$ with convergence radius $R > 0$ and the corresponding matrix-valued function f(A) ($A \in \mathbb{C}^{n \times n}$ with $\max_i |\lambda_i(A) - z_0| < R$). Indeed, if we compare the initial values of both sides of the equation $e^{t(A - z_0 I)} = (I, A, \dots, A^{n-1})\, e^{t(B - z_0 I)} e_1$, then we obtain $(A - z_0 I)^m = (I, A, \dots, A^{n-1})\, (B - z_0 I)^m e_1$, which yields the assertion. For the application to a vector this means $f(A) v = A_v b$, where $b = f(B) e_1$. We remark that b is just the coefficient vector of the Hermite interpolation polynomial of f(z) with respect to the eigenvalues $\lambda_i$ of A and their algebraic multiplicities $\nu_i$. This follows from the well-known equality $V B V^{-1} = \widehat{J}^T$:

$$Vb = V f(B) V^{-1}\, V e_1 = f(\widehat{J}^T)\, V e_1 = \bigl( f(\lambda_1), f'(\lambda_1), \dots, f^{(\nu_1 - 1)}(\lambda_1), \dots, f(\lambda_m), \dots, f^{(\nu_m - 1)}(\lambda_m) \bigr)^T.$$
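
A small numerical sketch of this remark (ours; distinct eigenvalues and $f(z) = e^z$):

    import numpy as np
    from scipy.linalg import expm

    lam = np.array([1.0, 2.0, 3.0]); n = len(lam)
    a = np.poly(lam)[::-1]                             # a[k] = coefficient of lambda^k
    B = np.zeros((n, n))
    B[1:, :-1] = np.eye(n - 1); B[:, -1] = -a[:n]      # companion matrix of p
    A = np.array([[2.0, 1.0, 0.0], [0.5, 2.0, 0.5], [0.0, 1.0, 2.0]])  # eigenvalues 1, 2, 3
    b = expm(B)[:, 0]                                  # b = f(B) e_1 for f(z) = e^z
    F = sum(b[k] * np.linalg.matrix_power(A, k) for k in range(n))     # (I, A, A^2) b
    print(np.allclose(F, expm(A)))                     # True
    # b is the coefficient vector of the polynomial interpolating e^z at the lam_i:
    print(np.allclose(np.vander(lam, increasing=True) @ b, np.exp(lam)))  # True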

REFERENCES

[1] D. CALVETTI AND L. REICHEL, Fast inversion of Vandermonde-like matrices involving orthogonal polynomials, BIT, 33 (1993), pp. 473-484.

[2] T. FINCK, G. HEINIG, AND K. ROST, An inversion formula and fast algorithms for Cauchy-Vandermonde matrices, Linear Algebra Appl., 183 (1993), pp. 179-192.

[3] G. H. GOLUB AND C. F. VAN LOAN, Matrix Computations, Third Edition, The Johns Hopkins University Press, Baltimore, 1996.

[4] W. A. HARRIS, JR., J. P. FILLMORE, AND D. R. SMITH, Matrix exponentials - another approach, SIAM Rev., 43 (2001), pp. 694-706.

[5] F. I. LANDER, The Bezoutian and the inversion of Hankel and Toeplitz matrices, Mat. Issled., 9 (1974), pp. 69-87.

[6] I. E. LEONARD, The matrix exponential, SIAM Rev., 38 (1996), pp. 507-512.

[7] C. MOLER AND C. F. VAN LOAN, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev., 20 (1978), pp. 801-836.

[8] G. HEINIG AND F. AL-MUSALLAM, Hermite's formula for vector polynomial interpolation with applications to structured matrices, Appl. Anal., 70 (1999), pp. 331-345.

[9] G. HEINIG, W. HOPPE, AND K. ROST, Structured matrices in interpolation and approximation problems, Wiss. Z. Tech. Univ. Karl-Marx-Stadt, 31 (1989), no. 2, pp. 196-202.

[10] G. HEINIG AND K. ROST, Algebraic Methods for Toeplitz-like Matrices and Operators, Akademie-Verlag, Berlin, and Birkhäuser Verlag, Basel, 1984.

[11] S.-H. HOU AND W.-K. PANG, Inversion of confluent Vandermonde matrices, Comput. Math. Appl., 43 (2002), pp. 1539-1547.

[12] G. SANSIGRE AND M. ALVAREZ, On Bezoutian reduction with the Vandermonde matrix, Linear Algebra Appl., 121 (1989), pp. 401-408.

[13] J. F. TRAUB, Associated polynomials and uniform methods for the solution of linear problems, SIAM Rev., 8 (1966), pp. 277-301.

* Received August 4, 2003. Accepted for publication June 28, 2004. Recommended by L. Reichel.

UWE LUTHER† AND KARLA ROST‡

†Fakultät für Mathematik, Technische Universität Chemnitz, D-09107 Chemnitz, Germany. E-mail: uwe.Luther@mathematik.tu-chemnitz.de

‡Fakultät für Mathematik, Technische Universität Chemnitz, D-09107 Chemnitz, Germany. E-mail: karla.rost@mathematik.tu-chemnitz.de