
Algorithm 767: a Fortran 77 package for column reduction of polynomial matrices.

A. J. Geurts and C. Praagman
ACM Transactions on Mathematical Software, March 1997

1. INTRODUCTION

Polynomial matrices, i.e., matrices whose elements are polynomials (or equivalently, polynomials with matrix coefficients), play a dominant role in modern systems theory. Consider for instance a set of $n$ variables $w = (w_1, \ldots, w_n)'$. Suppose that these variables are related by a set of $m$ higher-order linear differential equations with constant coefficients,

$$P_0 w + P_1 \frac{dw}{dt} + P_2 \frac{d^2 w}{dt^2} + \cdots + P_q \frac{d^q w}{dt^q} = 0, \qquad (1)$$

where $P_k$, $k = 0, \ldots, q$, are $m \times n$ constant matrices. Define the $m \times n$ polynomial matrix $P$ by

$$P(s) = \sum_{k=0}^{q} P_k s^k;$$

then the set of equations (1) can be written concisely as

$$P\left(\frac{d}{dt}\right) w = 0. \qquad (2)$$

In the behavioral approach to systems theory, Eq. (2), together with a specification of the function class to which the solutions should belong, describes a system. For example, $(C^\infty(\mathbb{R}_+), P(d/dt)w = 0)$ describes a system. The behavior of this system consists of all $w$ taking values in $\mathbb{R}^n$, with components that are $C^\infty$ functions on $[0, \infty)$, that satisfy Eq. (2). Many properties of the behavior of the system can be derived immediately from properties of the polynomial matrix $P$ (see Willems [1986a; 1986b; 1987; 1988; 1991]).
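To fix the data structure used throughout, a polynomial matrix can be handled numerically through its coefficient matrices $P_k$. The following sketch (ours, in Python for readability; the package itself is Fortran 77) stores $P$ as a three-dimensional coefficient array and evaluates it by Horner's rule:

```python
import numpy as np

# A polynomial matrix P(s) = P_0 + P_1 s + ... + P_q s^q is stored as an
# array of shape (q+1, m, n); coeffs[k] holds the m x n coefficient matrix P_k.
def poly_eval(coeffs, s):
    """Evaluate P(s) at a scalar s by Horner's rule."""
    value = np.zeros(coeffs.shape[1:], dtype=np.result_type(coeffs, s))
    for P_k in coeffs[::-1]:        # highest-degree coefficient first
        value = value * s + P_k
    return value

# Example: P(s) = [[s, 1], [0, s^2]]  (m = n = 2, q = 2)
P = np.zeros((3, 2, 2))
P[1, 0, 0] = 1.0                    # s    in entry (1,1)
P[0, 0, 1] = 1.0                    # 1    in entry (1,2)
P[2, 1, 1] = 1.0                    # s^2  in entry (2,2)
print(poly_eval(P, 2.0))            # [[2. 1.] [0. 4.]]
```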

The information contained in $P$ can be redundant, in which case the polynomial matrix can be replaced by another polynomial matrix of lower degree or smaller size without changing the set of trajectories satisfying the equations. For several reasons a minimal description can be convenient. This could be purely a matter of simplicity, but there are also cases in which a minimal description has additional value, for instance, if one looks for state-space or input-output representations of the system. Depending on the goal one has in mind, minimality can have different meanings, but in important situations a row- or column-reduced description supplies a minimal description. For instance, in the work of Willems on behaviors one finds the search for a row-reduced polynomial matrix that describes the same behavior as Eq. (2).

Other examples that display the importance of column- or row-reduced polynomial matrices may be found in articles on inversion of polynomial matrices [Inouye 1979; Lin et al. 1996; Zhang 1987], or in papers on a polynomial matrix version of the Euclidean algorithm [Codenotti and Lotti 1989; Wang and Zhou 1986; Wolovich 1984; Zhang 1986; Zhang and Chen 1983] where a standing assumption is that one of the matrices involved is column reduced.

Since $P$ is row reduced if and only if its transpose $P'$ is column reduced, the problems of row reduction and column reduction are equivalent.

In this article a subroutine is presented that determines a column-reduced matrix unimodularly equivalent to a given polynomial matrix. The algorithm underlying the subroutine has been developed in several stages: the part of the algorithm in which a minimal basis of the right null space of a polynomial matrix is calculated is an adaptation of the algorithm described in Beelen [1987]. The original idea of the algorithm has been described in Beelen et al. [1988], and successive improvements have been reported in Neven [1988] and Praagman [1988; 1991]. The paper by Neven and Praagman [1993] gives the algorithm in its most general form, exploiting the special structure of the problem in the calculation of the kernel of a polynomial matrix. The subroutine we describe here is an implementation of the algorithm in the latter paper.

The algorithm was intended to perform well in all cases, but it turned out that the routine has difficulties in some special cases, which in some sense might be described as singular. We give examples in Section 8 and discuss these difficulties in detail in Section 9. A crucial role is played by a tolerance, TOL, used in the rank-determining steps, which has to be set properly.

2. PRELIMINARIES

Let us start by introducing some notation and recalling some definitions.

Let $\mathbb{R}[s]$ denote the ring of polynomials in $s$ with real coefficients, and let $\mathbb{R}^n[s]$ be the set of column vectors of length $n$ with elements in $\mathbb{R}[s]$. Clearly $\mathbb{R}^n[s]$ is a free module over $\mathbb{R}[s]$.

Remark. Recall that, loosely speaking, a set $M$ is called a module over a ring $R$ if any two elements in $M$ can be added, and any element of $M$ can be multiplied by an arbitrary element from $R$. A set $m_1, \ldots, m_r \in M$ is called a set of generators of $M$ if any element $m \in M$ can be written as a linear combination of the generators: $m = \sum r_i m_i$ for some $r_i \in R$. The module is free if there exists a set of generators such that the $r_i$ are unique. Note that if $R$ is a field, then a module is just a vector space.

Let $\mathbb{R}^{m \times n}[s]$ be the set of $m \times n$ matrices with elements in $\mathbb{R}[s]$. Let $P \in \mathbb{R}^{m \times n}[s]$. Then $d(P)$, the degree of $P$, is defined as the maximum of the degrees of the elements of $P$, and $d_j(P)$, the $j$th column degree of $P$, as the maximum of the degrees in the $j$th column. $\Delta(P)$ is the array of integers obtained by arranging the column degrees of $P$ in nondecreasing order.

The leading column coefficient matrix of $P$, $\Gamma_c(P)$, is the constant matrix whose $j$th column is obtained by taking from the $j$th column of $P$ the coefficients of the term with degree $d_j(P)$. Let $P_j(s) = \sum_{k=0}^{d_j(P)} P_{jk} s^k$ be the $j$th column of $P$; then the $j$th column of $\Gamma_c(P)$ equals

$$\Gamma_c(P)_j = P_{j, d_j(P)}.$$
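In terms of the coefficient-array representation above, $d_j(P)$ and $\Gamma_c(P)$ are straightforward to compute. A minimal sketch of ours (the convention $d_j = -1$ for a zero column matches the convention used in Section 9):

```python
import numpy as np

def column_degrees(coeffs, tol=0.0):
    """d_j(P): index of the highest coefficient in column j whose entries
    are not all below tol; -1 for a (numerically) zero column."""
    q1, m, n = coeffs.shape
    deg = np.full(n, -1)
    for j in range(n):
        for k in range(q1 - 1, -1, -1):
            if np.max(np.abs(coeffs[k, :, j])) > tol:
                deg[j] = k
                break
    return deg

def leading_column_matrix(coeffs, tol=0.0):
    """Gamma_c(P): column j is the coefficient vector of s**d_j(P) in column j."""
    q1, m, n = coeffs.shape
    deg = column_degrees(coeffs, tol)
    G = np.zeros((m, n))
    for j in range(n):
        if deg[j] >= 0:
            G[:, j] = coeffs[deg[j], :, j]
    return G
```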

Definition 2.1. Let $P \in \mathbb{R}^{m \times n}[s]$. $P$ is called column reduced if there exists a permutation matrix $T$ such that $P = (0, P_1)T$, where $\Gamma_c(P_1)$ has full column rank.

Remark. Note that we do not require $\Gamma_c(P)$ to be of full column rank. In the literature there is some ambiguity about column properness and column reducedness. We follow here the definition of Willems [1986a; 1986b; 1987]: $P$ is column proper if $\Gamma_c(P)$ has full column rank, and is column reduced if the conditions in the definition above are satisfied.
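Definition 2.1 translates into a simple numerical test: discard the zero columns (the block $(0, \cdot)$ in the definition) and check that $\Gamma_c$ of the remaining columns has full column rank. A sketch of ours, continuing the previous snippet and reusing its two helpers:

```python
import numpy as np

def is_column_reduced(coeffs, tol=1e-12):
    """Definition 2.1: P is column reduced iff Gamma_c of its nonzero
    columns has full column rank (zero columns may be permuted away)."""
    deg = column_degrees(coeffs, tol)
    G = leading_column_matrix(coeffs, tol)
    G1 = G[:, deg >= 0]                 # keep only the nonzero columns
    if G1.shape[1] == 0:
        return True                     # a zero matrix is trivially column reduced
    return np.linalg.matrix_rank(G1, tol) == G1.shape[1]
```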

A square polynomial matrix $U \in \mathbb{R}^{n \times n}[s]$ is unimodular if $\det(U) \in \mathbb{R} \setminus \{0\}$ or, equivalently, if $U^{-1}$ exists and is also polynomial. Two polynomial matrices $P \in \mathbb{R}^{m \times n}[s]$ and $R \in \mathbb{R}^{m \times n}[s]$ are called unimodularly equivalent if there exists a unimodular matrix $U \in \mathbb{R}^{n \times n}[s]$ such that $PU = R$. It is well known that every regular polynomial matrix is unimodularly equivalent to a column-proper matrix; see Wolovich [1978]. Kailath [1980] states that the above result can be extended to polynomial matrices of full column rank without changing the proof. In fact Wolovich's proof is sufficient to establish that any polynomial matrix is unimodularly equivalent to a column-reduced matrix. Furthermore, Wolovich's proof implies immediately that the column degrees of the column-reduced polynomial matrix do not exceed those of the original matrix; see Neven and Praagman [1993].

THEOREM 2.2. Let $P \in \mathbb{R}^{m \times n}[s]$; then there exists a unimodular $U \in \mathbb{R}^{n \times n}[s]$ such that $R = PU$ is column reduced. Furthermore $\Delta(R) \leq \Delta(P)$ totally.
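A small worked instance of the theorem (our own illustration, not taken from the cited sources): the matrix
$$P(s) = \begin{pmatrix} 1 & s \\ s & s^2 + 1 \end{pmatrix}$$
has column degrees $(1, 2)$ and $\Gamma_c(P) = \left(\begin{smallmatrix} 0 & 0 \\ 1 & 1 \end{smallmatrix}\right)$, of rank 1, so $P$ is not column reduced. Since $\det P(s) = 1$, $P$ is itself unimodular, and taking
$$U(s) = P(s)^{-1} = \begin{pmatrix} s^2 + 1 & -s \\ -s & 1 \end{pmatrix}$$
gives $R = PU = I_2$, which is column reduced with $\Delta(R) = (0, 0) \leq (1, 2) = \Delta(P)$, as the theorem asserts.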

Although the proof of this theorem in the above sources is constructive, it is not suited for practical computations, for reasons explained by Van Dooren [1979]. Example 8.3 (later) illustrates this point. In Neven and Praagman [1993] an alternative, constructive proof is given, on which the algorithm underlying the subroutine we describe here is based.

The most important ingredient of the algorithm is the calculation of a minimal basis of the right null space of a polynomial matrix associated with P.

Definition 2.3. Let $M$ be a submodule of $\mathbb{R}^n[s]$. Then $Q \in \mathbb{R}^{n \times r}[s]$ is called a minimal basis of $M$ if $Q$ is column proper and the columns of $Q$ span $M$.

In the algorithm a minimal basis is calculated for the module

$$\ker(P, -I_m) = \{\, v \in \mathbb{R}^{n+m}[s] \mid (P, -I_m)v = 0 \,\}.$$

See Beelen et al. [1988]. Here and in the sequel, $I_m$ will denote the identity matrix of size $m$.

The first observation is that if $(U', R')'$ is such a basis, with $U \in \mathbb{R}^{n \times n}[s]$, then $U$ is unimodular; see Beelen et al. [1988] and Neven and Praagman [1993]. Of course, $R = PU$, but although $(U', R')'$ is minimal and hence column reduced, this does not necessarily hold for $R$. Take for example the following:

[Mathematical Expression Omitted].

Then

[Mathematical Expression Omitted]

is a minimal basis for $\ker(P, -I_m)$, but $R$ is clearly not column reduced.

The next observation is that if $(U', R')'$ is a minimal basis for $\ker(P, -I_m)$, then $(U', s^b R')'$ is a basis for $\ker(s^b P, -I_m)$, but not necessarily a minimal one. In particular, if $b > d(U)$, then $(U', s^b R')'$ is minimal if and only if $R$ is column reduced. On the other hand, for any minimal basis $(U_b', \hat{R}_b')'$ of $\ker(s^b P, -I_m)$, $\hat{R}_b$ is divisible by $s^b$, and $PU_b = s^{-b}\hat{R}_b$. Neven and Praagman [1993] proved that for $b > (n - 1)d(P)$, the calculation of a minimal basis of $\ker(s^b P, -I_m)$ yields a pair $(U_b, R_b)$, with $R_b = s^{-b}\hat{R}_b$, in which $R_b$ is column reduced.

3. LINEARIZATION

The calculation of $\ker(s^b P, -I_m)$ is done in the spirit of the procedure explained in Beelen [1987]: we calculate a minimal basis of the kernel of the following linearization of $(s^b P, -I_m)$.

Let $P$ be given by $P(s) = P_d s^d + P_{d-1} s^{d-1} + \cdots + P_0$. Define the pencil $H_b$ by

[Mathematical Expression Omitted],

where $A_b, E_b \in \mathbb{R}^{m_a \times n_a}$, with $m_a = (d + b)m$ and $n_a = n + m_a$.

With

[Mathematical Expression Omitted]

we see that

[Mathematical Expression Omitted],

so if $V$ is a basis of $\ker(H_b)$, then

[Mathematical Expression Omitted]

is a basis for $\ker(s^b P, -I_m)$ [Beelen 1987]. Neven and Praagman [1993] proved that $\Delta(V) = \Delta((U', R')')$ and that $V$ is minimal if and only if $(U', R')'$ is minimal. So the problem is to calculate a minimal basis for $\ker(H_b)$.

4. THE CALCULATION OF A MINIMAL BASIS

A minimal basis is calculated by transforming the pencil $H_b$ by orthogonal pre- and posttransformations to a form in which $E_b$ is unchanged and $A_b$ is in upper staircase form.

Definition 4.1. A matrix $A \in \mathbb{R}^{m_a \times n_a}$ is in upper staircase form if there exists an increasing sequence of integers $s_i$, $1 \leq s_1 < s_2 < \cdots < s_r \leq n_a$, such that $A_{i,s_i} \neq 0$ and $A_{ij} = 0$ if $i > r$ or $j < s_i$. The elements $A_{i,s_i}$ are called the pivots of the staircase form.

Note that $i \leq s_i$ and that the submatrix of the first $i$ rows and the first $s_i$ (or more) columns of $A$ is right invertible for $1 \leq i \leq r$.
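The following sketch (ours, in Python; COLRED performs this on the pencil of Section 3 and also accumulates the corresponding column transformations) reduces a constant matrix to upper staircase form with Householder reflections, making the rank decision with a tolerance as discussed in Section 6:

```python
import numpy as np

def upper_staircase(A, tol=1e-12):
    """Return (B, Q), Q orthogonal, with B = Q @ A in upper staircase form.
    Columns whose remaining part falls below tol are treated as zero."""
    B = np.array(A, dtype=float)
    ma, na = B.shape
    Q = np.eye(ma)
    i = 0                                    # current pivot row (0-based)
    for j in range(na):                      # scan columns left to right
        if i >= ma:
            break
        x = B[i:, j]
        nrm = np.linalg.norm(x)
        if nrm <= tol:                       # rank decision: skip a "zero" column
            continue
        v = x.copy()                         # Householder vector mapping x to
        v[0] += nrm if x[0] >= 0 else -nrm   # -sign(x[0]) * ||x|| * e_1
        H = np.eye(ma)
        H[i:, i:] -= 2.0 * np.outer(v, v) / (v @ v)
        B, Q = H @ B, H @ Q                  # H acts as identity on the first i rows
        i += 1                               # pivot created at (i, j)
    return B, Q
```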

Beelen [1987] and Neven and Praagman [1993] have shown that there exist orthogonal, symmetric matrices $Q_1, Q_2, \ldots, Q_r$ such that

[Mathematical Expression Omitted]

is in upper staircase form. The matrices $Q_k$ are elementary Householder reflections, and we define [Mathematical Expression Omitted]. Note that $E_b$ is not changed by this transformation:

[Mathematical Expression Omitted].

We partition [Mathematical Expression Omitted] as follows:

[Mathematical Expression Omitted],

with $A_{jj} \in \mathbb{R}^{m_j \times m_{j-1}}$ right invertible and in upper staircase form, $j = 1, \ldots, l$, and $A_{j,j+1}$ a square matrix, for $j = 1, \ldots, l + 1$. We take $m_0 = n$. Then the dimensions $m_j$, $j = 1, \ldots, l$, are uniquely determined and

[Mathematical Expression Omitted].

Let [Mathematical Expression Omitted] be the submatrix of [Mathematical Expression Omitted] obtained by deleting the first $k - 1$ rows, and let $s_k$ be the column index of the first nonzero column in [Mathematical Expression Omitted]. Then $Q_k$ transforms this (sub)column into the first unit vector and leaves the first $k - 1$ rows and the first $s_k - 1$ columns of [Mathematical Expression Omitted] invariant. Consequently, postmultiplication with [Mathematical Expression Omitted] leaves the first $n + k - 1$ columns of [Mathematical Expression Omitted] unaltered.

Because of the staircase form of [Mathematical Expression Omitted] it is easy to see that the equation [Mathematical Expression Omitted] has $m_{i-1} - m_i$ independent solutions of the form

[Mathematical Expression Omitted],

where $y_{ii} \in \mathbb{R}^{m_{i-1}}$ is a null vector of $A_{ii}$, and $y_{jk} \in \mathbb{R}^{m_{j-1}}$, $k = j, \ldots, i$. It is clear that [Mathematical Expression Omitted] is then a null vector of $H_b$, and taking the top and bottom parts of $v$ yields a column of $U_b$ and $R_b$, respectively, of degree $i - 1$. Note that this implies that $l \leq b + d + 1$, since $(I_n, s^b P')'$ is a basis of $\ker(s^b P, -I_m)$ and since the degrees of the minimal bases of $\ker(H_b)$ and $\ker(s^b P, -I_m)$ are the same; see the end of Section 3.

5. INCREASING b

The computational effort to calculate a minimal basis for $\ker(s^b P, -I_m)$ increases quickly with $b$. Experiments suggest, however, that in many cases a small $b$ already leads to success. Therefore the algorithm starts with $b = 1$ and increases $b$ by one until a column-reduced $R_b$ has been found. With the transition from $b$ to $b + 1$ the computations need not start from scratch, as we explain in this section.

The transformation of $A_b$ into [Mathematical Expression Omitted] is split up into steps, where in the $j$th step the diagonal block matrix $A_{jj}$ is formed, $j = 1, \ldots, l$. Let $\mu_j$ denote the row index of the first row of $A_{jj}$. Then $A_{jj}$ is formed by the Householder reflections $Q_{\mu_j}, \ldots, Q_{\mu_{j+1}-1}$. Let $N_k \in \mathbb{R}^{m \times (n + (d + k)m)}$ denote the matrix $(0, \ldots, 0, I_m)$. Then

[Mathematical Expression Omitted],

as well as

[Mathematical Expression Omitted].

Observe that the first $n$ columns of $A_{b+1}$ are just the first $n$ columns of $A_1$ augmented with zeros. Since postmultiplication with any $Q^{\langle n \rangle}$ does not affect the first $n$ columns, clearly the reflection vectors of the first $\mu_2 - 1$ Householder reflections of $A_{b+1}$ (the ones involved in the computation of $A_{11}$) are the reflection vectors of the first $\mu_2 - 1$ Householder reflections of $A_1$ augmented with zeros. Let $K_{1;1} = Q_{\mu_2 - 1} \cdots Q_1$ be the orthogonal transformation of the first step of the transformation of $A_1$. Then $K_{1;b+1} = \operatorname{diag}(K_{1;1}, I_{bm})$ is the corresponding transformation for $A_{b+1}$, and the first step can be described by

[Mathematical Expression Omitted],

where [Mathematical Expression Omitted] has the obvious meaning. From this we see that after the first step the second block column of width $\mu_2 - 1$, i.e., the block column from which $A_{22}$ will be acquired, is exactly the corresponding block column of $A_2$ after the first step, augmented with zeros, if $b \geq 2$. In the second step, postmultiplication with the Householder reflections [Mathematical Expression Omitted], for $k = \mu_2, \ldots, \mu_3 - 1$, does not affect the first $n + \mu_2 - 1$ columns. Therefore, the argument for the first step applies, mutatis mutandis, to the second step, if $b \geq 3$. As a consequence, it can be concluded by induction that the transformations for the $j$th step for $A_{b+1}$ and $A_b$ are related by

$$K_{j;b+1} = \operatorname{diag}(K_{j;b}, I_m), \qquad j = 1, \ldots, b,$$

and that we can start the algorithm for $A_{b+1}$ with $A_b$ transformed by $K_{j;b}$, for $j = 1, \ldots, b$, augmented with [Mathematical Expression Omitted] at the bottom and $m$ zero columns on the right.

6. DESCRIPTION OF THE ALGORITHM

The algorithm consists of the following:

* An outer loop in $b$, running from 1 to $(n - 1)d + 1$ at most. The termination criterion is that the calculated $R_b$ is column reduced.

* An intermediate loop in $i$, running from $b$ to $b + d + 1$ at most, in which $A_{ii}$ is determined and in which the null vectors of $H_b$ of degree $i - 1$ are calculated. The precondition is that $A_{jj}$ and the null vectors of $H_b$ of degree $j - 1$, for $j = 1, \ldots, i - 1$, have been computed and are available. The termination criterion is that the submatrix of computed columns of $R_b$ is not column reduced, or that all the null vectors of $H_b$ have been found.

* An inner loop in $k$, running from $n + \mu_{i-1}$ to $n + \mu_i - 1$ (with $\mu_0 = -n$), indexing the columns of $A_{ii}$. For each $k$, either a Householder reflection is generated and applied, or a null vector of degree $i - 1$ is computed. If a null vector has been found, then $U_b$ and $R_b$ are extended with one column, and $R_b$ is checked for column reducedness. The loop is terminated after $k = n + \mu_i - 1$ or if the extended $R_b$ is not column reduced.

The algorithm uses a tolerance to determine the rank of the diagonal blocks $A_{ii}$, from which the null vectors of $H_b$, and consequently the computed solution, are derived. An appropriate tolerance is needed to find the solution, but the value of the tolerance does not influence the accuracy of the computed solution.
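The control structure can be summarized as follows (a schematic sketch of ours, not the COLRED code; `kernel_step` stands for the staircase computation of Section 4 with the warm start of Section 5, and `is_column_reduced` for the test of Definition 2.1):

```python
def column_reduce(n, d, kernel_step, is_column_reduced, tol):
    """Outer loop in b. kernel_step(b, tol) must return the pair (U_b, R_b)
    obtained from a minimal basis of ker(s**b * P, -I); both callables are
    supplied by the caller, since this sketch only fixes the loop structure."""
    for b in range(1, (n - 1) * d + 2):      # b = 1, ..., (n-1)d + 1 at most
        U_b, R_b = kernel_step(b, tol)
        if is_column_reduced(R_b, tol):      # termination criterion
            return U_b, R_b
    raise RuntimeError("no column-reduced R found; consider another tolerance")
```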

7. THE IMPLEMENTATION

The algorithm has been implemented in the subroutine COLRED. The subroutine is written according to the standards of SLICOT (see WGS [1990]), with a number of auxiliaries. It is based on the linear algebra packages BLAS [Dongarra et al. 1988; Lawson et al. 1979] and LAPACK [Anderson et al. 1992]. The complete code of an earlier (slightly different) version of the routines and an example program are included in a report by the authors [Geurts and Praagman 1994].

COLRED takes a polynomial matrix $P$ as its input and yields on output a unimodular matrix $U$ and a column-reduced polynomial matrix $R$ such that $PU - R$ is near zero. Rank determination plays an important role in the algorithm, and the user is asked to specify on input a parameter TOL, which sets a threshold below which matrix elements that are essential in the rank-determination procedure are considered zero. If TOL is set too small, it is replaced by a default value in the subroutine.

Concerning the numerical aspects, we would like to make the following remark. The algorithm used by the routine involves the construction of a special staircase form of a linearization of $(s^b P(s), -I)$, with pivots considered nonzero when they are greater than or equal to TOL. These pivots are then inverted in order to construct the columns of $\ker(s^b P(s), -I)$. The user is recommended to choose TOL of the order of the relative error in the elements of $P(s)$. If TOL is chosen too small, then a very small element of insignificant value may be taken as a pivot. As a consequence, the correct null vectors, and hence $R(s)$, may not be found. If $R(s)$ has not been found, or if the elements of the computed $U(s)$ and $R(s)$ are large relative to the elements of $P(s)$, the user should consider trying several values of TOL.
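A rough guide for picking TOL, patterned on the recommendation above and on the default quoted in Section 10 (the formula is our reading, not the routine's exact default):

```python
import numpy as np

def suggest_tol(coeffs, rel_err=0.0):
    """Heuristic threshold: the relative data error times |P|, floored by
    EPS * K * |P| with K = ((d(P) + 1) * m)**2 (cf. Section 10)."""
    q1, m, n = coeffs.shape              # q1 = d(P) + 1
    P_mag = np.max(np.abs(coeffs))       # |P|: largest coefficient magnitude
    eps = np.finfo(float).eps
    K = (q1 * m) ** 2
    return max(rel_err * P_mag, eps * K * P_mag)
```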

8. EXAMPLES

In this section we describe a few examples. All examples were run on four computers: a VAX-VMS, a VAX-UNIX, a SUN, and a 486 personal computer. The numerical values that we present here were produced on the VAX-VMS computer. Its machine precision is $2^{-56} \approx 1.4 \times 10^{-17}$. Unless explicitly mentioned, the examples were run with TOL = 0, forcing the subroutine to take the default value (see Section 7) for the tolerance.

The first example is taken from the book of Kailath [1980] and has been discussed before in Beelen et al. [1988] and Neven and Praagman [1993].

Example 8.1. The polynomial matrix P is given by

[Mathematical Expression Omitted].

In Kailath [1980, p. 386], we can find (if we correct a small typo) that $PU_0 = R_0$, with

[Mathematical Expression Omitted],

[Mathematical Expression Omitted].

Clearly $R_0$ is column reduced, and $U_0$ is unimodular. This example was also treated in Beelen et al. [1988]. The program yields the following solution:

[Mathematical Expression Omitted],

[Mathematical Expression Omitted],

with $\alpha = 7.302027$, $\beta = 37.43234$, $\gamma = 31.87083$, and $\delta = 2\beta + \gamma$.

If we define $|P|$ as the maximal absolute value of all coefficients of the polynomial matrix $P$ and introduce the notation $P \sim 10^p$ if $10^{p-1} < |P| < 10^p$, then it is easily checked that $PU - R \sim 10^{-13}$. Furthermore $U$ is clearly unimodular:

[Mathematical Expression Omitted].

This solution has been found without iterations, i.e., for b = 1, and equals the solution found in Beelen et al. [1988].
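The residual check $PU - R \sim 10^{-13}$ is easy to reproduce in the coefficient-array representation: multiply the polynomial matrices by convolving their coefficient sequences and take the largest coefficient of the difference. A sketch of ours:

```python
import numpy as np

def poly_matmul(A, B):
    """Product of polynomial matrices stored as (degree+1, rows, cols) arrays."""
    dA, m, k = A.shape
    dB, k2, n = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((dA + dB - 1, m, n))
    for i in range(dA):                  # convolution of coefficient sequences
        for j in range(dB):
            C[i + j] += A[i] @ B[j]
    return C

def magnitude(P):
    """|P|: maximal absolute value over all coefficients."""
    return np.max(np.abs(P))

def residual(P, U, R):
    """|PU - R|; assumes deg(R) <= deg(P) + deg(U), padding R with zeros."""
    D = poly_matmul(P, U)
    D[:R.shape[0]] -= R
    return magnitude(D)
```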

As already mentioned in Beelen et al. [1988], one of the main motivations for the iterative procedure, i.e., starting with b = 1 and increasing b until the solution is found, is the (experimental) observation that in most examples increasing b is not necessary. The following example, also treated in Beelen et al. [1988] and Neven and Praagman [1993], is included to show that sometimes a larger b is required.

Example 8.2

[Mathematical Expression Omitted].

Note that this matrix is unimodular and hence unimodularly equivalent to a constant, invertible matrix. The program yields no column-reduced $R$ for $b \leq 4$. For $b = 5$ the resulting $U$ and $R$ are

[Mathematical Expression Omitted]

[Mathematical Expression Omitted],

where $\alpha = 1.702939\ldots$. The residual matrix satisfies $PU - R \sim 10^{-15}$.

The above examples behave very well; in fact they are so "regular" that even the algorithm based on Wolovich's proof of Theorem 2.2 (from now on called the Wolovich algorithm) yields reliable answers. In a forthcoming paper we will compare the results of both algorithms to a greater extent. Here we restrict ourselves to two more examples, for which the Wolovich algorithm is likely to meet numerical difficulties.

Example 8.3. We take the following for P:

[Mathematical Expression Omitted],

with $\epsilon$ a small parameter. Calculation by hand immediately shows that taking $U$ equal to

[Mathematical Expression Omitted],

with $\delta = \epsilon^{-1}$, yields an $R$ (equal to $PU$) given by

[Mathematical Expression Omitted].

Clearly $R$ is column reduced, and $U$ is unimodular. Note that $U$ contains large elements when $\epsilon$ is small, a feature that will also be seen in the answers provided by the program.

If we take $\epsilon = 10^{-2}$ and set the tolerance to $10^{-14}$, COLRED yields

[Mathematical Expression Omitted],

[Mathematical Expression Omitted],

with $\alpha = 0.4082483$ and $\beta = 0.1414214$.

With a simple additional unimodular transformation the large elements in R can be removed; namely, with

[Mathematical Expression Omitted],

the corresponding column-reduced R becomes

[Mathematical Expression Omitted].

It is not possible to get rid of the large elements of U as well.

For smaller $\epsilon$ the default value of TOL, which is actually $0.6 \times 10^{-14}$, is too small, and the program ends with IERR = 2. To get results, the tolerance has to be chosen approximately $10^{-16}\epsilon^{-1}$. In all cases $PU - R \sim 10^{-16}|P||U|$. In the next section we will analyze this example in more detail.

An obvious thought is that the occurrence of the small parameter $\epsilon$ is the cause of the problem, but the next example shows that this is not necessarily the case.

Example 8.4. In this example we take for P

[Mathematical Expression Omitted],

with $\epsilon$ a small parameter. By hand calculation we find the following solution:

[Mathematical Expression Omitted],

[Mathematical Expression Omitted].

We observe that neither $U$ nor $R$ contains large elements. It turns out that the same holds for the $U$ and $R$ computed by the routine. For instance, with $\epsilon = 10^{-8}$ the computed $U$ and $R$ are

[Mathematical Expression Omitted]

and

[Mathematical Expression Omitted].

For other choices of $\epsilon$ the results are approximately the same. In all cases the residual matrix $PU - R \sim 10^{-16}$. In the next section we also revisit this example.

We also investigated the sensitivity of the algorithm to perturbations in the data. The conclusion, based on a number of experiments, is that if the tolerance is chosen such that the perturbations lie within the tolerance, then the program retrieves the results of the unperturbed system. This agrees well with our expectations. We give one example.

Example 8.5

[Mathematical Expression Omitted].

Perturbing $P$ with quantities of the order of $10^{-8}$ leads to the result that the perturbed $P$ is column reduced if the tolerance is less than $10^{-8}$. Setting the tolerance to $10^{-7}$ gives the following solution:

[Mathematical Expression Omitted]

[Mathematical Expression Omitted],

in accordance with the unperturbed case.

9. DISCUSSION

First we examine Example 8.3, namely

[Mathematical Expression Omitted].

The $U_\epsilon$ and $R_\epsilon$ that will result from the algorithm, if calculations are performed exactly, are

[Mathematical Expression Omitted],

[Mathematical Expression Omitted],

with $\alpha = \frac{1}{6}\sqrt{6}$, $\beta = \frac{1}{10}\sqrt{2}$, and $\delta = \epsilon^{-1}$.

In Section 8 we saw that the tolerance for which this result is obtained by the routine is proportional to $\epsilon^{-1}$. Close examination of the computations reveals that the computed [Mathematical Expression Omitted] (see Section 4) gets small pivots, which, if the tolerance is too small, cause growing numbers in the computation of the right null vectors until overflow occurs and the process breaks down. Scaling of a null vector, which at first sight suggests itself, may suppress the overflow and thereby hide the problem at hand. In this example the effect of scaling is that, if $\epsilon$ tends to zero, then $U_\epsilon$ tends to a singular matrix and $R_\epsilon$ to a constant matrix of rank 1.

Is there any reason to believe that there exists an algorithm which yields an $R$ depending continuously on $\epsilon$ in a neighborhood of 0? Observe that

* $\det(P_\epsilon)(s) = -5\epsilon s^3$, so $P_\epsilon$ is singular for $\epsilon = 0$;

* $\Delta(R_\epsilon) = (0, 1, 2)$ if $\epsilon \neq 0$ and $\Delta(R_0) = (-1, 0, 3)$ (we use the convention that the zero polynomial has degree $-1$).

We conclude that the entries of $U_\epsilon$ and $R_\epsilon$ do not depend continuously on $\epsilon$ in a neighborhood of $\epsilon = 0$. An even stronger conclusion is that there do not exist families $\{V_\epsilon\}$, $\{S_\epsilon\}$, continuous in $\epsilon = 0$, such that for all $\epsilon$, $V_\epsilon$ is unimodular, $S_\epsilon$ is column reduced, and $P_\epsilon V_\epsilon = S_\epsilon$. This is what we call a singular case.

Example 8.4, though at first sight similar, is quite different from Example 8.3. Because the third column of $P$ minus $s$ times its fourth column equals $(2s + 1, -1, -1, -1)^t$, the term $\epsilon s^2$ in the element $P_{12}$ is not needed to reduce the first column. As a consequence, the elements of $U_\epsilon$ and $R_\epsilon$ depend continuously on $\epsilon$, and no large entries occur. This is also shown in the solution found by hand calculation. This feature is not recognized by the Wolovich algorithm, which is why we suspect the Wolovich algorithm to behave badly on this type of problem. Perturbation of $P_\epsilon$, for instance changing the (4,4) entry from 3 to $3 + \eta$, destroys this property. The resulting matrix behaves similarly to Example 8.3. To compare Examples 8.3 and 8.4 we observe that in Example 8.4:

* $\det(P_{\epsilon,\eta}) \neq 0$ for all values of $\epsilon$ and $\eta$; so this property is not characteristic;

* in the unperturbed case, $\eta = 0$, $\Delta(R_\epsilon) = (1, 1, 1, 2)$ for all $\epsilon$. If $\eta \neq 0$, then $\Delta(R_\epsilon) = (1, 1, 2, 2)$ for $\epsilon \neq 0$ and $(1, 1, 1, 3)$ if $\epsilon = 0$. Here again we conclude that in this case no algorithm can yield $U_{\epsilon,\eta}$, $R_{\epsilon,\eta}$ continuous in $\epsilon = 0$. So for $\eta = 0$ the problem is regular, and for $\eta \neq 0$ the problem is singular.

Remark. Singularity in the sense just mentioned may appear in a more hidden form. For instance, if in Example 8.4 the third column is added to the second, resulting in $P_{12} = (1 + \epsilon)s^2 + 3s + 4$, we get similar behavior, depending on the values of $\eta$ and $\epsilon$. Though $\epsilon$ in $P_{12}$ is likely to be a perturbation, its effect is quite different from the effects of the perturbations in Example 8.5.

For perturbations as in Example 8.5 the tolerance should at least be of the magnitude of the uncertainties in the data to find out whether there is a non-column-reduced polynomial matrix in the range of uncertainty. In cases like Example 8.5 it may be wise to run the algorithm for several values of the tolerance.

10. CONCLUSIONS

In this article we described a subroutine which is an implementation of the algorithm developed by Neven and Praagman [1993]. The routine asks for the polynomial matrix P to be reduced and a tolerance. The tolerance is used for rank determination within the accuracy of the computations. Thus the tolerance influences whether or not the correct solution is found, but does not influence the accuracy of the solution.

We gave five examples. In all cases the subroutine performs satisfactorily; i.e., the computed solution has a residual matrix $PU - R$ that satisfies $|PU - R| < K|P||U| \cdot \mathrm{EPS}$, where EPS is the machine precision and $K = ((d(P) + 1)m)^2$. Normally the tolerance should be chosen in agreement with the accuracy of the elements of $P$, with a lower bound (default value) of the order of EPS times $K|P|$. In some cases the tolerance has to be set to a larger value than the default in order to get significant results. Therefore, in case of failure, or if there is doubt about the correctness of the solution, the user is recommended to run the program with several values of the tolerance.

At this moment we are optimistic about the performance of the routine. The only cases in which we had difficulties obtaining the solution were what we called singular cases. As we argued in the last section, the nature of this singularity will frustrate any algorithm. We believe, although we cannot prove it at this moment, that the algorithm is numerically stable in the sense that the computed solution satisfies $|PU - R| < K|P||U| \cdot \mathrm{EPS}$.

ACKNOWLEDGMENTS

We would like to thank Herman Willemsen (TUE) for testing the routines on different computers.

REFERENCES

ANDERSON, E., BAI, Z., BISCHOF, C., DEMMEL, J., DONGARRA, J., DU CROZ, J., GREENBAUM, A., HAMMARLING, S., McKENNEY, A., OSTROUCHOV, S., AND SORENSEN, D. 1992. LAPACK Users' Guide. SIAM, Philadelphia, Pa.

BEELEN, TH. G. J. 1987. New algorithms for computing the Kronecker structure of a pencil with applications to systems and control theory. Ph.D. thesis, Eindhoven Univ. of Technology, Eindhoven, The Netherlands.

BEELEN, TH. G. J., VAN DEN HURK, G. J. H. H., AND PRAAGMAN, C. 1988. A new method for computing a column reduced polynomial matrix. Syst. Contr. Lett. 10, 217-224.

CODENOTTI, B. AND LOTTI, G. 1989. A fast algorithm for the division of two polynomial matrices. IEEE Trans. Autom. Contr. 34, 446-448.

DONGARRA, J. J., DU CROZ, J. J., HAMMARLING, S. J., AND HANSON, R. J. 1988. An extended set of Fortran Basic Linear Algebra Subprograms. ACM Trans. Math. Softw. 14, 1, 1-17.

GEURTS, A. J. AND PRAAGMAN, C. 1994. A Fortran subroutine for column reduction of polynomial matrices. EUT Rep. 94-WSK-01, Eindhoven Univ. of Technology, Eindhoven, The Netherlands.

INOUYE, I. 1979. An algorithm for inverting polynomial matrices. Int. J. Contr. 30, 989-999.

KAILATH, T. 1980. Linear Systems. Prentice-Hall, Englewood Cliffs, N.J.

LAWSON, C. L., HANSON, R. J., KINCAID, D. R., AND KROGH, F. T. 1979. Basic Linear Algebra Subprograms for Fortran usage. ACM Trans. Math. Softw. 5, 308-323.

LIN, C.-A., YANG, C.-W., AND HSIEH, T.-F. 1996. An algorithm for inverting rational matrices. Syst. Contr. Lett. 26. To be published.

NEVEN, W. H. L. 1988. Polynomial methods in systems theory. Master's thesis, Eindhoven Univ. of Technology, Eindhoven, The Netherlands.

NEVEN, W. H. L. AND PRAAGMAN, C. 1993. Column reduction of polynomial matrices. Lin. Alg. Appl. 188-189, 569-589.

PRAAGMAN, C. 1988. Inputs, outputs and states in the representation of time series. In Analysis and Optimization of Systems, A. Bensoussan and J. Lions, Eds. Lecture Notes in Control and Information Sciences, vol. 111. Springer, Berlin, 1069-1078.

PRAAGMAN, C. 1991. Invariants of polynomial matrices. In Proceedings of the 1st ECC, I. Landau, Ed. INRIA, France, 1274-1277.

VAN DOOREN, P. 1979. The computation of Kronecker's canonical form of a singular pencil. Lin. Alg. Appl. 27, 103-140.

WANG, Q. G. AND ZHOU, C. H. 1986. An efficient division algorithm for polynomial matrices. IEEE Trans. Autom. Contr. 31, 165-166.

WGS. 1990. SLICOT: Implementation and documentation standards. WGS Rep. 90-01. Working Group on Software, Eindhoven, The Netherlands.

WILLEMS, J. C. 1986a. From time series to linear systems. Part 1. Automatica 22, 561-580.

WILLEMS, J. C. 1986b. From time series to linear systems: Part 2. Automatica 22, 675-694.

WILLEMS, J. C. 1987. From time series to linear systems: Part 3. Automatica 23, 87-115.

WILLEMS, J. C. 1988. Models for dynamics. Dynam. Rep. 2, 171-269.

WILLEMS, J. C. 1991. Paradigms and puzzles in the theory of dynamical systems. IEEE Trans. Autom. Contr. 36, 259-294.

WOLOVICH, W. A. 1978. Linear Multivariable Systems. Springer-Verlag, Berlin.

WOLOVICH, W. A. 1984. A division algorithm for polynomial matrices. IEEE Trans. Autom. Contr. 29, 656-658.

ZHANG, S. Y. 1986. The division of polynomial matrices. IEEE Trans. Autom. Contr. 31, 55-56.

ZHANG, S. Y. 1987. Inversion of polynomial matrices. Int. J. Contr. 46, 33-37.

ZHANG, S. Y. AND CHEN, C.-T. 1983. An algorithm for the division of two polynomial matrices. IEEE Trans. Autom. Contr. 28, 238-240.