# Estimation of Parameters and Eigenmodes of Multivariate Autoregressive Models

1. INTRODUCTION

Dynamical characteristics of a complex system can often be inferred from analyses of a stochastic time series model fitted to observations of the system [Tiao and Box 1981]. In the geosciences, for example, oscillations of a complex system are sometimes characterized by what are known as principal oscillation patterns, eigenmodes of a multivariate autoregressive model of first order (AR(1) model) fitted to observations [Hasselmann 1988; von Storch and Zwiers 1999, Chapter 15]. Principal oscillation patterns possess characteristic frequencies (or oscillation periods) and damping times, the frequencies being the natural frequencies of the AR(1) model. By analyzing principal oscillation patterns of an oscillatory system, one can identify components of the system that are associated with characteristic frequencies and damping times. Xu and von Storch [1990], for example, use a principal oscillation pattern analysis to identify the spatial structures of the mean sea level pressure that are associated with the conglomerate of climatic phenomena collectively called El Niño and the Southern Oscillation. In a similar manner, Huang and Shukla [1997] distinguish those spatial structures of the sea surface temperature that oscillate with periods on the order of years from those that oscillate with periods on the order of decades. More examples of such analyses can be found in the bibliographies of these papers.

Since the principal oscillation pattern analysis is an analysis of eigenmodes of an AR(1) model, dynamical characteristics of a system can be inferred from principal oscillation patterns only if an AR(1) model provides an adequate fit to the observations of the system. The applicability of the principal oscillation pattern analysis is therefore restricted. Generalizing the analysis of eigenmodes of AR(1) models to autoregressive models of arbitrary order p (AR(p) models), we will render the analysis of eigenmodes of AR models applicable to a larger class of systems.

An m-variate AR(p) model for a stationary time series of state vectors [v.sub.v] [element of] [R.sup.m], observed at equally spaced instants v, is defined by

(1) [v.sub.v] = w + [A.sub.1][v.sub.v-1] + ... + [A.sub.p][v.sub.v-p] + [[Epsilon].sub.v],

where the m-dimensional vectors [[Epsilon].sub.v] = noise(C) are uncorrelated random vectors with mean zero and covariance matrix C [element of] [R.sup.mxm], and the matrices [A.sub.1], ..., [A.sub.p] [element of] [R.sup.mxm] are the coefficient matrices of the AR model. The parameter vector w [element of] [R.sup.m] is a vector of intercept terms that is included to allow for a nonzero mean of the time series,

(2) <[v.sub.v]> = [(I - [A.sub.1] - ... - [A.sub.p]).sup.-1] w,

where <*> denotes an expected value. (For an introduction to modeling multivariate time series with such AR models, see Lutkepohl [1993].) In this paper, we will describe the eigendecomposition of AR(p) models of arbitrary order p. Since the analysis of eigenmodes of AR models is of interest in particular for high-dimensional systems such as the ones examined in the geosciences, we will also discuss how the order p of an AR model and the coefficient matrices [A.sub.1], ..., [A.sub.p], the intercept vector w, and the noise covariance matrix C can be estimated from high-dimensional time series data in a computationally efficient and stable way.
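For concreteness, the model (1) and the mean relation (2) can be illustrated with a short simulation. The following Python sketch (an independent illustration with made-up coefficients, not part of the Matlab package ARFIT) simulates a bivariate AR(2) process and compares its sample mean with the value predicted by (2):

```python
import numpy as np

# Sketch (not ARFIT): simulate a bivariate AR(2) process
#   v_t = w + A1 v_{t-1} + A2 v_{t-2} + eps_t,  eps_t = noise(C),
# and compare the sample mean with (I - A1 - A2)^{-1} w from Eq. (2).
# All coefficient values are made up for illustration.
rng = np.random.default_rng(0)
m, p, N = 2, 2, 50_000

A1 = np.array([[0.4, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
w = np.array([1.0, -0.5])
L = np.array([[0.5, 0.0], [0.2, 0.4]])       # Cholesky factor of C = L L^T

v = np.zeros((N + p, m))
for t in range(p, N + p):
    v[t] = w + A1 @ v[t - 1] + A2 @ v[t - 2] + L @ rng.standard_normal(m)

mean_theory = np.linalg.solve(np.eye(m) - A1 - A2, w)
mean_sample = v[p:].mean(axis=0)
print(mean_theory)   # the sample mean agrees closely with this value
```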

In Section 2, it is shown that an m-variate AR(p) model has mp eigenmodes that possess, just like the m eigenmodes of an AR(1) model, characteristic frequencies and damping times. The excitation is introduced as a measure of the dynamical importance of the modes. Section 3 describes a stepwise least squares algorithm for the estimation of parameters of AR models. This algorithm uses a QR factorization of a data matrix to evaluate, for a sequence of successive orders, a criterion for the selection of the model order and to compute the parameters of the AR model of the optimum order. Section 4 discusses the construction of approximate confidence intervals for the intercept vector, for the AR coefficients, for the eigenmodes derived from the AR coefficients, and for the oscillation periods and damping times of the eigenmodes. Section 5 contains results of numerical experiments with the presented algorithms. Section 6 summarizes the conclusions.

The methods presented in this paper are implemented in the Matlab package ARFIT, which is described in a companion paper [Schneider and Neumaier 2001]. We will refer to modules in ARFIT that contain implementations of the algorithms under consideration.

Notation. [A.sub.:k] denotes the kth column of the matrix A. [A.sup.T] is the transpose, and [A.sup.[dagger]] the conjugate transpose of A. The inverse of [A.sup.[dagger]] is written as [A.sup.-[dagger]], and the superscript * denotes complex conjugation. In notation, we do not distinguish between random variables and their realizations; whether a symbol refers to a random variable or to a realization can be inferred from the context.

2. EIGENDECOMPOSITION OF AR MODELS

The eigendecomposition of an AR(p) model is a structural analysis of the AR coefficient matrices [A.sub.1], ..., [A.sub.p]. The eigendecomposition of AR(1) models is described, for example, by Honerkamp [1994, pp. 426 ff.]. In what sense and to what extent an eigendecomposition of an AR(1) model can yield insight into dynamical characteristics of complex systems is discussed by von Storch and Zwiers [1999, Chapter 15]. In Section 2.1, a review of the eigendecomposition of AR(1) models introduces the concepts and notation used throughout this paper. In Section 2.2, the results for AR(1) models are generalized to AR(p) models of arbitrary order p. Throughout this section, we assume that the mean (2) has been subtracted from the time series of state vectors [v.sub.v], so that the intercept vector w can be taken to be zero.

2.1 AR Models of First Order

Suppose the coefficient matrix A of the m-variate AR(1) model

(3) [v.sub.v] = A[v.sub.v-1] + [[Epsilon].sub.v], [[Epsilon].sub.v] = noise(C),

is nondefective, so that it has m (generally complex) eigenvectors that form a basis of the vector space [R.sup.m] of the state vectors [v.sub.v]. Let S be the nonsingular matrix that contains these eigenvectors as columns [S.sub.:k], and let [Lambda] = Diag([[Lambda].sub.k]) be the associated diagonal matrix of eigenvalues [[Lambda].sub.k] (k = 1, ..., m). The eigendecomposition of the coefficient matrix can then be written as A = S[Lambda][S.sup.-1]. In the basis of the eigenvectors [S.sub.:k] of the coefficient matrix A, the state vectors [v.sub.v] and the noise vectors [[Epsilon].sub.v] can be represented as linear combinations

(4) [v.sub.v] = [summation over k] [v'.sub.vk][S.sub.:k] and [[Epsilon].sub.v] = [summation over k] [[Epsilon]'.sub.vk][S.sub.:k]

with coefficient vectors [v'.sub.v] = [S.sup.-1][v.sub.v] and [[Epsilon]'.sub.v] = [S.sup.-1][[Epsilon].sub.v]. Substituting these expansions of the state vectors [v.sub.v] and of the noise vectors [[Epsilon].sub.v] into the AR(1) model (3) yields, for the coefficient vectors [v'.sub.v], an AR(1) model

(5) [v'.sub.v] = [Lambda][v'.sub.v-1] + [[Epsilon]'.sub.v], [[Epsilon]'.sub.v] = noise(C'),

with a diagonal coefficient matrix [Lambda] and with a transformed noise covariance matrix

(6) C' = [S.sup.-1]C[S.sup.-[dagger]].

The m-variate AR(1) model for the coefficient vectors represents a system of m univariate models

(7) [v'.sub.vk] = [[Lambda].sub.k][v'.sub.v-1,k] + [[Epsilon]'.sub.vk], k = 1, ..., m,

which are coupled only via the covariances <[[Epsilon]'.sub.[Mu]j] [[Epsilon]'*.sub.vk]> = [[Delta].sub.[Mu]v] [C'.sub.jk] of the noise coefficients (where [[Delta].sub.[Mu]v] = 1 for [Mu] = v and [[Delta].sub.[Mu]v] = 0 otherwise).
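The change of basis leading to (5)-(7) can be illustrated numerically. The following Python sketch (with an illustrative coefficient matrix, not from the paper) computes the eigendecomposition A = S[Lambda][S.sup.-1] and the transformed noise covariance matrix (6):

```python
import numpy as np

# Sketch: eigendecomposition A = S Lambda S^{-1} of an AR(1) coefficient
# matrix and the transformed noise covariance C' = S^{-1} C S^{-dagger}
# of Eq. (6). The matrices are illustrative, not from the paper.
A = np.array([[0.5, 0.3], [-0.3, 0.5]])      # complex-conjugate eigenvalue pair
C = np.array([[1.0, 0.2], [0.2, 0.5]])

lam, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)
assert np.allclose(S @ np.diag(lam) @ S_inv, A)   # A = S Lambda S^{-1}

C_prime = S_inv @ C @ S_inv.conj().T         # Eq. (6); Hermitian by construction
print(lam)                                    # eigenvalues 0.5 ± 0.3i
```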

Since the noise vectors are assumed to have mean zero, the dynamics of the expected values of the coefficients

<[v'.sub.vk]> = [[Lambda].sub.k]<[v'.sub.v-1,k]>, k = 1, ..., m,

decouple completely. In the complex plane, the expected values of the coefficients describe a spiral

<[v'.sub.vk]> = [[Lambda].sup.v.sub.k]<[v'.sub.0k]> = [e.sup.-v/[[Tau].sub.k]] [e.sup.[+ or -]2[Pi]iv/[T.sub.k]] <[v'.sub.0k]>

with damping time

(8) [[Tau].sub.k] [equivalent] -1/log|[[Lambda].sub.k]|

and period

(9) [T.sub.k] [equivalent] 2[Pi] / |arg [[Lambda].sub.k]|,

the damping time and the period being measured in units of the sampling interval of the time series [v.sub.v]. To render the argument arg z = Im(log z) of a complex number z unique, we stipulate - [Pi] [is less than or equal to] arg z [is less than or equal to] [Pi], a convention that ensures that a pair of complex conjugate eigenvalues is associated with a single period. For a stable AR model with nonsingular coefficient matrix A, the absolute values of all eigenvalues [[Lambda].sub.k] lie between zero and one, 0 [is less than] |[[Lambda].sub.k]| [is less than] 1, which implies that all damping times [[Tau].sub.k] of such a model are positive and bounded.

Whether an eigenvalue is real and, if it is real, whether it is positive or negative determines the dynamical character of the eigenmode to which the eigenvalue belongs. If an eigenvalue [[Lambda].sub.k] has a nonzero imaginary part or if it is real and negative, the period [T.sub.k] is bounded, and the AR(1) model (7) for the associated time series of coefficients [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is called a stochastically forced oscillator. The period of an oscillator attains its minimum value [T.sub.k] = 2 if the eigenvalue [[Lambda].sub.k] is real and negative, that is, if the absolute value |arg [[Lambda].sub.k]| of the argument of the eigenvalue is equal to [Pi]. The smallest oscillation period [T.sub.k] = 2 that is representable in a time series sampled at a given sampling interval corresponds to what is known in Fourier analysis as the Nyquist frequency. If an eigenvalue [[Lambda].sub.k] is real and positive, the period [T.sub.k] is infinite, and the AR(1) model (7) for the associated time series of coefficients [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is called a stochastically forced relaxator.
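The damping times (8), the periods (9), and the oscillator/relaxator distinction can be sketched in a few lines of Python (the helper functions `damping_time` and `period` are illustrative names, not part of ARFIT):

```python
import numpy as np

# Sketch: damping time (8) and period (9) from an eigenvalue, and the
# oscillator/relaxator distinction. Function names are illustrative.
def damping_time(lam):
    return -1.0 / np.log(np.abs(lam))        # Eq. (8)

def period(lam):
    arg = np.abs(np.angle(lam))              # np.angle lies in (-pi, pi]
    return np.inf if arg == 0 else 2 * np.pi / arg   # Eq. (9)

lam_osc = 0.9 * np.exp(1j * np.pi / 8)   # complex eigenvalue: oscillator
print(damping_time(lam_osc), period(lam_osc))   # ≈ 9.49 and 16.0

print(period(-0.5))   # real negative eigenvalue: Nyquist period 2
print(period(0.8))    # real positive eigenvalue: relaxator, infinite period
```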

Thus, a coefficient [v'.sub.vk] in the expansion (4) of the state vectors [v.sub.v] in terms of eigenmodes [S.sub.:k] represents, depending on the eigenvalue [[Lambda].sub.k], either a stochastically forced oscillator or a stochastically forced relaxator. The oscillators and relaxators are coupled only via the covariances of the noise, the stochastic forcing. The coefficients [v'.sub.vk] can be viewed as the amplitudes of the eigenmodes [S.sub.:k] if the eigenmodes are normalized such that [[parallel][S.sub.:k][parallel].sub.2] = 1. To obtain a unique representation of the eigenmodes [S.sub.:k], we stipulate that the real parts X = Re S and the imaginary parts Y = Im S of the eigenmodes [S.sub.:k] = [X.sub.:k] + i[Y.sub.:k] satisfy the normalization conditions

(10) [X.sup.T.sub.:k][Y.sub.:k] = 0 and [[parallel][X.sub.:k][parallel].sup.2.sub.2] + [[parallel][Y.sub.:k][parallel].sup.2.sub.2] = 1.

The normalized eigenmodes [S.sub.:k] represent aspects of the system under consideration whose amplitudes [v'.sub.vk] oscillate with a characteristic period [T.sub.k] and would, in the absence of stochastic forcing, decay towards zero with a characteristic damping time [[Tau].sub.k]. Only oscillatory modes with a finite period have imaginary parts. The real parts and the imaginary parts of oscillatory modes represent aspects of the system under consideration in different phases of an oscillation, with a phase lag of [Pi]/2 between real part and imaginary part. In the geosciences, for example, the state vectors [v.sub.v] might represent the Earth's surface temperature field on a spatial grid, with each state vector component representing the temperature at a grid point. The eigenmodes would then represent structures of the surface temperature field that oscillate and decay with characteristic periods and damping times. In a principal oscillation pattern analysis, the spatial structures of the real parts and of the imaginary parts of the eigenmodes are analyzed graphically. It is in this way that, in the geosciences, eigenmodes of AR(1) models are analyzed to infer dynamical characteristics of a complex system (see von Storch and Zwiers [1999] for more details, including the relationship between the periods of the eigenmodes and the maxima of the power spectrum of an AR model).

Dynamical Importance of Modes. The magnitudes of the amplitudes [v'.sub.vk] of the normalized eigenmodes [S.sub.:k] indicate the dynamical importance of the various relaxation and oscillation modes. The variance of an amplitude [v'.sub.vk], or the excitation

(11) [[Sigma].sub.k] [equivalent] <[|[v'.sub.vk]|.sup.2]>,

is a measure of the dynamical importance of an eigenmode [S.sub.:k].

The excitations can be computed from the coefficient matrix A and from the noise covariance matrix C. The covariance matrix [Sigma] = <[v.sub.v][v.sup.T.sub.v]> of the state vectors [v.sub.v] satisfies [Lutkepohl 1993, Chapter 2.1.4]

(12) [Sigma] = A[Sigma][A.sup.T] + C.

Upon substitution of the eigendecomposition A = S[Lambda][S.sup.-1] and of the transformed noise covariance matrix C = SC'[S.sup.[dagger]], the state covariance matrix becomes

(13) [Sigma] = S[Sigma]'[S.sup.[dagger]]

with a transformed state covariance matrix [Sigma]' that is a solution of the linear matrix equation [Sigma]' = [Lambda][Sigma]'[[Lambda].sup.[dagger]] + C'. Since the eigenvalue matrix [Lambda] is diagonal, this matrix equation can be written componentwise as

[[Sigma]'.sub.jk] = [[Lambda].sub.j] [[Lambda]*.sub.k] [[Sigma]'.sub.jk] + [C'.sub.jk].

For a stable AR model for which the absolute values of all eigenvalues are less than one, |[[Lambda].sub.k]| [is less than] 1, this equation can be solved for the transformed state covariance matrix [Sigma]', whose diagonal elements [[Sigma]'.sub.kk] are the excitations [[Sigma].sub.k], the variances of the amplitudes [v'.sub.vk]. In terms of the transformed noise covariance matrix C' and of the eigenvalues [[Lambda].sub.k], the excitations can hence be written as

(14) [[Sigma].sub.k] = [C'.sub.kk] / (1 - [|[[Lambda].sub.k]|.sup.2]),

an expression that can be interpreted as the ratio of the forcing strength [C'.sub.kk] over the damping 1 - [|[[Lambda].sub.k]|.sup.2] of an eigenmode [S.sub.:k].
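As a sketch of this computation (with illustrative matrices, not from the paper), the following Python fragment obtains the excitations (14) and cross-checks them against the componentwise solution of the matrix equation for [Sigma]':

```python
import numpy as np

# Sketch: excitations sigma_k = C'_kk / (1 - |lambda_k|^2), Eq. (14), for a
# stable AR(1) model. Matrices are illustrative, not from the paper.
A = np.array([[0.5, 0.3], [-0.3, 0.5]])
C = np.array([[1.0, 0.2], [0.2, 0.5]])

lam, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)
C_prime = S_inv @ C @ S_inv.conj().T          # Eq. (6)

excitation = C_prime.diagonal().real / (1 - np.abs(lam) ** 2)   # Eq. (14)

# Cross-check: solve Sigma'_jk = lam_j lam_k^* Sigma'_jk + C'_jk
# componentwise; its diagonal reproduces the excitations.
Sigma_prime = C_prime / (1 - np.outer(lam, lam.conj()))
assert np.allclose(Sigma_prime.diagonal().real, excitation)
print(excitation)
```

Transforming [Sigma]' back via (13) recovers a state covariance matrix that satisfies the original equation (12), which can serve as an additional consistency check.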

The suggestion of measuring the dynamical importance of the modes in terms of the excitations [[Sigma].sub.k] contrasts with traditional studies in which the least damped eigenmodes of an AR(1) model were considered dynamically the most important. That is, the eigenmodes for which the associated eigenvalue had the greatest absolute value |[[Lambda].sub.k]| were considered dynamically the most important (e.g., see von Storch et al. [1995], Penland and Sardeshmukh [1995], and von Storch and Zwiers [1999, Chapter 15], and references therein). The tradition of viewing the least damped modes as the dynamically most important ones comes from the analysis of modes of deterministic linear systems, in which the least damped mode, if excited, dominates the dynamics in the limit of long times. In the presence of stochastic forcing, however, the weakly damped modes, if they are not sufficiently excited by the noise, need not dominate the dynamics in the limit of long times. The excitation [[Sigma].sub.k] therefore appears to be a more appropriate measure of dynamical importance.

2.2 AR Models of Arbitrary Order

To generalize the eigendecomposition of AR models of first order to AR models of arbitrary order, we exploit the fact that an m-variate AR(p) model

[v.sub.v] = [A.sub.1][v.sub.v-1] + ... + [A.sub.p][v.sub.v-p] + [[Epsilon].sub.v], [[Epsilon].sub.v] = noise(C),

is equivalent to an AR(1) model

[v.sub.v] = A[v.sub.v-1] + [[Epsilon].sub.v], [[Epsilon].sub.v] = noise(C),

with augmented state vectors and noise vectors

[v.sub.v] = ([v.sup.T.sub.v] [v.sup.T.sub.v-1] ... [v.sup.T.sub.v-p+1]).sup.T and [[Epsilon].sub.v] = ([[Epsilon].sup.T.sub.v] 0 ... 0).sup.T

and with a coefficient matrix (e.g., Honerkamp [1994, p. 426])

(15) A = ( [A.sub.1]  [A.sub.2]  ...  [A.sub.p-1]  [A.sub.p] )
         (     I          0      ...       0           0     )
         (     0          I      ...       0           0     )
         (     .          .                .           .     )
         (     0          0      ...       I           0     )

where I denotes the m x m identity matrix.

The noise covariance matrix

( C  0  ...  0 )
( 0  0  ...  0 )
( .  .       . )
( 0  0  ...  0 )

of the equivalent AR(1) model is singular.
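The construction of the augmented coefficient matrix (15) can be sketched as follows (a Python illustration; the helper name `companion` is hypothetical, not part of ARFIT):

```python
import numpy as np

# Sketch: assemble the coefficient matrix (15) of the equivalent AR(1)
# model from AR(p) coefficients A_1, ..., A_p. The function name
# `companion` is illustrative, not part of ARFIT.
def companion(A_blocks):
    p = len(A_blocks)
    m = A_blocks[0].shape[0]
    top = np.hstack(A_blocks)                # first block row: (A_1 ... A_p)
    shift = np.hstack([np.eye(m * (p - 1)), np.zeros((m * (p - 1), m))])
    return np.vstack([top, shift])           # Frobenius (companion) structure

A1 = np.array([[0.4, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
A_aug = companion([A1, A2])
print(A_aug.shape)   # (4, 4), i.e. mp x mp
```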

An AR(1) model that is equivalent to an AR(p) model can be decomposed into eigenmodes according to the above decomposition of a general AR(1) model. If the augmented coefficient matrix A is nondefective, its mp eigenvectors form a basis of the vector space [R.sup.mp] of the augmented state vectors [v.sub.v]. As above, let S be the nonsingular matrix whose columns [S.sub.:k] are the eigenvectors of the augmented coefficient matrix A = S[Lambda][S.sup.-1], and let [Lambda] be the associated diagonal matrix of eigenvalues [[Lambda].sub.k] (k = 1, ..., mp). In terms of the eigenvectors [S.sub.:k] of the augmented coefficient matrix A, the augmented state vectors and noise vectors can be represented as linear combinations

(16) [v.sub.v] = [summation over k] [v'.sub.vk] [S.sub.:k] and [[Epsilon].sub.v] = [summation over k] [[Epsilon]'.sub.vk] [S.sub.:k]

The dynamics of the coefficients [v'.sub.vk] in the expansion of the augmented state vectors are governed by a system of mp univariate AR(1) models

(17) [v'.sub.vk] = [[Lambda].sub.k][v'.sub.v-1,k] + [[Epsilon]'.sub.vk], k = 1, ..., mp,

which are coupled only via the covariances <[[Epsilon]'.sub.[Mu]j] [[Epsilon]'*.sub.vk]> of the noise coefficients. The covariance matrix of the noise coefficients is the transformed noise covariance matrix

(18) C' = [S.sup.-1]C[S.sup.-[dagger]]

of the equivalent AR(1) model. Thus, the augmented time series [v.sub.v] can be decomposed, just as above, into mp oscillators and relaxators with mp-dimensional eigenmodes [S.sub.:k].

However, because the augmented coefficient matrix A has the Frobenius structure (15), the augmented eigenmodes [S.sub.:k] have a structure that makes it possible to decompose the original time series [v.sub.v] into oscillators and relaxators with m-dimensional modes, instead of the augmented mp-dimensional modes [S.sub.:k]. The eigenvectors [S.sub.:k] of the augmented coefficient matrix A have the structure

([[Lambda].sup.p-1.sub.k] [S.sup.T.sub.:k] ... [[Lambda].sub.k] [S.sup.T.sub.:k] [S.sup.T.sub.:k]).sup.T

with an m-dimensional vector [S.sub.:k] (cf. Wilkinson's [1965, Chapter 1.30] discussion of eigenmodes of higher-order differential equations). Substituting this expression for the augmented eigenvectors [S.sub.:k] into the expansions (16) of the augmented state vectors and noise vectors and introducing the renormalized coefficients

[v".sub.vk] = [[Lambda].sup.p-1.sub.k] [v'.sub.vk] and [[Epsilon]".sub.vk] = [[Lambda].sup.p-1.sub.k] [[Epsilon]'.sub.vk],

one finds that the original m-dimensional state vectors [v.sub.v] and noise vectors [[Epsilon].sub.v] can be represented as linear combinations

(19) [v.sub.v] = [summation over k] [v".sub.vk] [S.sub.:k] and [[Epsilon].sub.v] = [summation over k] [[Epsilon]".sub.vk] [S.sub.:k]

of the m-dimensional vectors [S.sub.:k]. Like the dynamics (17) of the coefficients [v'.sub.vk] in the expansion of the augmented state vectors [v.sub.v], the dynamics of the coefficients [v".sub.vk] in the expansion of the original state vectors [v.sub.v] are governed by a system of mp univariate AR(1) models

(20) [v".sub.vk] = [[Lambda].sub.k][v".sub.v-1,k] + [[Epsilon]".sub.vk], k = 1, ..., mp,

which are coupled only via the covariances <[[Epsilon]".sub.[Mu]j] [[Epsilon]"*.sub.vk]> of the noise coefficients.

For AR(p) models of arbitrary order, the expansions (19) of the state vectors [v.sub.v] and of the noise vectors [[Epsilon].sub.v] and the dynamics (20) of the expansion coefficients parallel the expansions (4) of the state vectors and of the noise vectors and the dynamics (7) of the expansion coefficients for AR(1) models. In the decomposition of an AR(p) model of arbitrary order, the AR(1) models (20) for the dynamics of the expansion coefficients [v".sub.vk] can be viewed, as in the decomposition of AR(1) models, as oscillators or relaxators, depending on the eigenvalue [[Lambda].sub.k] of the augmented coefficient matrix A. The m-dimensional vectors [S.sub.:k] can be viewed as eigenmodes that possess characteristic damping times (8) and periods (9). To obtain a unique representation of the eigenmodes [S.sub.:k], we stipulate, as an extension of the normalization (10) in the first-order case, that the real parts X = Re S and the imaginary parts Y = Im S of the eigenvectors [S.sub.:k] = [X.sub.:k] + i[Y.sub.:k] of the augmented coefficient matrix A satisfy the normalization conditions

(21) [X.sup.T.sub.:k][Y.sub.:k] = 0 and [[parallel][X.sub.:k][parallel].sup.2.sub.2] + [[parallel][Y.sub.:k][parallel].sup.2.sub.2] = 1.

With this normalization of the eigenvectors [S.sub.:k], the coefficients [v'.sub.vk] in the expansion of the augmented state vectors [v.sub.v] indicate the amplitudes of the modes [S.sub.:k]. The variances of these amplitudes, the excitations [[Sigma].sub.k] = <[|[v'.sub.vk]|.sup.2]>, are measures of the dynamical importance of the modes. In analogy to the excitations (14) of modes of AR(1) models, the excitations of modes of AR(p) models can be expressed as

[[Sigma].sub.k] = [C'.sub.kk] / (1 - [|[[Lambda].sub.k]|.sup.2])

where [C'.sub.kk] is a diagonal element of the transformed noise covariance matrix (18).

Thus, an AR(p) model of arbitrary order can be decomposed, just like an AR(1) model, into mp oscillators and relaxators with m-dimensional eigenmodes [S.sub.:k]. To infer dynamical characteristics of a complex system, the eigenmodes of AR(p) models can be analyzed in the same way as the eigenmodes of AR(1) models. All results for AR(1) models have a direct extension to AR(p) models. The only difference between the eigendecomposition of AR(1) models and the eigendecomposition of higher-order AR(p) models is that higher-order models possess a larger number of eigenmodes, which span the vector space of the state vectors [v.sub.v] but, in contrast to the first-order case, are not linearly independent.

The ARFIT module armode computes the eigenmodes [S.sub.:k] of AR(p) models of arbitrary order by an eigendecomposition of the coefficient matrix A of an equivalent AR(1) model. It also computes the periods, damping times, and excitations of the eigenmodes.
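A minimal Python sketch of this computation (loosely mirroring what armode does, with hypothetical names; not the ARFIT implementation) builds the companion matrix, extracts the m-dimensional modes from the trailing rows of the augmented eigenvectors, and computes damping times and periods:

```python
import numpy as np

# Sketch (hypothetical names, not the ARFIT implementation): eigenmodes,
# damping times, and periods of an AR(p) model via the companion matrix
# of the equivalent AR(1) model.
def ar_eigenmodes(A_blocks):
    p = len(A_blocks)
    m = A_blocks[0].shape[0]
    A_aug = np.vstack([                      # companion matrix, Eq. (15)
        np.hstack(A_blocks),
        np.hstack([np.eye(m * (p - 1)), np.zeros((m * (p - 1), m))]),
    ])
    lam, S_aug = np.linalg.eig(A_aug)
    modes = S_aug[-m:, :]                    # trailing m rows: m-dim. modes
    modes = modes / np.linalg.norm(modes, axis=0)  # unit norm (the paper's
                                             # convention (21) also fixes the phase)
    tau = -1.0 / np.log(np.abs(lam))         # damping times, Eq. (8)
    with np.errstate(divide="ignore"):       # relaxators: infinite period
        T = 2 * np.pi / np.abs(np.angle(lam))    # periods, Eq. (9)
    return lam, modes, tau, T

A1 = np.array([[0.4, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
lam, modes, tau, T = ar_eigenmodes([A1, A2])
print(len(lam))   # mp = 4 eigenmodes
```

Each mode s = modes[:, k] satisfies ([[Lambda].sup.2.sub.k]I - [[Lambda].sub.k][A.sub.1] - [A.sub.2])s = 0, which follows from the eigenvector structure of the companion matrix.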

3. STEPWISE LEAST SQUARES ESTIMATION FOR AR MODELS

To analyze the eigenmodes of an AR(p) model fitted to a time series of observations of a complex system, the unknown model order p and the unknown model parameters [A.sub.1], ..., [A.sub.p], w, and C must first be estimated. The model order is commonly estimated as the optimizer of what is called an order selection criterion, a function that depends on the noise covariance matrix C of an estimated AR(p) model and that penalizes the overparameterization of a model [Lutkepohl 1993, Chapter 4.3]. (A hat accent over a symbol designates an estimate of that quantity.) To determine the model order [p.sub.opt] that optimizes the order selection criterion, the noise covariance matrices C are estimated and the order selection criterion is evaluated for AR(p) models of successive orders [p.sub.min] [is less than or equal to] p [is less than or equal to] [p.sub.max]. If the parameters [A.sub.1], ..., [A.sub.p] and w are not estimated along with the noise covariance matrix C, they are then estimated for a model of the optimum order [p.sub.opt].

Both asymptotic theory and simulations indicate that, if the coefficient matrices [A.sub.1], ..., [A.sub.p], and the intercept vector w of an AR model are estimated with the method of least squares, the residual covariance matrix C of the estimated model is a fairly reliable estimator of the noise covariance matrix C and hence can be used in order selection criteria [Tjostheim and Paulsen 1983; Paulsen and Tjostheim 1985; Mentz et al. 1998].(1) The least squares estimates of AR parameters are obtained by casting an AR model in the form of an ordinary regression model and estimating the parameters of the regression model with the method of least squares [Lutkepohl 1993, Chapter 3]. Numerically, the least squares problem for the ordinary regression model can be solved with standard methods that involve the factorization of a data matrix (e.g., Bjorck [1996, Chapter 2]). In what follows, we will present a stepwise least squares algorithm with which, in a computationally efficient and stable manner, the parameters of an AR model can be estimated and an order selection criterion can be evaluated for AR(p) models of successive orders [p.sub.min] [is less than or equal to] p [is less than or equal to] [p.sub.max]. Starting from a review of how the least squares estimates for an AR(p) model of fixed order p can be computed via a QR factorization of a data matrix, we will show how, from the same QR factorization, approximate least squares estimates for models of lower order p' [is less than] p can be obtained.

3.1 Least Squares Estimates for an AR Model of Fixed Order

Suppose that an m-dimensional time series of N + p state vectors [v.sub.v] (v = 1 - p, ..., N) is available, the time series consisting of p presample state vectors [v.sub.1-p], ..., [v.sub.0] and N state vectors [v.sub.1], ..., [v.sub.N] that form what we call the effective sample. The parameters [A.sub.1], ..., [A.sub.p], w, and C of an AR(p) model of fixed order p are to be estimated.

An AR(p) model can be cast in the form of a regression model

(22) [v.sub.v] = B[u.sub.v] + [[Epsilon].sub.v], [[Epsilon].sub.v] = noise(C), v = 1, ..., N,

with parameter matrix

(23) B = (w [A.sub.1] ... [A.sub.p])

and with predictors

(24) [u.sub.v] = (1 [v.sup.T.sub.v-1] ... [v.sup.T.sub.v-p]).sup.T

of dimension [n.sub.p] = mp + 1. Casting an AR model in the form of a regression model is an approximation in that, in a regression model, the predictors [u.sub.v] are assumed to be constant, whereas the predictors of an AR process are composed of state vectors [v.sub.v] and hence are themselves a realization of a stochastic process. The approximation of casting an AR model into the form of a regression model amounts to treating the first predictor

[u.sub.1] = (1 [v.sup.T.sub.0] ... [v.sup.T.sub.1-p]).sup.T

as a vector of constant initial values (cf. Wei [1994, Chapter 7.2.1]). What are unconditional parameter estimates for the regression model are therefore conditional parameter estimates for the AR model, conditional on the first p pre-sample state vectors [v.sub.1-p], ..., [v.sub.0] being constant. But since the relative influence of the initial condition on the parameter estimates decreases as the sample size N increases, the parameter estimates for the regression model are still consistent and asymptotically unbiased estimates for the AR model (e.g., Lutkepohl [1993, Chapter 3]).

In terms of the moment matrices

(25) U = [summation over (v=1,...,N)] [u.sub.v][u.sup.T.sub.v], V = [summation over (v=1,...,N)] [v.sub.v][v.sup.T.sub.v], W = [summation over (v=1,...,N)] [v.sub.v][u.sup.T.sub.v],

the least squares estimate of the parameter matrix B can be written as

(26) B = W[U.sup.-1].

The residual covariance matrix

C = 1/(N - [n.sub.p]) [summation over (v=1,...,N)] ([v.sub.v] - B[u.sub.v])([v.sub.v] - B[u.sub.v]).sup.T

is an estimate of the noise covariance matrix C and can be expressed as

(27) C = (V - W[U.sup.-1][W.sup.T]) / (N - [n.sub.p]).

A derivation of the least squares estimators and a discussion of their properties can be found, for example, in Lutkepohl [1993, Chapter 3].

The residual covariance matrix C is proportional to a Schur complement of the matrix

( U  [W.sup.T] )
( W      V     ),

which is the moment matrix [Gamma] = [K.sup.T]K belonging to the data matrix

(28) K = ([K.sub.1] [K.sub.2]), where the vth row of [K.sub.1] is [u.sup.T.sub.v] and the vth row of [K.sub.2] is [v.sup.T.sub.v] (v = 1, ..., N).

The least squares estimates can be computed from a QR factorization of the data matrix

(29) K = QR,

with an orthogonal matrix Q and an upper triangular matrix

R = ( [R.sub.11]  [R.sub.12] )
    (     0       [R.sub.22] ).

The QR factorization of the data matrix K leads to the Cholesky factorization [Gamma] = [K.sup.T]K = [R.sup.T]R of the moment matrix,

(30) U = [R.sup.T.sub.11][R.sub.11], W = [R.sup.T.sub.12][R.sub.11], V = [R.sup.T.sub.12][R.sub.12] + [R.sup.T.sub.22][R.sub.22],

and from this Cholesky factorization, one finds the representation

(31) [B.sup.T] = [R.sup.-1.sub.11][R.sub.12] and C = [R.sup.T.sub.22][R.sub.22] / (N - [n.sub.p])

for the least squares estimates of the parameter matrix B and of the noise covariance matrix C. The estimated parameter matrix B is obtained as the solution of a triangular system of equations, and the residual covariance matrix C is given in a factored form that shows explicitly that the residual covariance matrix is positive semidefinite.
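The estimates (31) can be sketched end-to-end in Python (with illustrative simulated data and hypothetical variable names; ARFIT performs the analogous computation in Matlab):

```python
import numpy as np

# Sketch of the least squares estimates (31) for an AR(1) model with
# intercept (illustrative data; not the ARFIT implementation).
rng = np.random.default_rng(1)
m, p, N = 2, 1, 5000
A_true = np.array([[0.5, 0.3], [-0.3, 0.5]])
w_true = np.array([0.2, -0.1])

# Simulate v_t = w + A v_{t-1} + eps_t with eps_t ~ N(0, 0.09 I).
v = np.zeros((N + p, m))
for t in range(p, N + p):
    v[t] = w_true + A_true @ v[t - 1] + 0.3 * rng.standard_normal(m)

# Data matrix K = (K1 K2): rows (u_t^T  v_t^T) with u_t = (1, v_{t-1}^T)^T.
n_p = m * p + 1
K1 = np.hstack([np.ones((N, 1)), v[p - 1:N + p - 1]])
K2 = v[p:]
K = np.hstack([K1, K2])

# QR factorization and the estimates (31): solve R11 B^T = R12,
# then form the residual covariance estimate from R22.
Q, R = np.linalg.qr(K)
R11, R12, R22 = R[:n_p, :n_p], R[:n_p, n_p:], R[n_p:, n_p:]
B_hat = np.linalg.solve(R11, R12).T          # B = (w  A1), cf. Eq. (23)
C_hat = R22.T @ R22 / (N - n_p)
print(B_hat[:, 0])                           # close to w_true
```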

If the moment matrix [Gamma] = [K.sup.T]K is ill-conditioned, the effect of rounding errors and data errors on the parameter estimates can be reduced by computing the parameter estimates (31) not from a Cholesky factorization (30) of the moment matrix [Gamma], but from an analogous Cholesky factorization of a regularized moment matrix [Gamma] + [Delta][D.sup.2], where [D.sup.2] is a positive definite diagonal matrix and [Delta] is a regularization parameter (e.g., Hansen [1997]). A Cholesky factorization of the regularized moment matrix [Gamma] + [Delta][D.sup.2] = [R.sup.T]R can be computed via a QR factorization of the augmented data matrix

(32) (          K          )
     ( [[Delta].sup.1/2]D  ).

Rounding errors and data errors have a lesser effect on the estimates (31) computed from the upper triangular factor R of this QR factorization. The diagonal matrix D might be chosen to consist of the Euclidean norms of the columns of the data matrix, D = Diag([[parallel][K.sub.:j][parallel].sub.2]). The regularization parameter [Delta], as a heuristic, might be chosen to be a multiple ([q.sup.2] + q + 1) [Eta] of the machine precision [Eta], the multiplication factor [q.sup.2] + q + 1 depending on the dimension q = [n.sub.p] + m of the moment matrix [Gamma] (cf. Higham's [1996] Theorem 10.7, which implies that with such a regularization the direct computation of a Cholesky factorization of the regularized moment matrix [Gamma] + [Delta][D.sup.2] would be well-posed). The ARFIT module arqr computes such a regularized QR factorization for AR models.
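The effect of the augmentation (32) can be sketched as follows (a Python illustration of the heuristic described above, not the arqr implementation):

```python
import numpy as np

# Sketch of the regularized factorization: appending sqrt(delta) * D below
# K, as in (32), yields R with R^T R = K^T K + delta * D^2. The data are
# random stand-ins for illustration.
rng = np.random.default_rng(3)
K = rng.standard_normal((100, 5))

D = np.diag(np.linalg.norm(K, axis=0))        # column norms, as suggested
q = K.shape[1]
delta = (q**2 + q + 1) * np.finfo(float).eps  # heuristic regularization parameter

K_aug = np.vstack([K, np.sqrt(delta) * D])    # augmented data matrix (32)
Q, R = np.linalg.qr(K_aug)
assert np.allclose(R.T @ R, K.T @ K + delta * D @ D)
print("regularized Cholesky factor of K^T K + delta D^2 obtained")
```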

If the observational error of the data is unknown but dominates the rounding error, the regularization parameter can be estimated with adaptive regularization techniques. In this case, however, the QR factorization (32) should be replaced by a singular value decomposition of the rescaled data matrix [KD.sup.-1], because the singular value decomposition can be used more efficiently with adaptive regularization methods (e.g., see Hansen [1997] and Neumaier [1998]).

3.2 Downdating the Least Squares Estimates

To select the order of an AR model, the residual covariance matrix C must be computed and an order selection criterion must be evaluated for AR(p) models of successive orders [p.sub.min] [is less than or equal to] p [is less than or equal to] [p.sub.max]. Order selection criteria are usually functions of the logarithm of the determinant

[l.sub.p] = log det [[Delta].sub.p]

of the residual cross-product matrix

[[Delta].sub.p] = (N - [n.sub.p])[C.sub.p] = [R.sup.T.sub.22][R.sub.22].

For example, Schwarz's [1978] Bayesian Criterion (SBC) can be written as

SBC(p) = [l.sub.p] / m - (1- [n.sub.p] / N) log N,

and the logarithm of Akaike's [1971] Final Prediction Error (FPE) criterion as

FPE(p) = [l.sub.p]/m - log (N(N - [n.sub.p]) / (N + [n.sub.p])).

(These and other order selection criteria and their properties are discussed, for example, by Lutkepohl [1985; 1993, Chapter 4].) Instead of computing the residual cross-product matrix [[Delta].sub.p] by a separate QR factorization for each order p for which an order selection criterion is to be evaluated, one can compute an approximation of the residual cross-product matrix [[Delta].sub.p] for a model of order p [is less than] [p.sub.max] by downdating the QR factorization for a model of order [p.sub.max]. (For a general discussion of updating and downdating least squares problems, see Bjorck [1996, Chapter 3].)
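Evaluating the criteria for a sequence of candidate orders can be sketched as follows (the values of [l.sub.p] below are hypothetical numbers chosen purely for illustration):

```python
import numpy as np

# Sketch: evaluate SBC and FPE from the log-determinant l_p of the
# residual cross-product matrix. The l_p values below are hypothetical,
# chosen purely for illustration.
def sbc(l_p, m, n_p, N):
    return l_p / m - (1 - n_p / N) * np.log(N)

def fpe(l_p, m, n_p, N):
    return l_p / m - np.log(N * (N - n_p) / (N + n_p))

m, N = 2, 1000
for p, l_p in [(1, -4.0), (2, -4.5), (3, -4.52)]:
    n_p = m * p + 1
    print(p, sbc(l_p, m, n_p, N), fpe(l_p, m, n_p, N))
```

The order with the smallest criterion value is selected; with these hypothetical numbers, SBC attains its minimum at p = 2 while FPE attains its minimum at p = 3, reflecting SBC's stronger penalty on overparameterization.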

To downdate the QR factorization for a model of order p to a structurally similar factorization for a model of order p' = p - 1, one exploits the structure

$$K = (K_1' \;\; K_1'' \;\; K_2)$$

of the data matrix (28), where the m trailing columns $K_1''$ of the predictor submatrix $K_1 = (K_1' \; K_1'')$ contain the state vectors at lag p. A data matrix $K' = (K_1' \; K_2)$ for a model of order p' = p - 1 follows, approximately, from the data matrix $K = (K_1 \; K_2)$ for a model of order p by removing from the submatrix $K_1 = (K_1' \; K_1'')$ of the predictors $u_\nu$ the m trailing columns $K_1''$. The downdated data matrix K' is only approximately equal to the least squares data matrix (28) for a model of order p' because the downdated data matrix K' does not include the first available state vector $v_{1-p}$. When parameter estimates are computed from the downdated data matrix K', a sample of the same effective size N is assumed both for the least squares estimates of order p and for the least squares estimates of order p' = p - 1, although in the case of the lower order p', the pre-sample state vector $v_0$ could become part of the effective sample, so that a sample of effective size N + 1 would be available. The relative loss of accuracy that this approximation entails decreases with increasing sample size N.

A factorization of the downdated data matrix K' follows from the QR factorization of the original data matrix K if one partitions the submatrices $R_{11}$ and $R_{12}$ of the triangular factor R, considering the m trailing rows and columns of $R_{11}$ and the m trailing rows of $R_{12}$ separately,

$$R_{11} = \begin{pmatrix} R_{11}' & R_{11}'' \\ 0 & \tilde{R}_{11} \end{pmatrix}, \qquad R_{12} = \begin{pmatrix} R_{12}' \\ R_{12}'' \end{pmatrix}.$$

With the thus partitioned triangular factor R, the QR factorization (29) of the data matrix becomes

$$K = (K_1' \;\; K_1'' \;\; K_2) = Q \begin{pmatrix} R_{11}' & R_{11}'' & R_{12}' \\ 0 & \tilde{R}_{11} & R_{12}'' \\ 0 & 0 & R_{22} \end{pmatrix}.$$

Dropping the m columns belonging to the submatrix $K_1''$, one obtains a factorization of the downdated data matrix

$$K' = (K_1' \;\; K_2) = Q \begin{pmatrix} R_{11}' & R_{12}' \\ 0 & R_{22}' \end{pmatrix},$$

where

$$R_{22}' = \begin{pmatrix} R_{12}'' \\ R_{22} \end{pmatrix}.$$

This downdated factorization has the same block structure as the QR factorization (29) of the original data matrix K, but the submatrix $R_{22}'$ in the downdated factorization is not triangular. The factorization of the downdated data matrix K' thus is not a QR factorization. That the submatrix $R_{22}'$ is not triangular, however, does not affect the form of the least squares estimates (31), which, for a model of order p' = p - 1, can be computed from the downdated factorization in the same way as they are computed from the original QR factorization for a model of order p.
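The effect of dropping the columns belonging to $K_1''$ can be checked numerically. The following sketch, with random stand-in data and illustrative dimensions, verifies that the retained columns of the triangular factor still factor the downdated data matrix and yield the same least squares estimates as a direct solve:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n1, m = 50, 6, 2                   # rows; columns of K1'; columns of K1'' and K2
K1p  = rng.standard_normal((N, n1))   # K1' : predictor columns kept at order p - 1
K1pp = rng.standard_normal((N, m))    # K1'': m trailing predictor columns (lag p)
K2   = rng.standard_normal((N, m))    # K2  : observations
K = np.hstack([K1p, K1pp, K2])

Q, R = np.linalg.qr(K)                # QR factorization of the full data matrix

# Drop the m columns belonging to K1''; the remaining columns of R give a
# factorization K' = Q R' whose lower-right block R22' is no longer triangular.
keep = list(range(n1)) + list(range(n1 + m, n1 + 2 * m))
Rp = R[:, keep]
Kp = np.hstack([K1p, K2])
assert np.allclose(Q @ Rp, Kp)

# Least squares estimates from the downdated factorization agree with a
# direct solve of the lower-order regression problem.
R11p, R12p = Rp[:n1, :n1], Rp[:n1, n1:]
B_down = np.linalg.solve(R11p, R12p)
B_dir, *_ = np.linalg.lstsq(K1p, K2, rcond=None)
assert np.allclose(B_down, B_dir)
```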

From the downdated factorization of the data matrix, one can obtain, for the evaluation of order selection criteria, downdating formulas for the logarithm $l_p = \log \det \Delta_p$ of the determinant of the residual cross-product matrix $\Delta_p$. The factorization of the downdated data matrix leads to the residual cross-product matrix

$$\Delta_{p-1} = R_{22}'^{\,T} R_{22}' = R_{12}''^{\,T} R_{12}'' + R_{22}^T R_{22},$$

from which, with the notation

$$R_p \equiv R_{12}'',$$

the downdating formula

$$\Delta_{p-1} = \Delta_p + R_p^T R_p$$

for the residual cross-product matrix follows. Because the determinant of the right-hand side of this formula can be brought into the form [Anderson 1984, Theorem A.3.2]

$$\det\left(\Delta_p + R_p^T R_p\right) = \det \Delta_p \, \det\left(I + R_p \Delta_p^{-1} R_p^T\right),$$

the downdating formula for the determinant term $l_p = \log \det \Delta_p$ becomes

$$l_{p-1} = l_p + \log \det\left(I + R_p \Delta_p^{-1} R_p^T\right).$$

This downdate can be computed from a Cholesky factorization

(33) $L_p L_p^T = I + R_p \Delta_p^{-1} R_p^T$

as

(34) $l_{p-1} = l_p + 2 \log \det L_p$,

the determinant of the Cholesky factor $L_p$ being the product of its diagonal elements.

This downdating procedure can be iterated, starting from a QR factorization for a model of order $p_{\max}$ and stepwise downdating the factorization and the determinant term $l_p$ appearing in the order selection criteria until the minimum order $p_{\min}$ is reached. To downdate the inverse cross-product matrix $\Delta_p^{-1}$, which is needed in the Cholesky factorization (33), one can use the Woodbury formula [Bjorck 1996, Chapter 3]

$$\left(\Delta_p + R_p^T R_p\right)^{-1} = \Delta_p^{-1} - \Delta_p^{-1} R_p^T \left(I + R_p \Delta_p^{-1} R_p^T\right)^{-1} R_p \Delta_p^{-1}$$

and compute the downdated inverse $\Delta_{p-1}^{-1}$ from the Cholesky factorization (33) via

(35) $L_p N_p = R_p \Delta_p^{-1}$, solved for $N_p$,

(36) $\Delta_{p-1}^{-1} = \Delta_p^{-1} - N_p^T N_p$.

With the downdating scheme (33)-(36), an order selection criterion such as SBC or FPE, given a QR factorization for a model of order $p_{\max}$, can be evaluated for models of order $p_{\max} - 1, \ldots, p_{\min}$, whereby for the model of order p, the first $p_{\max} - p$ state vectors are ignored.
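One pass of the downdating steps (33)-(36) can be sketched as follows; the matrices here are random stand-ins for the blocks of an actual QR factorization, and the result is checked against a direct evaluation of the downdating formula for the residual cross-product matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3
# Residual cross-product matrix Delta_p of the order-p model (SPD by construction)
A = rng.standard_normal((10, m))
Delta_p = A.T @ A
Delta_inv = np.linalg.inv(Delta_p)
l_p = np.linalg.slogdet(Delta_p)[1]

# R_p : stand-in for the m trailing rows of R12 removed in the downdate
R_p = rng.standard_normal((m, m))

# (33) Cholesky factorization  L_p L_p^T = I + R_p Delta_p^{-1} R_p^T
L = np.linalg.cholesky(np.eye(m) + R_p @ Delta_inv @ R_p.T)
# (34) downdate of the determinant term
l_pm1 = l_p + 2.0 * np.sum(np.log(np.diag(L)))
# (35)-(36) Woodbury downdate of the inverse cross-product matrix
N_p = np.linalg.solve(L, R_p @ Delta_inv)   # solve L_p N_p = R_p Delta_p^{-1}
Delta_inv_dn = Delta_inv - N_p.T @ N_p

# Consistency check against Delta_{p-1} = Delta_p + R_p^T R_p
Delta_pm1 = Delta_p + R_p.T @ R_p
assert np.isclose(l_pm1, np.linalg.slogdet(Delta_pm1)[1])
assert np.allclose(Delta_inv_dn, np.linalg.inv(Delta_pm1))
```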

After evaluating the order selection criterion for a sequence of models and determining an optimum order $p_{\rm opt}$, one finds the least squares parameter estimates (31) for the model of the optimum order by replacing the maximally sized submatrices $R_{11}$ and $R_{12}$ of the triangular factor R by their leading submatrices of size $n_{p_{\rm opt}} \times n_{p_{\rm opt}}$ and $n_{p_{\rm opt}} \times m$, respectively. If the available time series is short and the assumed maximum model order $p_{\max}$ is much larger than the selected optimum order $p_{\rm opt}$, computing the parameters of the AR model of optimum order $p_{\rm opt}$ from the downdated factorization of the model of maximum order $p_{\max}$, and thus ignoring $(p_{\max} - p_{\rm opt})$ available state vectors, might entail a significant loss of information. To improve the accuracy of the parameter estimates in such cases, the parameters of the model of optimum order $p_{\rm opt}$ can be computed from a second QR factorization, a QR factorization for a model of order $p = p_{\rm opt}$.

The above downdating scheme is applicable both to the QR factorization of the data matrix K and to the regularized QR factorization of the augmented data matrix (32). The ARFIT module arord evaluates order selection criteria by downdating the regularized QR factorization performed with the module arqr. The driver module arfit determines the optimum model order and computes the AR parameters for the model of the optimum order.

3.3 Computational Complexity of the Stepwise Least Squares Algorithm

The data matrix whose QR factorization is to be computed is of size $N' \times (n_{p_{\max}} + m)$, where the number of rows N' is equal to the sample size N if the least squares estimates are not regularized, or equal to $N + n_{p_{\max}} + m$ if the least squares estimates are regularized by computing the QR factorization of the augmented data matrix (32). Computing the QR factorization requires, to leading order, $O\!\left(N' \left(n_{p_{\max}} + m\right)^2\right)$ operations.

In traditional algorithms for estimating parameters of AR models, a separate factorization would be computed for each order p for which an order selection criterion is to be evaluated. In the stepwise least squares algorithm, the downdates (33)-(36) require $O(m^3)$ operations for each order p for which an order selection criterion is to be evaluated. Since $N' \ge m$, the downdating process for each order p requires fewer operations than a new QR factorization. If the number of rows N' of the data matrix is much greater than the dimension m of the state vectors, the number of operations required for the downdating process becomes negligible compared with the number of operations required for the QR factorization. With the stepwise least squares algorithm, then, the order and the parameters of an AR model can be estimated about $(p_{\max} - p_{\min} + 1)$ times faster than with traditional least squares algorithms that require $(p_{\max} - p_{\min} + 1)$ separate QR factorizations. Since deleting columns of a matrix does not decrease the smallest singular value of the matrix, the stepwise least squares algorithm is a numerically stable procedure (cf. Bjorck [1996, Chapter 3.2]).

4. CONFIDENCE INTERVALS

Under weak conditions on the distribution of the noise vectors $\varepsilon_\nu$ of the AR model, the least squares estimator of the AR coefficient matrices $A_1, \ldots, A_p$ and of the intercept vector w is consistent and asymptotically normal (e.g., Lutkepohl [1993, Chapter 3.2]). Let the AR parameters $B = (w \; A_1 \; \cdots \; A_p)$ be arranged into a parameter vector $x_B$ by stacking adjacent columns $B_{:j}$ of the parameter matrix B,

$$x_B = \begin{pmatrix} B_{:1} \\ \vdots \\ B_{:n_p} \end{pmatrix} = \mathrm{vec}\, B.$$

Asymptotically, in the limit of large sample sizes N, the estimation errors $\hat{x}_B - x_B$ are normally distributed with mean zero and with a covariance matrix $\Sigma_B$ that depends on the noise covariance matrix C and on the moment matrix $\langle u_\nu u_\nu^T \rangle$ of the predictors $u_\nu$ in the regression model (22) (e.g., Lutkepohl [1993, Chapter 3.2.2]). Substituting the least squares estimate $\hat{C}$ of the noise covariance matrix and the sample moment matrix U of the predictors $u_\nu$ for the unknown population quantities, one obtains, for the least squares estimator $\hat{x}_B$, the covariance matrix estimate

(37) $\hat{\Sigma}_B = U^{-1} \otimes \hat{C}$,

where $A \otimes B$ denotes the Kronecker product of the matrices A and B.
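A sketch of the covariance estimate (37), with the moment matrix U and residual covariance C as assumed inputs (the helper is illustrative, not part of ARfit):

```python
import numpy as np

def parameter_covariance(U, C):
    """Covariance estimate (37) of the stacked parameter vector vec(B):
    Kronecker product of the inverse moment matrix of the predictors
    with the residual covariance matrix."""
    return np.kron(np.linalg.inv(U), C)
```

By the structure of the Kronecker product, the variance of an individual coefficient $B_{jk}$ is $(U^{-1})_{kk} C_{jj}$, the quantity that enters the margin-of-error formula for the coefficients.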

Basing inferences for finite samples on the asymptotic distribution of the least squares estimator, one can establish approximate confidence intervals for the AR coefficients, for the intercept vector, and for the eigenmodes, periods, and damping times derived from the AR coefficients. A confidence interval for an element $\varphi \equiv (x_B)_l$ of the parameter matrix B can be constructed from the distribution of the t-ratio

(38) $t = \hat{\varphi}_\pm / \hat{\sigma}_\varphi$,

the ratio of the estimation error $\hat{\varphi}_\pm \equiv \hat{\varphi} - \varphi$ of the least squares estimate $\hat{\varphi} = (\hat{x}_B)_l = \hat{B}_{jk}$ over the square root of the estimated variance

(39) $\hat{\sigma}_\varphi^2 = (\hat{\Sigma}_B)_{ll} = (U^{-1})_{kk} \hat{C}_{jj}$

of the least squares estimator. For the construction of confidence intervals, it is common practice to assume that the t-ratio (38) follows Student's t distribution with $N - n_p$ degrees of freedom (e.g., Lutkepohl [1993, Chapter 3.2]). This assumption is justified only asymptotically, but for lack of finite-sample statistics, it is commonly made. Assuming a t distribution for the t-ratios yields, for the parameter $\varphi$, the $100\alpha\%$ confidence limits $\hat{\varphi} \pm \varphi_\pm$ with margin of error

(40) $\varphi_\pm = t\!\left(N - n_p, \tfrac{1+\alpha}{2}\right) \hat{\sigma}_\varphi$,

where $t(d, \beta)$ is the $\beta$-quantile of a t distribution with d degrees of freedom (cf. Draper and Smith [1981, Chapter 1.4]). From the estimated variance (39) of the least squares estimator, one finds that the margin of error of a parameter estimate $\hat{\varphi} = (\hat{x}_B)_l = \hat{B}_{jk}$ takes the form

(41) $\varphi_\pm = t\!\left(N - n_p, \tfrac{1+\alpha}{2}\right) \sqrt{(U^{-1})_{kk} \hat{C}_{jj}}$.

The ARFIT module arconf uses this expression to compute approximate confidence limits $\hat{\varphi} \pm \varphi_\pm$ for the elements of the AR coefficient matrices and of the intercept vector.
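The margin-of-error computation can be sketched as follows. For a dependency-free example, the Student t quantile is replaced here by the standard normal quantile, which it approaches for large $N - n_p$; scipy.stats.t.ppf((1 + alpha)/2, N - n_p) would supply the finite-sample quantile instead:

```python
from statistics import NormalDist

def margin_of_error(U_inv_kk, C_jj, N, n_p, alpha=0.95):
    """Approximate margin of error for a coefficient estimate B_jk.

    N and n_p would enter through the degrees of freedom N - n_p of the
    t quantile; this sketch uses the large-sample normal quantile.
    """
    q = NormalDist().inv_cdf((1.0 + alpha) / 2.0)
    # quantile times the estimated standard deviation of the estimator
    return q * (U_inv_kk * C_jj) ** 0.5
```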

Establishing confidence intervals for the eigenmodes and their periods and damping times is complicated by the fact that these quantities are nonlinear functions of the AR coefficients. Whereas for certain random matrices--for symmetric Wishart matrices, for example [Anderson 1984, Chapter 13]--some properties of the distributions of eigenvectors and eigenvalues are known, no analytical expressions for the distributions of eigenvalues and eigenvectors appear to be known for nonsymmetric Gaussian random matrices with correlated elements. For the estimation of confidence intervals for the eigenmodes and their periods and damping times, we must therefore resort to additional approximations that go beyond the asymptotic approximation invoked in constructing the approximate confidence intervals for the AR coefficients.

Consider a real-valued function $\varphi = \varphi(x_B)$ that depends continuously on the AR parameters $x_B$, and let the estimate $\hat{\varphi} \equiv \varphi(\hat{x}_B)$ be the value of the function $\varphi$ at the least squares estimates $\hat{x}_B$ of the AR parameters $x_B$. The function $\varphi$ may be, for example, the real part or the imaginary part of a component of an eigenmode, or a period or damping time associated with an eigenmode. Linearizing the function $\varphi$ about the estimate $\hat{\varphi}$ leads to $\hat{\varphi} - \varphi \approx (\nabla \varphi)^T (\hat{x}_B - x_B)$, where $\nabla \varphi$ denotes the gradient of $\varphi(x_B)$ at the estimate $x_B = \hat{x}_B$. From this linearization, it follows that the variance $\sigma_\varphi^2$ of the estimator function $\hat{\varphi}$ can be approximated as

(42) $\sigma_\varphi^2 \approx (\nabla \varphi)^T \, \Sigma_B \, (\nabla \varphi)$,

where $\Sigma_B$ is the covariance matrix of the least squares estimator $\hat{x}_B$. If the function $\varphi$ is linear in the parameters $x_B$, the relation (42) between the variance $\sigma_\varphi^2$ of the estimator function $\hat{\varphi}$ and the covariance matrix $\Sigma_B$ of the estimator $\hat{x}_B$ holds exactly. But if the function $\varphi$ is nonlinear, as it is, for example, when $\varphi$ stands for a component of an eigenmode, the relation (42) holds only approximately, up to higher-order terms.

Substituting the asymptotic covariance matrix (37) into the expression (42) for the variance of the estimator function $\hat{\varphi}$ gives a variance estimate

(43) $\hat{\sigma}_\varphi^2 = (\nabla \varphi)^T \left(U^{-1} \otimes \hat{C}\right) (\nabla \varphi)$

that can be used to establish confidence intervals. If the function $\varphi$ is nonlinear, the t-ratio $\hat{\varphi}_\pm / \hat{\sigma}_\varphi$ of the estimation error $\hat{\varphi}_\pm = \hat{\varphi} - \varphi$ over the square root of the estimated variance $\hat{\sigma}_\varphi^2$ generally does not follow a t distribution, not even asymptotically. But assuming that the t-ratio follows a t distribution is still a plausible heuristic for constructing approximate confidence intervals. Generalizing the above construction of confidence limits for the AR parameters, we therefore compute approximate confidence limits $\hat{\varphi} \pm \varphi_\pm$ for functions $\varphi(x_B)$ of the AR parameters with the estimator variance (43) and with the margin of error (40).
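For functions $\varphi$ without convenient closed-form gradients, the variance (42) can be approximated with a finite-difference gradient. The following is a generic sketch of this delta-method computation (ARfit itself uses the closed-form gradients derived in the Appendix):

```python
import numpy as np

def delta_method_var(phi, x_hat, Sigma_B, h=1e-6):
    """Approximate the estimator variance by linearizing phi about x_hat.

    phi     : scalar function of the stacked parameter vector x_B
    x_hat   : least squares estimate of x_B
    Sigma_B : estimated covariance matrix of x_B
    The gradient is taken by central differences with step h.
    """
    n = x_hat.size
    grad = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        grad[i] = (phi(x_hat + e) - phi(x_hat - e)) / (2.0 * h)
    return grad @ Sigma_B @ grad
```

For a function that is linear in the parameters, the result is exact, which provides a simple correctness check.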

The ARFIT module armode thus establishes approximate confidence intervals for the real parts and the imaginary parts of the individual components of the eigenmodes, and for the periods and the damping times associated with the eigenmodes. The closed-form expressions for the gradients $\nabla \varphi$ of the eigenmodes, periods, and damping times, which are required for the computation of the estimator variance (43), are derived in the Appendix.

5. NUMERICAL EXAMPLE

To illustrate the least squares estimation of AR parameters and to test the quality of the approximate confidence intervals for the AR parameters and for the eigenmodes, periods, and damping times, we generated time series data by simulating the bivariate AR(2) process

(44) $v_\nu = w + A_1 v_{\nu-1} + A_2 v_{\nu-2} + \varepsilon_\nu, \qquad \varepsilon_\nu = \mathrm{WN}(0, C), \qquad \nu = 1, \ldots, N,$

with intercept vector

(45) $w = \begin{pmatrix} 0.25 \\ 0.10 \end{pmatrix},$

coefficient matrices

(46) $A_1 = \begin{pmatrix} 0.40 & 1.20 \\ 0.30 & 0.70 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 0.35 & -0.30 \\ -0.40 & -0.50 \end{pmatrix},$

and noise covariance matrix

(47) $C = \begin{pmatrix} 1.00 & 0.50 \\ 0.50 & 1.50 \end{pmatrix}.$

The pseudorandom vectors $\varepsilon_\nu = \mathrm{WN}(0, C)$ are simulated Gaussian white noise vectors with mean zero and covariance matrix C. Ensembles of time series of effective length N = 25, 50, 100, and 400 were generated: 20000 time series for N = 25; 10000 for N = 50; and 5000 each for N = 100 and N = 400. With the methods of the preceding sections, the AR(2) parameters were estimated from the simulated time series, the eigendecomposition of the estimated models was computed, and approximate 95% confidence intervals were constructed for the AR parameters and for the eigenmodes, periods, and damping times.
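A minimal simulation sketch of the process (44). The intercept and coefficient values below are consistent with the large-sample estimates reported in Table I; the noise covariance entries should be read as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
w  = np.array([0.25, 0.10])
A1 = np.array([[0.40, 1.20], [0.30, 0.70]])
A2 = np.array([[0.35, -0.30], [-0.40, -0.50]])
C  = np.array([[1.00, 0.50], [0.50, 1.50]])   # assumed noise covariance
Lc = np.linalg.cholesky(C)                    # for correlated Gaussian noise

def simulate(N, spinup=200):
    """Simulate the bivariate AR(2) process (44), discarding a spin-up
    segment so the returned series is approximately stationary."""
    v = np.zeros((N + spinup + 2, 2))
    for t in range(2, N + spinup + 2):
        eps = Lc @ rng.standard_normal(2)
        v[t] = w + A1 @ v[t - 1] + A2 @ v[t - 2] + eps
    return v[-N:]

v = simulate(400)
```

Since the model is stable, the process mean $(I - A_1 - A_2)^{-1} w$ is finite, and long simulated series fluctuate around it.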

Table I shows, for each AR parameter $\varphi = B_{jk}$, the median of the least squares parameter estimates $\hat{\varphi} = \hat{B}_{jk}$ and the median of the margins of error $\varphi_\pm$ belonging to the approximate 95% confidence intervals (41). Included in the table are the absolute values of the 2.5th percentile $\hat{\varphi}_-$ and of the 97.5th percentile $\hat{\varphi}_+$ of the simulated distribution of the estimation errors $\hat{\varphi} - \varphi$. Ninety-five percent of the least squares parameter estimates $\hat{\varphi}$ lie between the limits $\varphi - \hat{\varphi}_-$ and $\varphi + \hat{\varphi}_+$. The quantity

$$q_{95} = \text{95th percentile of } \left\{ \hat{\varphi}_- / \varphi_\pm, \; \hat{\varphi}_+ / \varphi_\pm \right\}$$

is the 95th percentile of the ratio of the simulated margins of error $\hat{\varphi}_-$ and $\hat{\varphi}_+$ over the approximate margins of error $\varphi_\pm$. The quantity $q_{95}$ is a measure of how much the approximate margin of error $\varphi_\pm$ can underestimate the simulated margins of error $\hat{\varphi}_-$ and $\hat{\varphi}_+$.

Table I. Least Squares Estimates and 95% Margins of Error for the
Parameters of the Bivariate AR(2) Model (44)

               $\hat\varphi \pm \varphi_\pm$   $\hat\varphi_-$   $\hat\varphi_+$   $q_{95}$

N = 25
  w_1            0.289 ± 0.506      0.451     0.820    2.13
  w_2            0.139 ± 0.621      0.574     0.969    2.06
  (A_1)_11       0.326 ± 0.391      0.404     0.312    1.29
  (A_1)_21       0.223 ± 0.475      0.564     0.370    1.57
  (A_1)_12       1.152 ± 0.329      0.380     0.261    1.51
  (A_1)_22       0.629 ± 0.407      0.437     0.310    1.30
  (A_2)_11       0.353 ± 0.292      0.305     0.231    1.42
  (A_2)_21      -0.402 ± 0.356      0.320     0.372    1.47
  (A_2)_12      -0.206 ± 0.443      0.283     0.543    1.51
  (A_2)_22      -0.418 ± 0.453      0.374     0.640    1.45

N = 50
  w_1            0.266 ± 0.328      0.289     0.453    1.67
  w_2            0.115 ± 0.402      0.356     0.548    1.65
  (A_1)_11       0.368 ± 0.250      0.256     0.215    1.19
  (A_1)_21       0.260 ± 0.305      0.347     0.258    1.38
  (A_1)_12       1.175 ± 0.215      0.236     0.185    1.32
  (A_1)_22       0.667 ± 0.263      0.280     0.231    1.21
  (A_2)_11       0.351 ± 0.191      0.205     0.162    1.33
  (A_2)_21      -0.397 ± 0.234      0.210     0.250    1.35
  (A_2)_12      -0.256 ± 0.283      0.207     0.340    1.39
  (A_2)_22      -0.460 ± 0.347      0.270     0.398    1.31

N = 100
  w_1            0.256 ± 0.224      0.198     0.286    1.46
  w_2            0.107 ± 0.247      0.247     0.327    1.37
  (A_1)_11       0.384 ± 0.168      0.170     0.152    1.13
  (A_1)_21       0.281 ± 0.206      0.227     0.180    1.27
  (A_1)_12       1.188 ± 0.146      0.160     0.125    1.24
  (A_1)_22       0.682 ± 0.178      0.182     0.160    1.11
  (A_2)_11       0.352 ± 0.131      0.142     0.114    1.26
  (A_2)_21      -0.401 ± 0.159      0.150     0.166    1.23
  (A_2)_12      -0.276 ± 0.192      0.149     0.215    1.24
  (A_2)_22      -0.479 ± 0.234      0.199     0.264    1.24

N = 400
  w_1            0.252 ± 0.110      0.099     0.123    1.20
  w_2            0.104 ± 0.134      0.125     0.146    1.17
  (A_1)_11       0.396 ± 0.082      0.082     0.077    1.06
  (A_1)_21       0.295 ± 0.100      0.106     0.092    1.14
  (A_1)_12       1.197 ± 0.071      0.077     0.067    1.15
  (A_1)_22       0.695 ± 0.087      0.092     0.082    1.10
  (A_2)_11       0.350 ± 0.064      0.066     0.060    1.11
  (A_2)_21      -0.400 ± 0.078      0.077     0.081    1.13
  (A_2)_12      -0.294 ± 0.093      0.085     0.101    1.15
  (A_2)_22      -0.494 ± 0.113      0.102     0.124    1.15

The simulation results in Table I show that the least squares estimates of the AR parameters are biased when the sample size is small (cf. Tjostheim and Paulsen [1983] and Mentz et al. [1998]). Consistent with asymptotic theoretical results on the bias of AR parameter estimates [Tjostheim and Paulsen 1983], the bias of the least squares estimates in the simulations decreases roughly as 1/N as the sample size N increases. The bias of the estimates affects the reliability of the confidence intervals because the approximate confidence intervals $\hat{\varphi} \pm \varphi_\pm$, centered on the least squares estimate $\hat{\varphi}$, do not take the bias into account. The bias of the estimates is one of the reasons why, for each parameter $\varphi$, the median of the approximate 95% margins of error $\varphi_\pm$ differs from the absolute values $\hat{\varphi}_-$ and $\hat{\varphi}_+$ of the 2.5th and 97.5th percentiles of the simulated estimation error distribution (cf. Nankervis and Savin [1988]). For small sample sizes N, the absolute values $\hat{\varphi}_-$ and $\hat{\varphi}_+$ of these percentiles differ considerably, for N = 25 by nearly a factor of two. Nevertheless, the median of the approximate margins of error $\varphi_\pm$ lies, for each parameter $\varphi$, between the absolute values of the percentiles $\hat{\varphi}_-$ and $\hat{\varphi}_+$ of the simulated estimation error distribution. The approximate margins of error $\varphi_\pm$ can thus be used as rough indicators of the magnitudes of the estimation errors. But as the values of $q_{95}$ suggest, the approximate margins of error are reliable indicators of the magnitudes of the estimation errors only when the sample size N is large.

We carried out an analogous analysis for the eigendecomposition of the estimated AR(2) models. Imposing the normalization conditions (21) on the eigenvectors $\tilde{S}_{:k}$ of the augmented coefficient matrix

$$\tilde{A} = \begin{pmatrix} A_1 & A_2 \\ I & 0 \end{pmatrix}$$

that consists of the above coefficient matrices (46), one finds, for the simulated process (44), the eigenmodes

(48) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and the eigenvalues

(49) $\lambda_1 = -0.728, \qquad \lambda_2 = 0.623, \qquad \lambda_{3,4} = 0.603 \pm 0.536i.$

Associated with the eigenmodes are the periods

(50) $T_1 = 2, \qquad T_2 \to \infty, \qquad T_{3,4} = 8.643,$

and the damping times

(51) $\tau_1 = 3.152, \qquad \tau_2 = 2.114, \qquad \tau_{3,4} = 4.647.$

The eigenmodes, periods, and damping times obtained from the ensembles of estimated models were compared with the eigenmodes, periods, and damping times of the simulated process.
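The eigenvalues, periods, and damping times of the simulated process can be reproduced from the coefficient matrices via the augmented AR(1) form. A sketch (note that numpy's eigenvectors are unit-normalized, not normalized as in (21), so the eigenmode components differ from those reported; the eigenvalues, periods, and damping times are unaffected by the normalization):

```python
import numpy as np

# Coefficient matrices of the simulated AR(2) process and its augmented form
A1 = np.array([[0.40, 1.20], [0.30, 0.70]])
A2 = np.array([[0.35, -0.30], [-0.40, -0.50]])
A_aug = np.block([[A1, A2], [np.eye(2), np.zeros((2, 2))]])

lam, S = np.linalg.eig(A_aug)          # eigenvalues and (unit-norm) eigenvectors

with np.errstate(divide='ignore'):
    # Period 2*pi/|arg(lambda)|; positive real eigenvalues give T -> Inf
    periods = 2.0 * np.pi / np.abs(np.angle(lam))
damping = -1.0 / np.log(np.abs(lam))   # damping times tau_k
```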

Tables II and III show, for functions $\varphi(x_B)$ of the AR parameters B, the median of the estimates $\hat{\varphi} = \varphi(\hat{x}_B)$ and the median of the margins of error $\varphi_\pm$ belonging to the approximate 95% confidence intervals (40). The function $\varphi$ stands for a real part $\mathrm{Re}\, S_{jk}$ or an imaginary part $\mathrm{Im}\, S_{jk}$ of a component $S_{jk}$ of an eigenmode, or for a period $T_k$ or a damping time $\tau_k$. As in Table I, the symbols $\hat{\varphi}_-$ and $\hat{\varphi}_+$ refer to the absolute values of the 2.5th and 97.5th percentiles of the simulated distribution of the estimation errors $\hat{\varphi} - \varphi$, and the quantity $q_{95}$ is defined as above. A value of "NaN" for the quantity $q_{95}$ stands for the indefinite expression 0/0. A value of infinity ("Inf") results from the division of a nonzero number by zero.

Table II. Estimates and 95% Margins of Error for the Periods and
Damping Times of the Bivariate AR(2) Model (44)

               $\hat\varphi \pm \varphi_\pm$   $\hat\varphi_-$   $\hat\varphi_+$   $q_{95}$

N = 25
  T_1            2.000 ± 0.000      0.000     0.000     NaN
  tau_1          3.285 ± 4.738      2.181     8.677    4.51
  T_2              Inf ± 0.000      0.000     0.000     NaN
  tau_2          1.309 ± 2.491      1.841     4.901    4.26
  T_3,4          8.578 ± 2.791      2.553     4.848    3.23
  tau_3,4        6.449 ± 9.807      2.557    21.942    5.43

N = 50
  T_1            2.000 ± 0.000      0.000     0.000     NaN
  tau_1          3.202 ± 2.904      1.752     4.229    2.89
  T_2              Inf ± 0.000      0.000     0.000     NaN
  tau_2          1.694 ± 2.140      1.646     3.121    3.38
  T_3,4          8.650 ± 2.187      1.883     3.420    2.59
  tau_3,4        5.484 ± 5.172      2.112     8.204    2.97

N = 100
  T_1            2.000 ± 0.000      0.000     0.000     NaN
  tau_1          3.147 ± 1.922      1.402     2.638    2.22
  T_2              Inf ± 0.000      0.000     0.000     NaN
  tau_2          1.903 ± 1.687      1.325     2.184    2.52
  T_3,4          8.643 ± 1.644      1.436     2.405    2.18
  tau_3,4        5.097 ± 3.123      1.698     4.153    2.10

N = 400
  T_1            2.000 ± 0.000      0.000     0.000     NaN
  tau_1          3.143 ± 0.928      0.802     1.047    1.44
  T_2              Inf ± 0.000      0.000     0.000     NaN
  tau_2          2.071 ± 0.892      0.797     0.970    1.51
  T_3,4          8.646 ± 0.889      0.798     1.049    1.51
  tau_3,4        4.747 ± 1.339      1.049     1.634    1.54

Table III. Estimated Eigenmodes (48)

               $\hat\varphi \pm \varphi_\pm$   $\hat\varphi_-$   $\hat\varphi_+$   $q_{95}$

N = 25
  Re S_11        0.738 ± 0.162      0.139     0.149    1.44
  Re S_21       -0.302 ± 0.256      0.252     0.348    2.45
  Re S_12        0.701 ± 0.488      0.926     0.065   12.78
  Re S_22       -0.557 ± 0.663      0.628     0.426    3.46
  Re S_13,4      0.477 ± 0.231      0.566     0.176    4.48
  |Im S_13,4|    0.329 ± 0.189      0.496     0.194    6.32
  Re S_23,4      0.348 ± 0.259      0.628     0.293    5.62
  |Im S_23,4|    0.300 ± 0.256      0.518     0.109    3.71

N = 50
  Re S_11        0.745 ± 0.108      0.090     0.115    1.41
  Re S_21       -0.300 ± 0.162      0.158     0.197    1.87
  Re S_12        0.742 ± 0.248      0.626     0.042   15.37
  Re S_22       -0.460 ± 0.528      0.606     0.308    2.39
  Re S_13,4      0.486 ± 0.195      0.433     0.156    3.97
  |Im S_13,4|    0.324 ± 0.184      0.420     0.196    5.19
  Re S_23,4      0.338 ± 0.242      0.519     0.264    4.61
  |Im S_23,4|    0.336 ± 0.208      0.405     0.119    3.18

N = 100
  Re S_11        0.749 ± 0.074      0.066     0.082    1.34
  Re S_21       -0.300 ± 0.109      0.110     0.126    1.60
  Re S_12        0.754 ± 0.137      0.367     0.033   13.91
  Re S_22       -0.413 ± 0.404      0.506     0.255    1.99
  Re S_13,4      0.492 ± 0.162      0.307     0.135    3.38
  |Im S_13,4|    0.320 ± 0.167      0.319     0.183    4.01
  Re S_23,4      0.331 ± 0.212      0.377     0.236    3.46
  |Im S_23,4|    0.363 ± 0.164      0.287     0.121    2.64

N = 400
  Re S_11        0.750 ± 0.037      0.034     0.038    1.16
  Re S_21       -0.301 ± 0.053      0.050     0.056    1.25
  Re S_12        0.766 ± 0.053      0.110     0.023    6.82
  Re S_22       -0.371 ± 0.211      0.256     0.163    1.57
  Re S_13,4      0.494 ± 0.104      0.136     0.093    1.97
  |Im S_13,4|    0.316 ± 0.115      0.144     0.117    2.03
  Re S_23,4      0.324 ± 0.141      0.174     0.139    1.90
  |Im S_23,4|    0.388 ± 0.099      0.122     0.089    1.74

Which eigenmode, period, and damping time of an estimated AR(2) model corresponds to which eigenmode, period, and damping time of the simulated AR(2) process (44) is not always obvious, in particular when the effective time series length N is so small that the estimated parameters are affected by large uncertainties. To establish the statistics in Tables II and III, we matched the estimated eigenvalues $\hat{\lambda}_k$ with the eigenvalues $\lambda_k$ of the simulated process (49) by finding the permutation $\Pi$ of the indices 1, ..., 4 that minimized

$$\sum_{k=1}^{4} \left| \hat{\lambda}_{\Pi k} - \lambda_k \right|.$$

The estimated eigenmodes, periods, and damping times were matched with those of the simulated process according to the minimizing permutation of the indices. To remove the remaining ambiguity of the sign of the estimated eigenmode $\hat{S}_{:\Pi k}$ belonging to the eigenvalue $\hat{\lambda}_{\Pi k}$, we chose the sign of the estimated eigenmode $\hat{S}_{:\Pi k}$ so as to minimize

$$\left\| \hat{S}_{:\Pi k} - S_{:k} \right\|_2.$$

This procedure allowed us to match each parameter $\hat{\varphi}$ of the eigendecomposition of an estimated model uniquely with a parameter $\varphi$ of the eigendecomposition of the simulated process.
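Since only 4! = 24 permutations exist here, the matching step can be sketched as a brute-force search (the helper and the perturbed "estimate" below are illustrative):

```python
import numpy as np
from itertools import permutations

def match_eigenvalues(lam_est, lam_true):
    """Return the permutation Pi minimizing the total absolute mismatch
    between estimated and true eigenvalues."""
    idx = range(len(lam_true))
    best = min(permutations(idx),
               key=lambda p: sum(abs(lam_est[p[k]] - lam_true[k]) for k in idx))
    return np.array(best)

lam_true = np.array([-0.728, 0.623, 0.603 + 0.536j, 0.603 - 0.536j])
lam_est  = lam_true[[2, 0, 3, 1]] + 0.01   # a shuffled, slightly perturbed "estimate"
pi = match_eigenvalues(lam_est, lam_true)
# lam_est[pi] now lines up, entry by entry, with lam_true
```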

The estimation results in Tables II and III show that large samples of time series data are required to estimate the eigenmodes, periods, and damping times of an AR model reliably. The approximate 95% margins of error are rough indicators of the estimation errors most of the time, but they are reliable indicators of the magnitude of the estimation error only when the sample size N is large. Even for large samples, the approximate confidence intervals for the eigenmodes, periods, and damping times are less accurate than the confidence intervals for the AR parameters themselves. The median of the approximate margins of error $\varphi_\pm$ does not always lie between the percentiles $\hat{\varphi}_-$ and $\hat{\varphi}_+$ of the simulated estimation error distribution. The fact that the absolute values of the percentiles $\hat{\varphi}_-$ and $\hat{\varphi}_+$ in some cases differ significantly even when the bias of the estimates for a parameter $\varphi$ is small indicates that the distribution of the parameter estimates can be skewed, so that the t-ratio (38) does not follow Student's t distribution. The lower accuracy of the confidence intervals for the eigendecomposition, compared with those for the AR parameters themselves, is a consequence of the linearization involved in constructing the approximate confidence intervals for the eigendecomposition.

6. CONCLUSIONS

If a multivariate time series can be modeled adequately with an AR model, dynamical characteristics of the time series can be examined by structural analyses of the fitted AR model. The eigendecomposition discussed in this paper is a structural analysis of AR models by means of which aspects of a system that oscillate with certain periods and that relax towards a mean state with certain damping times can be identified. Eigendecompositions of AR(1) models have been used in the geosciences to characterize oscillations in complex systems [von Storch and Zwiers 1999, Chapter 15]. We have generalized the eigendecomposition of AR(1) models to AR(p) models of arbitrary order and have shown that the eigendecomposition of higher-order models can be used as a data analysis method in the same way the eigendecomposition of AR(1) models is currently used. Because a larger class of systems can be modeled with higher-order AR(p) models than with AR(1) models, generalizing the eigendecomposition of AR(1) models to AR(p) models of arbitrary order renders this data analysis method more widely applicable.

Since the eigendecomposition of AR models is of interest in particular for high-dimensional data as they occur, for example, when the state vectors of a time series represent spatial data, we have proposed a computationally efficient stepwise least squares algorithm for the estimation of AR parameters from high-dimensional data. In the stepwise least squares algorithm, the least squares estimates for an AR model of order $p < p_{\max}$ are computed by downdating a QR factorization for a model of order $p_{\max}$. The downdating scheme makes the stepwise least squares algorithm a computationally efficient procedure when both the order of an AR model and the AR parameters are to be estimated from large sets of high-dimensional data.

The least squares estimates of the parameters of an AR(p) model are conditional estimates in that in deriving the least squares estimates, p initial state vectors of the available time series data are taken to be constant, although, in fact, they are part of a realization of a stochastic process. Unconditional estimates that are not based on some such approximation are obtained from Gaussian data with exact maximum likelihood procedures (e.g., see Wei [1994]). By the exact treatment of the p initial state vectors, the problem of maximizing the likelihood becomes nonlinear, so that exact maximum likelihood algorithms are iterative and usually slower than least squares algorithms (cf. Wei [1994]). To be sure, with an exact maximum likelihood algorithm such as that of Ansley and Kohn [1983; 1986], stability of the estimated model can be enforced, the time series data can have missing values, and model parameters can be constrained to have given values, which with linear least squares algorithms is impossible. But for high-dimensional data without missing values, the computational efficiency of the stepwise least squares algorithm might be more important than the guarantee that the estimated AR model be stable or satisfy certain constraints on the parameters. The conditional least squares estimates might then be used as initial values for an exact maximum likelihood algorithm. Or, because it appears that for AR models the conditional least squares estimates are of an accuracy comparable with the accuracy of the unconditional maximum likelihood estimates (cf. Mentz et al. [1998]), the stepwise least squares algorithm might be used in place of computationally more complex exact maximum likelihood algorithms.

Approximate confidence intervals for the eigenmodes and their periods and damping times can be constructed from the asymptotic distribution of the least squares estimator of the AR parameters. For lack of a distributional theory for the eigenvectors and eigenvalues of Gaussian random matrices, the confidence intervals for the eigenmodes, periods, and damping times are based on linearizations and rough approximations of the distribution of estimation errors. Simulations of a bivariate AR(2) process illustrate the quality of the least squares AR parameter estimates and of the derived estimates of the eigendecomposition of an AR(2) model. The simulations show that the confidence intervals for the eigendecomposition roughly indicate the magnitude of the estimation errors, but that they are reliable only when the sample of available time series data is large.
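The linearization underlying such confidence intervals can be illustrated in the simplest scalar setting (a hypothetical univariate AR(1) example of ours, not the paper's bivariate AR(2) simulation): the damping time $\tau = -1/\log|a|$ is a nonlinear function of the AR coefficient $a$, so an approximate standard error for $\hat\tau$ follows from the delta method, $\mathrm{se}(\hat\tau) \approx |d\tau/da|\,\mathrm{se}(\hat a) = (\hat\tau^2/|\hat a|)\,\mathrm{se}(\hat a)$.

```python
import numpy as np

# Hypothetical univariate AR(1) illustration of the delta method:
# propagate the standard error of the AR coefficient estimate to the
# derived damping time tau = -1/log|a|.
rng = np.random.default_rng(42)
N, a_true = 2000, 0.7
x = np.zeros(N)
for n in range(1, N):
    x[n] = a_true * x[n - 1] + rng.standard_normal()

# Conditional least squares estimate of a and its standard error.
num = x[1:] @ x[:-1]
den = x[:-1] @ x[:-1]
a_hat = num / den
resid = x[1:] - a_hat * x[:-1]
sigma2 = resid @ resid / (N - 2)
se_a = np.sqrt(sigma2 / den)

# Delta method: tau = -1/log|a| has derivative dtau/da = tau^2/a (a > 0),
# so estimation errors in a are amplified by tau^2/a.
tau_hat = -1.0 / np.log(abs(a_hat))
se_tau = tau_hat**2 / abs(a_hat) * se_a
ci = (tau_hat - 1.96 * se_tau, tau_hat + 1.96 * se_tau)
```

The quadratic amplification factor $\tau^2/|a|$ is one reason why, as the simulations show, such intervals are rough guides rather than exact confidence statements unless the sample is large.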

APPENDIX

For the construction of approximate confidence intervals for the eigenmodes, periods, and damping times, one needs the gradients of these functions $\Phi(x_B)$ of the AR parameters $x_B$. The eigendecomposition of an AR model depends only on the AR coefficient matrices $A_1, \dots, A_p$, not on the intercept vector $w$, so the partial derivatives of the eigenmodes, periods, and damping times with respect to components of the intercept vector $w$ are zero. Because the normalization conditions (10) for the eigenmodes $S_{:k}$ of AR(1) models have the same form as the normalization conditions (21) for the augmented eigenmodes of higher-order AR(p) models, it suffices to obtain closed-form expressions for the partial derivatives of the eigenmodes $S_{:k}$, periods $T_k$, and damping times $\tau_k$ of AR(1) models. From the partial derivatives for the AR(1) model, the corresponding partial derivatives for higher-order AR(p) models are then obtained by replacing the coefficient matrix $A$ and its eigenvectors by the augmented coefficient matrix and its eigenvectors.

One obtains the partial derivatives of the eigenvectors $S_{:k}$ and of the eigenvalues $\lambda_k$ by differentiating the normalization conditions (10) and the eigenvector-eigenvalue relation

$$A S_{:k} = \lambda_k S_{:k}.$$

Taking the derivatives leads to the system of equations

$$\dot A S_{:k} + A \dot S_{:k} = \dot\lambda_k S_{:k} + \lambda_k \dot S_{:k},$$

$$X_{:k}^T \dot X_{:k} + Y_{:k}^T \dot Y_{:k} = 0,$$

$$\dot X_{:k}^T Y_{:k} + X_{:k}^T \dot Y_{:k} = 0,$$

with dotted quantities denoting partial derivatives with respect to an element of the coefficient matrix $A$, and with $X = \operatorname{Re} S$ and $Y = \operatorname{Im} S$ denoting the real and imaginary parts of the eigenvector matrix. Upon substitution of the eigendecomposition of the coefficient matrix $A = S \Lambda S^{-1}$, these equations become

(52) $\quad (\Lambda - \lambda_k I)\, S^{-1} \dot S_{:k} - e^{(k)} \dot\lambda_k = -S^{-1} \dot A S_{:k},$

(53) $\quad X_{:k}^T \dot X_{:k} + Y_{:k}^T \dot Y_{:k} = 0,$

(54) $\quad \dot X_{:k}^T Y_{:k} + X_{:k}^T \dot Y_{:k} = 0,$

where $e^{(k)}$ is the kth column of an identity matrix. The kth component of the differentiated eigenvector-eigenvalue relation (52) yields

(55) $\quad \dot\lambda_k = (S^{-1} \dot A S)_{kk}$

as an explicit formula for the partial derivative of the eigenvalue $\lambda_k$ with respect to the AR coefficients.
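Formula (55) is easy to check numerically: perturbing a single element of $A$ and differencing the eigenvalues should reproduce $(S^{-1} \dot A S)_{kk}$. A quick NumPy sketch (the test matrix, the perturbed element, and the step size are arbitrary choices of ours):

```python
import numpy as np

# Central-difference check of the eigenvalue derivative (55):
# d(lambda_k)/dA_ij = (S^{-1} E S)_{kk}, where E has a one in
# position (i, j) and zeros elsewhere.
rng = np.random.default_rng(1)
m = 4
A = rng.standard_normal((m, m))     # generic matrix, distinct eigenvalues
lam, S = np.linalg.eig(A)

i, j, t = 1, 2, 1e-6                # perturbed element and step size
E = np.zeros((m, m))
E[i, j] = 1.0
lam_plus = np.linalg.eigvals(A + t * E)
lam_minus = np.linalg.eigvals(A - t * E)

pred = np.diag(np.linalg.inv(S) @ E @ S)   # formula (55) for all k
for k in range(m):
    # Match the perturbed eigenvalues to lambda_k by proximity,
    # since eigvals returns them in no particular order.
    lp = lam_plus[np.argmin(np.abs(lam_plus - lam[k]))]
    lm = lam_minus[np.argmin(np.abs(lam_minus - lam[k]))]
    fd = (lp - lm) / (2 * t)               # central difference
    assert abs(fd - pred[k]) < 1e-4 * max(1.0, abs(pred[k]))
```

The proximity matching relies on the eigenvalues being distinct, the same assumption under which (57)-(59) below are solvable.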

If we write the derivative of the eigenvector $S_{:k}$ as

(56) $\quad \dot S_{:k} = S Z_{:k},$

the remaining components of Eqs. (52)-(54) take the form

$$(\lambda_j - \lambda_k) Z_{jk} = -(S^{-1} \dot A S)_{jk} \quad \text{for } j \neq k,$$

$$\sum_l \left[ (X^T X + Y^T Y)_{kl} \operatorname{Re} Z_{lk} + (Y^T X - X^T Y)_{kl} \operatorname{Im} Z_{lk} \right] = 0,$$

$$\sum_l \left[ (X^T Y + Y^T X)_{kl} \operatorname{Re} Z_{lk} + (X^T X - Y^T Y)_{kl} \operatorname{Im} Z_{lk} \right] = 0.$$

If all eigenvalues are distinct, these equations can be solved for the matrix $Z$ and yield

(57) $\quad Z_{jk} = \dfrac{(S^{-1} \dot A S)_{jk}}{\lambda_k - \lambda_j} \quad \text{for } j \neq k,$

and

(58) $\quad \operatorname{Re} Z_{kk} = \sum_{l \neq k} \left[ (X^T Y - Y^T X)_{kl} \operatorname{Im} Z_{lk} - (X^T X + Y^T Y)_{kl} \operatorname{Re} Z_{lk} \right],$

(59) $\quad \operatorname{Im} Z_{kk} = -\dfrac{\sum_{l \neq k} \left[ (X^T Y + Y^T X)_{kl} \operatorname{Re} Z_{lk} + (X^T X - Y^T Y)_{kl} \operatorname{Im} Z_{lk} \right]}{(X^T X - Y^T Y)_{kk}}.$

(In deriving (58) and (59), we used the normalization conditions (10) in the form $(X^T X + Y^T Y)_{kk} = 1$ and $(X^T Y)_{kk} = 0$.) The expressions (57)-(59) for the elements of the matrix $Z$, together with the relation (56), give explicit formulas for the partial derivatives of the eigenvectors $S_{:k}$ with respect to the AR coefficients.

In the case of multiple eigenvalues, the system of equations (52)-(54) cannot be solved uniquely for the partial derivatives of the eigenvectors with respect to the AR coefficients. In this case, however, the eigenvectors are not uniquely determined, so that it is not meaningful to give confidence intervals for them.

Writing the eigenvalues as $\lambda_k = a_k + i b_k$, with real parts $a_k$ and imaginary parts $b_k$, we deduce from the derivative (55) of the eigenvalues that

$$\dot a_k = \operatorname{Re}\,(S^{-1} \dot A S)_{kk}, \qquad \dot b_k = \operatorname{Im}\,(S^{-1} \dot A S)_{kk}.$$

It can be verified that the derivatives of the damping time scales (8) and of the periods (9) then take the form

(60) $\quad \dot\tau_k = \tau_k^2 \, \dfrac{a_k \dot a_k + b_k \dot b_k}{|\lambda_k|^2}$

and

(61) $\quad \dot T_k = \dfrac{T_k^2}{2\pi} \, \dfrac{b_k \dot a_k - a_k \dot b_k}{|\lambda_k|^2}.$

By means of the formulas (55)-(61), the ARfit module armode assembles the gradients that are required in the variance estimate (43).
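To make the quantities concrete, the eigenmodes, periods, and damping times of an AR(1) model can be computed directly from the eigendecomposition of $A$. The following is a minimal NumPy sketch of ours, not the armode module; it assumes the standard definitions $\tau_k = -1/\log|\lambda_k|$ for the damping times (8) and $T_k = 2\pi/|\arg \lambda_k|$ for the periods (9):

```python
import numpy as np

def eigenmodes(A):
    """Eigenmodes, periods, and damping times of a stable AR(1)
    model v_n = A v_{n-1} + eps_n, assuming the definitions
    tau_k = -1/log|lambda_k| and T_k = 2*pi/|arg(lambda_k)|.
    Purely relaxatory modes (real, positive eigenvalues) are
    assigned an infinite period."""
    lam, S = np.linalg.eig(A)              # eigenmodes are columns of S
    tau = -1.0 / np.log(np.abs(lam))       # damping times, cf. Eq. (8)
    arg = np.abs(np.angle(lam))
    T = np.full(lam.shape, np.inf)         # periods, cf. Eq. (9)
    T[arg > 0] = 2.0 * np.pi / arg[arg > 0]
    return lam, S, T, tau
```

For a damped rotation $A = r R(\theta)$ with $r = 0.9$ and $\theta = \pi/6$, the eigenvalues are $r e^{\pm i\theta}$, giving a period of 12 time steps and a damping time of $-1/\log 0.9 \approx 9.5$.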

ACKNOWLEDGMENTS

We thank Dennis Hartmann for introducing us to the principal oscillation pattern analysis; Jeff Anderson, Stefan Dallwig, Susan Fischer, Johann Kim, and Steve Griffies for comments on drafts of this paper; and Heidi Swanson for editing the manuscript.

(1) Although the residual covariance matrix of an AR model whose parameters are estimated with the method of least squares is not itself a least squares estimate of the noise covariance matrix, we will, as is common practice, refer to this residual covariance matrix as a least squares estimate.

REFERENCES

AKAIKE, H. 1971. Autoregressive model fitting for control. Ann. Inst. Stat. Math. 23, 163-180.

ANDERSON, T. W. 1984. An Introduction to Multivariate Statistical Analysis. 2nd ed. John Wiley and Sons, Inc., New York, NY.

ANSLEY, C. F. AND KOHN, R. 1983. Exact likelihood of a vector autoregressive moving average process with missing or aggregated data. Biometrika 70, 275-278.

ANSLEY, C. F. AND KOHN, R. 1986. A note on reparameterizing a vector autoregressive moving average model to enforce stationarity. J. Stat. Comput. Simul. 24, 2 (June), 99-106.

BJÖRCK, A. 1996. Numerical Methods for Least Squares Problems. SIAM, Philadelphia, PA.

DRAPER, N. AND SMITH, H. 1981. Applied Regression Analysis. 2nd ed. John Wiley and Sons, Inc., New York, NY.

HANSEN, P. C. 1997. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM, Philadelphia, PA.

HASSELMANN, K. 1988. PIPs and POPs: The reduction of complex dynamical systems using principal interaction and oscillation patterns. J. Geophys. Res. 93, D9, 11015-11021.

HIGHAM, N. J. 1996. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, PA.

HONERKAMP, J. 1994. Stochastic Dynamical Systems: Concepts, Numerical Methods, Data Analysis. VCH Publishers, New York, NY.

HUANG, B. H. AND SHUKLA, J. 1997. Characteristics of the interannual and decadal variability in a general circulation model of the tropical Atlantic Ocean. J. Phys. Oceanogr. 27, 1693-1712.

LÜTKEPOHL, H. 1985. Comparison of criteria for estimating the order of a vector autoregressive process. J. Time Ser. Anal. 6, 35-52. Correction: Vol. 8 (1987), p. 373.

LÜTKEPOHL, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. Springer-Verlag, Berlin, Germany.

MENTZ, R. P., MORETTIN, P. A., AND TOLOI, C. M. C. 1998. On residual variance estimation in autoregressive models. J. Time Ser. Anal. 19, 187-208.

NANKERVIS, J. C. AND SAVIN, N. E. 1988. The Student's t approximation in a stationary first order autoregressive model. Econometrica 56, 119-145.

NEUMAIER, A. 1998. Solving ill-conditioned and singular linear systems: A tutorial on regularization. SIAM Rev. 40, 3, 636-666.

PAULSEN, J. AND TJOSTHEIM, D. 1985. On the estimation of residual variance and order in autoregressive time series. J. Roy. Statist. Soc. B47, 216-228.

PENLAND, C. AND SARDESHMUKH, P. D. 1995. The optimal growth of tropical sea surface temperature anomalies. J. Clim. 8, 1999-2024.

SCHNEIDER, T. AND NEUMAIER, A. 2001. Algorithm 808: ARfit--A Matlab package for the estimation of parameters and eigenmodes of multivariate autoregressive models. ACM Trans. Math. Softw. 27, 1 (Mar.), 58-65.

SCHWARZ, G. 1978. Estimating the dimension of a model. Ann. Stat. 6, 461-464.

TIAO, G. C. AND BOX, G. E. P. 1981. Modeling multiple time series with applications. J. Amer. Statist. Assoc. 76, 802-816.

TJOSTHEIM, D. AND PAULSEN, J. 1983. Bias of some commonly used time series estimates. Biometrika 70, 389-399.

VON STORCH, H. AND ZWIERS, F. W. 1999. Statistical Analysis in Climate Research. Cambridge University Press, New York, NY.

VON STORCH, H., BÜRGER, G., SCHNUR, R., AND VON STORCH, J.-S. 1995. Principal oscillation patterns: A review. J. Clim. 8, 377-400.

WEI, W. W. S. 1994. Time Series Analysis. Addison-Wesley Publishing Co., Inc., Redwood City, CA.

WILKINSON, J. H. 1965. The Algebraic Eigenvalue Problem. Clarendon Oxford Science Publications, Oxford, UK.

XU, J. S. AND VON STORCH, H. 1990. Predicting the state of the Southern Oscillation using principal oscillation pattern analysis. J. Clim. 3, 1316-1329.

Received: September 1997; revised: January 2000; accepted: October 2000

Guest Editor: Geoff Miller

A draft of this paper was written in 1995, while Tapio Schneider was with the Department of Physics at the University of Washington in Seattle. Support, during that time, by a Fulbright grant and by the German National Scholarship Foundation (Studienstiftung des deutschen Volkes) is gratefully acknowledged. The paper was completed while Tapio Schneider was with the Atmospheric and Oceanic Sciences Program at Princeton University.

Authors' addresses: A. Neumaier, Institute for Mathematics, Universität Wien, Strudlhofgasse 4, Wien, A-1090, Austria; email: neum@cma.univie.ac.at; T. Schneider, Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012; email: tapio@cims.nyu.edu.


Authors: Arnold Neumaier; Tapio Schneider

Publication: ACM Transactions on Mathematical Software

Date: March 1, 2001
