A spatially adaptive iterative method for a class of nonlinear operator eigenproblems.

AMS subject classifications. 65F15, 65N35, 65N25

1. Introduction. PDE eigenvalue problems arise naturally in many modeling situations.

In some cases, e.g., when the PDE eigenvalue problem stems from a time-dependent PDE involving higher order derivatives in time or when it involves a delay, the corresponding PDE eigenvalue problem will be nonlinear in the eigenvalue parameter. In this paper we present a method for a class of PDE eigenvalue problems with that kind of nonlinearity. Examples of such problems are given, e.g., in [6, 8, 33] and in Section 4.

The nonlinear operator eigenvalue problem we are concerned with consists of finding a value $\lambda \in B(\mu, r) := \{\lambda \in \mathbb{C} : |\lambda - \mu| < r\}$ close to $\mu \in \mathbb{C}$ and a nonzero function $f$ such that

(1.1) $M(\lambda)\, f = 0, \qquad c_1(\lambda, f) = 0, \quad \dots, \quad c_k(\lambda, f) = 0.$

In these equations, $M(\lambda)$ denotes a family of operators defined on a common domain $D = D(M(\lambda)) \subset L_2^w([a,b])$ and with range space $L_2^w([a,b])$. The domain $D$ is assumed to be independent of the eigenvalue $\lambda$ and will typically involve regularity conditions (e.g., differentiability). Note that for every fixed parameter $\lambda$, the operator $M(\lambda)$ is linear, but the dependence of $M(\lambda)$ on $\lambda$ is generally nonlinear. The set $L_2^w([a,b])$ denotes the functions which are square integrable on the interval $[a,b]$ with a suitable weight function $w$. We shall specify a convenient weight function in Section 3, allowing us to efficiently compute scalar products in $L_2^w([a,b])$ numerically. The weight function is selected in order to achieve efficiency in the algorithm, and it does not necessarily correspond to the "natural inner product" associated with physical properties of the involved operators. The functions $c_i : \mathbb{C} \times D \to \mathbb{C}$ specify $k$ constraints that need to be satisfied for an eigenpair $(\lambda, f)$.

We will assume that $M(\lambda)$ can be represented as

(1.2) $M(\lambda) = g_1(\lambda)\, \Delta_1 + g_2(\lambda)\, \Delta_2 + \cdots + g_m(\lambda)\, \Delta_m,$

where $\Delta_i : D \to L_2^w([a,b])$ are closed linear operators and $g_i : \Omega \to \mathbb{C}$ are given analytic functions defined in an open neighborhood $\Omega \supset \overline{B(\mu, r)}$. We also assume that the constraints $c_i$ can be represented in a similar fashion. More precisely, we assume that for all $i = 1, \dots, k$ we have

$c_i(\lambda, f) = h_{i,1}(\lambda)\, C_{i,1} f + \cdots + h_{i,n}(\lambda)\, C_{i,n} f,$

where $h_{i,j} : \Omega \to \mathbb{C}$ are analytic functions and $C_{i,j} : D \to \mathbb{C}$ are closed linear operators. We further assume that the constraints are such that the problem (1.1) is well posed in the sense that its solutions $\lambda \in B(\mu, r)$ have finite multiplicities and form a discrete set without accumulation points. The assumption that the spectrum is discrete restricts the problem class such that we do not face the complicated spectral phenomena that may occur for more general nonlinear operators; see, e.g., [1].

We have formulated the operator problem (1.1) in a quite general form, mostly for notational convenience. The problems we have in mind come from PDEs (with one spatial variable), e.g., PDEs with delays; see Section 4 for examples. For instance, the operators in (1.2) may correspond to differentiation,

$\Delta_i = \frac{\partial^{\,i}}{\partial x^{\,i}}, \quad i = 1, \dots, m.$

In this case, the functions $c_i$ specify $k = m$ boundary conditions and we assume that they are such that (1.1) is a well-posed nonlinear operator eigenvalue problem.

The algorithm we propose is closely related to the infinite Arnoldi method presented in [19]. The infinite Arnoldi method can, in principle, solve nonlinear matrix eigenvalue problems (for eigenvalues in a disk) to arbitrary precision provided that certain derivatives associated with the problem are explicitly available. One can approach problems of the type (1.1) with the infinite Arnoldi method by first discretizing the PDE on the interval $[a,b]$, thereby obtaining a matrix eigenvalue problem whose solutions hopefully approximate those of (1.1). There are a number of approaches available for the nonlinear matrix eigenvalue problem [2, 4, 13, 25, 32]. Such a discretize-first approach requires an a priori choice of the discretization of the interval $[a,b]$. The algorithm presented here does not require such a choice because the spatial discretization will be adapted automatically throughout the iteration.

We derive the algorithm as follows. By approximating the $g_i$ and $c_i$ by truncated Taylor expansions of order $N$, we first show that the resulting truncated operator eigenvalue problem can be written as an eigenvalue problem for an operator acting on arrays of functions in $L_2^w([a,b])^N$. This approach is similar to what for matrices is commonly called a companion linearization. See [24] for an analysis of companion linearizations. We select a particular companion-like operator formulation having a structure that is suitable for the Arnoldi method [28] applied to the operator formulation, and our derivation does not require a spatial discretization at this stage. We show that when the Arnoldi method for the companion-like operator formulation is initialized in a particular way, each iteration yields the same result as would be obtained with an infinite truncation parameter $N$. We further exploit the structure of the Arnoldi method applied to the companion-like formulation so that the iterates of the algorithm are represented as arrays of $L_2^w([a,b])$ functions. The abstract algorithm presented in Section 2 can, in principle, find solutions to (1.1) with arbitrary accuracy, with the main computational cost being the solution of an inhomogeneous differential equation derived from $M(\mu)$ in every iteration.

Since our algorithm derived in Section 2 is, in theory, based on iterating with functions in $L_2^w([a,b])$, and since the algorithm does not involve a spatial discretization, we are free to choose the representation of the functions. In Section 3 we present an adaptive multi-level representation suitable to be combined with the algorithm of Section 2. Each iterate is represented via the coefficients of a Chebyshev expansion whose length is automatically adapted to achieve machine precision. Details for some of the common $\Delta_i$-operations (like differentiation and pointwise multiplication) are also given in Section 3. In Section 4 we demonstrate the performance of our algorithm by three numerical examples.

Our approach of adaptive representation of functions together with an adaptive resolution of differential operators is clearly inspired by the Chebfun system [3] with its chebop functionality [12]. The idea to carry out iterations with functions has been presented in other settings. A variant of GMRES for functions is given in [26], where the functions are represented using Chebfun [3]. See also the discussion of infinite-dimensional numerical linear algebra in [17].

Apart from the notation introduced above, we use the following conventions. Calligraphic style will be used to denote operators; in particular, $\mathcal{I}$ will denote the identity operator and $\mathcal{O}$ the zero operator. The set of one-dimensional (or $k$-dimensional) arrays of functions will be denoted by $L_2^w([a,b])^N$ (or $L_2^w([a,b])^{N \times k}$). The weighted 2-norm associated with a function $f \in L_2^w([a,b])^N$ will be denoted by $\|f\|_w$. The partial derivative with respect to $\lambda$ will be denoted by $(\cdot)'$, the second partial derivative with respect to $\lambda$ by $(\cdot)''$, etc.

2. The infinite Arnoldi method in an abstract PDE setting.

2.1. Truncated Taylor expansion. The derivation of our algorithm is based on a truncated Taylor-like expansion of the operator $M$ around a given point $\mu \in \mathbb{C}$. Given an integer $N$, let the truncated operator $M_N$ be defined by

$M_N(\lambda) := M(\mu) + \frac{\lambda - \mu}{1!}\, M^{(1)}(\mu) + \cdots + \frac{(\lambda - \mu)^N}{N!}\, M^{(N)}(\mu),$

with the operators $M^{(j)}(\mu)$ being analogues of the $j$-th derivative of $M$ evaluated at $\mu$,

$M^{(j)}(\mu) := g_1^{(j)}(\mu)\, \Delta_1 + g_2^{(j)}(\mu)\, \Delta_2 + \cdots + g_m^{(j)}(\mu)\, \Delta_m.$

Accordingly, we define a Taylor-like expansion for the boundary conditions,

$c_{i,N}(\lambda, f) := c_i(\mu, f) + \frac{\lambda - \mu}{1!}\, c_i'(\mu, f) + \cdots + \frac{(\lambda - \mu)^N}{N!}\, c_i^{(N)}(\mu, f), \quad i = 1, \dots, k.$

We now consider the truncated operator eigenproblem

(2.1a) $M_N(\lambda_N)\, f_N = 0,$

(2.1b) $c_{i,N}(\lambda_N, f_N) = 0, \quad i = 1, \dots, k,$

with solution $(\lambda_N, f_N)$. This eigenproblem approximates (1.1) in the sense that the residual of $(\lambda_N, f_N)$ vanishes as $N \to \infty$. This is summarized in the following theorem.

THEOREM 2.1 (Convergence of operator Taylor-like expansion). Let $\{(\lambda_N, f_N)\}_{N=1}^{\infty}$ denote a sequence of solutions to (2.1) with $f_N \in L_2^w([a,b])$ and $\lambda_N \in B(\mu, r)$ for all $N = 1, 2, \dots$. Moreover, suppose that these solutions are convergent in the $L_2^w$ norm, i.e., $(\lambda_N, f_N) \to (\lambda_*, f_*)$. Also suppose that $\Delta_i f_N$ and $C_{i,j} f_N$ are convergent in the $L_2^w$ norm for all $i, j$ as $N \to \infty$. Then there exist positive constants $\gamma$ and $\beta < 1$ independent of $N$ such that

$\|M(\lambda_N)\, f_N\|_w \le \gamma \beta^N \quad \text{and} \quad |c_i(\lambda_N, f_N)| \le \gamma \beta^N, \quad i = 1, \dots, k.$

Proof. Since the functions $g_i$ ($i = 1, \dots, m$) are assumed to be analytic in a neighborhood of $\overline{B(\mu, r)}$, the complex Taylor theorem asserts that

$g_i(\lambda) = \sum_{j=0}^{N} \frac{g_i^{(j)}(\mu)}{j!}\, (\lambda - \mu)^j + R_{i,N}(\lambda),$

where the remainder term can be expressed via the Cauchy integral formula

$R_{i,N}(\lambda) = \frac{(\lambda - \mu)^{N+1}}{2\pi \mathrm{i}} \int_{\Gamma} \frac{g_i(\zeta)}{(\zeta - \mu)^{N+1}\, (\zeta - \lambda)}\, d\zeta,$

and $\Gamma$ can be taken as a circular contour with center $\mu$ and radius $r > |\lambda - \mu|$. With $M_{i,r} := \max_{\zeta \in \Gamma} |g_i(\zeta)|$, we obtain the standard Cauchy estimate

$|R_{i,N}(\lambda)| \le \frac{M_{i,r}}{1 - \hat{\beta}}\, \hat{\beta}^{\,N+1}$

with $|\lambda - \mu|/r \le \hat{\beta} < 1$. Consequently,

(2.2) $\|M(\lambda_N)\, f_N\|_w = \big\| \big(M(\lambda_N) - M_N(\lambda_N)\big) f_N \big\|_w \le \sum_{i=1}^{m} |R_{i,N}(\lambda_N)|\, \|\Delta_i f_N\|_w \le \frac{\hat{\beta}^{\,N+1}}{1 - \hat{\beta}} \sum_{i=1}^{m} M_{i,r}\, \|\Delta_i f_N\|_w.$

The conclusion about the bound on $\|M(\lambda_N)\, f_N\|_w$ now follows from the fact that $\Delta_i f_N$ is assumed to be convergent (so that $\|\Delta_i f_N\|_w$ is bounded). The conclusion about the bound on the boundary condition residuals follows from a completely analogous argument. The constants $\beta$ and $\gamma$ are formed by maximizing the computed bounds, which are all of the form (2.2).

REMARK 2.2. Theorem 2.1 illustrates that the residuals will decrease when $N$ is sufficiently large and eventually approach zero as $N \to \infty$. The conclusion holds under the assumption that $(\lambda_N, f_N)$ converges to a pair $(\lambda_*, f_*)$. Despite this, note that the operators under consideration are not necessarily bounded, and therefore Theorem 2.1 does not necessarily imply that $\|M(\lambda_*)\, f_*\|_w = 0$. For example, suppose that $M(\lambda_*) = \partial/\partial x$ and consider a situation where $(\lambda_N, f_N)$ is a solution to the truncated problem and $f_N(x) = f_*(x) + \frac{1}{N} \sin(Nx)$. Then $f_N \to f_*$ but $M(\lambda_*)\, f_N$ will not converge to zero as $N \to \infty$. In such a situation, a discretize-first approach could not be expected to give meaningful results either. When $f_*$ and all $f_N$ are sufficiently smooth, this is unlikely to occur, and our numerical experiments in Section 4 suggest that such a situation would be rather artificial.

2.2. Operator companion linearization. From the above discussion it follows that one can approximate the original operator problem (1.1) by an operator problem where the coefficients in the operator and the boundary conditions are polynomials in $\lambda$. This is essentially an operator version of what is commonly called a polynomial eigenvalue problem [23, 29], and such problems are often analyzed and solved by the companion linearization technique. There are many types of companion linearizations [24], but for the purpose of this paper, a particular companion linearization is most suitable.

We first define an operator $\mathcal{A}_N$ acting on $L_2^w([a,b])^N$ such that

(2.3) $\mathcal{A}_N \begin{pmatrix} \varphi_1 \\ \varphi_2 \\ \vdots \\ \varphi_N \end{pmatrix} := \begin{pmatrix} M(\mu)\, \varphi_1 \\ \varphi_2 \\ \vdots \\ \varphi_N \end{pmatrix}$

and an operator $\mathcal{B}_N$ with action defined by

(2.4) $\mathcal{B}_N \begin{pmatrix} \varphi_1 \\ \varphi_2 \\ \vdots \\ \varphi_N \end{pmatrix} := \begin{pmatrix} -M^{(1)}(\mu)\, \varphi_1 - \tfrac{1}{2} M^{(2)}(\mu)\, \varphi_2 - \cdots - \tfrac{1}{N} M^{(N)}(\mu)\, \varphi_N \\ \varphi_1 \\ \tfrac{1}{2} \varphi_2 \\ \vdots \\ \tfrac{1}{N-1} \varphi_{N-1} \end{pmatrix}.$

Using these two operators, we can formulate the following generalized operator eigenproblem with boundary conditions:

(2.5a) $\mathcal{A}_N \varphi = (\lambda - \mu)\, \mathcal{B}_N \varphi,$

(2.5b) $c_i(\mu, \varphi_1) + c_i'(\mu, \varphi_2) + \cdots + c_i^{(N-1)}(\mu, \varphi_N) = -\frac{\lambda - \mu}{N}\, c_i^{(N)}(\mu, \varphi_N), \quad i = 1, \dots, k.$

This particular companion linearization is useful because, for any $M \ge N$, the leading $N \times N$ blocks of the operators $\mathcal{A}_M$ and $\mathcal{B}_M$ consist precisely of $\mathcal{A}_N$ and $\mathcal{B}_N$. This will be implicitly exploited in Section 2.3. The companion operator problem (2.5) is equivalent to the $M_N$-problem (2.1) in the following sense.

THEOREM 2.3. Consider $\varphi = (\varphi_1, \dots, \varphi_N)^T \in L_2^w([a,b])^N$ with $\varphi_1 = f$. The companion linearization (2.5) and the truncated Taylor expansion (2.1) are equivalent in the sense that the following two statements are equivalent.

a) The pair ([lambda], [phi]) is a solution to (2.5).

b) The pair ([lambda], f) is a solution to (2.1).

Proof. Consider a solution $\varphi = (\varphi_1, \dots, \varphi_N)^T$ to (2.5). Then the last $N - 1$ rows of (2.5a) imply that

(2.6) $\varphi_j = \frac{(\lambda - \mu)^{j-1}}{(j-1)!}\, \varphi_1, \quad j = 2, \dots, N.$

By inserting (2.6) into the first row of (2.5a), we have

(2.7) $M(\mu)\, \varphi_1 + \frac{\lambda - \mu}{1!}\, M^{(1)}(\mu)\, \varphi_1 + \cdots + \frac{(\lambda - \mu)^N}{N!}\, M^{(N)}(\mu)\, \varphi_1 = M_N(\lambda)\, \varphi_1 = 0.$

Similarly, (2.5b) implies with (2.6) and the linearity of $c_i(\lambda, f)$ with respect to $f$ that

(2.8) $c_i(\mu, \varphi_1) + \frac{\lambda - \mu}{1!}\, c_i'(\mu, \varphi_1) + \cdots + \frac{(\lambda - \mu)^N}{N!}\, c_i^{(N)}(\mu, \varphi_1) = c_{i,N}(\lambda, \varphi_1) = 0, \quad i = 1, \dots, k.$

The forward implication now follows from the fact that (2.7) is identical to (2.1a) and that (2.8) is identical to (2.1b).

In order to show the converse, suppose $(\lambda, f)$ is a solution to (2.1) and define $\varphi_1 = f$ and $\varphi_i$ (for $i = 2, \dots, N$) as in (2.6). The relation (2.7) holds because of (2.1), and a similar argument applies to the constraints (2.8).

2.3. The infinite Arnoldi algorithm. Now note that (2.5) is a linear operator eigenproblem for the variable $\theta = (\lambda - \mu)^{-1}$. Linear eigenvalue problems can be solved in a number of ways, the Arnoldi method [28] being one of the most popular procedures. We will now show how to formulate the Arnoldi method (1) for (2.5) in a way that exploits the structure, thereby avoiding the traditional approach of first discretizing the problem. This is similar to the "Taylor version" of the infinite Arnoldi method for nonlinear matrix eigenvalue problems described in [19].

Conceptually, it is straightforward to use the Arnoldi method in an operator setting, and this has been done to study its convergence, e.g., in [9, 21]. In order to apply the Arnoldi algorithm to the formulation (2.5), we will need

* a procedure for solving

(2.9a) $\mathcal{A}_N \varphi = \mathcal{B}_N \psi,$

(2.9b) $c_i(\mu, \varphi_1) + \cdots + c_i^{(N-1)}(\mu, \varphi_N) = -\frac{1}{N}\, c_i^{(N)}(\mu, \psi_N), \quad i = 1, \dots, k,$

for the unknown $\varphi \in L_2^w([a,b])^N$, where $\psi \in L_2^w([a,b])^N$ is given, and

* a scalar product on $L_2^w([a,b])^N$.

It turns out that the structure of $\mathcal{A}_N$ and $\mathcal{B}_N$ is particularly well suited for the Arnoldi method. Suppose we start the Arnoldi method with a function $\psi \in L_2^w([a,b])^N$ of the form

(2.10) $\psi = (\psi_1, 0, \dots, 0)^T,$

where $\psi_1 \in L_2^w([a,b])$. In the first step of the Arnoldi method, we need to solve (2.9). By inspection of the structure of $\mathcal{A}_N$ and $\mathcal{B}_N$, the solution will be of the form

$\varphi = (\varphi_1, \psi_1, 0, \dots, 0)^T.$

Hence, the action corresponding to the nonzero part of the solution of (2.9) is independent of N if we start with a vector consisting of just one leading nonzero block. More generally, the solution of (2.9) can be characterized as follows.

THEOREM 2.4. Consider a given function $\psi \in L_2^w([a,b])^N$ with the structure

(2.11) $\psi = (\psi_1, \dots, \psi_p, 0, \dots, 0)^T,$

where $\psi_1, \dots, \psi_p \in L_2^w([a,b])$. Consider the operators $\mathcal{A}_N$ and $\mathcal{B}_N$ defined by (2.3) and (2.4) for any $N > p$. Suppose that $\varphi \in L_2^w([a,b])^N$ is a solution to the operator problem (in the space $L_2^w([a,b])^N$)

(2.12a) $\mathcal{A}_N \varphi = \mathcal{B}_N \psi,$

(2.12b) $c_i(\mu, \varphi_1) + \cdots + c_i^{(N-1)}(\mu, \varphi_N) = -\frac{1}{N}\, c_i^{(N)}(\mu, \psi_N), \quad i = 1, \dots, k.$

Then this solution satisfies

(2.13) $\varphi = \Big(\varphi_1, \frac{\psi_1}{1}, \frac{\psi_2}{2}, \dots, \frac{\psi_p}{p}, 0, \dots, 0\Big)^T,$

where $\varphi_1 \in L_2^w([a,b])$ is the solution to the operator problem (in $L_2^w([a,b])$)

(2.14a) $M(\mu)\, \varphi_1 = -M^{(1)}(\mu)\, \psi_1 - \frac{1}{2}\, M^{(2)}(\mu)\, \psi_2 - \cdots - \frac{1}{p}\, M^{(p)}(\mu)\, \psi_p,$

(2.14b) $c_i(\mu, \varphi_1) = -c_i'(\mu, \psi_1) - \frac{1}{2}\, c_i''(\mu, \psi_2) - \cdots - \frac{1}{p}\, c_i^{(p)}(\mu, \psi_p), \quad i = 1, \dots, k.$

Proof. The last $N - 1$ rows of (2.12a) imply that $\varphi$ has the structure (2.13). Equation (2.14a) follows directly from the insertion of (2.13) and (2.11) into the first row of (2.12a). Note that the terms $M^{(j)}(\mu)\, \psi_j$ vanish for $j > p$ since $\psi_j = 0$. Similarly, by inserting the structure of $\varphi$ given in (2.13) and of $\psi$ given in (2.11) into equation (2.12b), several terms vanish and (2.14b) is verified.

From the previous theorem we make the following key observation.

The nonzero part of the solution to (2.12) for a function $\psi$ with structure (2.11) is independent of $N$ as long as $N > p$.

By only considering functions of the structure (2.11) we can, in a sense, take $N \to \infty$ without changing the nonzero part of the solution. With $N \to \infty$, the truncation error in the Taylor expansion vanishes and (2.1) corresponds to the original problem (1.1) (under the conditions stated in Theorem 2.1 and Remark 2.2). In other words, our method has the remarkable property that at any iteration it gives the same results as if the Arnoldi method were run on the untruncated operator linearization. Hence, the truncation parameter can formally be considered as being $N = \infty$.

The key idea for an implementation is to start the Arnoldi algorithm with an array of functions of the structure (2.10). Due to the fact that the Arnoldi method essentially involves solutions of (2.12) at every iteration combined with a Gram-Schmidt orthogonalization, all arrays of functions will be of the structure (2.11). This naturally leads to a growth in the basis matrix in the Arnoldi algorithm not only by a column but also by a row at each iteration. The basis matrix after k iterations will be represented by

(2.15) $V_k = \begin{pmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,k} \\ & v_{2,2} & \cdots & v_{2,k} \\ & & \ddots & \vdots \\ & & & v_{k,k} \end{pmatrix},$

where $v_{i,j} \in L_2^w([a,b])$ and the blank entries are zero functions.

In the Arnoldi algorithm we also need a scalar product. For the space $L_2^w([a,b])^N$ it appears natural to use the aggregated scalar product associated with a scalar product $\langle \cdot, \cdot \rangle_w$ on $L_2^w([a,b])$, i.e., given $f, g \in L_2^w([a,b])^N$, we define

$\langle f, g \rangle_w := \langle f_1, g_1 \rangle_w + \cdots + \langle f_N, g_N \rangle_w,$

where $f = (f_1, \dots, f_N)^T$ and $g = (g_1, \dots, g_N)^T$. The scalar product $\langle \cdot, \cdot \rangle_w$ can be tailored to the problem at hand, but we will propose a particularly convenient one in Section 3. A version of the Arnoldi algorithm that exploits the structure of the involved variables is given in Algorithm 1 below and referred to as the infinite Arnoldi method (for nonlinear operator eigenproblems).

REMARK 2.5 (Existence). Algorithm 1 defines a sequence of function iterates uniquely only if there exists a unique solution to (2.14). Existence issues will not be studied in detail here and should be established in a problem-specific manner. For the numerical examples we present in Section 4, existence and uniqueness of the solutions of (2.14) are guaranteed by the well-posedness of the considered differential equations. The assumption that (2.14) has a solution in $D$, the domain of $M$, is natural, though it is a restriction on the class of operator problems and allowed starting functions (which will be polynomials in our implementation, so this is not a practical restriction). Roughly speaking, this assumption means that only problems with sufficiently smooth solutions can be solved with our algorithm.

3. Multi-level spatial resolution. The main computational cost in a practical implementation of our nonlinear eigensolver (Algorithm 1) lies in the solution of a differential equation (2.14) at every Arnoldi iteration. In this section we propose a polynomial spectral method, suitable to be used in this setting, for solving differential equations with analytic (or sufficiently smooth) solutions defined on an interval $[a,b]$. Because the Arnoldi method can be sensitive to inexact computations, we aim to solve these equations "exactly", that is, with an error close to machine precision. Our approach is inspired by the automatic grid refinement idea implemented in the Chebfun system [3] with its chebop functionality [12], but it differs from Chebfun in the representation of the polynomials. The Chebfun system is based on interpolation polynomials represented on a Chebyshev grid with an adaptively chosen number of grid points, whereas we prefer to represent the polynomials by their coefficients in the Chebyshev basis. In other words, our approach is based on the tau method explained in Subsection 3.2 below instead of a collocation (or pseudospectral) method. The reason for our choice is that with a coefficient representation of polynomials, all operations required in our Arnoldi method can be implemented very efficiently without resampling function values between non-matching Chebyshev grids.

Algorithm 1 Infinite Arnoldi method for nonlinear operator eigenproblems (1.1).

Require: Starting function $v_{1,1} \in L_2^w([a,b])$

1: $v_{1,1} = v_{1,1} / \sqrt{\langle v_{1,1}, v_{1,1} \rangle_w}$
2: for $k = 1, 2, \dots, k_{\max}$ do
3:   Compute $\varphi_2, \dots, \varphi_{k+1}$ from (2.13) with $\psi_1 = v_{1,k}, \dots, \psi_k = v_{k,k}$ and $p = k$.
4:   Solve the inhomogeneous differential equation (2.14) for $\varphi_1$ with $\psi_1 = v_{1,k}, \dots, \psi_k = v_{k,k}$ and $p = k$.
5:   for $i = 1, \dots, k$ do
6:     $h_{i,k} = \langle \varphi_1, v_{1,i} \rangle_w + \cdots + \langle \varphi_i, v_{i,i} \rangle_w$
7:     for $j = 1, \dots, i$ do
8:       $\varphi_j = \varphi_j - h_{i,k}\, v_{j,i}$
9:     end for
10:  end for
11:  $h_{k+1,k} = \sqrt{\langle \varphi_1, \varphi_1 \rangle_w + \cdots + \langle \varphi_{k+1}, \varphi_{k+1} \rangle_w}$
12:  for $j = 1, \dots, k+1$ do
13:    $v_{j,k+1} = \varphi_j / h_{k+1,k}$
14:  end for
15: end for
16: Compute the eigenvalues $\theta_1, \dots, \theta_{k_{\max}}$ of the Hessenberg matrix $H$ with elements $H_{i,j} = h_{i,j}$ for $i, j = 1, \dots, k_{\max}$.
17: Return the eigenvalue approximations $\lambda_i = \mu + 1/\theta_i$ of (1.1).


3.1. Coefficient spatial representation. Let $[a,b]$ be a given interval. In this section we will use the convention that with every occurrence of the variable $x \in [a,b]$ we identify the variable $y = (2x - b - a)/(b - a) \in [-1, 1]$. Any polynomial $P_m$ of degree at most $m$ can be represented as

$P_m(x) = \sum_{j=0}^{m} c_j\, T_j(y), \quad x \in [a,b],$

with the well-known Chebyshev polynomials $T_j(y) = \cos(j \arccos(y))$. Recall that these polynomials satisfy the recurrence

$T_0(y) = 1, \quad T_1(y) = y, \quad T_{j+1}(y) = 2y\, T_j(y) - T_{j-1}(y),$

and are orthogonal with respect to the weighted $L_2^w$ scalar product

$\langle f, g \rangle_w = \frac{2}{\pi} \int_{-1}^{1} \frac{f(y)\, \overline{g(y)}}{\sqrt{1 - y^2}}\, dy,$

more precisely,

$\langle T_j, T_k \rangle_w = \begin{cases} 0, & j \ne k, \\ 2, & j = k = 0, \\ 1, & j = k \ge 1. \end{cases}$

In contrast to the more popular spectral collocation approach [5, 11, 30], where a polynomial $P_m$ is represented by its function values on a Chebyshev grid with nodes $y_j = \cos(\pi j/m)$ (for $j = 0, 1, \dots, m$), we here prefer to represent $P_m$ by its Chebyshev coefficients $c_j$. Given two polynomials $P_m(x) = \sum_{j=0}^{m} c_j T_j(y)$ and $Q_n(x) = \sum_{j=0}^{n} d_j T_j(y)$ of possibly different degrees, the coefficient representation allows us to compute linear combinations

$\alpha P_m(x) + \beta Q_n(x) = \sum_{j=0}^{\max(m,n)} (\alpha c_j + \beta d_j)\, T_j(y)$

without resampling function values of $P_m$ or $Q_n$ on a refined Chebyshev grid. (We assume that coefficients $c_j$ or $d_j$ with $j$ exceeding the degree of the associated polynomial are equal to 0.) Moreover, it is easily verified that the Euclidean scalar product between coefficient vectors (with the 0-th coefficients scaled by $\sqrt{2}$) corresponds to the weighted $L_2^w$ scalar product between the corresponding polynomials:

$\langle P_m, Q_n \rangle_w = 2 c_0 \bar{d}_0 + \sum_{j=1}^{\max(m,n)} c_j \bar{d}_j.$

Note that our infinite Arnoldi method is rich in scalar product computations, and this relation allows for an efficient implementation.
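For illustration, this relation can be evaluated directly on coefficient vectors; the following small Python/NumPy function (the name cheb_inner is our choice, not code from the paper) is one possible realization.

import numpy as np

def cheb_inner(c, d):
    # <P, Q>_w from Chebyshev coefficient vectors c and d, using
    # <T_0, T_0>_w = 2 and <T_j, T_j>_w = 1 for j >= 1; coefficients
    # beyond a vector's length count as zero.
    n = max(len(c), len(d))
    c = np.pad(np.asarray(c, dtype=complex), (0, n - len(c)))
    d = np.pad(np.asarray(d, dtype=complex), (0, n - len(d)))
    return 2.0 * c[0] * np.conj(d[0]) + np.dot(c[1:], np.conj(d[1:]))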

3.2. The Chebyshev tau method with automated degree adaptation. Given a polynomial $P_m$, in spectral methods one represents linear operations like differentiation $P_m \mapsto P_m'$, pointwise multiplication $P_m(x) \mapsto f(x)\, P_m(x)$, or the nonlocal reversal operation $P_m(x) \mapsto P_m(a + b - x)$ by matrix-vector products with spectral matrices. The tau method (invented by Lanczos [22]; see also [5, Chapter 21], [18, Section 7.2]) is a spectral method for solving differential equations using the coefficient representation of polynomials, where the coefficients are determined such that the residual of the approximate solution is orthogonal to as many basis polynomials as possible. The Chebyshev tau method is a tau method where the Chebyshev polynomials are used as a basis.

In the following we give an exemplary list of three coefficient maps representing the action of linear operators on a polynomial $P_m(x) = \sum_{j=0}^{m} c_j T_j(y)$; a code sketch of these maps follows the list. These maps will be needed in order to apply the algorithm to the examples in Section 4. For the identities involving Chebyshev polynomials used in the derivation, we refer to [14, Section 3].

* Differentiation. By the relation for the derivative of a Chebyshev polynomial $T_j(y)$,

$T_j'(y) = \begin{cases} 2j\, \big(T_{j-1}(y) + T_{j-3}(y) + \cdots + T_1(y)\big), & j \text{ even}, \\ 2j\, \big(T_{j-1}(y) + T_{j-3}(y) + \cdots + T_2(y)\big) + j\, T_0(y), & j \text{ odd}, \end{cases}$

we deduce that the matrix mapping the Chebyshev coefficients of $P_m$ to the Chebyshev coefficients of $P_m'$ is

$D_m = \frac{2}{b-a} \begin{pmatrix} 0 & 1 & 0 & 3 & 0 & 5 & \cdots \\ & 0 & 4 & 0 & 8 & 0 & \cdots \\ & & 0 & 6 & 0 & 10 & \cdots \\ & & & 0 & 8 & 0 & \cdots \\ & & & & \ddots & \ddots & \\ & & & & & & 0 \end{pmatrix} \in \mathbb{R}^{(m+1) \times (m+1)},$

where the factor $2/(b-a)$ accounts for the transformation between $x \in [a,b]$ and $y \in [-1,1]$. Higher order derivatives are obtained by taking corresponding powers of the differentiation matrix $D_m$. Note that, in contrast to spectral collocation matrices (which act on function values rather than coefficients), the matrix $D_m$ is not dense.

* Multiplication. Let $Q_n(x) = \sum_{j=0}^{n} d_j T_j(y)$ be a polynomial. From the relation

$T_j(y)\, T_k(y) = \frac{1}{2}\big(T_{j+k}(y) + T_{|j-k|}(y)\big),$

it is easily verified that the matrix mapping the Chebyshev coefficients of $P_m$ to the Chebyshev coefficients of $P_m Q_n$ is

$M_m(Q_n) = \frac{1}{2} \begin{pmatrix} 2d_0 & d_1 & d_2 & \cdots \\ d_1 & 2d_0 & d_1 & \ddots \\ d_2 & d_1 & 2d_0 & \ddots \\ \vdots & \ddots & \ddots & \ddots \end{pmatrix} + \frac{1}{2} \begin{pmatrix} 0 & 0 & 0 & \cdots \\ d_1 & d_2 & d_3 & \cdots \\ d_2 & d_3 & d_4 & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix} \in \mathbb{C}^{(m+n+1) \times (m+1)}$

(with $d_j := 0$ for $j > n$), which is, up to a rank-1 modification, the sum of a Toeplitz matrix and a Hankel matrix.

* Reversal. Using the fact that $T_j(y) = (-1)^j\, T_j(-y)$, it is easily verified that the matrix

$R_m = \mathrm{diag}(1, -1, 1, -1, \dots) \in \mathbb{R}^{(m+1) \times (m+1)}$

maps the coefficients of $P_m(x)$ to the coefficients of the "reversed" (right-to-left) polynomial $P_m(a + b - x)$.

* Combinations of the above. Note that the above operators can be combined in an additive and multiplicative fashion by adding and multiplying the corresponding matrices. For example, the variable coefficient second-order operator $\frac{d}{dy}\big(Q(y)\, \frac{d}{dy}\, \cdot \big)$ can be approximated as $D_{m+n}\, M_m(Q_n)\, D_{m+n}$, provided that $Q(y)$ can be (uniformly) approximated by a Chebyshev expansion $Q_n$ of moderate degree $n$. For nonsmooth functions $Q(y)$, however, a global Chebyshev expansion may fail to converge (e.g., in the case of jumps causing the Gibbs phenomenon) or converge slowly (e.g., in the case of discontinuous derivatives); see [5, 30, 31]. Both of these cases would require a more sophisticated approach, such as, e.g., piecewise polynomial representations.
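The following Python/NumPy sketch (function names are ours) assembles the three coefficient maps described above; it favors clarity over efficiency, and the matrices could equally well be stored in a sparse format.

import numpy as np

def cheb_diff(m, a, b):
    # Differentiation map: row 0 has entries k (k odd), rows j >= 1 have
    # entries 2k (k > j, k + j odd); the factor 2/(b - a) accounts for
    # the change of variables between x in [a, b] and y in [-1, 1].
    D = np.zeros((m + 1, m + 1))
    for j in range(m + 1):
        for k in range(j + 1, m + 1):
            if (k - j) % 2 == 1:
                D[j, k] = 2.0 * k if j > 0 else 1.0 * k
    return (2.0 / (b - a)) * D

def cheb_mult(d, m):
    # Multiplication map for Q_n with coefficients d, built entry-wise
    # from T_j T_k = (T_{j+k} + T_{|j-k|}) / 2.
    n = len(d) - 1
    M = np.zeros((m + n + 1, m + 1))
    for k in range(n + 1):
        for j in range(m + 1):
            M[k + j, j] += 0.5 * d[k]
            M[abs(k - j), j] += 0.5 * d[k]
    return M

def cheb_reverse(m):
    # Reversal map P_m(x) -> P_m(a + b - x).
    return np.diag([(-1.0) ** j for j in range(m + 1)])

For example, cheb_diff(m + len(d) - 1, a, b) @ cheb_mult(d, m) realizes the composition "first multiply by $Q_n$, then differentiate" on the coefficients of $P_m$.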

Let $\mathcal{A}$ be a linear operator acting on functions defined on the interval $[a,b]$, and denote by $A_m \in \mathbb{C}^{(m+1) \times (m+1)}$ the spectral matrix mapping the Chebyshev coefficients of a polynomial $P_m$ to the Chebyshev coefficients of $Q_m = \mathcal{A} P_m$,

$\begin{pmatrix} d_0 \\ d_1 \\ \vdots \\ d_m \end{pmatrix} = A_m \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_m \end{pmatrix}.$

(Again we have assumed that coefficients with an index exceeding the degree of a polynomial are set to 0.) Typically the matrix $A_m$ is not invertible. In order to specify $P_m$ uniquely as the solution of the linear system $A_m P_m = Q_m$ for a given right-hand side $Q_m$, a number of constraints, say $k$, need to be imposed on $P_m$. In the tau method this is typically achieved by replacing the last $k$ rows of $A_m$ by row vectors corresponding to the boundary conditions of the differential equation (boundary bordering), e.g.,

$(1, -1, 1, -1, \dots, (-1)^m)$ for a Dirichlet b.c. on the left,

$(1, 1, 1, 1, \dots, 1)$ for a Dirichlet b.c. on the right,

$\frac{2}{b-a}\, (0, 1, -4, 9, \dots, (-1)^{m+1} m^2)$ for a Neumann b.c. on the left,

$\frac{2}{b-a}\, (0, 1, 4, 9, \dots, m^2)$ for a Neumann b.c. on the right

(these rows collect the boundary values $T_j(-1) = (-1)^j$, $T_j(1) = 1$, and $T_j'(\pm 1) = (\pm 1)^{j+1} j^2$, the latter scaled by the chain-rule factor $2/(b-a)$),

and to alter the last $k$ coefficients of $Q_m$, namely $(d_{m-k+1}, \dots, d_m)^T$, to the prescribed boundary values (zeros for homogeneous conditions). The results of these modifications are denoted by $\tilde{A}_m$ and $\tilde{Q}_m$, respectively. This ensures that the polynomial $P_m = \tilde{A}_m^{-1} \tilde{Q}_m$ satisfies the boundary conditions exactly and the residual for the original differential operator is of the form

$Q_m(x) - \mathcal{A} P_m(x) = \sum_{j=m+1-k}^{\infty} e_j\, T_j(y),$

provided that the exact solution $\mathcal{A}^{-1} Q_m$ exists and has a Chebyshev expansion. Lanczos realized that $P_m$ is the exact polynomial solution of a (slightly) perturbed problem, $\mathcal{A} P_m = Q_m + \epsilon_m$ with $\epsilon_m = -\sum_{j=m+1-k}^{\infty} e_j T_j(y)$. Under the condition that $P_m$ converges uniformly, as $m \to \infty$, to a solution function $f$ (the solution of the spectrally discretized differential equation), and that this function $f$ is analytic in a neighborhood of the interval $[a,b]$ (a Bernstein ellipse), it is known that the convergence is geometric (see, e.g., [31, Chapter 8]): for some $\rho > 1$ and $C > 0$, one has

$|f(x) - P_m(x)| \le C \rho^{-m} \quad \text{for all } x \in [a,b].$

If $f$ has no singularities too close to $[a,b]$, then $\rho$ is large enough to achieve fast uniform convergence of $P_m$ towards $f$, indicated by a rapid decay of $P_m$'s Chebyshev coefficients $c_0, c_1, \dots, c_m$. This fact is exploited in the Chebfun system with its chebop functionality for solving operator equations [12], and we will employ a similar rule of thumb: assume that the weighted $L_2^w$ error of a Chebyshev approximant $P_m$ is of about the same order as its trailing Chebyshev coefficient $c_m$ (see also [5, p. 51]). This error estimate allows us to adaptively adjust the degree of $P_m$ such that the solution $P_m$ of $\tilde{A}_m P_m = \tilde{Q}_m$ is likely to be close to $\mathcal{A}^{-1} Q_m$ in a relative error sense; a code sketch follows the three steps below.

1. Choose a number $m$, say $m = 16$.

2. Construct $\tilde{A}_m$ (the spectral matrix with boundary conditions included) and solve the linear system $\tilde{A}_m P_m = \tilde{Q}_m$ for $P_m(x) = \sum_{j=0}^{m} c_j T_j(y)$.

3. If the trailing coefficient $c_m$ is not small enough relative to the norm $\|P_m\|_w$ induced by $\langle \cdot, \cdot \rangle_w$, increase $m$ (e.g., multiply by a factor of 1.5 and round to an integer), and go to Step 2.
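As a minimal illustration of Steps 1-3 combined with boundary bordering, the following sketch adaptively solves the model problem $u'' = q$ on $[a,b]$ with homogeneous Dirichlet conditions. It reuses cheb_diff from the sketch earlier in this subsection; the function and parameter names, and the tolerance, are our assumptions rather than the paper's.

import numpy as np

def tau_solve_dirichlet(rhs_coeffs, a, b, tol=1e-14, m=16):
    # Adaptive Chebyshev tau solve of u'' = q, u(a) = u(b) = 0.
    # rhs_coeffs(m) must return the first m + 1 Chebyshev coefficients of q.
    while True:
        D = cheb_diff(m, a, b)
        A = D @ D                                     # spectral matrix for d^2/dx^2
        q = np.array(rhs_coeffs(m), dtype=float)
        # Boundary bordering: replace the last two rows by the Dirichlet
        # rows T_j(-1) = (-1)^j and T_j(1) = 1, and zero those moments of q.
        A[-2, :] = [(-1.0) ** j for j in range(m + 1)]
        A[-1, :] = 1.0
        q[-2:] = 0.0
        c = np.linalg.solve(A, q)
        if abs(c[-1]) <= tol * np.linalg.norm(c):     # trailing-coefficient heuristic
            return c
        m = int(np.ceil(1.5 * m))                     # Step 3: refine and retry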

Note that more sophisticated error estimates could be developed (for example, by taking into account more than just the last Chebyshev coefficient $c_m$). However, every such estimate will eventually be based on a heuristic. In the numerical experiments described in Section 4, we found the above procedure (Steps 1-3) to perform satisfactorily.

3.3. Implementation. The implementation of our infinite Arnoldi method is straightforward in object-oriented Matlab. All spatial functions $v_{i,j}$ defined on the interval $[a,b]$ are approximated by polynomials $P_{i,j}$ whose degree is adaptively chosen such that the estimate $\|v_{i,j} - P_{i,j}\|_w \lesssim \mathrm{tol}\, \|P_{i,j}\|_w$ holds, where $\mathrm{tol} = 2.2 \times 10^{-16}$. These polynomial representations are stored in a two-dimensional "cell array" (cf. (2.15))

$V = \begin{pmatrix} P_{1,1} & P_{1,2} & \cdots & P_{1,k} \\ & P_{2,2} & \cdots & P_{2,k} \\ & & \ddots & \vdots \\ & & & P_{k,k} \end{pmatrix},$

where each column corresponds to a Krylov basis vector, so that $V$ has an upper triangular structure. The action of the linear companion operator onto a column of $V$ results in a new column of spatial functions, where the number of nonzero components in the input and output columns may differ. Note that a modified Gram-Schmidt orthogonalization of these columns is fast when working with the coefficient representation described above.

4. Examples.

4.1. A differential equation with time delay. We consider a PDE with delay for a function $u : [0, \pi] \times [-\tau, +\infty) \to \mathbb{R}$,

(4.1a) $u_t(x,t) = u_{xx}(x,t) - u(x, t - \tau),$

(4.1b) $u(0, t) = 0,$

(4.1c) $u(\pi, t) = 0,$

an example which has also been considered in [7, Formula (112)]. Employing the ansatz $u(x,t) = f(x)\, e^{\lambda t}$, the PDE (4.1) leads to a nonlinear operator eigenvalue problem of the form (1.1), where

(4.2) $M(\lambda) = -\lambda\, \mathcal{I} + \frac{\partial^2}{\partial x^2} - e^{-\tau\lambda}\, \mathcal{I},$

with boundary conditions

$c_1(\lambda, f) = f(0), \qquad c_2(\lambda, f) = f(\pi).$

In the implementation of our method we need to provide the derivatives of (4.2), which in this case are explicitly given by

$M^{(1)}(\mu) = -\mathcal{I} + \tau e^{-\tau\mu}\, \mathcal{I}, \qquad M^{(k)}(\mu) = -(-\tau)^k e^{-\tau\mu}\, \mathcal{I}, \quad k \ge 2.$

Consequently, in every iteration of our algorithm we need to solve (2.14), which reduces to solving

$\Big(\!-\mu\, \mathcal{I} + \frac{\partial^2}{\partial x^2} - e^{-\tau\mu}\, \mathcal{I}\Big)\, \varphi_1 = (1 - \tau e^{-\tau\mu})\, \psi_1 + e^{-\tau\mu} \sum_{j=2}^{p} \frac{(-\tau)^j}{j}\, \psi_j$

for $\varphi_1$ with boundary conditions $\varphi_1(0) = \varphi_1(\pi) = 0$.

In this first example we have selected $M(\lambda)$ such that the problem can be solved explicitly, as follows. By defining $\gamma := \lambda + e^{-\tau\lambda}$, it is clear from (4.2) that all such $\gamma$ correspond to the eigenvalues of the Laplacian with homogeneous boundary conditions, i.e., $\frac{\partial^2}{\partial x^2} f = \gamma f$ with $c_1(\lambda, f) = f(0) = 0$ and $c_2(\lambda, f) = f(\pi) = 0$. This eigenvalue problem can be solved analytically: the eigenfunctions are $f(x) = \sin(jx)$ with eigenvalues $\gamma = -j^2$ for any positive integer $j$. Hence,

$-j^2 = \lambda + e^{-\tau\lambda}.$

It is straightforward to solve this equation for $\lambda$ by using the Lambert W-function [10]. We find that the eigenvalues of the nonlinear operator eigenvalue problem are given by

$\lambda = -j^2 + \frac{1}{\tau}\, W_l\big(\!-\tau e^{\tau j^2}\big)$

for any $j \in \mathbb{N}_+$ and any $l \in \mathbb{Z}$, where $W_l$ is the $l$-th branch of the Lambert W-function. Note that different eigenvalues can have the same eigenfunction, as the eigenfunctions do not depend on $l$. The exact eigenvalues are shown in Figure 4.1(a). For our infinite Arnoldi procedure we have chosen the target $\mu = -1$, and the starting function $v_{1,1}$ was a polynomial of degree 5 with random (normally distributed) coefficients in the Chebyshev basis. Figure 4.1(a) also displays the approximate eigenvalues after 60 iterations of the infinite Arnoldi method, and Figure 4.1(b) displays the 10 approximate eigenfunctions $f$ to which the method converged first. (Each two of these eigenfunctions coincide.)
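As a quick sanity check of this eigenvalue formula (not one of the paper's experiments), one can verify numerically that each branch of the Lambert W-function produces a root of $-j^2 = \lambda + e^{-\tau\lambda}$. Here scipy.special.lambertw supplies $W_l$, and $\tau = 1$ is an arbitrary choice for the test.

import numpy as np
from scipy.special import lambertw

tau = 1.0                                   # arbitrary delay for this check
for j in (1, 2, 3):
    for l in (0, 1, -1):
        lam = -j**2 + lambertw(-tau * np.exp(tau * j**2), k=l) / tau
        # residual of -j^2 = lambda + exp(-tau * lambda); about 1e-15
        print(j, l, abs(lam + np.exp(-tau * lam) + j**2))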

The error norm for each of the 10 approximate eigenfunctions compared to the exact solution as a function of the number of Arnoldi iterations is shown in Figure 4.1(c) (there are always two error curves overlaying each other). Our spatial discretization was adapted such that the expected truncation error in the Chebyshev expansion is of the order of machine precision. We observe an error decay for each eigenfunction to about the same accuracy level as the number of Arnoldi iterations increases. The residual norm $\|M(\lambda)\, f\|_w$ for each of the 10 approximate eigenpairs $(\lambda, f)$ is shown in Figure 4.1(d) as a function of the number of Arnoldi iterations. Note how the degrees of the Arnoldi vectors grow moderately with each Arnoldi iteration, as depicted in Figure 4.1(e). More precisely, we display here the maximal degree among all polynomials collected in each block Arnoldi vector. This growth is expected because we potentially discover approximations to increasingly "nonsmooth" eigenvectors (i.e., those which are difficult to approximate by polynomials of low degree).

4.2. Vibrating string with boundary control. We now consider a vibrating string on an interval $[0, L]$ with a clamped boundary condition at $x = 0$ and a feedback law at $x = L$. The feedback law is constructed with the goal to damp the vibrations of the string. In practice a feedback control may only be available at a later point in time due to, e.g., a delay in measurement or the time required for calculating the feedback parameters. In such a situation the vibrating string is governed by a PDE with delay for $u : [0, L] \times [-\tau, \infty) \to \mathbb{R}$,

(4.3a) $u_{tt}(x,t) = c^2\, u_{xx}(x,t),$

(4.3b) $u(0, t) = 0,$

(4.3c) $u_x(L, t) = \alpha\, u_t(L, t - \tau),$

where $c$ is the wave speed, $\tau$ is the delay, and $\alpha$ corresponds to a feedback law. See [15, 33] and the references therein for PDEs with delays, in particular the wave equation. In our setting, the eigenvalues associated with (4.3) are described by the $\lambda$-dependent operator

$M(\lambda) = \lambda^2\, \mathcal{I} - c^2\, \frac{\partial^2}{\partial x^2},$

with $\lambda$-dependent boundary conditions

$c_1(\lambda, f) = f(0), \qquad c_2(\lambda, f) = f'(L) - \alpha \lambda e^{-\tau\lambda}\, f(L).$

We now provide the implementation details for this example by specifying how to set up the differential equation (2.14). First note that

$M^{(1)}(\mu) = 2\mu\, \mathcal{I}, \qquad M^{(2)}(\mu) = 2\, \mathcal{I}.$

In our algorithm we require the derivatives of the boundary conditions with respect to $\lambda$, which are explicitly given for $k > 0$ by

$c_1^{(k)}(\mu, f) = 0, \qquad c_2^{(k)}(\mu, f) = -\alpha\, f(L)\, e^{-\tau\mu}\, (-\tau)^{k-1}\, (k - \tau\mu).$

Hence, the specialization of (2.14) to this example is, for $p = k > 1$,

(4.4a) $\mu^2\, \varphi_1(x) - c^2\, \varphi_1''(x) = -2\mu\, \psi_1(x) - \psi_2(x),$

(4.4b) $\varphi_1(0) = 0,$

(4.4c) $\varphi_1'(L) - \alpha \mu e^{-\tau\mu}\, \varphi_1(L) = \alpha e^{-\tau\mu} \sum_{j=1}^{k} \frac{(-\tau)^{j-1}\, (j - \tau\mu)}{j}\, \psi_j(L),$

where the functions $\psi_1, \dots, \psi_k$ are given and $\varphi_1 \in L_2^w([a,b])$ has to be computed. When $p = k = 1$, i.e., in the first iteration, the term $\psi_2$ should be set to zero in the inhomogeneous term of (4.4a), whereas (4.4b) and (4.4c) remain the same. Note that (4.4) is just a second order inhomogeneous differential equation with one Dirichlet and one Robin boundary condition.
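For concreteness, a possible assembly of the bordered tau system for (4.4) is sketched below, reusing cheb_diff from Subsection 3.2. The function name solve_44 and the choices of $L$ and of a fixed degree $m$ are ours; $\alpha = 1$, $c = 1$, $\tau = 0.1$, and the target $\mu = -1$ follow this section.

import numpy as np

alpha, c, tau, L, mu = 1.0, 1.0, 0.1, 1.0, -1.0     # L = 1 is an arbitrary choice

def solve_44(psis, m):
    # psis: Chebyshev coefficient vectors of psi_1, ..., psi_k on [0, L],
    # each padded to length m + 1; returns the coefficients of phi_1.
    k = len(psis)
    D = cheb_diff(m, 0.0, L)
    A = (mu**2 * np.eye(m + 1) - c**2 * (D @ D)).astype(complex)  # (4.4a), left
    rhs = np.array(-2.0 * mu * psis[0] - (psis[1] if k > 1 else 0.0), dtype=complex)
    # Boundary bordering (Subsection 3.2):
    A[-2, :] = [(-1.0) ** j for j in range(m + 1)]   # (4.4b): phi_1(0) = 0
    A[-1, :] = (2.0 / L) * np.arange(m + 1) ** 2 \
               - alpha * mu * np.exp(-tau * mu)      # (4.4c): Robin row at x = L
    rhs[-2] = 0.0
    rhs[-1] = alpha * np.exp(-tau * mu) * sum(       # (4.4c), right-hand side
        (-tau) ** (j - 1) * (j - tau * mu) / j * psis[j - 1].sum()
        for j in range(1, k + 1))
    return np.linalg.solve(A, rhs)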

In Figure 4.2 we visualize the computed approximate eigenvalues and (complex) eigenvectors of $M$, as well as the decay of the residual norms $\|M(\lambda)\, f\|_w$ for the first 10 approximate eigenpairs with $\lambda$ closest to the target $\mu = -1$. The involved constants have been chosen as $\alpha = 1$, $c = 1$, and $\tau = 0.1$. The infinite Arnoldi method performs well on this example (for which an analytical solution does not seem to be available): after about 45 iterations the first 10 eigenpairs $(\lambda, f)$ are resolved nicely, while the degree of the Arnoldi vectors grows moderately.

4.3. An artificial example. In order to illustrate the broad applicability of our method we now consider the following artificial nonlinear operator eigenvalue problem, which is complex, involves coefficient functions with branch cuts, and contains a non-local operator in space. We use the interval $[a,b] = [0, \pi]$ and the operator

$M(\lambda) = \frac{\partial^2}{\partial x^2} + \lambda\, \mathcal{I} + \mathrm{i}\, (\lambda - \sigma_1)^{1/2}\, \mathcal{R} + \mathrm{i}\, (\lambda - \sigma_2)^{1/2} \sin(x)\, \frac{\partial}{\partial x}$

with boundary conditions

$c_1(\lambda, f) = f(0), \qquad c_2(\lambda, f) = \lambda f(\pi) - f'(\pi).$

Here $\mathcal{R}$ represents the reversal operator $\mathcal{R} : u(x) \mapsto u(\pi - x)$. We let $\sigma_1 = -5$, $\sigma_2 = -10$, and let $\mu = 15$ be the target, so that the algorithm is expected to eventually find all eigenvalues in the disk $B(15, 20)$.

The derivatives of the operator with respect to $\lambda$ are given by

$M^{(1)}(\lambda) = \mathcal{I} + \frac{\mathrm{i}}{2}\, (\lambda - \sigma_1)^{-1/2}\, \mathcal{R} + \frac{\mathrm{i}}{2}\, (\lambda - \sigma_2)^{-1/2} \sin(x)\, \frac{\partial}{\partial x}$

and, for $k \ge 2$,

$M^{(k)}(\lambda) = \mathrm{i}\, \frac{1}{2}\Big(\frac{1}{2} - 1\Big) \cdots \Big(\frac{1}{2} - k + 1\Big) \bigg[ (\lambda - \sigma_1)^{\frac{1}{2} - k}\, \mathcal{R} + (\lambda - \sigma_2)^{\frac{1}{2} - k} \sin(x)\, \frac{\partial}{\partial x} \bigg],$

and the derivatives of the boundary conditions are simply $c_1^{(k)}(\lambda, f) = 0$ for $k \ge 1$, and $c_2^{(1)}(\lambda, f) = f(\pi)$, $c_2^{(k)}(\lambda, f) = 0$ for $k \ge 2$.

The numerical results are illustrated in Figure 4.3. Although the Arnoldi method still performs robustly, convergence is somewhat slower than for the previous two examples (see Figure 4.3(c)). A possible explanation is that the eigenvectors $f$ of this problem have singularities near the interval $[a,b]$ (see how the polynomial degree of the Arnoldi vectors shown in Figure 4.3(d) increases to about 48 immediately after the first iteration), and therefore the Arnoldi method requires more iterations to resolve them.

A beautiful observation from Figure 4.3(a) is that the Arnoldi method starts to find spurious eigenvalues near the boundary of the disk of convergence $B(15, 20)$. (For iteration numbers higher than 50 this effect becomes even more pronounced.) This phenomenon is possibly related to a classical result from approximation theory due to Jentzsch [20], which states that the zeros of a truncated Taylor series have limit points everywhere on the boundary of the disk of convergence $B(\mu, r)$. Note that all our theoretical results are valid only inside $B(\mu, r)$, so that the appearance of spurious eigenvalues outside this set does not contradict the theory. Of course, these spurious eigenvalues have large residuals associated with them, so that they are easily detectable even if the radius of convergence $r = 20$ is unknown. A more detailed investigation of the convergence behavior of the infinite Arnoldi method and the interesting phenomenon of spurious eigenvalues will be the subject of future work.

5. Concluding remarks and outlook. A key contribution of this paper is the formulation of an Arnoldi-type iteration for solving nonlinear operator eigenproblems. Our approach relies on a dynamic representation of the Krylov vectors in the infinite Arnoldi algorithm, which are resolved automatically such that their trailing Chebyshev coefficients are of the order of machine precision, with the aim of computing eigenpairs to very high precision. It would be interesting to see if the spectral method recently proposed in [27] could further improve the accuracy of solutions computed with our algorithm. We have focused on the situation where the functions are of the type $f : [a,b] \to \mathbb{C}$, mostly, but not entirely, for notational convenience. The abstract formulation of the algorithm in Section 2 carries over to higher dimensions, e.g., to functions $f : \mathbb{R}^2 \to \mathbb{C}$. However, in higher dimensions, the automatic adaptation of the spatial resolution advocated in Section 3 becomes more delicate. A suitable function representation for two-dimensional problems highly depends on the geometry of the domain and is outside the scope of this paper. For PDEs with complicated geometries, the finite-element method (FEM) is a popular approach to representing functions. One could, of course, represent functions on such geometries using a (high-order) finite-element basis and carry out Algorithm 1, but it is not clear whether such a FEM-based infinite Arnoldi variant of Algorithm 1 would be computationally feasible (because it requires the solution of a PDE at each iteration).

The treatment of boundary conditions in the presented algorithm is, to our knowledge, somewhat novel and attractive. Note that boundary conditions nonlinear in $\lambda$ can be handled in a general fashion, and their effect is simply propagated into the differential equation (2.14), i.e., the equation to be solved at every iteration. Some boundary conditions nonlinear in $\lambda$ can also be treated in a discretize-first approach, e.g., the derivative could be estimated by one-sided finite differences. We are, however, not aware of a generic procedure to incorporate nonlinear boundary conditions in a discretize-first approach.

We wish to point out that in [19], two variants of the infinite Arnoldi method are presented, and here we worked out the "Taylor version" of this method. An adaptation of the presented algorithm along the lines of the "Chebyshev version" appears feasible, although a completely different reasoning might be needed. We believe that our approach of dynamic representations of the Krylov vectors can be combined with the NLEIGS method presented in [16], which is based on rational interpolation instead of polynomial expansions. Besides these extensions, there are also several theoretical challenges that we wish to investigate in future work. For example, it would be interesting to understand how our special choice of the starting vector influences the convergence of the Arnoldi method, and to characterize in which cases breakdown may appear and how it could be detected and handled.

Acknowledgments. The first author gratefully acknowledges the support of a Dahlquist research fellowship. We also acknowledge the valuable discussions with Prof. Wim Michiels, KU Leuven. We have gained much inspiration from the concepts of the software package Chebfun [3], and the authors acknowledge valuable discussions with Alex Townsend, Nick Trefethen, and the rest of the Chebfun team. We would also like to thank the two anonymous referees whose insightful comments have considerably improved our paper.

REFERENCES

[1] J. APPELL, E. DE PASCALE, AND A. VIGNOLI, Nonlinear Spectral Theory, de Gruyter, Berlin, 2004.

[2] J. ASAKURA, T. SAKURAI, H. TADANO, T. IKEGAMI, AND K. KIMURA, A numerical method for nonlinear eigenvalue problems using contour integrals, JSIAM Letters, 1 (2009), pp. 52-55.

[3] Z. BATTLES AND L. N. TREFETHEN, An extension of MATLAB to continuous functions and operators, SIAM J. Sci. Comput., 25 (2004), pp. 1743-1770.

[4] W. -J. BEYN, An integral method for solving nonlinear eigenvalue problems, Linear Algebra Appl., 436 (2012), pp. 3839-3863.

[5] J. P. BOYD, Chebyshev and Fourier spectral methods, 2nd. ed., Dover Publications, Mineola, NY, 2001.

[6] D. BREDA, S. MASET, AND R. VERMIGLIO, Pseudospectral approximation of eigenvalues of derivative operators with non-local boundary conditions, Appl. Numer. Math., 56 (2006), pp. 318-331.

[7] --, Numerical approximation of characteristic values of partial retarded functional differential equations, Numer. Math., 113 (2009), pp. 181-242.

[8] --, Computation of asymptotic stability for a class of partial differential equations with delay, J. Vib. Control, 16 (2010), pp. 1005-1022.

[9] F. CHATELIN, Spectral Approximation of Linear Operators, Academic Press, New York, 1983.

[10] R. M. CORLESS, G. H. GONNET, D. E. G. HARE, D. J. JEFFREY, AND D. E. KNUTH, On the Lambert W function, Adv. Comput. Math., 5 (1996), pp. 329-359.

[11] T. A. DRISCOLL, Automatic spectral collocation for integral, integro-differential, and integrally reformulated differential equations, J. Comput. Phys., 229 (2010), pp. 5980-5998.

[12] T. A. DRISCOLL, F. BORNEMANN, AND L. N. TREFETHEN, The chebop system for automatic solution of differential equations, BIT, 48 (2008), pp. 701-723.

[13] C. EFFENBERGER AND D. KRESSNER, Chebyshev interpolation for nonlinear eigenvalue problems, BIT, 52 (2012), pp. 933-951.

[14] L. FOX AND I. B. PARKER, Chebyshev Polynomials in Numerical Analysis, Oxford University Press, London, 1968.

[15] M. GUGAT, Boundary feedback stabilization by time delay for one-dimensional wave equations, IMA J. Math. Control Inf., 27 (2010), pp. 189-203.

[16] S. GÜTTEL, R. VAN BEEUMEN, K. MEERBERGEN, AND W. MICHIELS, NLEIGS: a class of robust fully rational Krylov methods for nonlinear eigenvalue problems, Tech. Report, MIMS EPrint 2013.49, Manchester Institute for Mathematical Sciences, The University of Manchester, 2013.

[17] A. C. HANSEN, Infinite-dimensional numerical linear algebra: theory and applications, Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci., 466 (2010), pp. 3539-3559.

[18] J. S. HESTHAVEN, S. GOTTLIEB, AND D. GOTTLIEB, Spectral Methods for Time-dependent Problems, Cambridge University Press, Cambridge, 2007.

[19] E. JARLEBRING, W. MICHIELS, AND K. MEERBERGEN, A linear eigenvalue algorithm for the nonlinear eigenvalue problem, Numer. Math., 122 (2012), pp. 169-195.

[20] R. JENTZSCH, Untersuchungen zur Theorie der Folgen analytischer Funktionen, Acta Math., 41 (1916), pp. 219-251.

[21] A. B. KUIJLAARS, Convergence analysis of Krylov subspace iterations with methods from potential theory, SIAM Rev., 48 (2006), pp. 3-40.

[22] C. LANCZOS, Trigonometric interpolation of empirical and analytical functions, J. Math. Phys. Mass. Inst. Tech., 17 (1938), pp. 123-199.

[23] S. MACKEY, N. MACKEY, C. MEHL, AND V. MEHRMANN, Structured polynomial eigenvalue problems: good vibrations from good linearizations, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 1029-1051.

[24] --, Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 971-1004.

[25] V. MEHRMANN AND H. VOSS, Nonlinear eigenvalue problems: a challenge for modern eigenvalue methods, GAMM Mitt., 27 (2004), pp. 121-152.

[26] S. OLVER, Shifted GMRES for oscillatory integrals, Numer. Math., 114 (2010), pp. 607-628.

[27] S. OLVER AND A. TOWNSEND, A fast and well-conditioned spectral method, SIAM Rev., 55 (2013), pp. 462-489.

[28] Y. SAAD, Variations on Arnoldi's method for computing eigenelements of large unsymmetric matrices, Linear Algebra Appl., 34 (1980), pp. 269-295.

[29] F. TISSEUR AND K. MEERBERGEN, The quadratic eigenvalue problem, SIAM Rev., 43 (2001), pp. 235-286.

[30] L. N. TREFETHEN, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000.

[31] --, Approximation Theory and Approximation Practice, SIAM, Philadelphia, 2013.

[32] H. VOSS, Nonlinear eigenvalue problems, in Handbook of Linear Algebra, L. Hogben, ed., Taylor & Francis, Boca Raton, FL, 2014, pp. 60-1-60-24.

[33] J. WU, Theory and Applications of Partial Functional-Differential Equations, Springer, New York, 1996.

(1) Note that our construction corresponds to a variant also known as the shift-and-invert Arnoldi method, since we actually approximate eigenvalues $\theta = 1/(\lambda - \mu)$. For simplicity we will still refer to this variant as the Arnoldi method.

ELIAS JARLEBRING ([dagger]) AND STEFAN GÜTTEL ([double dagger])

* Received November 27, 2012. Accepted October 29, 2013. Published online on March 17, 2014. Recommended by M. Hochstenbach. The work of S. G. was supported by Deutsche Forschungsgemeinschaft Fellowship GU 1244/1-1.

([dagger]) Department of Mathematics, NA group, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden (eliasj@kth.se).

([double dagger]) The University of Manchester, School of Mathematics, Alan Turing Building, M13 9PL, Manchester, UK (stefan.guettel@manchester.ac.uk).