On difference schemes for quasilinear evolution problems.

Abstract. We review several methods leading to variable-coefficient schemes and/or to exact difference schemes for ordinary differential equations (error elimination; functional fitting; the Principle of Coherence). Necessary and sufficient conditions are given for t-independence of fitted RK coefficients. Conditions for τ-independence are investigated, τ being the time-step. The theory is illustrated by examples. In particular, examples are given for non-uniqueness of exact schemes and for efficient difference schemes which are based on exact schemes and well suited for highly oscillatory ordinary differential systems or for parabolic equations with blow-up solutions.

Key words. difference schemes, time stepping, nonstandard schemes, exact schemes, exponential fitting, functional fitting, Runge-Kutta, collocation methods, review

AMS subject classification. 65L05, 65M06, 65P99

1. Introduction. Time discretization for the numerical solution of initial value problems means that we approximate a continuous dynamical system by a family of discrete dynamical systems. We introduce the additional parameter Δt =: τ and require O(τ^s) convergence for τ → 0, s ≥ 1. If the dynamics of the discrete and continuous systems differ strongly for larger τ, then the step-size τ must be small for satisfactory results. If the dynamics are very similar, τ may be larger and the computations are more efficient. In the ideal case, the step-size is determined by the solution to be computed: by its structure and by the accuracy required. In many applications, for instance in equilibrium computations and turbulence computations in plasma physics, the bounds for the step-size are instead dictated by properties of the numerical method.

Let us look at a very simple example:

(1.1) u' = u^2, u(0) = u_0 > 0,

with solution

(1.2) u(t) = u_0/(1 - t u_0).

This solution ceases to exist when the denominator vanishes, i.e. at its blow-up time T = 1/u_0. We take U_0 = u_0 and compute discrete solutions {U_n}_{n=1}^N.

If we discretize eq. (1.1) with the explicit forward Euler scheme, we obtain

(U_{n+1} - U_n)/τ = U_n^2, or U_{n+1} = U_n + τ U_n^2,

and the iterates exist for all times, independent of the value of U_0: the explicit scheme cannot reproduce blow-up in finite time. Moreover, the step-size τ must be small enough to prevent instability of the scheme.

If we discretize eq. (1.1) with the implicit backward Euler scheme, we obtain

(U_{n+1} - U_n)/τ = U_{n+1}^2,

a quadratic equation for U_{n+1} with the two real roots U^±_{n+1} = (1 ± √(1 - 4τU_n))/(2τ), provided 1 - 4τU_n ≥ 0.

A choice in favor of the value U^-_{n+1} has to be made in each time step: this ensures that we get convergence to a continuous function in the limit τ → 0, and it enforces uniqueness of the solution. It does not prevent, though, that the iterates on the two branches meet and turn complex at a time T(τ) < T(u_0). Real iterates thus cease to exist, but this non-existence happens in a way which is different from blow-up. Moreover, implicit difference schemes have a tendency to turn superstable and thus to produce qualitatively wrong solutions for step-sizes τ which are not small enough.
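The branch selection can be made concrete in code. The following sketch (function name ours) solves the quadratic step equation τU² - U + U_n = 0 of the backward Euler scheme for (1.1) and picks the root U^-_{n+1} consistent with the continuous solution; once the discriminant turns negative, the real iterates cease to exist:

```python
import math

def backward_euler_step(u, tau):
    """One implicit Euler step for u' = u^2: solve tau*U^2 - U + u = 0.
    The root U^- = (1 - sqrt(1 - 4*tau*u)) / (2*tau) tends to u as tau -> 0,
    so it is the branch consistent with the continuous solution."""
    disc = 1.0 - 4.0 * tau * u
    if disc < 0.0:
        raise ValueError("iterates turned complex: no real root exists")
    return (1.0 - math.sqrt(disc)) / (2.0 * tau)

u, tau = 1.0, 0.1
for n in range(5):
    u = backward_euler_step(u, tau)
print(u)   # iterate at t = 0.5; real iterates fail once 1 - 4*tau*U_n < 0
```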

If we discretize eq. (1.1) with the 'nonstandard scheme'

(1.3) (U_{n+1} - U_n)/τ = U_n U_{n+1}, or U_{n+1} = U_n/(1 - τ U_n),

we find that this scheme is exact, i.e. for any step-size τ it reproduces the solution (1.2) without discretization error, as long as nτ < T(u_0) = 1/u_0.
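This exactness is easy to confirm numerically. A minimal sketch in Python (function names ours), comparing forward Euler with the nonstandard scheme (1.3) against the closed-form solution (1.2):

```python
def forward_euler(u0, tau, n):
    """n explicit Euler steps for u' = u^2."""
    u = u0
    for _ in range(n):
        u = u + tau * u * u
    return u

def nonstandard(u0, tau, n):
    """n steps of the nonstandard scheme (1.3): U_{k+1} = U_k/(1 - tau*U_k)."""
    u = u0
    for _ in range(n):
        u = u / (1.0 - tau * u)
    return u

u0, tau, n = 1.0, 0.3, 3                  # blow-up time T = 1/u0 = 1, n*tau = 0.9 < T
exact = u0 / (1.0 - n * tau * u0)
print(abs(nonstandard(u0, tau, n) - exact))    # tiny: rounding error only
print(abs(forward_euler(u0, tau, n) - exact))  # large: pure discretization error
```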

Given an individual differential equation, how does one find an optimal scheme for it? How should nonlinear terms in differential equations be discretized? Attempts have been made to develop a theory of nonstandard schemes 'optimal for individual differential equations' [16]. In the case of eq. (1.1) and f(u) = u^2, however, we notice that

(U_{n+1} - U_n)/τ = U_n U_{n+1} = f(U_n) + f'(U_n)(U_{n+1} - U_n)/2,

and this is a linearly implicit standard scheme, a so-called Rosenbrock-Wanner scheme. Linearly implicit schemes were introduced by Rosenbrock in 1963. Today they are standard in the numerical treatment of stiff differential equations and of differential-algebraic equations [7]. Other 'nonstandard' schemes found in the literature have turned out to be standard as well [15].
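The equivalence can be checked directly: solving the linearly implicit step equation U_{n+1} - U_n = τ[f(U_n) + f'(U_n)(U_{n+1} - U_n)/2] for U_{n+1} and inserting f(u) = u² reproduces the nonstandard scheme (1.3). A small sketch (names ours):

```python
def rosenbrock_step(u, tau, f, df):
    """One linearly implicit (Rosenbrock-type) step for u' = f(u):
    solve (1 - tau*df(u)/2) * (U - u) = tau * f(u) for U."""
    return u + tau * f(u) / (1.0 - 0.5 * tau * df(u))

f  = lambda u: u * u      # right-hand side of (1.1)
df = lambda u: 2.0 * u    # its derivative

u, tau = 1.0, 0.25
lin_implicit = rosenbrock_step(u, tau, f, df)
nonstandard  = u / (1.0 - tau * u)        # scheme (1.3)
print(lin_implicit, nonstandard)          # identical: both give 4/3
```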

In the following we take a 'nonstandard' view of standard schemes. We discuss on which functions given, well-known schemes are exact (on which f(u), on which u(t)?), and we discuss several methods for finding schemes which are exact on given s-dimensional function spaces. Depending on the chosen function space, the schemes have constant coefficients, or coefficients depending on the time t and/or the time-step τ. Necessary and sufficient conditions for t-independence are given, and τ-independence is discussed.

These investigations mostly lead to results for simple quasilinear equations. Exact schemes for simple equations have been used successfully for designing efficient schemes relevant to applications. As examples, Denk-Bulirsch schemes and LeRoux schemes are discussed in this text. Kojouharov-Chen schemes for advection-diffusion-convection equations [8] and structure-preserving schemes for canonical and non-canonical Hamiltonian systems [4] must at least be mentioned. This is an updated, enlarged version of [13].

2. Exact schemes. We start by considering non-autonomous systems

(2.1) u' = f(t, u), u(0) = u_0,

with a smooth function f : (t_1, t_2) × R^q → R^q, q ≥ 1. We assume [0, T] ⊂ (t_1, t_2) ⊂ R and consider one-step schemes

(2.2) U_{n+1} = A(f)(U_n, τ), U_0 = u_0,

for the numerical solution of system (2.1) on the interval [0, T]. Here τ = Δt is the time step, U_n is an approximation to the exact solution u(t_n) at time t_n = nτ, n = 0, 1, 2, ..., and A(f) denotes the evolution map given by the numerical scheme. Note that for implicit methods, the evolution map A(f) requires a nonlinear solve. In this text we assume that the explicit form (2.2) can always be obtained uniquely in exact arithmetic, and we neglect the presence of rounding errors. We also allow numerical methods for which the evolution map A(f) involves derivatives of f with respect to u.

We define the truncation error T(u_0, τ, f) of scheme (2.2) by

T(u_0, τ, f) := (1/τ)(u(τ) - U_1) = (1/τ)(u(τ) - A(f)(u_0, τ)).

Scheme (2.2) is

* of order m on eq. (2.1), if m is the largest integer such that

T(u_0, τ, f) = O(τ^m) for τ → 0

for all smooth f and arbitrary u_0;

* exact on the solution u(t; u_0) of eq. (2.1) for given f, if T(u_0, τ, f) vanishes for arbitrary step-size τ ∈ (0, τ_0], τ_0 ∈ (0, T], τ_0 > 0 small enough;

* exact on eq. (2.1) for given f, if T(u_0, τ, f) vanishes for arbitrary initial value u_0 ∈ R^q and arbitrary step-size τ ∈ (0, τ_0], τ_0 ∈ (0, T].

2.1. Error expansions for constant-coefficient schemes. We now confine ourselves to autonomous scalar initial value problems

(2.3) u' = f(u), u(0) = u_0 ∈ R.

For the analysis we expand T(u_0, τ, f) in a Taylor series in τ,

(2.4) T(u_0, τ, f) = Σ_{j=0}^∞ B_j(f) τ^j.

Scheme (2.2) is of order m on eq. (2.3) if B_j(f) = 0 for all j < m, arbitrary u_0 and all smooth functions f. Scheme (2.2) is exact on eq. (2.3) for a given function f if B_j(f) = 0 for arbitrary u_0 ∈ R and for all j ≥ 0.

LEMMA 2.1. The trapezoidal rule

(2.5) (U_{n+1} - U_n)/τ = (f(U_{n+1}) + f(U_n))/2

is exact on equation (2.3) for those functions f : R → R satisfying

(2.6) f" f + [(f').sup.2] = 0,

i.e. for f(u) = ±√(a_1 u + a_2), a_1, a_2 constant, a_1 u + a_2 ≥ 0. It follows that it is exact for solutions u(t) satisfying u(t) ∈ span{1, t, t^2}.

The proof was already given in [14] and [4]; we thus only sketch it here. For the truncation error we obtain the expansion (2.4) with B_0(f) = B_1(f) = 0 for every smooth f. For general smooth f we further obtain

(2.7) B_2(f) = -(1/12) f (f''f + (f')^2).

The method is second order in general. For those f which satisfy eq. (2.6), we obtain B_2(f) = 0, and moreover B_j(f) = 0 for all j. Thus the trapezoidal rule is exact for those f satisfying eq. (2.6). By integration of (2.6) we get f(u) = ±(a_1 u + a_2)^{1/2}, a_1, a_2 constants. Special solutions of eq. (2.3) with this f are u(t) = a_2 t + u_0 and u(t) = (a_1 t/2 + u_0^{1/2})^2. More generally we obtain from the differential equation that

0 = d^2 f(u(t))/dt^2 = d^3 u(t)/dt^3.

Thus the solution space is U = span{1, t, t^2}.
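Lemma 2.1 can be illustrated numerically. In the sketch below (our choice a_1 = 1, a_2 = 0, so f(u) = √u, with solution u(t) = (√u_0 + t/2)² in span{1, t, t²}) the implicit trapezoidal step is solved by fixed-point iteration; the iterates agree with the exact solution up to rounding for a large step-size:

```python
import math

def trapezoidal_step(u, tau, f, iters=60):
    """One trapezoidal step (2.5) for u' = f(u), solved by fixed-point iteration."""
    v = u
    for _ in range(iters):
        v = u + 0.5 * tau * (f(u) + f(v))
    return v

f = math.sqrt                      # f'' f + (f')^2 = 0 holds for f(u) = sqrt(u)
u0, tau = 1.0, 0.5
u = u0
for n in range(1, 5):
    u = trapezoidal_step(u, tau, f)
    exact = (math.sqrt(u0) + n * tau / 2.0) ** 2   # solution in span{1, t, t^2}
    print(abs(u - exact))          # rounding error only: the scheme is exact
```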

In a similar way the following lemma was proved as well:

LEMMA 2.2. The implicit midpoint rule

(U_{n+1} - U_n)/τ = f((U_{n+1} + U_n)/2)

is exact on equation (2.3) for those functions f : R → R satisfying

(2.8) f" f - 2[(f').sup.2] = 0,

i.e. for f(u) = a_2(a_3 - a_1 u)^{-1}, a_1, a_2, a_3 constants, with a_3 - a_1 u(t) ≠ 0 for all t. Solutions u of eq. (2.3) then satisfy

u(t) = a_2 t + u_0 for a_1 = 0, a_3 = 1,

(2.9) u(t) = (1/a_1)[a_3 - ((a_3 - a_1 u_0)^2 - 2 a_1 a_2 t)^{1/2}] for a_1 ≠ 0.

Note that this time the solutions form a nonlinear manifold, not a linear space as in the previous case. In particular, u(t) = √t (i.e. the case u_0 = 0) does not belong to the solution manifold.
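For a concrete instance of Lemma 2.2, take a_1 = -2, a_2 = 1, a_3 = 0, i.e. f(u) = 1/(2u) with solution u(t) = (t + u_0²)^{1/2}. The midpoint step equation (U_{n+1} - U_n)/τ = 1/(U_n + U_{n+1}) can then be solved in closed form, U_{n+1} = (U_n² + τ)^{1/2}, and a short check (a sketch under these choices) confirms exactness for an arbitrary step-size:

```python
import math

# Implicit midpoint rule for u' = f(u) with f(u) = 1/(2u), which satisfies
# (2.8): f'' f - 2 (f')^2 = 0. The step equation
# (U1 - U0)/tau = 1/(U0 + U1) is solved exactly by U1 = sqrt(U0^2 + tau).

u0, tau, N = 1.0, 0.4, 10
u = u0
for n in range(1, N + 1):
    u = math.sqrt(u * u + tau)                # one implicit midpoint step
    exact = math.sqrt(n * tau + u0 * u0)      # u(t) = sqrt(t + u0^2)
    assert abs(u - exact) < 1e-12             # exact for any step-size
print(u)
```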

In general, the solutions of eq. (2.3) should not be expected to form a linear space for nonlinear f. If u(t) solves u' = f(u), u(0) = u_0, then w(t) := a_1 u(t) + a_2, a_1 ≠ 0, solves w' = a_1 f((w - a_2)/a_1), w(0) = a_1 u_0 + a_2. That is all we can say for general f.

Also, uniqueness of exact schemes should not be expected: already in [4] it was shown that both the second-order Taylor method

(2.10) U_{n+1} = U_n + τ f(U_n) + (τ^2/2) f'(U_n) f(U_n),

and the trapezoidal rule (2.5) are exact on the same set of differential equations (2.3), i.e. on those with a r.h.s. function f satisfying (2.6). They are clearly different difference schemes, one explicit, one implicit, and their error expansions are different as well: expansion (2.4) for scheme (2.10) has the coefficient

B_2(f) = (1/6) f (f''f + (f')^2).

So all B_j, j ≥ 2, are different from those in (2.7), but vanish for the same f's.

This non-uniqueness should not surprise: difference schemes are equations to be satisfied by the approximate solutions of the differential equations under consideration. So there is no such thing as the exact difference scheme; there may be many of them, differing by terms which vanish for those differential equations on which they are exact. What we have to require, of course, is unique solvability of the difference scheme for a given initial value and sufficiently small step-size τ.

The trapezoidal rule and the implicit midpoint rule are both Runge-Kutta methods. We therefore now look at exact schemes within the framework of RK methods.

2.2. Constant-coefficient RK methods as exact schemes. In this subsection we collect some facts on RK methods to be used later on. We again consider non-autonomous differential equations (2.1), and we will always assume U_0 = u_0 for the discrete iteration.

2.2.1. Constant-coefficient RK methods. Let b_i, a_ij (i, j = 1, ..., s) be real numbers and let

(2.11) c_i = Σ_{j=1}^s a_ij, i = 1, ..., s.

The method defined by

k_i = f(t_n + c_i τ, U_n + τ Σ_{j=1}^s a_ij k_j), i = 1, ..., s,

U_{n+1} = U_n + τ Σ_{i=1}^s b_i k_i,

is called an s-stage Runge-Kutta scheme (RK scheme). An alternative definition is

(2.12) Y_i = U_n + τ Σ_{j=1}^s a_ij f(t_n + c_j τ, Y_j), i = 1, ..., s,

(2.13) U_{n+1} = U_n + τ Σ_{i=1}^s b_i f(t_n + c_i τ, Y_i).

The connection between both is given by

k_i = f(t_n + c_i τ, Y_i),

Y_i = U_n + τ Σ_{j=1}^s a_ij k_j, i = 1, ..., s.

There is a certain redundancy: formally different schemes can define the same numerical integration method, even when they have different stage numbers s_1 and s_2. In the following we consider Runge-Kutta methods, assuming that the resulting numerical integration method is at least of first order, and that the scheme representing it has minimal stage number s and satisfies c_i ≠ c_j for i ≠ j.

2.2.2. Collocation methods. Remember the following facts [3, p.58], [6, p.211f]:

* If an RK method satisfies the simplifying conditions

(2.14) B(ξ): Σ_{i=1}^s b_i c_i^{k-1} = 1/k, k = 1, ..., ξ,

and is used for integrating

(2.15) y' = f(t), y(t_n) = 0,

on the interval (t_n, t_{n+1}), then eq. (2.13) is an integration method of order ξ. Equation (2.13) is then exact on f ∈ span{1, t, ..., t^{ξ-1}}. Trivial consequence: all consistent RK schemes are exact on u' = const.

* If an RK method satisfies the simplifying conditions

(2.16) C(ξ): Σ_{j=1}^s a_ij c_j^{k-1} = c_i^k/k, k = 1, ..., ξ, i = 1, ..., s,

then the stage equations (2.12) define integration methods of order ξ for eq. (2.15) on the intervals (t_n, t_n + c_i τ). Thus they are exact there for f ∈ span{1, t, ..., t^{ξ-1}}. Note that the validity of C(1) is ensured by eq. (2.11).

* If an s-stage RK method satisfies B(s) and C(s), it is called a collocation method (after Burrage 1978).

* Apply a collocation method (2.12), (2.13) to the differential equation (2.1). The collocation polynomial, i.e. the polynomial p(t) interpolating the numerical solution of (2.12), (2.13) in the points t_n and t_{ni} := t_n + c_i τ, i = 1, ..., s, then has degree s. Its derivative p' has degree s - 1 and is integrated exactly by the stage equations (2.12) on the intervals [t_n, t_{ni}]. We thus obtain

Y_i = p(t_n + c_i τ), i = 1, ..., s.

Put p(t_n) = u_n, where u_n = u(t_n) and u(t) is the solution of eq. (2.1) to be approximated. Then the collocation polynomial satisfies eq. (2.1) at the internal abscissas t_{ni} = t_n + c_i τ, i = 1, ..., s,

p'(t_n + c_i τ) = f(t_n + c_i τ, p(t_n + c_i τ)), i = 1, ..., s.

* Given a positive integer s and numbers c_1, ..., c_s ∈ R, 0 ≤ c_i ≤ 1, i = 1, ..., s, with c_i ≠ c_j for i ≠ j. The collocation method satisfying

p(t_n) = U_n,

p'(t_n + c_i τ) = f(t_n + c_i τ, p(t_n + c_i τ)), i = 1, ..., s,

p(t_n + τ) =: U_{n+1},

is equivalent to the s-stage RK method (2.12), (2.13) with coefficients

(2.17) a_ij = ∫_0^{c_i} l_j(σ) dσ, b_i = ∫_0^1 l_i(σ) dσ, i, j = 1, ..., s,

where the l_j are the Lagrange polynomials

l_j(σ) = Π_{k ≠ j} (σ - c_k)/(c_j - c_k).

Note that c_i ≠ c_j for i ≠ j is essential here.
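The coefficient formulas (2.17) can be evaluated with exact rational arithmetic; the following sketch (helper names ours) expands each Lagrange polynomial into monomials, integrates it, and recovers the trapezoidal rule for c_1 = 0, c_2 = 1:

```python
from fractions import Fraction

def lagrange_coeffs(c, j):
    """Monomial coefficients (low to high) of l_j(t) = prod_{k!=j} (t - c_k)/(c_j - c_k)."""
    poly = [Fraction(1)]
    for k, ck in enumerate(c):
        if k == j:
            continue
        denom = c[j] - ck
        new = [Fraction(0)] * (len(poly) + 1)   # multiply poly by (t - ck)/denom
        for p, a in enumerate(poly):
            new[p + 1] += a / denom
            new[p] += a * (-ck) / denom
        poly = new
    return poly

def integrate(poly, upper):
    """Integral of the polynomial from 0 to 'upper'."""
    return sum(a * upper ** (p + 1) / (p + 1) for p, a in enumerate(poly))

c = [Fraction(0), Fraction(1)]          # abscissae c_1 = 0, c_2 = 1
A = [[integrate(lagrange_coeffs(c, j), ci) for j in range(2)] for ci in c]
b = [integrate(lagrange_coeffs(c, j), Fraction(1)) for j in range(2)]
print(A, b)   # trapezoidal rule: second row of A and the weights b are (1/2, 1/2)
```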

2.2.3. RK methods as exact schemes. The trapezoidal rule can be written as a Runge-Kutta method,

Y_1 = U_n,

Y_2 = U_n + τ (f(t_n, Y_1) + f(t_n + τ, Y_2))/2,

U_{n+1} = U_n + τ (f(t_n, Y_1) + f(t_n + τ, Y_2))/2.

It is a second-order 2-stage method and it satisfies the simplifying conditions B(k) and C(k) for k ≤ 2, but not for k = 3. It thus is a collocation method and integrates eq. (2.15) exactly for f ∈ span{1, t}. As we have seen earlier, it is also exact on autonomous eqs. (2.3) if f satisfies eq. (2.6), which implies that u(t) ∈ span{1, t, t^2}.

The implicit midpoint rule can also be written as a Runge-Kutta method,

Y_1 = U_n + τ f(t_n + τ/2, Y_1)/2,

U_{n+1} = U_n + τ f(t_n + τ/2, Y_1).

It is a second-order 1-stage method satisfying B(2) and C(1). It thus is a collocation method and integrates eq. (2.15) exactly for f ∈ span{1, t}. As we have seen earlier, it is also exact on autonomous eqs. (2.3) if f satisfies eq. (2.8).

The question thus arises whether every RK method is exact on some nontrivial differential equation. As the following example shows, the answer is no. Consider the μ-dependent family of second-order 2-stage schemes for autonomous f,

(2.18) Y_1 = U_n,

Y_2 = U_n + τ (μ f(Y_1) + (1 - μ) f(Y_2)),

U_{n+1} = U_n + τ (f(Y_1) + f(Y_2))/2.

For μ = 1/2 this is the trapezoidal rule. For the counterexample we choose μ = 2/3, for which we obtain B_0 = B_1 = 0 and

B_2(f) = (1/2 · 0 · 1 + 1/2 · (1/3) · 1 - 1/6) f (f')^2 + (1/2 · (1/2) · 1 - 1/6) f^2 f'' = (1/12) f^2 f''.

Hence the scheme is second order in general. B_2(f) vanishes if either f = 0 or f'' = 0. The latter means f(u) = a_1 u + a_2 for arbitrary constants a_1 and a_2. Thus for the differential equation

(2.19) u' = a_1 u + a_2

the scheme is at least third order. With f'' = 0 we find

B_3(f) = (1/72) f (f')^3 + (1/24) f''' f^3 + (1/12) f^2 f' f'' = (1/72) f (f')^3.

Hence B_3 does not vanish when B_2 does. Thus the second-order scheme (2.18) with μ = 2/3 is only third order on eq. (2.19) and not an exact scheme.
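The conclusion can be observed numerically: integrating u' = u (i.e. a_1 = 1, a_2 = 0 in (2.19)) with scheme (2.18), μ = 2/3, the global error at t = 1 decreases by a factor of about 2³ = 8 when the step-size is halved, confirming order three rather than exactness. A sketch (for linear f the implicit stage can be solved directly):

```python
import math

def step(u, tau, mu=2.0 / 3.0):
    """One step of scheme (2.18) applied to u' = u (linear f, so the
    implicit second stage is solved in closed form)."""
    y1 = u
    y2 = (u + tau * mu * y1) / (1.0 - tau * (1.0 - mu))
    return u + tau * (y1 + y2) / 2.0

def error(tau):
    """Global error at t = 1 for u' = u, u(0) = 1 (exact solution e)."""
    u = 1.0
    for _ in range(round(1.0 / tau)):
        u = step(u, tau)
    return abs(u - math.e)

e1, e2 = error(0.01), error(0.005)
print(e1 / e2)   # ~8: third order on (2.19), hence not exact
```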

Thus some RK schemes are exact for larger classes of autonomous differential equations, others only for the trivial case where f is constant. The vanishing of the first non-zero term in the error expansion by a particular choice of the r.h.s. function f does not guarantee exactness, as one might have hoped from the analysis of the classical schemes at the beginning of this section.

We now turn to different approaches which do allow one to find exact schemes for equation (2.19) and for linear systems of type (2.19). It turns out, however, that the coefficients of the schemes must be allowed to depend on the step-size. Examples are given in eqs. (2.26) and (3.2).

2.3. Functional fitting: variable-coefficient RK schemes. In recent years much research has been devoted to finding efficient numerical methods for system (2.1) with oscillatory solutions. If a good estimate of the frequency is known in advance, exact integration of the linear part of system (2.1) leads to very useful integration methods, called 'exponentially fitted' integration methods. For a list of references we refer to [17] and [18] and the references therein. Here we take a closer look at two different methods in this family: the application of the Principle of Coherence by Denk [1, 2] and the 'functional-fitting RK methods' introduced by Ozawa [17].

2.3.1. Invariant spaces. Functional fitting methods approach exactness from the solution side. They do not look at the r.h.s. function f to find conditions under which a given scheme becomes exact, but construct schemes which represent given functions u(t) exactly. To formulate the following existence theorem without inconvenient restrictions, we again consider non-autonomous systems (2.1) and allow variable coefficients in the RK schemes, i.e. we consider schemes whose coefficients a_ij, b_i depend on the independent variable t and on the step-size τ. Ozawa [17] proved the following results, which we repeat here in the formulation of [4]:

LEMMA 2.3. Let {c_i}_{i=1}^s ⊂ R be given, c_i ≠ c_j for i ≠ j. Let {v_m(t)}_{m=1}^s ⊂ C^s[0, T] be linearly independent functions, sufficiently smooth such that each of them satisfies

and suppose that they solve in [0, T] a homogeneous linear differential equation

(2.21) Σ_{m=0}^s p_m(t) v^{(m)}(t) = 0 with p_s(t) ≡ 1, p_0(t) ≠ 0,

with continuous coefficients p_m ∈ C[0, T]. Then the linear system

(2.22) v_m(t_n + c_i τ) = v_m(t_n) + τ Σ_{j=1}^s a_ij(t_n, τ) v'_m(t_n + c_j τ), i = 1, ..., s,
       v_m(t_n + τ) = v_m(t_n) + τ Σ_{i=1}^s b_i(t_n, τ) v'_m(t_n + c_i τ), m = 1, ..., s,

is uniquely solvable for a_ij(t_n, τ) and b_i(t_n, τ), with t_n ∈ [0, T] and 0 < τ < τ_0 for τ_0 small enough.

Note that the assumptions on the functions v_m(t) exclude the constant function u(t) ≡ const from the set {v_m}_{m=1}^s. Nevertheless, the constant function will be contained in any linear space spanned by functions satisfying system (2.22): it satisfies system (2.22) for any set of coefficients a_ij(t_n, τ) and b_i(t_n, τ), and thus for any set of functions v_1(t), ..., v_s(t).

The idea of the proof given by Ozawa is the following: for fixed t_n and τ, system (2.22) is a collection of s + 1 linear systems of order s with matrix

(2.23) (v'_m(t_n + c_j τ))_{m,j=1,...,s} =: U(t_n, τ)

and s^2 + s unknowns a_ij(t_n, τ), b_i(t_n, τ). These systems are uniquely solvable if the matrix U(t_n, τ) is nonsingular. To prove that U(t_n, τ) is nonsingular for all t_n ∈ [0, T] and small enough τ, conditions (2.20) and (2.21) are used. Condition (2.21) ensures that the Wronskian matrix of the linearly independent functions {v_1, ..., v_s} is nonsingular [6, p. 64ff].

Now we consider the performance of a scheme obtained via Lemma 2.3. Assume that a function u ∈ U := span{1, v_1, ..., v_s} satisfies the non-autonomous system (2.1). Then we expect that the RK scheme with coefficients obtained according to Lemma 2.3 will be exact on u(t). We get more: from this exactness on the linear space U it follows that the scheme has order s:

LEMMA 2.4. Let the coefficients of the variable-coefficient s-stage RK scheme

Y_i = U_n + τ Σ_{j=1}^s a_ij(t_n, τ) f(t_n + c_j τ, Y_j), i = 1, ..., s,

U_{n+1} = U_n + τ Σ_{i=1}^s b_i(t_n, τ) f(t_n + c_i τ, Y_i)

be obtained according to Lemma 2.3. Then the order of the scheme is at least s. If the abscissae c_i, i = 1, ..., s, are taken to satisfy

(2.24) ∫_0^1 σ^{k-1} Π_{i=1}^s (σ - c_i) dσ = 0, k = 1, ..., v,

then the order of accuracy is s + v. The maximum attainable order is 2s.

The proof of these statements uses results on RK collocation methods with constant coefficients and can be found in Ozawa [17]. It is shown there that the leading terms a_ij^(0) and b_i^(0) in the power expansions of a_ij(t_n, τ) and b_i(t_n, τ) with respect to τ satisfy the simplifying conditions B(s) and C(s) and depend only on {c_i}_{i=1}^s, but not on the generating functions {v_m}_{m=1}^s. They thus agree with the coefficients of the corresponding collocation scheme. If the abscissae c_i satisfy the additional condition (2.24), both the RK collocation scheme and the scheme obtained according to (2.22) have order p = s + v ≤ 2s.

The s-stage RK scheme obtained with the linearly independent functions {v_m}_{m=1}^s is exact on the solution u(t) whenever u(t) ∈ U = span{1, v_1, ..., v_s}. If all solutions of (2.1) happen to belong to U in the full time interval [0, T], the scheme is exact on (2.1), no matter how nonlinear f is, because we can first construct the linear combination of the basis functions and afterwards replace it by f(t, u). It is thus of interest to use functional-fitting RK schemes whenever there is some knowledge about the solution in advance. If the scheme is not exact, the remaining part of the solution is captured by the order of the RK scheme; and the constants in the error expansion will be small as long as the scheme is in a small neighborhood of an exact scheme.

Given an s-stage RK scheme which is exact on U = span{1, v_1, ..., v_s}, it is a member of a family of nonconfluent schemes which depend on the s parameters c_1, ..., c_s. All these schemes are exact on the same function space U. Though all these schemes are equivalent when used as exact schemes, they differ in their numerical performance when applied to a problem on which they are not exact. This follows from the second statement in Lemma 2.4.

2.3.2. Examples for schemes obtained with Lemma 2.3. When we apply Lemma 2.3, the resulting scheme may have constant or variable coefficients, depending on the generating functions. This is illustrated by the following examples. RK schemes with variable coefficients are nonstandard. Time-dependent coefficients are very unusual and would be inconvenient in computations; in § 2.3.3 we will show how to avoid them. Coefficients depending on the step-size τ are not quite that unusual: such coefficients always arise in the context of exponential fitting [18] and of evaluating the Principle of Coherence [2], and they seem to be unavoidable in certain situations.

For constant-coefficient schemes for non-autonomous differential equations, it is a convention to satisfy eq. (2.11) when designing new schemes, because condition (2.11) ensures that the stages reproduce Y_i = t_n + c_i τ when the scheme is applied to u(t) = t [3, p. 56]. In the case of Lemma 2.3, condition (2.11) is satisfied if u(t) = t is one of the generating functions.

Example 1 (constant coefficients): We choose s = 2, v_1(t) = t, v_2(t) = t^2 and use c_1, c_2 as parameters. Solving system (2.22) we find the coefficients

(2.25) a_i1 = (c_i^2/2 - c_2 c_i)/(c_1 - c_2), a_i2 = (c_i^2/2 - c_1 c_i)/(c_2 - c_1), i = 1, 2,
       b_1 = (1/2 - c_2)/(c_1 - c_2), b_2 = (1/2 - c_1)/(c_2 - c_1).

For varying c_1, c_2 with c_1 ≠ c_2 this is a 2-parameter family of RK schemes. For c_1 = 0, c_2 = 1 we obtain the coefficients of the trapezoidal rule, as expected. We obtain the trapezoidal rule also for c_1 = 1, c_2 = 0. Though all these schemes are equivalent when exact, they differ in their order and numerical performance when not exact.

These schemes are equivalent to the collocation schemes obtained for {c_1, c_2} via eq. (2.17). This was to be expected because of the remark on a_ij^(0), b_i^(0). Moreover, if p(t) is the collocation polynomial introduced earlier, we find p(t) ∈ span{1, t, t^2}. We also see that the coefficients are undefined in the degenerate case c_1 = c_2, i.e. when the collocation approach of § 2.2.2 is not applicable.
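For this example, system (2.22) can also be solved numerically; the sketch below (function names ours) sets up the s = 2 linear systems for v_1 = t, v_2 = t² and reproduces the trapezoidal coefficients for c = (0, 1), independently of the chosen t and τ:

```python
def solve2(M, r):
    """Solve a 2x2 linear system M x = r by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

# generating functions v_1(t) = t, v_2(t) = t^2 and their derivatives
v  = [lambda t: t, lambda t: t * t]
dv = [lambda t: 1.0, lambda t: 2.0 * t]

def fitted_coefficients(t, tau, c):
    """Solve system (2.22) for the RK coefficients a_ij and b_i."""
    M = [[dv[m](t + c[j] * tau) for j in range(2)] for m in range(2)]
    A = [solve2(M, [(v[m](t + c[i] * tau) - v[m](t)) / tau for m in range(2)])
         for i in range(2)]
    b = solve2(M, [(v[m](t + tau) - v[m](t)) / tau for m in range(2)])
    return A, b

A, b = fitted_coefficients(t=0.7, tau=0.1, c=[0.0, 1.0])
print(A, b)   # A ~ [[0, 0], [0.5, 0.5]], b ~ [0.5, 0.5]: the trapezoidal rule
```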

Example 2 (τ-dependent coefficients): With s = 2, c_1 = 0, c_2 = 1 and v_1(t) = t, v_2(t) = exp(Λt), 1/Λ ∉ [0, T], we obtain the coefficients

(2.26) a_11 = a_12 = 0,
       a_22 = b_2 = (e^{Λτ} - 1 - Λτ)/(Λτ(e^{Λτ} - 1)), a_21 = b_1 = 1 - a_22.

These coefficients have an apparent singularity in the limit τ → 0. The limiting values of a_21(τ), a_22(τ), b_1(τ) and b_2(τ) computed by L'Hôpital's rule are all equal to 1/2. This was to be expected because the scheme becomes the related collocation scheme for τ → 0. Ozawa thus recommends using τ-expanded coefficients in practical computations.
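A short computation illustrates both the apparent singularity and the fitting property. The closed-form weights below are our own evaluation of system (2.22) for these generating functions (for c_1 = 0 the first stage row vanishes, and the b_i coincide with the second stage row); they tend to 1/2 as τ → 0 and integrate the quadrature problem u' = Λe^{Λt}, whose solution e^{Λt} lies in the fitted space, without discretization error:

```python
import math

def exp_fitted_weights(lam, tau):
    """Weights b_1, b_2 from system (2.22) for c_1 = 0, c_2 = 1 and
    v_1(t) = t, v_2(t) = exp(lam*t) (our closed-form evaluation)."""
    z = lam * tau
    b2 = (math.exp(z) - 1.0 - z) / (z * (math.exp(z) - 1.0))
    return 1.0 - b2, b2

lam = 3.0
for tau in (1.0, 0.1, 0.01):
    print(exp_fitted_weights(lam, tau))   # both weights approach 1/2 as tau -> 0

# the fitted scheme integrates u' = lam*exp(lam*t) (solution exp(lam*t)) exactly
tau, u, n = 0.25, 1.0, 4
b1, b2 = exp_fitted_weights(lam, tau)
for k in range(n):
    t = k * tau
    u += tau * (b1 * lam * math.exp(lam * t) + b2 * lam * math.exp(lam * (t + tau)))
print(abs(u - math.exp(lam * n * tau)))   # rounding error only
```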

Example 3 (t- and τ-dependent coefficients): With s = 2, parameters c_1 ≠ c_2 and v_1(t) = t, v_2(t) = (t + 1)^{-1}, we obtain, with d(c) := (t + 1 + cτ)^{-1} and d(c)^2 = d(c)·d(c),

This example shows that quite simple functions {v_m} can lead to complicated coefficients which depend on t and τ.

2.3.3. Conditions for obtaining constant coefficients. We now ask for general conditions under which the coefficients are constant, i.e. independent of the time t and/or the step-size τ.

THEOREM 2.5. Let the assumptions of Lemma 2.3 be satisfied. Then the coefficients computed according to Lemma 2.3 are independent of t iff the linear space

U := span{1, v_1(t), ..., v_s(t)}

is closed with respect to differentiation.

Proof. 1. Necessity: time-independent coefficients satisfy

(2.28) ∂a_ij(t, τ)/∂t = 0, ∂b_i(t, τ)/∂t = 0

for all t. Computing the time derivative of every equation of system (2.22) and using relations (2.28), we obtain

(2.29) v'_m(t + c_i τ) = v'_m(t) + τ Σ_{j=1}^s a_ij(t, τ) v''_m(t + c_j τ), i = 1, ..., s,
       v'_m(t + τ) = v'_m(t) + τ Σ_{i=1}^s b_i(t, τ) v''_m(t + c_i τ), m = 1, ..., s.

Formally, this is the same as system (2.22), with v_m(t) replaced by v'_m(t). Time-independent coefficients have to satisfy both this system and system (2.22). Since system (2.22) already determines the coefficients uniquely, this is possible only if system (2.29) does not add new conditions but consists of linear combinations of equations contained in system (2.22), possibly augmented by the trivial equation for the constant. This means that span{v'_1, ..., v'_s} is a subset of span{1, v_1, ..., v_s}. Thus the linear space U is closed with respect to differentiation.

2. Sufficiency: if the linear space U is closed with respect to differentiation, there are coefficients α_mν such that v'_m(t) = Σ_{ν=0}^s α_mν v_ν(t), m = 1, ..., s, with v_0(t) ≡ 1. Inserting this into the time derivative of every equation of system (2.22) and using the fact that the functions v_ν solve system (2.22), we obtain

τ Σ_{j=1}^s (∂a_ij(t, τ)/∂t) v'_m(t + c_j τ) = 0, i = 1, ..., s, τ Σ_{i=1}^s (∂b_i(t, τ)/∂t) v'_m(t + c_i τ) = 0, m = 1, ..., s.

This is equivalent to the s + 1 linear systems

U(t, τ)(∂a_i1/∂t, ..., ∂a_is/∂t)^T = 0, i = 1, ..., s, U(t, τ)(∂b_1/∂t, ..., ∂b_s/∂t)^T = 0,

where U(t, τ) is the matrix of the generating functions introduced in eq. (2.23). Nonsingularity of this matrix was essential for the proof of Lemma 2.3. From this it follows that ∂a_ij(t, τ)/∂t ≡ 0 and ∂b_i(t, τ)/∂t ≡ 0, i, j = 1, ..., s.

Examples: Each of the spaces span{e^{ωt}}, span{e^{-ωt}} and span{sin ωt, cos ωt} is closed with respect to differentiation. The linear spaces span{t^s, t^{s-1}, ..., t, 1}, s ∈ N, and span{1, t, e^{ωt}} are also closed with respect to differentiation. This agrees with the time-independence of the coefficients given in (2.25) and (2.26).

There is no finite-dimensional linear space containing span{(t + c)^{-1}}, c ∈ R, that is closed with respect to differentiation. This is in agreement with the time-dependence of the coefficients (2.27).

There is no finite-dimensional linear space containing span{(1 + t)^{1/2}} that is closed with respect to differentiation. We thus get (quite complicated) coefficients which depend both on t and τ. As we have seen earlier, however, there is a 1-stage constant-coefficient RK scheme exact on u(t) = (t + 1)^{1/2} when applied to differential equation (2.3) with (2.8): the implicit midpoint rule (put u_0 = 1, a_1 = -2, a_3 = 0 in eq. (2.9)). This example confirms that the properties of the nonlinearities f play an important role for the results of § 2.1.

If we proceed analogously to find general conditions on the generating functions that allow τ-independent coefficients, we find that the generating functions have to satisfy both system (2.22) and

From § 2.2.2 we know that the collocation schemes must satisfy these two systems. When we check how they do so, we find that the equations for t^m in system (2.30), m = 1, ..., s, are equivalent to the mth level of the simplifying conditions B(s) and C(s) introduced in eqs. (2.14) and (2.16).

2.4. The Principle of Coherence. Exact schemes for linear differential equations with constant coefficients and for systems of such equations have been derived by many authors, sometimes by using the known continuous solution. A straightforward procedure for deriving equations to be satisfied by the coefficients of exact schemes is the Principle of Coherence, introduced by Hersch in 1958.

The basic idea of the Principle of Coherence was formulated by Hersch [5] as: successive approximations should not contradict each other. We explain this by the following example: consider

(2.31) z''(t) + Λ^2 z(t) = 0, Λ > 0.

Using central finite differences at t with step-size τ we obtain

(2.32) z(t - τ) - (2 - Λ^2 τ^2) z(t) + z(t + τ) = 0.

We write instead

(2.33) z(t - [tau]) - A([tau])z(t) + z(t + [tau]) = 0,

where the coefficient A([tau]) is to be determined. With step-size 2[tau] we obtain similarly

(2.34) z(t - 2[tau]) - A(2[tau])z(t) + z(t + 2[tau]) = 0.

By linear combination of three difference equations of type (2.33) centered at t - [tau], t and t + [tau] we obtain on the other hand

(2.35) z(t - 2[tau]) - (A[([tau]).sup.2] - 2)z(t) + z(t + 2[tau]) = 0.

For a coherent numerical approximation of eq. (2.31), eqs. (2.34) and (2.35) should coincide. This means A(2[tau]) = A[([tau]).sup.2] - 2. This is satisfied for A([tau]) = 2 cos k[tau], k [member of] R. Moreover, the resulting difference scheme has to be consistent with eq. (2.31), i.e. the first-order truncation error has to vanish for [tau] [right arrow] 0. Thus we obtain k = [LAMBDA]. The coherent scheme therefore is

(2.36) z(t - [tau]) - 2 cos([LAMBDA][tau]) z(t) + z(t + [tau]) = 0.

Comparison shows that scheme (2.36) is exact on eq. (2.31), while the second-order central difference formula (2.32) uses only the first two terms of the Taylor expansion of A([tau]). We required coherence using three grid points, and we obtained exactness. For equations of higher order or for larger systems, the derivation of exact schemes by the Principle of Coherence can become quite involved. Denk employed the calculus of distributions and obtained schemes that are very useful in applications [1].
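Both claims above, the coherence relation A(2[tau]) = A([tau])^2 - 2 for A([tau]) = 2 cos([LAMBDA][tau]) and the exactness of scheme (2.36), can be verified numerically; the following sketch (function names are ours) evaluates the residual of both schemes on the exact solution z(t) = cos([LAMBDA]t):

```python
import math

# residual of a three-term scheme z(t-tau) - A(tau)*z(t) + z(t+tau)
# evaluated on the exact solution z(t) = cos(Lam*t) of z'' + Lam^2 z = 0
def residual(A, Lam, tau, t):
    z = lambda s: math.cos(Lam * s)
    return z(t - tau) - A(tau) * z(t) + z(t + tau)

Lam, tau, t = 3.0, 0.1, 0.7

# standard central differences, eq. (2.32): A(tau) = 2 - Lam^2 tau^2
r_central = abs(residual(lambda s: 2.0 - Lam**2 * s**2, Lam, tau, t))

# coherent scheme, eq. (2.36): A(tau) = 2 cos(Lam tau)
A_coh = lambda s: 2.0 * math.cos(Lam * s)
r_coherent = abs(residual(A_coh, Lam, tau, t))

# coherence relation A(2 tau) = A(tau)^2 - 2, here the double-angle formula
r_relation = abs(A_coh(2 * tau) - (A_coh(tau)**2 - 2.0))

print(r_central)    # nonzero: truncation error of the central scheme
print(r_coherent)   # zero up to rounding: the coherent scheme is exact
print(r_relation)   # zero up to rounding
```

That the coherent residual vanishes identically follows from z(t - [tau]) + z(t + [tau]) = 2 cos([LAMBDA][tau]) z(t) for z(t) = cos([LAMBDA]t); the code only confirms it in floating point.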

3. Applications. We conclude by showing that exact schemes are of practical relevance in scientific computing. We give two examples where exact schemes for simpler equations led to efficient schemes for more complicated equations.

3.1. Denk-Bulirsch schemes. Denk used the Principle of Coherence explained in [section] 2.4 to design a numerical integration method capable of simulating highly oscillatory circuits efficiently and reliably. He treated first-order systems

(3.1) u' + Au = f(t, u),

where the matrix A represents the linear elements of the circuit and f(t, u) assembles the nonlinear terms. Applying the Principle of Coherence with the step-sizes [tau] and 2[tau] to z' + Az = 0 leads to

(3.2) Z([t.sub.0] + 2[tau]) = [PHI]([tau])Z([t.sub.0] + [tau]) = [PHI][([tau]).sup.2]Z([t.sub.0]) = [PHI](2[tau])Z([t.sub.0]),

and thus to [PHI]([tau]) = exp(-A[tau]), as expected. For the approximation of eq. (3.1), Denk combined this with a multistep approach (explicit or implicit). The explicit variant reads

U(t + m[tau]) - exp(-A[tau])U(t + (m - 1)[tau]) = [tau] [m.summation over (i=0)] [[beta].sub.i]([tau]) f(t + (m - 1 - i)[tau], U(t + (m - 1 - i)[tau])),

where [[beta].sub.i] ([tau]) are matrix coefficients of the multistep scheme. In the case m = 1 this leads to

[[beta].sub.1] ([tau]) = - (I - (I - exp(-A[tau]))[(A[tau]).sup.-1]) [(A[tau]).sup.-1],

[[beta].sub.0] ([tau]) = (I - exp(-A[tau])) [(A[tau]).sup.-1] - [[beta].sub.1].

Note that the coefficients [[beta].sub.i] ([tau]) have an apparent singularity for [tau] [right arrow] 0, similar to the coefficients given in (2.26). This seems to be unavoidable.
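A scalar sketch illustrates both properties of the m = 1 coefficients above. We assume the reading that [[beta].sub.0] multiplies f at the newest known node t and [[beta].sub.1] at t - [tau]; this is consistent with the limits 3/2 and -1/2 below, the classical two-step Adams-Bashforth pair, so the apparent singularity at [tau] = 0 is indeed removable. Moreover, the scheme then reproduces solutions of u' + au = f(t) exactly whenever f is linear in t, because the linear interpolant of f underlying its derivation equals f itself. All names are ours:

```python
import math

def phi1(z):
    # phi1(z) = (1 - e^{-z}) / z
    return (1.0 - math.exp(-z)) / z

def beta_coeffs(a, tau):
    # scalar versions of the m = 1 coefficients above:
    #   beta_1 = -(1 - phi1(a*tau)) / (a*tau)
    #   beta_0 =  phi1(a*tau) - beta_1
    z = a * tau
    b1 = -(1.0 - phi1(z)) / z
    b0 = phi1(z) - b1
    return b0, b1

def step(u, f_curr, f_prev, a, tau):
    # U(t+tau) = e^{-a tau} U(t) + tau*(beta_0 f(t) + beta_1 f(t-tau))
    b0, b1 = beta_coeffs(a, tau)
    return math.exp(-a * tau) * u + tau * (b0 * f_curr + b1 * f_prev)

# u' + a u = f(t) with f linear in t: the scheme is exact, because the
# linear interpolant of f used in its derivation then equals f itself
a, tau = 2.0, 0.1
f = lambda t: 3.0 * t + 1.0
exact = lambda t: (3.0 * t + 1.0) / a - 3.0 / a**2 + math.exp(-a * t)

t = tau
u = exact(t)                        # start from an exact value
for _ in range(50):
    u = step(u, f(t), f(t - tau), a, tau)
    t += tau
err = abs(u - exact(t))
print(err)                          # roundoff only

# as a*tau -> 0 the coefficients tend to the Adams-Bashforth pair (3/2, -1/2)
b0, b1 = beta_coeffs(a, 1e-6)
```

In production code the factors (1 - e^{-z})/z would be evaluated by a truncated series for small |z| to avoid cancellation; the direct formulas above are adequate for this illustration.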

The resulting scheme is consistent of order m, A(0)-stable and convergent. It is A-stable without order restrictions. This does not contradict Dahlquist's order barriers, because those barriers were shown to hold for constant-coefficient schemes. Tests with a system of five equations on a time interval corresponding to about 250 000 oscillations showed that the scheme is more efficient and more reliable than the standard code LSODE for this type of problem [2]. The code HERSCH, developed along these lines for equations more general than eq. (3.1), was tested in numerical experiments on equations modeling electric circuits. It proved to be more efficient than the codes LSODE, DOPRI5 and RADAU5 on highly oscillatory problems [1].

3.2. LeRoux schemes. Consider the parabolic problem

(3.3) [partial derivative]v/[partial derivative]t = [DELTA][v.sup.m] + [alpha][v.sup.m] in [OMEGA] x (0, T), v|[delta][OMEGA] = 0, v(x, 0) = [v.sub.0](x),

where [OMEGA] is a smooth bounded domain, [alpha] [greater than or equal to] 0 real and m > 1 an integer. Let ([[LAMBDA].sub.1], [u.sub.1]) be the principal eigenpair of

-[DELTA]u = [LAMBDA]u, u|[delta][OMEGA] = 0

satisfying [u.sub.1](x) > 0 in [OMEGA] and [parallel][u.sub.1][[parallel].sub.[L.sup.1]([OMEGA])] = 1. Let [v.sub.1] be a real smooth function satisfying [v.sup.m.sub.1](x) = [u.sub.1](x) in [OMEGA]. Then [theta][v.sub.1], [theta] > 0, is a steady-state solution of (3.3) for [alpha] = [[LAMBDA].sub.1]. A steady-state solution for all [alpha] is v(x, t) [equivalent to] 0. The time-dependent solutions of (3.3) for given initial function [v.sub.0](x) > 0 were investigated by Sacks and others, and the results are [10]:

(i) If 0 [less than or equal to] [alpha] < [[LAMBDA].sub.1] and [v.sub.0] [member of] [L.sup.q]([OMEGA]), q > 1, problem (3.3) has a solution existing for all times and decaying to zero for t [right arrow] [infinity].

(ii) If [alpha] = [[LAMBDA].sub.1] and [v.sub.0] [member of] [L.sup.q]([OMEGA]), q > 1, problem (3.3) has a solution existing for all times and tending to [theta][v.sub.1] = [theta][u.sup.1/m.sub.1] for t [right arrow] [infinity]. The factor [theta] depends on the initial function [v.sub.0].

(iii) If [alpha] > [[LAMBDA].sub.1] there exists T > 0 such that problem (3.3) has for given [v.sub.0] a unique weak solution v on [0, T] with [lim.sub.t [right arrow] T] [parallel]v(*, t)[[parallel].sub.[L.sup.[infinity]]([OMEGA])] = +[infinity]. Such solutions are called blow-up solutions. The only nonnegative solution of problem (3.3) which exists for all times is v(x, t) [equivalent to] 0.
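The threshold [[LAMBDA].sub.1] separating the three regimes above is the principal Dirichlet eigenvalue of -[DELTA]. In the 1D model case [OMEGA] = (0, 1) it equals [pi].sup.2], with [u.sub.1] proportional to sin([pi]x), and it can be recovered by inverse power iteration on the standard finite-difference Laplacian; a self-contained sketch (all names ours):

```python
import math

def thomas(sub, diag, sup, rhs):
    # solve a tridiagonal linear system by the Thomas algorithm;
    # sub[0] and sup[-1] are unused
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / den
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def principal_eigenvalue(N, iters=200):
    # inverse power iteration for the smallest eigenvalue of -d^2/dx^2
    # on (0,1) with u(0) = u(1) = 0, discretized on N interior points
    h = 1.0 / (N + 1)
    sub = [-1.0 / h**2] * N
    diag = [2.0 / h**2] * N
    sup = [-1.0 / h**2] * N
    u = [1.0] * N
    lam = 0.0
    for _ in range(iters):
        v = thomas(sub, diag, sup, u)
        nrm = max(abs(x) for x in v)
        u = [x / nrm for x in v]
        lam = 1.0 / nrm   # converges to the smallest eigenvalue
    return lam

lam1 = principal_eigenvalue(99)
print(lam1, math.pi**2)   # discrete value close to pi^2 = 9.8696...
```

The discrete eigenvalue is (2 - 2 cos([pi]h))/[h.sup.2] = [pi].sup.2] + O([h.sup.2]), so the agreement improves quadratically as the grid is refined.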

To construct a numerical scheme whose solution has properties similar to those of the solution of the continuous problem, Le Roux [10] used the exact scheme (1.3) for

[dw.sup.m]/dt = [beta][([w.sup.m]).sup.2], w(0) = [w.sub.0] > 0, [beta] [member of] R,

to derive the approximate semi-discrete scheme

(3.4) 1/(m - 1) ([V.sup.1-m.sub.n] [V.sup.m.sub.n+1] - [V.sub.n+1]) - [tau][DELTA][V.sup.m.sub.n+1] = [alpha][tau][V.sup.m.sub.n+1]

for eq. (3.3). Here [tau] is the time step and [V.sub.n] = V (x, n[tau]) approximates v (x, t) at t = n[tau]. Note that solving eq. (3.4) for [V.sub.n+1] with given [V.sub.n] means solving a quasilinear elliptic boundary value problem with [V.sub.n+1]|[delta][OMEGA] = 0, and this has to be done at each time step. With
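The scalar building block here is easy to verify. Assuming that (1.3) denotes the standard exact scheme (u_{n+1} - u_n)/[tau] = [beta] u_n u_{n+1} for u' = [beta][u.sup.2] (here applied with u = [w.sup.m]), a short sketch confirms exactness up to rounding, arbitrarily close to the blow-up time:

```python
def exact_step(u, beta, tau):
    # one step of the exact scheme for u' = beta*u^2, written as
    #   (u_{n+1} - u_n)/tau = beta * u_n * u_{n+1},
    # and solved in closed form for u_{n+1}
    return u / (1.0 - beta * tau * u)

beta, tau, u0 = 1.0, 0.01, 1.0
u_exact = lambda t: u0 / (1.0 - beta * u0 * t)   # blows up at t = 1/(beta*u0)

u, t = u0, 0.0
for _ in range(50):
    u = exact_step(u, beta, tau)
    t += tau

err = abs(u - u_exact(t))
print(err)   # zero up to rounding, for any step-size short of blow-up
```

This is what makes the approach attractive for problem (3.3): the stiff blow-up mechanism is captured by the exact scheme, and only the diffusion term needs genuine approximation.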

p := 1/m, q := 1 - p, [U.sub.n] := [V.sup.m.sub.n]

and [U.sub.n] [member of] B := [H.sup.1.sub.0]([OMEGA]) [intersection] [C.sup.2]([OMEGA]), where all elements satisfy the given boundary condition, (3.4) becomes

(3.5) [tau][DELTA][U.sub.n+1] = p/q ([U.sub.n+1] [U.sup.-q.sub.n] - [U.sup.p.sub.n+1]) - [alpha][tau][U.sub.n+1]

=: f ([U.sub.n+1]; [U.sub.n]),

which is a standard quasilinear elliptic problem for [U.sub.n+1]. Le Roux [10] proved existence and uniqueness of the solution of scheme (3.5) for


and formulated conditions on [Y.sub.0] under which the iterative scheme

[DELTA][Y.sub.j+1] = 1/[tau] f ([Y.sub.j]; [U.sub.n]), [Y.sub.0] given,

converges for j [right arrow] [infinity] (monotonically) to [U.sub.n+1]. She proved stability and convergence of the time discretization for a wide class of initial conditions, gave for fixed [tau] = [DELTA]t the estimates


where c is a constant, c = c([OMEGA], p, [alpha], [U.sub.0]), and showed:

(i) If [alpha] [less than or equal to] [[LAMBDA].sub.1], then there exists a constant [DELTA][t.sub.0] > 0 depending only on [OMEGA], p, [alpha], [U.sub.0] such that the numerical solution [U.sub.n] exists for all n, for every time step [DELTA]t < [DELTA][t.sub.0]. From the estimate (3.6) it then follows that [parallel][U.sub.n][parallel] [right arrow] 0 for n [right arrow] [infinity] if [alpha] < [[LAMBDA].sub.1], as desired. We also see that the norm of the initial function [U.sub.0] determines the numerical value of [theta] in the case [alpha] = [[LAMBDA].sub.1].

(ii) If [alpha] > [[LAMBDA].sub.1], then there exists T* depending on the time step [DELTA]t and on [U.sub.0] such that the numerical solution [U.sub.n] exists for n[DELTA]t < T* and becomes infinite at T*. The following estimate is valid:


and this estimate has also been obtained for the exact solution.

Thus we see that, for sufficiently small [DELTA]t < [DELTA][t.sub.0], scheme (3.5) produces qualitatively correct numerical solutions which satisfy the estimates known for the exact solutions.

In further work this scheme and its mathematical analysis were extended to the more general case


where [OMEGA] [subset] [R.sup.d] is a smooth bounded domain, [delta] is a parameter describing diffusion, [delta] [member of] (-1, 0) for fast diffusion and [delta] > 0 for slow diffusion, [alpha] > 0 real and p [greater than or equal to] 1 + [delta]. This mathematical work is reviewed in [9].

The usefulness of scheme (3.5) in computational plasma physics has also been reported. In investigations of fusion plasmas, diffusion equations with slow diffusion (e.g. [delta] = 2) are used for the density of particles, and equations with fast diffusion (e.g. [delta] = -1/2) for their temperature. In reference [11] a coupled system for the density and temperature of ions is solved for various parameter values, while in reference [12] the two equations are solved separately for various cases (decay of the solution, evolution to a constant profile, blow-up case).

* Received December 22, 2003. Accepted for publication May 28, 2004. Recommended by A. Ruffing.


[1] G. DENK, A new numerical method for the integration of highly oscillatory second-order ordinary differential equations, Appl. Num. Math., 13 (1993), pp. 57-67.

[2] G. DENK, The simulation of oscillatory circuits: an efficient integration scheme, in Efficient Numerical Methods in Electronic Circuit Simulation, G. Denk, M. Günter, P. A. Selting, and O. v. Stryk, eds., Technische Universität München, TUM-M9413, 1994, pp. 1-8.

[3] K. DEKKER AND J. G. VERWER, Stability of Runge-Kutta Methods for Stiff Nonlinear Differential Equations, North Holland, Amsterdam, 1984.

[4] M. J. GANDER AND R. MEYER-SPASCHE, An introduction to numerical integrators preserving physical properties, in Applications of Nonstandard Finite Difference Schemes, R. E. Mickens, ed., Chapter 5, World Scientific, Singapore, 2000.

[5] J. HERSCH, Contribution à la méthode des équations aux différences, Z. Angew. Math. Phys., 9 (1958), pp. 129-180.

[6] E. HAIRER, S. P. NØRSETT, AND G. WANNER, Solving Ordinary Differential Equations, Vol. I, 2nd ed., Springer-Verlag, Berlin, 1993.

[7] E. HAIRER AND G. WANNER, Solving Ordinary Differential Equations, Vol. II, Springer-Verlag, Berlin, 1991.

[8] H. V. KOJOUHAROV AND B. M. CHEN, Nonstandard Methods for Advection Diffusion Reaction Equations, in Applications of Nonstandard Finite Difference Schemes, R. E. Mickens, ed., Chapter 2, World Scientific, Singapore, 2000.

[9] M. N. LE ROUX, Numerical solution of fast-diffusion or slow-diffusion equations, J. Comput. Appl. Math., 97 (1998), pp. 121-136.

[10] M. N. LE ROUX, Semidiscretization in time of nonlinear parabolic equations with blowup of the solution, SIAM J. Numer. Anal., 31 (1994), pp. 170-195.

[11] M. N. LE ROUX, J. WEILAND, AND H. WILHELMSSON, Simulation of a coupled dynamic system of temperature and density in a fusion plasma, Phys. Scripta, 46 (1992), pp. 457-462.

[12] M. N. LE ROUX AND H. WILHELMSSON, External boundary effects on simultaneous diffusion and reaction processes, Phys. Scripta, 40 (1989), pp. 674-681.

[13] R. MEYER-SPASCHE, Variable-coefficient difference schemes for quasilinear evolution problems, in Numerical Methods and Applications, I. Dimov et al., eds., Lecture Notes in Computer Science, Vol. 2542, Springer-Verlag, 2003, pp. 36-47.

[14] R. MEYER-SPASCHE, Difference schemes of optimum degree of implicitness for a family of simple ODEs with blow-up solutions, J. Comput. Appl. Math., 97 (1998), pp. 137-152.

[15] R. MEYER-SPASCHE AND D. DÜCHS, A general method for obtaining unconventional and nonstandard difference schemes, Dyn. Cont. Discr. Impulsive Systems, 3 (1997), pp. 453-467.

[16] R.E. MICKENS, ed., Applications of Nonstandard Finite Difference Schemes, World Scientific, Singapore, 2000.

[17] K. OZAWA, Functional fitting Runge-Kutta method with variable coefficients, Japan J. Indust. Appl. Math., 18 (2001), pp. 107-130.

[18] G. VANDEN BERGHE, H. DE MEYER, M. VAN DAELE, AND T. VAN HECKE, Exponentially-fitted explicit Runge-Kutta methods, Comp. Phys. Comm., 123 (1999), pp. 7-15.

[19] G. VANDEN BERGHE, L. GR. IXARU, AND H. DE MEYER, Frequency determination and step-length control for exponentially fitted Runge-Kutta methods, J. Comput. Appl. Math., 132 (2001), pp. 95-105.


([dagger]) Max-Planck-Institut für Plasmaphysik, EURATOM-Association, D-85748 Garching, Germany
COPYRIGHT 2007 Institute of Computational Mathematics

Article Details
Author: Meyer-Spasche, Rita
Publication: Electronic Transactions on Numerical Analysis
Date: Apr 1, 2007