
On the relationship between the classical linearization and optimal derivative (1).

Abstract

The aim of this paper is to present the relationship between the classical linearization and the optimal derivative of a nonlinear ordinary differential equation. An application is presented using the quadratic error. AMS subject classification: 34A30, 34A34.

Keywords: Classical linearization, Fréchet derivative, optimal derivative, quadratic error.

1. Introduction

The study of stability of the equilibrium point of a nonlinear ordinary differential equation is an almost trivial problem if the function $F$ which defines the nonlinear equation is sufficiently regular in the neighborhood of this point and if its linearization at this point is hyperbolic. In this case, we know that the nonlinear equation is equivalent to the linearized equation, in the sense that there exists a local diffeomorphism which maps the trajectories near the equilibrium point onto those near zero of the linear equation. On the other hand, the situation is entirely different when the nonlinear function is not regular or the equilibrium point is a center.

Consider the nonregular case, and imagine that the nonlinear function is not regular at its only equilibrium point. Then we cannot differentiate the nonlinear function, and consequently we cannot study the linearized equation. A natural question then arises: Is it possible to associate with the nonlinear equation another linear equation which has the same asymptotic behavior?

The idea proposed by Benouaz and Arino is based on an approximation method. In [2, 5-8], the authors introduced the optimal derivative, which is in fact a global approximation, as opposed to a nonlinear perturbation of a linear equation, and which behaves differently from the classical linear approximation in the neighborhood of the stationary point. The approach used is least squares approximation.

The aim of this paper is to present the relationship between the optimal derivative and the Fréchet derivative at the equilibrium point. After a brief review of the optimal derivative procedure in the second section, the third section is devoted to the study of the relationship between the optimal derivative and the Fréchet derivative at the equilibrium point, in the scalar and in the vectorial case. In particular, in the scalar case, we prove for a class of functions that the optimal derivative can be computed even though the classical linearization at 0 does not exist. In the last section, we present applications and a comparison using the quadratic error. The study shows, in particular, the influence of the choice of initial conditions. In the appendix, we present the details of the proof for the example in which the nonlinear function is not regular at 0.

2. The Optimal Derivative

2.1. The Procedure

Consider a nonlinear ordinary differential problem of the form

$\dfrac{dx}{dt} = F(x)$, $x(0) = x_0$,   (2.1)

where

* $x = (x_1, \ldots, x_n)$ is the unknown function,

* $F = (f_1, \ldots, f_n)$ is a given function on an open subset $\Omega \subset \mathbb{R}^n$, with the assumptions

$(H_1)$ $F(0) = 0$,

$(H_2)$ the spectrum $\sigma(DF(x))$ is contained in the set $\{z : \operatorname{Re} z < 0\}$ for every $x \neq 0$ in a neighborhood of 0 for which $DF(x)$ exists,

$(H_3)$ $F$ is $\gamma$-Lipschitz continuous.

Consider $x_0 \in \mathbb{R}^n$ and the solution $x$ of the nonlinear equation starting at $x_0$. With each linear map $A \in L(\mathbb{R}^n)$, we associate the solution $y$ of the problem

$\dfrac{dy}{dt} = A\,y(t)$, $y(0) = y_0$,

and we try to minimize the functional

$G(A) = \displaystyle\int_0^\infty \|F(y(t)) - A\,y(t)\|^2\,dt$   (2.2)

along a solution y. We obtain

$\tilde{A} = \left(\displaystyle\int_0^\infty F(x(t))\,[x(t)]^T\,dt\right)\left(\displaystyle\int_0^\infty x(t)\,[x(t)]^T\,dt\right)^{-1}$.   (2.3)
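For the reader's convenience, here is the least-squares computation behind (2.3), written out under the assumption that all integrals involved converge. Differentiating $G$ at $A$ in the direction of an arbitrary matrix $H$ and setting the derivative to zero gives

$0 = \dfrac{d}{d\varepsilon}\Big|_{\varepsilon=0} G(A + \varepsilon H) = -2\displaystyle\int_0^\infty \big\langle F(y(t)) - A\,y(t),\; H\,y(t)\big\rangle\,dt$ for all $H$,

which is equivalent to the normal equation

$\displaystyle\int_0^\infty F(y(t))\,[y(t)]^T\,dt = A \displaystyle\int_0^\infty y(t)\,[y(t)]^T\,dt$,

and hence, whenever $\int_0^\infty y(t)\,[y(t)]^T\,dt$ is invertible, to the explicit formula (2.3) (with $y$ in place of $x$).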

Precisely, the procedure is defined by the following scheme. Given $x_0$, we choose a first linear map: for example, if $F$ is differentiable at $x_0$, we can take $A_0 = DF(x_0)$, or the value of the derivative at a point in the vicinity of $x_0$; this is always possible if $F$ is locally Lipschitz. If $A_0$ is an asymptotically stable map, then the solution starting from $x_0$ of the problem

$\dfrac{dy}{dt} = A_0\,y(t)$, $y(0) = x_0$,

tends to 0 exponentially. We can evaluate $G(A)$ using (2.2), and we minimize $G$ over all matrices $A$. If $F$ is linear, then the minimum is reached for the value $A = F$ (and we have $A_0 = F$). In general, we can always minimize $G$, and the matrix which gives the minimum is unique. We call this matrix $A_1$, replace $A_0$ by $A_1$, replace $y$ by the solution of the linearized equation associated with $A_1$, and continue. The optimal derivative $\tilde{A}$ is the limit of the sequence built in this way, and it is given by (2.3) (for details see [2, 5-7]).
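The scheme is easy to prototype numerically. The following sketch is only an illustration of the iteration described above, not the authors' code: it truncates the improper integrals at a finite horizon T, replaces them by a crude Riemann sum on a uniform time grid, and uses scipy.linalg.expm for the matrix exponential; the name optimal_derivative, the horizon, and the tolerances are our own choices.

    import numpy as np
    from scipy.linalg import expm

    def optimal_derivative(F, A0, x0, T=60.0, n_steps=6000, eps=1e-10, max_iter=200):
        """Iterate A_j -> A_{j+1} via (2.3) along y(t) = exp(A_j t) x0 until the sequence settles."""
        ts = np.linspace(0.0, T, n_steps)
        dt = ts[1] - ts[0]
        A = np.atleast_2d(np.asarray(A0, dtype=float))
        x0 = np.atleast_1d(np.asarray(x0, dtype=float))
        for _ in range(max_iter):
            Y = [expm(A * t) @ x0 for t in ts]            # trajectory of dy/dt = A y, y(0) = x0
            num = sum(np.outer(F(y), y) for y in Y) * dt  # approximates int_0^inf F(y(t)) y(t)^T dt
            den = sum(np.outer(y, y) for y in Y) * dt     # approximates int_0^inf y(t) y(t)^T dt
            A_next = num @ np.linalg.inv(den)             # minimizer of G along this trajectory
            if np.linalg.norm(A_next - A) < eps:
                return A_next
            A = A_next
        return A

When the iterates are asymptotically stable (as in the situations considered below), the trajectories decay exponentially, so a moderate horizon T already approximates the integrals over $(0, \infty)$ reasonably well.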

2.2. Properties of the Procedure

We will now consider situations where the procedure converges.

Influence of the choice of the initial condition

Note that if we change x(t) to z, then the relation (2.3) can be written as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is the curvilinear integral along the orbit $\gamma(x_0) = \{e^{Bt}x_0 : t \ge 0\}$ of $x_0$. We obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

It is clear that the optimal derivative depends on the initial condition $x_0$.

Case when F is linear

If $F$ is linear with $\sigma(F)$ contained in the left half of the complex plane, then the procedure gives $F$ at the first iteration. Indeed, in this case, (2.3) reads

$A\,\Gamma(x) = F\,\Gamma(x)$

and it is clear that $A = F$ is a solution. It is unique if $\Gamma(x)$ is invertible. Therefore, the optimal approximation of a linear system is the system itself.

Case when F is the sum of a linear and nonlinear term

Consider the more general system of nonlinear equations with a nonlinearity of the form

$F(x) = M x + \tilde{F}(x)$, $x(0) = x_0$,

where $M$ is linear. The computation of the matrix $A_1$ gives

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Hence, $A_1 = M + \tilde{M}_1$ with

$\tilde{M}_1 = \left(\displaystyle\int_0^\infty \tilde{F}(x(t))\,[x(t)]^T\,dt\right)[\Gamma(x)]^{-1}$.

Then, for all $j$ we have $A_j = M + \tilde{M}_j$ with

$\tilde{M}_j = \left(\displaystyle\int_0^\infty \tilde{F}(x_j(t))\,[x_j(t)]^T\,dt\right)[\Gamma(x_j)]^{-1}$.

If, in particular, some components of $F$ are linear, then the corresponding components of $\tilde{F}$ are zero, and the corresponding components of $A_j$ are those of $F$: if $f_k$ is linear, then the $k$th row of the matrix $A_j$ is equal to $f_k$.
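As a small illustrative instance (our own choice, not taken from the paper), consider

$F(x) = \begin{pmatrix} -x_1 + x_2^2 \\ -2 x_2 \end{pmatrix} = M x + \tilde{F}(x)$, with $M = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix}$ and $\tilde{F}(x) = \begin{pmatrix} x_2^2 \\ 0 \end{pmatrix}$.

Since the second component $f_2(x) = -2 x_2$ is linear, the second row of $\tilde{F}$ vanishes, so the second row of every $A_j$, and hence of the optimal derivative, is $(0, -2)$; only the first row is affected by the nonlinearity. The same toy system is reused in the numerical illustration after Section 4.1.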

3. Relationship between the Optimal Derivative and the Classical Linearization in Zero

3.1. Scalar Case

Expression

Consider the scalar differential problem

$\dfrac{dx}{dt} = f(x)$, $x(0) = x_0$   (3.1)

with $f : \mathbb{R} \to \mathbb{R}$ and under the assumptions

$(h_1)$ $f(0) = 0$,

$(h_2)$ $f'(x) < 0$ at every point where $f'$ exists in an interval $(-\alpha, \alpha)$ with $\alpha > 0$,

$(h_3)$ $f$ is absolutely continuous with respect to the Lebesgue measure.

The calculation is done in a way similar to that of the vectorial case. We start with the calculation of $a_0 = f'(x_0)$, then we calculate $a_1$ by solving the problem

$\dfrac{dx}{dt} = a_0\,x$, $x(0) = x_0$.

Replacing $F$ by $f$ in (2.3), we have

$a_1 = \dfrac{\int_0^\infty f(x(t))\,x(t)\,dt}{\int_0^\infty x^2(t)\,dt}$,

and by substituting $x(t) = x_0\,e^{a_0 t}$, we obtain

$a_1 = \dfrac{2}{x_0^2}\displaystyle\int_0^{x_0} f(z)\,dz$.
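The substitution can be carried out explicitly (assume, say, $x_0 > 0$; the case $x_0 < 0$ is analogous). With $z = x_0 e^{a_0 t}$, so that $dz = a_0 z\,dt$ and $a_0 < 0$,

$\displaystyle\int_0^\infty f(x(t))\,x(t)\,dt = \dfrac{1}{a_0}\displaystyle\int_{x_0}^{0} f(z)\,dz = -\dfrac{1}{a_0}\displaystyle\int_0^{x_0} f(z)\,dz$ and $\displaystyle\int_0^\infty x^2(t)\,dt = x_0^2 \displaystyle\int_0^\infty e^{2 a_0 t}\,dt = -\dfrac{x_0^2}{2 a_0}$,

so the factor $-1/a_0$ cancels in the quotient.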

Note that $a_1$ does not depend on $a_0$, and consequently the procedure for the optimal derivative converges at the first step, namely

$\tilde{a}(x_0) = \dfrac{2}{x_0^2}\displaystyle\int_0^{x_0} f(z)\,dz$.   (3.2)

We remind the reader that it has been shown [8] that $\tilde{a}(x_0)$ is a Lyapunov function for the nonlinear problem (3.1). The scalar case is very interesting in the sense that we can write the optimal derivative as a function of the classical linearization of $f$ at 0 when $f'(0)$ exists; moreover, it is possible for $\tilde{a}(x_0)$ to have a limit as $x_0 \to 0$ even though the derivative of $f$ at 0 does not exist. The importance of the result lies in the possibility of using $\tilde{a}(x_0)$ to describe the behavior of the solution and to study stability in the vicinity of 0 when the derivative at this point does not exist.

Case when the derivative of f in 0 exists

If $f$ is continuous and the derivative of $f$ at 0 exists, then it is known [2] that $\tilde{a}(x_0)$ can be written as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and that $\lim_{x_0 \to 0} \tilde{a}(x_0) = f'(0)$. This relation shows that the two quantities $\tilde{a}(x_0)$ and $f'(0)$ are almost equal, and are equal in the limit as $x_0$ tends to 0.
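As a quick numerical illustration (the function below is our own toy choice, not from the paper), take $f(x) = -x + x^3$, so that $f'(0) = -1$, and evaluate $\tilde{a}(x_0)$ from (3.2) for shrinking $x_0$:

    from scipy.integrate import quad

    def f(z):
        return -z + z**3                  # f(0) = 0, f'(0) = -1

    def a_tilde(x0):
        integral, _ = quad(f, 0.0, x0)    # int_0^{x0} f(z) dz
        return 2.0 * integral / x0**2     # formula (3.2)

    for x0 in (1e-1, 1e-2, 1e-3):
        print(x0, a_tilde(x0))            # tends to f'(0) = -1 as x0 shrinks

Here $\tilde{a}(x_0) = -1 + x_0^2/2$ exactly, so the two quantities indeed merge as $x_0 \to 0$.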

Case when f is analytic in 0

Now assume that $f$ is analytic at 0, i.e.,

$f(x) = \displaystyle\sum_{n=1}^{\infty} \dfrac{f^{(n)}(0)}{n!}\,x^n$.   (3.3)

Then it is possible to give an expansion of $\tilde{a}(x_0)$ similar to the Taylor expansion of $f$ in the neighborhood of 0. For this, we use the relation (3.2) and replace $f(z)$ by the expression given by relation (3.3), so that

$\tilde{a}(x_0) = \displaystyle\sum_{n=1}^{\infty} \dfrac{2\,f^{(n)}(0)}{(n+1)!}\,x_0^{\,n-1}$,

where this formula holds in the interval of convergence of the Taylor series at 0. Generally, if $f$ is of class $C^k$ with $k \in \mathbb{N}$ in the vicinity of 0 and $f(0) = 0$, then $\tilde{a}$ is of class $C^{k-1}$ in this vicinity, and we obtain

$\tilde{a}^{(j)}(0) = \dfrac{2}{(j+1)!}\,x_0^{\,j-1}\,f^{(j)}(0)$, $\quad 0 \le j \le k-1$.

Case when f is not regular in 0

We now consider the nonregular case, more particularly the case where $f$ is nondifferentiable only at 0. Writing $f(z)$ in the form

$f(z) = -z\,g(z)$,

the relation (3.2) becomes

$\tilde{a}(x_0) = -\dfrac{2}{x_0^2}\displaystyle\int_0^{x_0} z\,g(z)\,dz$.   (3.4)

The chosen function

$g_r(z) = p(|\ln z|^r)$,

where $p$ is a bounded nonnegative periodic function of period 1 with $\bar{p} = \int_0^1 p(z)\,dz > 0$, is nondifferentiable at 0. For $r = 1$ and $0 < x_0 < 1$, the relation (3.4) can be written as

$\tilde{a}(x_0) = -2\displaystyle\int_0^1 z\,p(-\ln x_0 - \ln z)\,dz$.

For all $\alpha \in (0, 1)$, we have

$\tilde{a}(\alpha x_0) = -2\displaystyle\int_0^1 z\,p(-\ln\alpha - \ln x_0 - \ln z)\,dz$.

So in particular, if $\ln\alpha = -1$, i.e., $\alpha = e^{-1}$, then $\tilde{a}(x_0/e) = \tilde{a}(x_0)$. In this case, $\tilde{a}(x_0)$ does not have a limit as $x_0 \to 0^+$. In the case $r > 1$, we obtain

$\tilde{a}_r(x_0) = -2\displaystyle\int_0^1 z\,p\big((-\ln x_0 - \ln z)^r\big)\,dz$.

Let us now consider the relation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.5)

where $g_r(u) = p(|\ln u|^r)$. Note that $g_r$ is nondifferentiable at 0. In this case, we will show that the optimal derivative (3.5) can exist even if the derivative of the function $g_r$ at 0 does not exist. Then

$\tilde{a}_r(x_0) \to -\bar{p}$ as $x_0 \to 0$ for every $r > 1$.

For more details, see the proof given in the appendix.
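To make the contrast with the $r = 1$ case concrete, here is a small numerical experiment (entirely our own choice of data, not from the paper): take $p(v) = 1 + \frac{1}{2}\sin(2\pi v)$, a positive 1-periodic function with mean $\bar{p} = 1$, choose $r = 2$, and evaluate $\tilde{a}_r(x_0) = -2\int_0^1 z\,p\big((-\ln x_0 - \ln z)^r\big)\,dz$ on a fine midpoint grid:

    import numpy as np

    def p(v):
        return 1.0 + 0.5 * np.sin(2.0 * np.pi * v)   # 1-periodic, mean pbar = 1

    def a_r(x0, r=2.0, n=400_000):
        z = (np.arange(n) + 0.5) / n                 # midpoint rule on (0, 1)
        v = (-np.log(x0) - np.log(z)) ** r
        return -2.0 * np.mean(z * p(v))

    for x0 in (1e-2, 1e-4, 1e-8):
        print(x0, a_r(x0))                           # drifts toward -pbar = -1 as x0 -> 0

For $r = 1$ the same experiment exhibits the periodic dependence on $\ln x_0$ described above instead of a limit.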

3.2. Vectorial Case

Let us suppose that the sequence $A_j$ given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

converges to the optimal matrix and that the derivative $DF(0)$ of $F$ at 0 exists. In this case, we can write

$F(x) = DF(0)\,x + o(|x|)$.   (3.6)

Substituting the relation (3.6) into (2.3) and using the properties of the optimal derivative from [2, 5], we find

$\tilde{A} = DF(0) + \left(\displaystyle\int_0^\infty o(|x(t)|)\,[x(t)]^T\,dt\right)\left(\displaystyle\int_0^\infty x(t)\,[x(t)]^T\,dt\right)^{-1}$,

where

$\left(\displaystyle\int_0^\infty o(|x(t)|)\,[x(t)]^T\,dt\right)\left(\displaystyle\int_0^\infty x(t)\,[x(t)]^T\,dt\right)^{-1} = o(1)$,

i.e., a quantity which tends to 0 as $x_0 \to 0$, under the assumption that $|x(t)|$ remains of the order of $x_0$. In other words, $\tilde{A} = DF(0) + o(1)$ as $x_0 \to 0$.

4. Application

The precision of the optimal derivative is expressed in terms of the norm of the initial condition $x_0$ [8] and is given by

$\|x(t) - \tilde{x}(t)\| < O\big(\|x_0\|^2\big)$.

The goal is to determine for which initial conditions this precision is maintained. As long as $\|x_0\|$ is large in a certain sense, the approximation should be good; it becomes more delicate when approaching 0. Indeed, it turns out that approaching 0 reverses the quadratic-error comparison in favor of the classical linearization. This shows that the classical linearization, when it exists, is better near the origin. Let us present examples emphasizing this theoretical aspect, namely the influence of the choice of the initial conditions on the quality of the approximation.

4.1. Computational Procedure

First of all, let us briefly recall the iterative procedure allowing the calculation of the optimal derivative. To start the calculation, the point $x_0$ is selected arbitrarily near the origin. The differential equations have been solved using the fourth-order Runge-Kutta method [13, 16]; a minimal sketch of such an integrator is given below.
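The following fourth-order Runge-Kutta step is a textbook scheme, not the authors' implementation, and is shown only so that the later error computation is self-contained:

    import numpy as np

    def rk4(F, x0, T, n_steps):
        """Integrate dx/dt = F(x), x(0) = x0, on [0, T] with the classical RK4 scheme."""
        h = T / n_steps
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            x = xs[-1]
            k1 = F(x)
            k2 = F(x + 0.5 * h * k1)
            k3 = F(x + 0.5 * h * k2)
            k4 = F(x + h * k3)
            xs.append(x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
        return np.array(xs)                           # shape (n_steps + 1, dim)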

* Input $x_0$ and $A_0$.

* Level (I): Computation of $A_1$ in terms of $A_0$:

$A_1 = \left(\displaystyle\int_0^\infty F(y_0(t))\,[y_0(t)]^T\,dt\right)\left(\displaystyle\int_0^\infty y_0(t)\,[y_0(t)]^T\,dt\right)^{-1}$, where $y_0(t)$ solves $dy/dt = A_0\,y$, $y(0) = x_0$.

* Level (II): Computation of $A_j$ in terms of $A_{j-1}$:

$A_j = \left(\displaystyle\int_0^\infty F(y_{j-1}(t))\,[y_{j-1}(t)]^T\,dt\right)\left(\displaystyle\int_0^\infty y_{j-1}(t)\,[y_{j-1}(t)]^T\,dt\right)^{-1}$, where $y_{j-1}(t)$ solves $dy/dt = A_{j-1}\,y$, $y(0) = x_0$.

* Level (III): Computation of

$\|A_j - A_{j-1}\|$.

* Level (IV): If

$\|A_j - A_{j-1}\| < \epsilon$,

where $\epsilon$ is the desired level of approximation, then set $\tilde{A} = A_j$; $\tilde{A}$ is the optimal derivative of $F$ at $x_0$. Otherwise set $A_{j-1} = A_j$ and go to Level (II). A hedged numerical illustration of this loop, on a toy system, is given below.
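As an illustration only (the circuit equations (4.2) are not reproduced here), Levels (I)-(IV) can be run with the optimal_derivative sketch from Section 2.1 on the toy system from Section 2.2; the function F, its derivative DF, the initial condition, and the tolerance below are all hypothetical choices of ours:

    import numpy as np

    def F(v):                                    # toy system; the second component is linear
        x, y = v
        return np.array([-x + y**2, -2.0 * y])

    def DF(v):                                   # Frechet derivative of F at v
        x, y = v
        return np.array([[-1.0, 2.0 * y], [0.0, -2.0]])

    x0 = np.array([0.8, 0.5])
    A0 = DF(x0)                                  # Level (I) seed: derivative at x0
    A_opt = optimal_derivative(F, A0, x0, eps=1e-10)
    print("optimal derivative:\n", A_opt)        # its second row stays (0, -2), cf. Section 2.2
    print("classical linearization:\n", DF(np.zeros(2)))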

4.2. Example

The behavior of the electronic circuit (see [11]) in Figure 1 is represented by two state variables (the voltage drop $V_{c1}$ across the first capacitor and the voltage drop $V_{c2}$ across the second capacitor). The nonlinearity is due to the use of a nonlinear diode.

[FIGURE 1 OMITTED]

When a voltage $V_c$ is applied to the diode in the forward direction, the model of the diode is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

With the parameters

$R = 33 \cdot 10^2\,\Omega$, $C_1 = 220 \cdot 10^{-4}\,\mathrm{F}$, $C_2 = 350 \cdot 10^{-4}\,\mathrm{F}$, $a = 10^{-4}$, $b = 10^{-5}$, $d = 10^{-6}$,

and using Kirchhoff's laws for the nodes and the meshes of the circuit, we obtain the equations

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.1)

Setting

$x = V_{c1}$ and $y = V_{c2}$,

the system (4.1) can be rewritten as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

By replacing the parameters with their values, the system becomes

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.2)

Classical linearization

The classical linearization at the equilibrium point $(0, 0)$ is obtained by calculating the Fréchet derivative of the nonlinear function of the system (4.2),

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Optimal derivative

The optimal derivative is obtained by applying the algorithm described in Section 4.1. For the quadratic error, we use the relation

$E_Q = \displaystyle\sum_{i=1}^{n} \|x_i(t) - \tilde{x}_i(t)\|^2$,

where

* $x(t)$ represents a solution of the nonlinear system,

* $\tilde{x}(t)$ represents a solution of the system defined by the optimal derivative (a short numerical sketch of this error computation is given after this list).
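One plausible way to evaluate this error numerically (again only a sketch, reading $E_Q$ as the componentwise sum of squared differences at each time step and $E_{Q\max}$ as its maximum along the trajectory, and reusing the rk4 sketch from Section 4.1) is:

    import numpy as np
    from scipy.linalg import expm

    def max_quadratic_error(F, A, x0, T=60.0, n_steps=6000):
        """E_Qmax: the largest value over the time grid of sum_i (x_i(t) - xtilde_i(t))^2."""
        ts = np.linspace(0.0, T, n_steps + 1)
        X = rk4(F, x0, T, n_steps)                        # nonlinear trajectory
        X_lin = np.array([expm(A * t) @ x0 for t in ts])  # solution of dx/dt = A x, x(0) = x0
        e = np.sum((X - X_lin) ** 2, axis=1)              # E_Q at each grid time
        return e.max()

    # comparison reported in the table below:
    # eq_od = max_quadratic_error(F, A_opt, x0)           # optimal derivative
    # eq_cl = max_quadratic_error(F, DF(np.zeros(2)), x0) # classical linearization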

Results of the method

We study the system using several initial conditions. The results obtained are exhibited in the following table, where $E_{Q\max}$ (O.D.) and $E_{Q\max}$ (C.L.) represent the maximum quadratic errors for the optimal derivative and the classical linearization, respectively. In the left column the initial conditions $(x_0, y_0)$ are given. The second column represents the optimal derivative $\tilde{A}$.

$(x_0, y_0)$        $\tilde{A}$            $E_{Q\max}$ (O.D.)   $E_{Q\max}$ (C.L.)
(8e-01, 5e-01)      [not reproducible]     2.1302e-04           3.5140e-04
(8e-02, 5e-01)      [not reproducible]     7.5438e-06           1.0367e-05
(8e-02, 5e-02)      [not reproducible]     7.4729e-09           2.2644e-08
(8e-03, 5e-02)      [not reproducible]     8.5925e-10           1.0691e-09
(8e-03, 5e-03)      [not reproducible]     7.0425e-13           2.2132e-12
(8e-04, 5e-03)      [not reproducible]     9.0836e-14           1.0969e-13
(8e-04, 5e-04)      [not reproducible]     2.2657e-17           1.3572e-16
(8e-05, 5e-05)      [not reproducible]     3.249e-21            3.481e-21

The curve

$E_{Q\max} = h(\|x_0\|)$

in Figure 2 is obtained from a polynomial smoothing using the Origin software. The determination of the value of $x_0$ at which the error curve changes behavior is performed using the Matlab software.

In Figure 3, a zoom is shown of the region where the quality of the approximation is reversed in favor of the classical linearization.

[FIGURE 3 OMITTED]

[FIGURE 2 OMITTED]

5. Analysis of Results

The representation of the maximum quadratic error with respect to $\|x_0\|$ for the classical linearization and the optimal derivative enables us to divide the curve into two distinct parts:

* The first part, where the maximum quadratic error due to the classical linearization is lower than that due to the optimal derivative, on the interval $\|x_0\| < 0.43$. In this case the classical linearization gives a better approximation than the optimal derivative.

* The second part, where the maximum quadratic error due to the classical linearization becomes clearly higher than that due to the optimal derivative, on the interval $\|x_0\| > 0.43$. Here it is the optimal derivative which is better. Namely, for a given initial condition $x_0$, the approximation by the optimal derivative is better in a vicinity of the initial condition, while the classical linearization is better in the vicinity of the origin. These two aspects reflect the fact that the linearization by the Fréchet derivative (when it exists and is hyperbolic) is the best approximation in the vicinity of the origin.

6. Appendix

By making the change of variable $v = |\ln u|^r$, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]


We start by replacing $x_0$ by a particular sequence of reals tending to 0, the sequence [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], with $k \in \mathbb{N}$, $k \to \infty$. For that, we note that if $x_0$ is rather small, then there is a unique $k$ such that

$k < |\ln x_0|^r < k + 1$.

After some calculations, we find that there is a constant $C$ independent of $x_0$ such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

From this, we deduce in particular that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6.1)

We will now calculate the limit of the ratio, as $x_0 \to 0$, of

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Because of (6.1), the ratio of the terms other than the integrals on the right-hand side of the equality tends to 1. We thus have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Suppose that $0 < m \le p(v) \le M < \infty$. Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6.2)

The integrals on the right-hand side of (6.2) can be calculated using

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6.3)

By (6.2) and (6.3), we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

since for $q(x) = x^{1/r}$ we have that for each $k \in \mathbb{N}$ there exists $\xi(k) \in (k, k+1)$ such that

$(k+1)^{1/r} - k^{1/r} = \dfrac{q(k+1) - q(k)}{(k+1) - k} = q'(\xi(k)) = \dfrac{1}{r}\,\xi(k)^{1/r - 1} \to 0$, $\quad k \to \infty$,

due to r > 1. Thus

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Hence $\tilde{a}_r$ evaluated along this particular sequence and $\tilde{a}_r(x_0)$ have the same limit as $x_0 \to 0$. This leads us to the study of the behavior of

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

By using the Fourier series of the function $p(v) = \bar{p} + \tilde{p}(v)$, where $\bar{p} = \int_0^1 p(v)\,dv$ denotes the nonzero average value of $p(v)$, we find

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

According to the relation (6.3),

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and thus

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6.4)

The second term on the right-hand side of (6.4) can be written as $-2B_k/r$, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

By making the change of variable v = k + w, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and with the change w = kz, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6.5)

For the study of $B_k$, we break up the integral in (6.5) into a sum of two integrals as $B_k = B_k^{(1)} + B_k^{(2)}$, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We first study $B_k^{(2)}$ and show that it tends to 0. By using the inequality

$1 - (1+z)^{1/r} \le -C\,(1+z)^{1/r}$ for $z > \epsilon > 0$, $C > 0$, $C = C(\epsilon)$,

and by changing the variable $v = k^{1/r}\,C\,(1+z)^{1/r}$, we find

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

It remains to evaluate $B_k^{(1)}$. We choose an antiderivative of $\tilde{p}$, obtained by formally integrating the Fourier series of $\tilde{p}$ (whose average is zero). Integration of $\tilde{p}(kz)$ gives

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

which tends to 0 as $k \to \infty$ provided

$\dfrac{2}{r} - 1 < 0$, i.e., for $r > 2$,

since

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For the case $r < 2$, we iterate the preceding calculation. We can define successive antiderivatives of $\tilde{p}$, which we denote by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

obtained by formally integrating the Fourier series of $\tilde{p}$ (whose average is zero). Successive integration of $\tilde{p}(kz)$ gives

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We stop as soon as we have $r > n/(n-1)$. Since $r > 1$, we have the convergence to 0 of the term

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Finally,

$\tilde{a}_r(x_0) \to -\bar{p}$ as $x_0 \to 0$, for every $r > 1$.

For more details concerning this proof, we refer to [3].

Received February 1, 2007; Accepted May 20, 2007

References

[1] F. Belkhouche. Contribution a l'etude de la stabilite asymptotique par la derivee optimale. Master's thesis, Universite Tlemcen, Algerie, 2001.

[2] Tayeb Benouaz. Least square approximation of a nonlinear ordinary differential equation: the scalar case. In Proceedings of the fourth international colloquium on numerical analysis, pages 19-22, Plovdiv, Bulgaria, 1995.

[3] Tayeb Benouaz. Contribution a l'approximation et la synthese de la stabilite d'une equation differentielle non lineaire. PhD thesis, Universite de Pau (France) and Universite Tlemcen (Algerie), 1996.

[4] Tayeb Benouaz. Optimal derivative of a nonlinear ordinary differential equation. In Equadiff 99, international conference on differential equations, volume 2, pages 1404-1407. World Scientific Publishing Co. Pte. Ltd., 2000.

[5] Tayeb Benouaz and Ovide Arino. Determination of the stability of a non-linear ordinary differential equation by least square approximation. Computational procedure. Appl. Math. Comput. Sci., 5(1):33-48, 1995.

[6] Tayeb Benouaz and Ovide Arino. Existence, unicite et convergence de l'approximation au sens des moindres carres d'une equation differentielle ordinaire non-lineaire. Number 94/19. Universite de Pau, CNRS URA 1204, 1995.

[7] Tayeb Benouaz and Ovide Arino. Relation entre l'approximation optimale et la stabilite asymptotique. Number 95/10. Publications de l'U.A., CNRS 1204, 1995.

[8] Tayeb Benouaz and Ovide Arino. Least square approximation of a nonlinear ordinary differential equation. Comput. Math. Appl., 31(8):69-84, 1996.

[9] Tayeb Benouaz and Ovide Arino. Optimal approximation of the initial value problem. Comput. Math. Appl., 36(1):21-32, 1998.

[10] Tayeb Benouaz and F. Bendahmane. Least-square approximation of a nonlinear O.D.E. with excitation. Comput. Math. Appl., 47(2-3):473-489, 2004.

[11] N. Brahmi. Relation entre la derivee optimale et la linearisation classique. Master's thesis, Universite Tlemcen, Algerie, 2002.

[12] A. Chikhaoui. Contribution a l'etude de la stabilite des systemes non lineaires. Master's thesis, Universite Tlemcen, Algerie, 2000.

[13] Earl A. Coddington and Norman Levinson. Theory of ordinary differential equations. McGraw-Hill Book Company, Inc., New York-Toronto-London, 1955.

[14] Jack K. Hale. Ordinary differential equations. Wiley-Interscience [John Wiley & Sons], New York, 1969. Pure and Applied Mathematics, Vol. XXI.

[15] R. E. Kalman and J. E. Bertram. Control system analysis and design via the "second method" of Lyapunov. I. Continuous-time systems. Trans. ASME Ser. D. J. Basic Engrg., 82:371-393, 1960.

[16] Anthony Ralston and Herbert S. Wilf. Mathematical methods for digital computers, pages 110-120. John Wiley & Sons Inc., New York, 1960.

[17] N. Rouche and Jean Mawhin. Equations differentielles ordinaires. Masson et Cie, Editeurs, Paris, 1973. Tome I: Theorie generale.

(1) This work is supported by the A.P. C.M.E.P. French-Algerian research program, code 99 MDU 453, and the C.N.E.P.R.U. research program, code D1301/03/2003, in collaboration with O. Arino (U.R. GEODES, I.R.D. Ile de France-Bondy, France). O. Arino passed away in September 2003. This work is dedicated to his memory.

Tayeb Benouaz Laboratoire de Modelisation, B.P. 119, Universite Tlemcen, Tlemcen, 13000, Algerie E-mail: t_benouaz@mail.univ-tlemcen.dz

Martin Bohner Department of Mathematics and Statistics, University of Missouri-Rolla, Rolla, MO 65409-0020, USA E-mail: bohner@umr.edu