
Long Time Behavior for a System of Differential Equations with Non-Lipschitzian Nonlinearities.

1. Introduction

Of concern is the following system:

[mathematical expression not reproducible], (1)

with continuous initial data x_j(t) = x_{0j}(t), t ∈ (-∞, 0], coefficients a_i(t) ≥ 0, and inputs c_i(t), i = 1, ..., m. The functions f_ij and k_ij are nonlinear continuous functions. This is a general nonlinear version of several systems that arise in many applications (see [1-9] and Section 4 below).

The literature is very rich in works on the asymptotic behavior of solutions for special cases of system (1) (see, for instance, [10-19]). Here the integral terms represent a kind of distributed delay, but discrete delays may be recovered as well by taking Dirac delta distributions as kernels. Various sufficient conditions on the coefficients, the functions, and the kernels have been established that ensure convergence to equilibrium or (uniform, global, asymptotic) stability. In applications it is important to have global asymptotic stability at a very rapid rate, such as an exponential rate.
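To make the remark about recovering discrete delays concrete, here is a small illustrative Python sketch (the kernel, trajectory, and step sizes are our own choices, not from the paper): a distributed-delay term whose kernel concentrates its mass near a lag tau is numerically close to the corresponding discrete-delay term.

```python
import math

# Distributed delay: integral_0^horizon l(s) x(t - s) ds, approximated by a
# left-endpoint Riemann sum. As the kernel l narrows around s = tau (tending
# to a Dirac delta), this approaches the discrete delay x(t - tau).
def distributed_term(x, l, t, dt=1e-4, horizon=3.0):
    n = int(horizon / dt)
    return sum(l(i * dt) * x(t - i * dt) * dt for i in range(n))

tau, var = 1.0, 1e-4  # narrow Gaussian kernel centered at the lag tau
kernel = lambda s: math.exp(-(s - tau) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
history = lambda t: math.sin(t)  # sample trajectory

approx = distributed_term(history, kernel, t=3.0)
discrete = history(3.0 - tau)  # the discrete-delay value x(t - tau)
```

Shrinking `var` further drives `approx` toward `discrete`, which is the delta-kernel limit mentioned above.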

Roughly speaking, it has been assumed that the coefficients a_i(t) must dominate the coefficients of some "bad" similar terms that appear in the estimates. For the nonlinearities (activation functions), the early assumptions of boundedness, monotonicity, and differentiability have all been weakened to a Lipschitz condition. According to [8, 20] and other references, even this condition needs to be weakened further. Unfortunately, we can find only a few papers on continuous but not Lipschitz continuous activation functions. Assumptions such as partially Lipschitz with linear growth, α-inverse Hölder continuous (or inverse Lipschitz), and non-Lipschitz but bounded have been used (see [16, 21, 22]).

For Hölder continuous activation functions we refer the reader to [23], where exponential stability was proved under some boundedness and monotonicity conditions on the activation functions, with the coefficients forming a Lyapunov diagonally stable matrix (see also [24, 25] for other results without these conditions).

There are, however, a good number of papers dealing with discontinuous activation functions under certain stronger conditions, such as the M-matrix condition, the LMI (linear matrix inequality) condition, and extra conditions on the matrices together with growth conditions on the activation functions (see [20, 26-37]). Global asymptotic stability of periodic solutions has been investigated, for instance, in [38, 39].

Here we assume that the functions f_ij and k_ij are (or are bounded by) continuous monotone nondecreasing functions that are not necessarily Lipschitz continuous and may be unbounded (like power functions with powers greater than one). We prove that, for sufficiently small initial data, solutions decay to zero exponentially.

Local existence and global existence are standard; see the Gronwall-type Lemma 1 below and the estimate in our theorem. The uniqueness of the equilibrium is not an issue here (even in the case of constant coefficients), as we are concerned with convergence to zero rather than stability of an equilibrium.

After the Preliminaries section, where we present our main hypotheses and the main lemma used in our proof, we state and prove the convergence result in Section 3. That section ends with some corollaries and important remarks. In the last section we give an application, as this type of system (or special cases of it) appears in real-world problems.

2. Preliminaries

Our first hypothesis (H1) is

[mathematical expression not reproducible], (2)

where b_ij are nonnegative continuous functions, l_ij are nonnegative continuously differentiable functions, ψ_ij are nonnegative nondecreasing continuous functions, and α_ij, β_ij ≥ 0, i, j = 1, ..., m. The interesting cases are those where the α_ij and β_ij are all nonzero.

Let I ⊂ R, and let g_1, g_2 : I → R \ {0}. We write g_1 ∝ g_2 if g_2/g_1 is nondecreasing in I. This ordering, as well as the monotonicity condition, may be dropped, as mentioned in Remark 8 below.
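For instance, for the power functions relevant in this paper, g_1(u) = u² and g_2(u) = u³ satisfy g_1 ∝ g_2 on (0, ∞), since g_2/g_1 = u is nondecreasing there. A quick numerical check of this ordering (our own illustration, not from the paper):

```python
# The ordering g1 ∝ g2 means g2/g1 is nondecreasing on the interval.
# For g1(u) = u**2 and g2(u) = u**3 the ratio is u, so the ordering holds.
g1 = lambda u: u ** 2
g2 = lambda u: u ** 3
grid = [0.1 * k for k in range(1, 100)]  # points in (0, 10)
ratios = [g2(u) / g1(u) for u in grid]
is_nondecreasing = all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))
```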

Lemma 1 (see [40]). Let a(t) be a positive continuous function in J := [α, β); let k_j(t), j = 1, ..., n, be nonnegative continuous functions for α ≤ t < β; let g_j(u), j = 1, ..., n, be nondecreasing continuous functions in R_+, with g_j(u) > 0 for u > 0; and let u(t) be a nonnegative continuous function in J. If g_1 ∝ g_2 ∝ ... ∝ g_n in (0, ∞), then the inequality

u(t) ≤ a(t) + Σ_{j=1}^n ∫_α^t k_j(s) g_j(u(s)) ds, t ∈ J, (3)

implies that

u(t) ≤ ω_n(t), α ≤ t ≤ β_0, (4)

where ω_0(t) := sup_{α≤s≤t} a(s),

ω_j(t) := G_j^{-1} [G_j(ω_{j-1}(t)) + ∫_α^t k_j(s) ds], G_j(u) := ∫_{u_j}^u ds/g_j(s), u > 0, u_j > 0, j = 1, ..., n, (5)

and β_0 is chosen so that the functions ω_j(t), j = 1, ..., n, are defined for α ≤ t < β_0. In our case we will need the following notation and hypotheses.
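To illustrate how Lemma 1 is used, take the simplest case n = 1 with g_1(u) = u² (a power nonlinearity of the type considered here) and constant a(t) = a_0. Then G_1(u) = -1/u up to a constant, and the bound becomes ω_1(t) = a_0/(1 - a_0 K(t)) with K(t) = ∫_α^t k_1(s) ds, defined while a_0 K(t) < 1; this is exactly the role of β_0. The sketch below (parameters are our own choices) compares this bound with the worst case, where (3) holds with equality and u solves u′ = k(t)u²:

```python
# Lemma 1 for n = 1, g(u) = u**2, a(t) = a0 (constant):
# omega_1(t) = a0 / (1 - a0 * K(t)), K(t) = integral_0^t k(s) ds,
# defined only while a0 * K(t) < 1 -- the role of beta_0.
def omega1(a0, K):
    assert a0 * K < 1.0, "past beta_0: the bound is no longer defined"
    return a0 / (1.0 - a0 * K)

# Worst case: equality in (3) means u' = k(t) * u**2, u(0) = a0.
def worst_case(a0, k, t, steps=200_000):
    u, s, dt = a0, 0.0, t / steps
    for _ in range(steps):  # explicit Euler
        u += dt * k(s) * u * u
        s += dt
    return u

a0, t = 0.2, 1.0
k = lambda s: 0.5            # constant kernel, so K(t) = 0.5 * t
bound = omega1(a0, 0.5 * t)  # = 0.2 / 0.9
u_worst = worst_case(a0, k, t)
```

Here `u_worst` stays below `bound` (up to discretization error), and both blow up as a_0 K(t) → 1, which is why β_0 must be chosen before that happens.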

(H2) Assume that ψ_ij(u) > 0 for u > 0 and that the set of functions [mathematical expression not reproducible] may be ordered as h_1 ∝ h_2 ∝ ... ∝ h_n (after relabelling). Their corresponding coefficients b̃_ij(t) := exp[∫_0^t a(σ) dσ] b_ij(t), where a(t) := min_{1≤i≤m} a_i(t), and l_ij will be renamed λ_k, k = 1, ..., n.

We define x(t) := Σ_{i=1}^m |x_i(t)|, t > 0, and x_0(t) := Σ_{i=1}^m |x_{0i}(t)|, t ≤ 0,

[mathematical expression not reproducible], (6)

where λ̃_j are the relabelled coefficients corresponding to b̃_ij(t) and l_ij(0) + ∫_0^∞ |l′_ij(σ)| dσ.

3. Exponential Convergence

In this section we prove that solutions converge to zero exponentially, provided that the initial data are small enough.

Theorem 2. Assume that hypotheses (H1) and (H2) hold and that ∫_{-∞}^0 l_ij(-σ) ψ_ij(x_0(σ)) dσ < ∞, i, j = 1, ..., m. Then: (a) if l′_ij(t) ≤ 0, i, j = 1, ..., m, there exists β_0 > 0 such that

x(t) ≤ ω_n(t) exp[-∫_0^t a(s) ds], 0 ≤ t ≤ β_0. (7)

(b) If the l′_ij(t), i, j = 1, ..., m, are of arbitrary signs, the |l′_ij(t)| are summable, and the integral term in ω̃_0(t) is convergent, then there exists β_1 > 0 such that the conclusion in (a) is valid on 0 ≤ t < β_1 with ω̃_n instead of ω_n.

Proof. It is easy to see from (1) and assumption (H1) that, for t > 0 and i = 1, ..., m, we have

[mathematical expression not reproducible], (8)

or, for t > 0,

[mathematical expression not reproducible], (9)

where D⁺ denotes the right Dini derivative. Hence

[mathematical expression not reproducible]. (11)

Thus (by a comparison theorem in [41])

[mathematical expression not reproducible], (12)

where

x̃(t) := x(t) exp[∫_0^t a(s) ds]. (13)

Let y(t) denote the right-hand side of (12). Clearly x̃(t) < y(t), t > 0, and for t > 0

[mathematical expression not reproducible](14)

We designate by z_ij(t) the integral term in (14); that is,

z_ij(t) := ∫_{-∞}^t l_ij(t-σ) ψ_ij(x(σ)) dσ, (15)

and z(t) := Σ_{i,j=1}^m z_ij(t). Differentiating z(t) gives

[mathematical expression not reproducible]. (16)

(a) Consider l′_ij(t) ≤ 0, i, j = 1, ..., m.

In this situation (of fading memory) we see from (14) and (16) that, if u(t) := y(t) + z(t), then

[mathematical expression not reproducible], (17)

Therefore

[mathematical expression not reproducible] (18)

where [mathematical expression not reproducible]. Now we can apply Lemma 1 to obtain

x̃(t) ≤ u(t) ≤ ω_n(t), 0 ≤ t ≤ β_0, (19)

with ω_0(t) = u(0) + c(t) and ω_n(t) as in the Preliminaries section.

(b) Consider l′_ij(t), i, j = 1, ..., m, of arbitrary signs. From expressions (14) and (16) we derive that

[mathematical expression not reproducible]. (20)

The derivative of the auxiliary function

ũ(t) = u(t) + Σ_{i,j=1}^m ∫_0^∞ |l′_ij(s)| ∫_{t-s}^t ψ(u(σ)) dσ ds, t ≥ 0 (21)

is equal to (with the help of (20) and (21))

[mathematical expression not reproducible]. (22)

Therefore

[mathematical expression not reproducible]. (23)

with

[mathematical expression not reproducible], (24)

Applying Lemma 1 to (23) we obtain

[mathematical expression not reproducible]. (25)

and hence

[mathematical expression not reproducible], (26)

where [mathematical expression not reproducible] and

[mathematical expression not reproducible], (27)

and β_1 is chosen so that the functions ω̃_j(t), j = 1, ..., n, are defined for 0 ≤ t < β_1.

Corollary 3. If, in addition to the hypotheses of the theorem, we assume that

[mathematical expression not reproducible]. (28)

then we have global existence of solutions.

Corollary 4. If, in addition to the hypotheses of the theorem, we assume that ω_n(t) (resp. ω̃_n(t)) grows at most polynomially (or just more slowly than exp[∫_0^t a(s) ds]), then solutions decay at an exponential rate provided that ∫_0^t a(s) ds → ∞ as t → ∞.

Corollary 5. In addition to the hypotheses of the theorem, assume that l′_ij(t) ≤ L_ij l_ij(t), i, j = 1, ..., m, for some positive constants L_ij, and that the ψ_ij are in the class H (that is, ψ_ij(αu) ≤ ξ_ij(α) ψ_ij(u), α > 0, u > 0, i, j = 1, ..., m). Then solutions are bounded by a function of the form exp[-(∫_0^t a(s) ds - Lt)], where L = max{L_ij, i, j = 1, ..., m}.
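Power functions are typical members of the class H used in Corollary 5: for ψ(u) = u^p one may take ξ(α) = α^p, since ψ(αu) = α^p ψ(u). A small sanity check (the exponent and test points are our own, purely illustrative):

```python
# Class H: psi(alpha * u) <= xi(alpha) * psi(u) for alpha > 0, u > 0.
# For psi(u) = u**p this holds with equality, taking xi(alpha) = alpha**p.
p = 3
psi = lambda u: u ** p
xi = lambda alpha: alpha ** p
checks = [
    psi(alpha * u) <= xi(alpha) * psi(u) * (1 + 1e-12)
    for alpha in (0.5, 1.0, 2.0, 10.0)
    for u in (0.1, 1.0, 4.0)
]
in_class_H = all(checks)
```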

Remark 6. We have assumed that the α_ij and β_ij are greater than one, but the case where they are smaller than one may be treated similarly. When their sum is smaller than one, we have global existence without any extra condition.

Remark 7. The decay rate obtained in Corollary 5 is to be compared with the one in the theorem (case (b)). The estimate in Corollary 5 holds for more general initial data (not as small as the ones in case (b)). However, the decay rate is smaller than the one in (b), and it additionally requires that ∫_0^t a(s) ds - Lt → ∞ as t → ∞.

Remark 8. If we consider the following new functions, then the monotonicity condition and the ordering imposed in the theorem may be dropped:

[mathematical expression not reproducible]. (29)

4. Application

(Artificial) neural networks are built in an attempt to perform different tasks much as the nervous system does. Typically, a neural network consists of several layers (an input layer, hidden layers, and an output layer). Each layer contains one or more cells (neurons) with many connections between them. The cells in one layer receive inputs from the previous layer, apply some transformations, and send the results to the cells of the subsequent layer.

One may encounter neural networks in many fields such as control, pattern matching, settlement of structures, classification of soil, supply chain management, engineering design, market segmentation, product analysis, market development forecasting, signature verification, bond rating, recognition of diseases, robust pattern detection, text mining, price forecast, botanical classification, and scheduling optimization.

Neural networks not only can perform many of the tasks a traditional computer can do but also excel at, for instance, classifying incomplete or noisy data, predicting future events, and generalizing.

The system (1) is a general version of simpler systems that appear in neural network theory [1-9], such as

x′_i(t) = -a_i x_i(t) + Σ_{j=1}^m f_ij(x_j(t)) + c_i(t), (30)

x′_i(t) = -a_i x_i(t) + Σ_{j=1}^m ∫_{-∞}^t l_ij(t-s) f_ij(x_j(s)) ds + c_i(t). (31)

It is by now well established that (for constant coefficients and constant c_i(t)) solutions converge exponentially to the equilibrium. Notice that zero is not an equilibrium in our case. The equilibrium exists and is unique in the case of Lipschitz continuous activation functions. In our case the system is much more general, and the activation functions as well as the nonlinearities are not necessarily Lipschitz continuous. However, in the case of Lipschitz continuity and existence of a unique equilibrium, we expect exponential stability by the standard techniques, at least when we start away from zero. For the system

[mathematical expression not reproducible]. (32)

(where the ψ_ij may be taken as power functions; see also Corollary 5) our theorem gives sufficient conditions guaranteeing the estimate

x(t) ≤ ω_n(t) exp[-∫_0^t a(s) ds], 0 ≤ t < β_0. (33)

Then, Corollaries 3 and 4 provide practical situations where we have global existence and decay to zero at an exponential rate.
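As a hedged numerical sketch of this behavior (all coefficients below are our own choices, not taken from the paper), consider system (30) with m = 2, a_i = 1, c_i = 0, and cubic activations f_ij(u) = w_ij u³, which are continuous, monotone nondecreasing, unbounded, and not globally Lipschitz. For small initial data the linear damping -a_i x_i dominates the weak cubic coupling, and the solution norm decays roughly like e^(-t):

```python
# Explicit-Euler simulation of (30) with illustrative parameters:
# x_i'(t) = -x_i(t) + sum_j w_ij * x_j(t)**3, i = 1, 2 (a_i = 1, c_i = 0).
def simulate(x0, w=((0.05, 0.05), (0.05, 0.05)), T=5.0, steps=50_000):
    m, dt = len(x0), T / steps
    x = list(x0)
    for _ in range(steps):
        x = [x[i] + dt * (-x[i] + sum(w[i][j] * x[j] ** 3 for j in range(m)))
             for i in range(m)]
    return x

x_final = simulate((0.2, -0.1))      # small initial data
norm = sum(abs(v) for v in x_final)  # small, on the order of 1e-3
```

Starting instead from large initial data, the cubic terms dominate and the trajectory grows without bound, which matches the "sufficiently small initial data" restriction in the theorem.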

http://dx.doi.org/10.1155/2014/252674

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is grateful for the financial support and the facilities provided by King Fahd University of Petroleum and Minerals through Grant no. IN111052.

References

[1] J. Cao, K. Yuan, and H.-X. Li, "Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1646-1651, 2006.

[2] B. Crespi, "Storage capacity of non-monotonic neurons," Neural Networks, vol. 12, no. 10, pp. 1377-1389, 1999.

[3] G. de Sandre, M. Forti, P. Nistri, and A. Premoli, "Dynamical analysis of full-range cellular neural networks by exploiting differential variational inequalities," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 54, no. 8, pp. 1736-1749, 2007.

[4] C. Feng and R. Plamondon, "On the stability analysis of delayed neural networks systems," Neural Networks, vol. 14, no. 9, pp. 1181-1188, 2001.

[5] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554-2558, 1982.

[6] J. J. Hopfield and D. W. Tank, "Computing with neural circuits: a model," Science, vol. 233, no. 4764, pp. 625-633, 1986.

[7] J. I. Inoue, "Retrieval phase diagrams of non-monotonic Hopfield networks," Journal of Physics A: Mathematical and General, vol. 29, no. 16, pp. 4815-4826, 1996.

[8] B. Kosko, Neural Network and Fuzzy System--A Dynamical System Approach to Machine Intelligence, Prentice-Hall of India, New Delhi, India, 1991.

[9] H.-F. Yanai and S.-I. Amari, "Auto-associative memory with two-stage dynamics of nonmonotonic neurons," IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 803-815, 1996.

[10] X. Liu and N. Jiang, "Robust stability analysis of generalized neural networks with multiple discrete delays and multiple distributed delays," Neurocomputing, vol. 72, no. 7-9, pp. 1789-1796, 2009.

[11] S. Mohamad, K. Gopalsamy, and H. Akca, "Exponential stability of artificial neural networks with distributed delays and large impulses," Nonlinear Analysis. Real World Applications, vol. 9, no. 3, pp. 872-888, 2008.

[12] J. Park, "On global stability criterion for neural networks with discrete and distributed delays," Chaos, Solitons and Fractals, vol. 30, no. 4, pp. 897-902, 2006.

[13] J. Park, "On global stability criterion of neural networks with continuously distributed delays," Chaos, Solitons and Fractals, vol. 37, no. 2, pp. 444-449, 2008.

[14] Z. Qiang, M. A. Run-Nian, and X. Jin, "Global exponential convergence analysis of Hopfield neural networks with continuously distributed delays," Communications in Theoretical Physics, vol. 39, no. 3, pp. 381-384, 2003.

[15] Y. Wang, W. Xiong, Q. Zhou, B. Xiao, and Y. Yu, "Global exponential stability of cellular neural networks with continuously distributed delays and impulses," Physics Letters A, vol. 350, no. 1-2, pp. 89-95, 2006.

[16] H. Wu, "Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations," Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2297-2306, 2009.

[17] Q. Zhang, X. P. Wei, and J. Xu, "Global exponential stability of Hopfield neural networks with continuously distributed delays," Physics Letters A, vol. 315, no. 6, pp. 431-436, 2003.

[18] H. Zhao, "Global asymptotic stability of Hopfield neural network involving distributed delays," Neural Networks, vol. 17, no. 1, pp. 47-53, 2004.

[19] J. Zhou, S. Li, and Z. Yang, "Global exponential stability of Hopfield neural networks with distributed delays," Applied Mathematical Modelling. Simulation and Computation for Engineering and Environmental Systems, vol. 33, no. 3, pp. 1513-1520, 2009.

[20] R. Gavalda and H. T. Siegelmann, "Discontinuities in recurrent neural networks," Neural Computation, vol. 11, no. 3, pp. 715-745, 1999.

[21] H. Wu, F. Tao, L. Qin, R. Shi, and L. He, "Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions," Nonlinear Dynamics, vol. 66, no. 4, pp. 479-487, 2011.

[22] H. Wu and X. Xue, "Stability analysis for neural networks with inverse Lipschitzian neuron activations and impulses," Applied Mathematical Modelling, vol. 32, no. 11, pp. 2347-2359, 2008.

[23] M. Forti, M. Grazzini, P. Nistri, and L. Pancioni, "Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations," Physica D: Nonlinear Phenomena, vol. 214, no. 1, pp. 88-99, 2006.

[24] N.-E. Tatar, "Hopfield neural networks with unbounded monotone activation functions," Advances in Artificial Neural Systems, vol. 2012, Article ID 571358, 5 pages, 2012.

[25] N.-E. Tatar, "Control of systems with Hölder continuous functions in the distributed delays," Carpathian Journal of Mathematics, vol. 30, no. 1, pp. 123-128, 2014.

[26] G. Bao and Z. Zeng, "Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions," Neurocomputing, vol. 77, no. 1, pp. 101-107, 2012.

[27] M. Forti and P. Nistri, "Global convergence of neural networks with discontinuous neuron activations," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 11, pp. 1421-1435, 2003.

[28] Y. Huang, H. Zhang, and Z. Wang, "Dynamical stability analysis of multiple equilibrium points in time-varying delayed recurrent neural networks with discontinuous activation functions," Neurocomputing, vol. 91, pp. 21-28, 2012.

[29] L. Li and L. Huang, "Dynamical behaviors of a class of recurrent neural networks with discontinuous neuron activations," Applied Mathematical Modelling, vol. 33, no. 12, pp. 4326-4336, 2009.

[30] L. Li and L. Huang, "Global asymptotic stability of delayed neural networks with discontinuous neuron activations," Neurocomputing, vol. 72, no. 16-18, pp. 3726-3733, 2009.

[31] Y. Li and H. Wu, "Global stability analysis for periodic solution in discontinuous neural networks with nonlinear growth activations," Advances in Difference Equations, vol. 2009, Article ID 798685, 14 pages, 2009.

[32] X. Liu and J. Cao, "Robust state estimation for neural networks with discontinuous activations," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 40, no. 6, pp. 1425-1437, 2010.

[33] J. Liu, X. Liu, and W.-C. Xie, "Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations," Information Sciences, vol. 183, pp. 92-105, 2012.

[34] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189-204, 2009.

[35] J. Wang, L. Huang, and Z. Guo, "Global asymptotic stability of neural networks with discontinuous activations," Neural Networks, vol. 22, no. 7, pp. 931-937, 2009.

[36] Z. Wang, L. Huang, Y. Zuo, and L. Zhang, "Global robust stability of time-delay systems with discontinuous activation functions under polytopic parameter uncertainties," Bulletin of the Korean Mathematical Society, vol. 47, no. 1, pp. 89-102, 2010.

[37] H. Wu, "Global stability analysis of a general class of discontinuous neural networks with linear growth activation functions," Information Sciences, vol. 179, no. 19, pp. 3432-3441, 2009.

[38] Z. Cai and L. Huang, "Existence and global asymptotic stability of periodic solution for discrete and distributed time-varying delayed neural networks with discontinuous activations," Neurocomputing, vol. 74, no. 17, pp. 3170-3179, 2011.

[39] D. Papini and V. Taddei, "Global exponential stability of the periodic solution of a delayed neural network with discontinuous activations," Physics Letters A, vol. 343, no. 1-3, pp. 117-128, 2005.

[40] M. Pinto, "Integral inequalities of Bihari-type and applications," Funkcialaj Ekvacioj, vol. 33, no. 3, pp. 387-403, 1990.

[41] V. Lakshmikantham and S. Leela, Differential and Integral Inequalities: Theory and Applications, vol. 55-I of Mathematics in Science and Engineering, Academic Press, New York, NY, USA, 1969.

Nasser-Eddine Tatar

Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Correspondence should be addressed to Nasser-Eddine Tatar; tatarn@kfupm.edu.sa

Received 10 May 2014; Revised 7 September 2014; Accepted 8 September 2014; Published 14 September 2014

Academic Editor: Ozgur Kisi
COPYRIGHT 2014 Hindawi Limited

Publication: Advances in Artificial Neural Systems, 2014.