# Analysis of Adaptive Synchronization for Stochastic Neutral-Type Memristive Neural Networks with Mixed Time-Varying Delays

1. Introduction

During the last few years, neural networks have been widely studied in control, image processing, associative memory design, pattern recognition, information science, and so on (see [1-3]). Chua first predicted the memristor as the fourth fundamental electrical circuit element in 1971 [4]. In 2008, a Hewlett-Packard research team [5] built a practical memristor device and exhibited its characteristics, such as nanoscale size and memory ability. It has been shown that memristors can work as biological synapses in artificial neural networks and replace resistors to simulate the human brain in memristor-based neural network (MNN) models, which benefits many practical applications (see [6, 7]).

It is well known that time delays, often caused by the finite switching speeds of amplifiers, produce complex and unpredictable behaviors in practice; they may affect the stability of a system and even result in oscillation, divergence, and instability. Therefore, much effort has been devoted to analyzing the dynamic behaviors of MNNs with various types of time delays (see [8, 9]); constant and time-varying delays have been studied in [10-12], and discrete delays were considered in [13]. However, since neural signal propagation is often distributed over a certain time period, owing to the presence of many parallel pathways with a variety of axon sizes and lengths, the authors in [14, 15] concentrated on mixed delays.

On the other hand, in reality, fluctuations from the release of neurotransmitters or other probabilistic causes may affect the stability of the nervous system and of synaptic transmission. So stability analysis with stochastic perturbation has aroused great interest among researchers (see [16, 17]). It is also natural and important that the dynamics of some systems depend not only on the derivative of the current state but also on past derivatives; such systems are called neutral-type neural networks (see [9, 18, 19]).

Recently, synchronization and antisynchronization of memristor-based neural networks have received great attention due to their potential applications, such as secure communication, information science, and biological technology [20]. However, the networks are not always able to synchronize by themselves. Hence, various effective control approaches and techniques have been proposed for synchronization, such as impulsive control, feedback control, adaptive control, and intermittent control (see [21, 22]). Many results have been obtained on the stability and synchronization of MNNs, including exponential synchronization, lag synchronization, and finite-time synchronization (see [23-26]).

Motivated by the above discussion, and although the synchronization problem of stochastic MNNs has been studied, there are few results on the synchronization of stochastic neutral-type MNNs. So in this paper we focus on adaptive synchronization for neutral-type MNNs with mixed time-varying delays to bridge this gap. By applying stochastic differential inclusion theory, Lyapunov functionals, and the linear matrix inequality (LMI) method, we obtain some new adaptive synchronization criteria.

This paper is organized as follows. In Section 2, we introduce the model and some preliminaries. The main theoretical results are derived in Section 3. In Section 4, a numerical simulation is presented to verify the obtained results. Finally, conclusions are given in Section 5.

Throughout this paper, solutions of all the systems considered are understood in Filippov's sense. [R.sup.n] and [R.sup.nxn] denote the n-dimensional Euclidean space and the set of all n x n real matrices, respectively. The superscript T denotes matrix transposition, tr(x) denotes the trace of the corresponding matrix, and I denotes the identity matrix. [[lambda].sub.max] and [[lambda].sub.min] denote the maximum and minimum eigenvalues of a real symmetric matrix. diag(...) stands for a block diagonal matrix. [mathematical expression not reproducible] denotes the family of all [F.sub.0]-measurable, C([-[tau], 0]; [R.sup.n])-valued stochastic variables [xi] = {[xi]([theta]) : -[tau] [less than or equal to] [theta] [less than or equal to] 0} such that [[integral].sup.0.sub.-[tau]] E[[[absolute value of [xi](s)].sup.2]] ds < [infinity], where E[x] stands for the expectation operator with respect to the given probability measure P. co{u, v} denotes the closure of the convex hull generated by real numbers u and v or real matrices u and v.

2. Preliminaries

In this paper, the following stochastic neutral-type memristive neural network with mixed time-varying delays is described by (i = 1,2, ..., n)

[mathematical expression not reproducible] (1)

with initial conditions [x.sub.i](t) = [[phi].sub.i](t), t [member of] [-[tau], 0], where [x.sub.i](t) is the voltage of the capacitor, [mathematical expression not reproducible] are neuron activation functions, and [I.sub.i] is the external constant input. C = diag([c.sub.1], [c.sub.2], ..., [c.sub.n]) and D = diag([d.sub.1], [d.sub.2], ..., [d.sub.n]) are self-feedback connection matrices with [c.sub.i] > 0, [d.sub.i] > 0 (i = 1, 2, ..., n), and [a.sub.ij]([x.sub.i](t)), [b.sub.ij]([x.sub.i](t)), and [w.sub.ij]([x.sub.i](t)) represent the memristor-based weights:

[mathematical expression not reproducible]. (2)

Here [W.sub.(k)ij] denote the memductances of the memristors [R.sub.(k)ij], k = 1, 2, 3. According to the pinched hysteresis loop property of memristors, we set

[mathematical expression not reproducible] (3)

where the switching jumps [mathematical expression not reproducible] (i, j = 1, 2, ..., n) are constants. [mathematical expression not reproducible] are memristive connection weights, which represent the neuron interconnection matrices. If [mathematical expression not reproducible] are constants, system (1) reduces to a general network. Let [mathematical expression not reproducible]. [[tau].sub.k](t) (k = 1, 2, 3) represent the time-varying transmission delays. Since [a.sub.ij]([x.sub.i](t)), [b.sub.ij]([x.sub.i](t)), and [w.sub.ij]([x.sub.i](t)) are discontinuous, the solutions of all the following systems are understood in Filippov's sense. By applying the theory of differential inclusions and set-valued maps to system (1), it can be rewritten as follows:

[mathematical expression not reproducible], (4)

where the set-valued maps are defined as follows:

[mathematical expression not reproducible], (5)

or equivalently, there exist [mathematical expression not reproducible], such that

[mathematical expression not reproducible]. (6)

We consider system (6) as the drive system. Similarly, the response system is

[mathematical expression not reproducible], (7)

where u(t) = [[[u.sub.1](t), [u.sub.2](t), ..., [u.sub.n](t)].sup.T] [member of] [R.sup.n] is the controller, [omega](t) = [[[[omega].sub.1](t), [[omega].sub.2](t), ..., [[omega].sub.n](t)].sup.T] is an n-dimensional Brownian motion defined on the complete probability space ([OMEGA], F, P) with a natural filtration [{[F.sub.t]}.sub.t [greater than or equal to] 0], and [sigma] : [R.sup.+] x [R.sup.n] x [R.sup.n] x [R.sup.n] x [R.sup.n] [right arrow] [R.sup.nxn] is the noise intensity matrix satisfying [sigma](t, 0, 0, 0, 0) [equivalent to] 0. Let e(t) = [[[e.sub.1](t), [e.sub.2](t), ..., [e.sub.n](t)].sup.T] be the synchronization error, where [mathematical expression not reproducible]. From (6) and (7), we obtain the following synchronization error system:

[mathematical expression not reproducible], (8)

where [mathematical expression not reproducible]. To prove our main results, the following assumptions and lemmas are needed.

Assumption 1 (see [27]). There exist diagonal matrices [mathematical expression not reproducible], for all u, v [member of] R, u [not equal to] v, i = 1,2, ..., n.

Assumption 2. There exist positive constants [[tau].sub.1], [[tau].sub.2], [[tau].sub.3], [[mu].sub.1], [[mu].sub.2], and [[mu].sub.3], such that

[mathematical expression not reproducible]. (9)

Remark 3. This strong assumption can be weakened; see Assumptions 2 and 1 in [28, 29], respectively.

Assumption 4. [for all]a, b [member of] [R.sup.N], there exist positive constants [[L.bar].sub.i], [[bar.L].sub.i], [[M.bar].sub.i], [[bar.M].sub.i], [[N.bar].sub.i], [[bar.N].sub.i], such that

[mathematical expression not reproducible], (10)

where [mathematical expression not reproducible].

Assumption 5 (see [30]). There exist positive matrices [R.sub.1], [R.sub.2], [R.sub.3], and [R.sub.4], such that

[mathematical expression not reproducible] (11)

for all [x.sub.1], [x.sub.2], [x.sub.3], [x.sub.4] [member of] [R.sup.n] and t [member of] [R.sup.+].

Assumption 6. The matrix D satisfies [rho](D) < 1, where [rho](D) is the spectral radius of D.

Definition 7 (see [31]). The two coupled memristive neural networks (6) and (7) are said to be stochastically synchronized for almost every initial data if for every [mathematical expression not reproducible] a.s.

Lemma 8 (see [32]). For any vectors a, b [member of] [R.sup.n], the inequality [+ or -] 2[a.sup.T] b [less than or equal to] [a.sup.T] Sa + [b.sup.T] [S.sup.-1] b holds, in which S is any matrix with S > 0.
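Lemma 8 is the standard completing-the-square bound used repeatedly in the proof below. As a quick numeric sanity check (all vectors and the diagonal matrix S here are illustrative values, not quantities from the paper), the gap between the two sides is always nonnegative:

```python
# Numeric sanity check of Lemma 8: 2 a^T b <= a^T S a + b^T S^{-1} b
# for any S > 0.  Here S is taken positive diagonal, so S^{-1} is the
# elementwise reciprocal; a and b are arbitrary illustrative vectors.

def lemma8_gap(a, b, s_diag):
    """Return (a^T S a + b^T S^{-1} b) - 2 a^T b; nonnegative by Lemma 8."""
    lhs = 2 * sum(ai * bi for ai, bi in zip(a, b))
    rhs = sum(si * ai * ai for si, ai in zip(s_diag, a)) \
        + sum(bi * bi / si for si, bi in zip(s_diag, b))
    return rhs - lhs

a = [1.0, -2.0, 0.5]
b = [0.3, 1.5, -1.0]
S = [2.0, 0.5, 1.0]          # diagonal entries of S > 0
assert lemma8_gap(a, b, S) >= 0.0
print(round(lemma8_gap(a, b, S), 4))   # prints 16.195
```

Equality holds exactly when b = Sa, which is why the lemma is tight enough to be useful when bounding the cross terms in (24).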

Lemma 9 (see [33]). For any positive definite matrix M [member of] [R.sup.nxn], scalar [gamma] > 0, and vector function [eta] : [0, [gamma]] [right arrow] [R.sup.n] such that the integrations concerned are well defined, the following inequality holds:

[mathematical expression not reproducible]. (12)

Lemma 10 (see [34]). If [a.sub.1] [greater than or equal to] [a.sub.2] [greater than or equal to] [a.sub.3] and [b.sub.1] [greater than or equal to] [b.sub.2] [greater than or equal to] [b.sub.3], then 2([a.sub.1][b.sub.1] + [a.sub.2][b.sub.2] + [a.sub.3][b.sub.3]) [greater than or equal to] [a.sub.1][b.sub.2] + [a.sub.1][b.sub.3] + [a.sub.2][b.sub.1] + [a.sub.2][b.sub.3] + [a.sub.3][b.sub.1] + [a.sub.3][b.sub.2], [for all][a.sub.i], [b.sub.i] (i = 1, 2, 3) [member of] R.

Remark 11. There are some other convenient and useful inequality techniques; see Lemmas 2, 3, 7, and 8 in [1].

Lemma 12 (see [35]). Given matrices [[OMEGA].sub.1], [[OMEGA].sub.2], [[OMEGA].sub.3], where [[OMEGA].sub.1] = [[OMEGA].sup.T.sub.1] and [[OMEGA].sub.2] > 0, then [[OMEGA].sub.1] + [[OMEGA].sup.T.sub.3] [[OMEGA].sup.-1.sub.2] [[OMEGA].sub.3] < 0 if and only if

[mathematical expression not reproducible]. (13)
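Lemma 12 is the Schur complement, which converts the nonlinear condition [[OMEGA].sub.1] + [[OMEGA].sup.T.sub.3][[OMEGA].sup.-1.sub.2][[OMEGA].sub.3] < 0 into the linear block matrix inequality (13). A small numeric illustration (the matrices below are illustrative, not the paper's LMI data) checks that both forms agree, testing negative definiteness via leading principal minors:

```python
# Numeric illustration of Lemma 12 (Schur complement) with small
# illustrative matrices: Omega1 = Omega1^T (2x2), Omega2 > 0 (1x1),
# Omega3 (1x2).  The condition Omega1 + Omega3^T Omega2^{-1} Omega3 < 0
# holds iff the block matrix [[Omega1, Omega3^T], [Omega3, -Omega2]]
# is negative definite.

def det(m):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def is_neg_def(m):
    """Symmetric m is negative definite iff (-1)^k * (k-th leading minor) > 0."""
    n = len(m)
    return all((-1) ** k * det([row[:k] for row in m[:k]]) > 0
               for k in range(1, n + 1))

O1 = [[-3.0, 0.0], [0.0, -3.0]]
O2 = [[2.0]]
O3 = [[1.0, 1.0]]

# Reduced condition: Omega1 + Omega3^T Omega2^{-1} Omega3 < 0.
inv_O2 = 1.0 / O2[0][0]
reduced = [[O1[i][j] + O3[0][i] * inv_O2 * O3[0][j] for j in range(2)]
           for i in range(2)]

# Equivalent block LMI (13) from Lemma 12.
block = [[-3.0, 0.0, 1.0],
         [0.0, -3.0, 1.0],
         [1.0, 1.0, -2.0]]

assert is_neg_def(reduced) == is_neg_def(block) == True
```

In the proofs below, this equivalence is what turns condition (16) into a feasibility problem that an LMI solver can handle directly.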

3. Main Results

In this section, the stochastic synchronization for the two coupled memristive neural networks (6) and (7) is investigated under Assumptions 1-6.

3.1. Stochastic Adaptive Synchronization for the Two Coupled Memristive Neural Networks via the Adaptive Feedback Control

Theorem 13. Under Assumptions 1-6, the two coupled memristive neural networks (6) and (7) can be synchronized for almost every initial data, if there exist positive diagonal matrices [H.sub.1], [H.sub.2], [H.sub.3], P = diag([p.sub.1], [p.sub.2], ..., [p.sub.n]), positive definite matrices [Q.sub.1], [Q.sub.2], [Q.sub.3], [Q.sub.4], [Q.sub.5], [Q.sub.6], [Q.sub.7], [Q.sub.8], [S.sub.1], [S.sub.2], and a positive scalar [lambda] such that the LMIs hold:

P [less than or equal to] [lambda]I, (14)

[[tau].sub.3] ([S.sub.1] + [S.sub.2]) [less than or equal to] [Q.sub.8], (15)

[mathematical expression not reproducible], (16)

where

[mathematical expression not reproducible]. (17)

And the adaptive feedback controller is designed as

u (t) = -Ke (t), (18)

where the feedback strength K = diag([k.sub.1], [k.sub.2], ..., [k.sub.n]) is updated by the following law:

[[??].sub.i] = [[phi].sub.i][e.sup.2.sub.i](t) - [[phi].sub.i][d.sub.i][e.sub.i](t)[e.sub.i](t - [[tau].sub.1](t)), (19)

with arbitrary constant [[phi].sub.i] > 0 (i = 1, 2, ..., n).
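In discrete time, the update law (19) can be sketched with a single forward-Euler step per sample. The helper below is purely illustrative: the values of phi_i, d_i, the step size, and the delayed error are placeholders chosen for demonstration, not quantities coming from the LMI solution.

```python
# Forward-Euler discretization of the adaptive law (19):
#   k_i' = phi_i * e_i(t)^2 - phi_i * d_i * e_i(t) * e_i(t - tau1(t)).
# All numeric values below are illustrative placeholders.

def update_gains(k, e_now, e_delayed, phi, d, dt):
    """One Euler step of the gain adaptation law for every neuron i."""
    return [k[i] + dt * (phi[i] * e_now[i] ** 2
                         - phi[i] * d[i] * e_now[i] * e_delayed[i])
            for i in range(len(k))]

k = [10.0, 15.0]            # initial feedback strengths K(0)
phi = [0.1, 0.2]            # arbitrary positive constants phi_i
d = [0.1, 0.1]              # neutral-term coefficients d_i (illustrative)
e_now = [0.4, 0.5]          # current synchronization error e(t)
e_delayed = [0.3, 0.2]      # delayed error e(t - tau1(t))
dt = 0.01

k = update_gains(k, e_now, e_delayed, phi, d, dt)
# The control input is then u(t) = -K e(t), applied componentwise:
u = [-k[i] * e_now[i] for i in range(2)]
```

Note that the gains grow while the error is nonzero and freeze once e(t) = 0, which is the mechanism that lets the feedback strength adapt without knowing the memristive weights in advance.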

Proof. We consider the following Lyapunov-Krasovskii functional:

V(t, e(t)) = [10.summation over (i=1)] [V.sub.i] (t, [e.sub.i] (t)), (20)

where

[mathematical expression not reproducible]. (21)

By the Itô formula, it follows that

[mathematical expression not reproducible], (22)

where [mathematical expression not reproducible], and

[mathematical expression not reproducible]. (23)

From Lemma 8, we get

[mathematical expression not reproducible]. (24)

Utilizing Lemma 9 yields

[mathematical expression not reproducible]. (25)

It follows from Assumption 5 and (14) that

[mathematical expression not reproducible]. (26)

By the Itô formula, we have

[mathematical expression not reproducible]. (27)

From Assumption 1, it follows that

[mathematical expression not reproducible], (28)

where [H.sub.1], [H.sub.2], [H.sub.3] are positive diagonal matrices and [L.sub.j]= diag([l.sub.j1], [l.sub.j2], ..., [l.sub.jn]), [l.sub.ji] = max{[l.sup.-.sub.ji], [l.sup.+.sub.ji]} (j = 1,2,3) for i = 1, 2, ..., n.

[mathematical expression not reproducible]. (29)

Condition (15) yields

[mathematical expression not reproducible]. (30)

Substituting inequalities (23)-(30) into (22), we obtain

[mathematical expression not reproducible], (31)

where

[mathematical expression not reproducible], (32)

with

[mathematical expression not reproducible]. (33)

By Lemma 12, LMI (16) implies [PSI] < 0. Let [zeta] = [[lambda].sub.min](-[PSI]); clearly, the constant [zeta] > 0. This fact together with (31) gives

[mathematical expression not reproducible], (34)

where [[omega].sub.1](e(t)) = [e.sup.T](t)([lambda][R.sub.4] + [zeta]I)e(t) and [[omega].sub.2](e(t)) = [e.sup.T](t)([lambda][R.sub.4] - [zeta]I)e(t).

It is obvious that [[omega].sub.1](e(t)) > [[omega].sub.2](e(t)) for any e(t) [not equal to] 0. Therefore, applying the LaSalle-type invariance principle for stochastic differential delay equations (see [27, 36]), we conclude that the coupled memristive neural networks (6) and (7) can be synchronized for almost every initial data. This completes the proof. []

When D = 0, from Theorem 13, we obtain the following corollary.

Corollary 14. Under Assumptions 1-5, the two coupled memristive neural networks (6) and (7) with D = 0 can be synchronized for almost every initial data, if there exist positive diagonal matrices [H.sub.1], [H.sub.2], [H.sub.3], P = diag([p.sub.1], [p.sub.2], ..., [p.sub.n]), positive definite matrices [Q.sub.2], [Q.sub.3], [Q.sub.4], [Q.sub.5], [Q.sub.7], [Q.sub.8], [S.sub.1], and a positive scalar [lambda] such that the LMIs hold:

[mathematical expression not reproducible], (35)

where

[mathematical expression not reproducible]. (36)

And the adaptive feedback controller is designed as

u(t) = -Ke(t), (37)

where the feedback strength K = diag([k.sub.1], [k.sub.2], ..., [k.sub.n]) is updated by the following law:

[[??].sub.i] = [[phi].sub.i][e.sup.2.sub.i](t), (38)

with arbitrary constant [[phi].sub.i] > 0 (i = 1, 2, ..., n).

Remark 15. When the stochastic perturbations are removed, our model becomes the model in [37]; our model is therefore an extension of that in [37]. Because stochastic perturbations are unavoidable in practice, our model is more general and more useful.

Remark 16. If A([x.sub.i](t)) = [([a.sub.ij]([x.sub.i](t))).sub.nxn], B([x.sub.i](t)) = [([b.sub.ij]([x.sub.i](t))).sub.nxn], and W([x.sub.i](t)) = [([w.sub.ij]([x.sub.i](t))).sub.nxn] are constant matrices, system (1) reduces to a general network. Moreover, when the neutral terms are removed, our model becomes the model in [10, 27]; our model is therefore an extension of those in [10, 27]. Because the neutral terms are important and complicated, our model is more general and more useful in practice.

3.2. Stochastic Adaptive Synchronization for the Two Coupled Memristive Neural Networks via the Linear Feedback Control

Theorem 17. Under Assumptions 1-6, the two coupled memristive neural networks (6) and (7) can be synchronized for almost every initial data, if there exist positive diagonal matrices [H.sub.1], [H.sub.2], [H.sub.3], P = diag([p.sub.1], [p.sub.2], ..., [p.sub.n]), positive definite matrices [Q.sub.1], [Q.sub.2], [Q.sub.3], [Q.sub.4], [Q.sub.5], [Q.sub.6], [Q.sub.7], [Q.sub.8], [S.sub.1], [S.sub.2], and a positive scalar [lambda] such that the following LMIs hold:

[mathematical expression not reproducible], (39)

where

[mathematical expression not reproducible]. (40)

And the linear feedback controller is designed as

u(t) = -Ke(t), (41)

where K = diag([k.sub.1], [k.sub.2], ..., [k.sub.n]), [k.sub.i] > 0 is the feedback gain.

Proof. We consider the following Lyapunov-Krasovskii functional:

V(t, e(t)) = [9.summation over (i=1)] [V.sub.i](t, [e.sub.i](t)), (42)

where

[mathematical expression not reproducible]. (43)

Then, the proof of Theorem 17 is similar to Theorem 13, so the proof process is omitted here. []

Corollary 18. Under Assumptions 1-5, two coupled memristive neural networks (6) and (7) with D = 0 can be synchronized for almost every initial data, if there exist positive diagonal matrices [H.sub.1], [H.sub.2], [H.sub.3], P = diag([p.sub.1], [p.sub.2], ..., [p.sub.n]), positive definite matrices [Q.sub.2], [Q.sub.3], [Q.sub.4], [Q.sub.5], [Q.sub.7], [Q.sub.8], [S.sub.1], and a positive scalar [lambda] such that the LMIs hold:

[mathematical expression not reproducible], (44)

where

[mathematical expression not reproducible]. (45)

And the linear feedback controller is designed as

u(t) = -Ke (t), (46)

where K = diag([k.sub.1], [k.sub.2], ..., [k.sub.n]), [k.sub.i] > 0 is the feedback gain.

Remark 19. When D = 0, the systems are no longer neutral-type neural networks. We find that adaptive synchronization of other types of neural networks model has been researched (see [38, 39]). We can also get the synchronization results from Theorem 17 when D = 0.

Remark 20. When [??](t) = 0, the systems no longer have distributed time-varying delays, and our model becomes the model in [40]; the synchronization results can still be obtained, so our model extends that in [40] and is more general.

4. Numerical Simulation

In this section, a numerical example is given to illustrate the effectiveness of Theorem 13. Consider a two-dimensional synchronization error system (8) with u(t) = -Ke(t) such that [[??].sub.i] = [[phi].sub.i][e.sup.2.sub.i](t) - [[phi].sub.i][d.sub.i][e.sub.i](t)[e.sub.i] (t - [[tau].sub.1](t)).

Take f(e(t)) = g(e(t)) = h(e(t)) = [[tanh([e.sub.1](t)), tanh([e.sub.2](t))].sup.T], [[tau].sub.1](t) = 0.6, [[tau].sub.2](t) = 0.1, [[tau].sub.3](t) = 0.2, [L.sub.1] = [L.sub.2] = [L.sub.3] = I, and [sigma](t, e(t), e(t - [[tau].sub.1](t)), e(t - [[tau].sub.2](t)), e(t - [[tau].sub.3](t))) =

[mathematical expression not reproducible], (47)

and [R.sub.1] = 0.3I, [R.sub.2] = 0.2I, [R.sub.3] = 0.3I, [R.sub.4] = 0.1I, [[mu].sub.1] = [[mu].sub.2] = [[mu].sub.3] = 0.2. Then

[mathematical expression not reproducible]. (48)

Letting [alpha] = 20 and using the LMI toolbox in MATLAB, we obtain the following feasible solutions to the LMIs in Theorem 13:

[mathematical expression not reproducible]. (49)

So the conditions of Theorem 13 are satisfied, and we conclude that two coupled memristive neural networks (6) and (7) can be synchronized for almost every initial data.

Now, taking the initial data e(0) = [[0.4, 0.5].sup.T], K(0) = [[10, 15].sup.T], [[phi].sub.1] = 0.1, and [[phi].sub.2] = 0.2, we plot the dynamic curves of the error system, the evolution of the adaptive coupling strengths [k.sub.1], [k.sub.2], and the Brownian motion [omega](t) in Figures 1-3, respectively. Figure 1 shows that the two coupled memristive neural networks (6) and (7) are synchronized.
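A simulation of this kind can be reproduced in outline with an Euler-Maruyama scheme. The sketch below simulates a deliberately simplified version of error system (8): scalar connection weights replace the full memristive weight intervals, and the neutral and distributed-delay terms are dropped for brevity. These simplifications, as well as the weight values and noise intensity, are assumptions made for illustration only, not the paper's exact model.

```python
import math
import random

random.seed(0)

# Euler-Maruyama simulation of a simplified two-neuron error system with
# the adaptive controller u = -K e.  Weights, delays, and noise intensity
# are illustrative placeholders; the neutral and distributed terms of the
# full model (8) are omitted for brevity.

dt, steps = 0.001, 5000
tau1 = 0.6                           # discrete delay tau1(t) = 0.6
lag = round(tau1 / dt)               # delay expressed in time steps
c = [1.0, 1.0]                       # self-feedback coefficients c_i
a = [[-0.5, 0.2], [0.1, -0.5]]       # illustrative connection weights
phi = [0.1, 0.2]                     # adaptation constants phi_i
sigma = 0.1                          # illustrative noise intensity

e_hist = [[0.4, 0.5]] * (lag + 1)    # constant initial history e(0)
k = [10.0, 15.0]                     # initial feedback strengths K(0)

for _ in range(steps):
    e = e_hist[-1]
    e_tau = e_hist[-lag - 1]
    f = [math.tanh(x) for x in e_tau]     # delayed activation terms
    new = []
    for i in range(2):
        drift = (-c[i] * e[i]
                 + sum(a[i][j] * f[j] for j in range(2))
                 - k[i] * e[i])           # control input u_i = -k_i e_i
        noise = sigma * e[i] * random.gauss(0.0, math.sqrt(dt))
        new.append(e[i] + dt * drift + noise)
        k[i] += dt * phi[i] * e[i] ** 2   # simplified gain law
    e_hist.append(new)
    e_hist.pop(0)                         # keep a bounded delay buffer

print(max(abs(x) for x in e_hist[-1]))    # error shrinks toward 0
```

Because the noise is multiplicative in e(t), it vanishes together with the error, so the trajectory settles at the synchronized state rather than fluctuating around it, consistent with the behavior shown in Figure 1.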

5. Conclusions and Discussion

In this paper, by applying the LaSalle-type invariance principle for stochastic differential delay equations, stochastic differential inclusion theory, Lyapunov functionals, and the linear matrix inequality method, linear feedback control and adaptive feedback control have been proposed to achieve synchronization of stochastic neutral-type memristive neural networks with mixed time-varying delays. Although the synchronization problem of stochastic MNNs has been studied, there are few results on stochastic neutral-type MNNs. Neutral terms are taken into account in this paper, which gives the model wider applicability and makes the research more meaningful; in this sense we generalize the synchronization problem of MNNs. The effectiveness of our results has been illustrated by a numerical example. Furthermore, exponential synchronization and passivity of this model can be discussed in the near future.

https://doi.org/10.1155/2018/8126127

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors acknowledge National Natural Science Foundation of China (no. 61563033, no. 11563005, and no. 11501281), Natural Science Foundation of Jiangxi Province (no. 20122BAB201002, no. 20151BAB212011), and Innovation Fund Designated for Graduate Students of Jiangxi Province (no. YC2016-S041) for their financial support.

References

[1] Z. Zhang, W. Liu, and D. Zhou, "Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays," Neural Networks, vol. 25, pp. 94-105, 2012.

[2] X. Zeng, Z. Xiong, and C. Wang, "Hopf bifurcation for neutral-type neural network model with two delays," Applied Mathematics and Computation, vol. 282, pp. 17-31, 2016.

[3] B. Du, W. Zhang, and Q. Yang, "Robust state estimation for neutral-type neural networks with mixed time delays," Journal of Nonlinear Sciences and Applications. JNSA, vol. 10, no. 5, pp. 2565-2578, 2017.

[4] L. O. Chua, "Memristor--the missing circuit element," IEEE Transactions on Circuit Theory, vol. 18, no. 5, pp. 507-519, 1971.

[5] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, vol. 453, pp. 80-83, 2008.

[6] S. Ding, Z. Wang, N. Rong, and H. Zhang, "Exponential Stabilization of Memristive Neural Networks via Saturating Sampled-Data Control," IEEE Transactions on Cybernetics, vol. 47, no. 10, pp. 3027-3039, 2017.

[7] K. D. Cantley, A. Subramaniam, H. J. Stiegler, R. A. Chapman, and E. M. Vogel, "Neural learning circuits utilizing nanocrystalline silicon transistors and memristors," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 4, pp. 565-573, 2012.

[8] Q. Zhu and J. Cao, "Pth moment exponential synchronization for stochastic delayed Cohen-Grossberg neural networks with Markovian switching," Nonlinear Dynamics, vol. 67, no. 1, pp. 829-845, 2012.

[9] R. Rakkiyappan, S. Dharani, and Q. Zhu, "Stochastic sampled-data [H.sub.[infinity]] synchronization of coupled neutral-type delay partial differential systems," Journal of The Franklin Institute, vol. 352, no. 10, pp. 4480-4502, 2015.

[10] X. Li, J. A. Fang, and H. Li, "Exponential adaptive synchronization of stochastic memristive chaotic recurrent neural networks with time-varying delays," Neurocomputing, vol. 267, pp. 396-405, 2017.

[11] R. Rakkiyappan, S. Dharani, and Q. Zhu, "Synchronization of reaction-diffusion neural networks with time-varying delays via stochastic sampled-data controller," Nonlinear Dynamics, vol. 79, no. 1, pp. 485-500, 2015.

[12] S. Ding and Z. Wang, "Stochastic exponential synchronization control of mem-ristive neural networks with multiple time-varying delays," Neurocomputing, vol. 162, pp. 16-25, 2015.

[13] S. Ding, Z. Wang, J. Wang, and H. Zhang, "[H.sub.[infinity]] state estimation for memristive neural networks with time-varying delays: the discrete-time case," Neural Networks, vol. 84, pp. 47-56, 2016.

[14] C. Zhang, F. Deng, X. Zhao, and B. Zhang, "p-th exponential synchronization of Cohen-Grossberg neural network with mixed time-varying delays and unknown parameters using impulsive control method," Neurocomputing, vol. 218, pp. 432-438, 2016.

[15] Q. Zhu and J. Cao, "Adaptive synchronization of chaotic Cohen-Crossberg neural networks with mixed time delays," Nonlinear Dynamics, vol. 61, no. 3, pp. 517-534, 2010.

[16] J. Gao, P. Zhu, W. Xiong, J. Cao, and L. Zhang, "Asymptotic synchronization for stochastic memristor-based neural networks with noise disturbance," Journal of The Franklin Institute, vol. 353, no. 13, pp. 3271-3289, 2016.

[17] B. Du, M. Hu, and X. Lian, "Dynamical behavior for a stochastic predator-prey model with HV type functional response," Bulletin of the Malaysian Mathematical Sciences Society, vol. 40, no. 1, pp. 487-503, 2017.

[18] D. Liu and Y. Du, "New results of stability analysis for a class of neutral-type neural network with mixed time delays," International Journal of Machine Learning and Cybernetics, vol. 6, no. 4, 2014.

[19] B. Du, Y. Liu, and D. Cao, "Stability analysis for neutral-type impulsive neural networks with delays," Kybernetika, vol. 53, no. 3, pp. 513-529, 2017.

[20] S. Ding, Z. Wang, H. Niu, and H. Zhang, "Stop and go adaptive strategy for synchronization of delayed memristive recurrent neural networks with unknown synaptic weights," Journal of The Franklin Institute, vol. 354, no. 12, pp. 4989-5010, 2017.

[21] A. Chandrasekar and R. Rakkiyappan, "Impulsive controller design for exponential synchronization of delayed stochastic memristor-based recurrent neural networks," Neurocomputing, vol. 173, pp. 1348-1355, 2016.

[22] T. Wang, H. Gao, and J. Qiu, "A combined adaptive neural network and nonlinear model predictive control for multirate networked industrial process control," IEEE Transactions on Neural Networks and Learning Systems, no. 99, 2015.

[23] S. Ding, Z. Wang, Z. Huang, and H. Zhang, "Novel switching jumps dependent exponential synchronization criteria for memristor-based neural networks," Neural Processing Letters, vol. 45, no. 1, pp. 15-28, 2017.

[24] G. Zhang, J. Hu, and Y. Shen, "Exponential lag synchronization for delayed memristive recurrent neural networks," Neurocomputing, vol. 154, pp. 86-93, 2015.

[25] R. Rakkiyappan, V. Preethi Latha, Q. Zhu, and Z. Yao, "Exponential synchronization of Markovian jumping chaotic neural networks with sampled-data and saturating actuators," Nonlinear Analysis: Hybrid Systems, vol. 24, pp. 28-44, 2017.

[26] C. Chen, L. Li, H. Peng, Y. Yang, and T. Li, "Finite-time synchronization of memristor-based neural networks with mixed delays," Neurocomputing, vol. 235, pp. 83-89, 2017.

[27] Q. Zhu and J. Cao, "Adaptive synchronization under almost every initial data for stochastic neural networks with time-varying delays and distributed delays," Communications in Nonlinear Science & Numerical Simulation, vol. 16, no. 4, pp. 2139-2159, 2011.

[28] Q. Zhu and J. Cao, "Stability of Markovian jump neural networks with impulse control and time varying delays," Nonlinear Analysis: Real World Applications, vol. 13, no. 5, pp. 2259-2270, 2012.

[29] Q. Zhu and J. Cao, "Stability analysis of markovian jump stochastic BAM neural networks with impulse control and mixed time delays," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 467-479, 2012.

[30] Q. Zhu and J. Cao, "Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 2, pp. 341-353, 2011.

[31] Q. Zhu, W. Zhou, D. Tong, and J. Fang, "Adaptive synchronization for stochastic neural networks of neutral-type with mixed time-delays," Neurocomputing, vol. 99, pp. 477-485, 2013.

[32] X. Liao, G. Chen, and E. N. Sanchez, "LMI-based approach for asymptotically stability analysis of delayed neural networks," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 49, no. 7, pp. 1033-1039, 2002.

[33] K. Gu, "An integral inequality in the stability problem of time-delay systems," in Proceedings of the 39th IEEE Conference on Decision and Control, pp. 2805-2810, Sydney, Australia, December 2000.

[34] W. Peng, Q. Wu, and Z. Zhang, "LMI-based global exponential stability of equilibrium point for neutral delayed BAM neural networks with delays in leakage terms via new inequality technique," Neurocomputing, vol. 199, pp. 103-113, 2016.

[35] E. Yaz, "Linear Matrix Inequalities In System And Control Theory," Proceedings of the IEEE, vol. 86, no. 12, pp. 2473-2474, 1998.

[36] X. Mao, "A note on the LaSalle-type theorems for stochastic differential delay equations," Journal of Mathematical Analysis and Applications, vol. 268, no. 1, pp. 125-142, 2002.

[37] W. Wang, L. Li, H. Peng et al., "Anti-synchronization of coupled memristive neutral-type neural networks with mixed time-varying delays via randomly occurring control," Nonlinear Dynamics, vol. 83, no. 4, pp. 2143-2155, 2016.

[38] C. Zhang, F. Deng, Y. Peng, and B. Zhang, "Adaptive synchronization of Cohen-Grossberg neural network with mixed time-varying delays and stochastic perturbation," Applied Mathematics and Computation, vol. 269, pp. 792-801, 2015.

[39] Q. Gan, "Adaptive synchronization of cohen-grossberg neural networks with unknown parameters and mixed time-varying delays at," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 7, pp. 3040-3049, 2012.

[40] X. Wang, K. She, S. Zhong, and J. Cheng, "Exponential synchronization of memristor-based neural networks with time-varying delay and stochastic perturbation," Neurocomputing, vol. 242, pp. 131-139, 2017.

Desheng Hong, Zuoliang Xiong, and Cuiping Yang

Department of Mathematics, Nanchang University, Nanchang, Jiangxi 330031, China

Correspondence should be addressed to Zuoliang Xiong; xiong1601@163.com

Received 21 November 2017; Revised 21 January 2018; Accepted 30 January 2018; Published 19 April 2018

Academic Editor: Zhengqiu Zhang

Caption: Figure 1: The curve of the synchronization errors [e.sub.1], [e.sub.2].

Caption: Figure 2: The evolution graph of the adaptive coupling strengths [k.sub.1], [k.sub.2].

Caption: Figure 3: The evolution graph of the Brownian motions [[omega].sub.1], [[omega].sub.2].

Publication: Discrete Dynamics in Nature and Society, 2018.