# Analysis and Design of Associative Memories for Memristive Neural Networks with Deviating Argument

1. Introduction

In recent decades, the analysis and design of neurodynamic systems have received much attention [1-16]. In particular, the neurodynamics of associative memories is an active research topic [9-16]. Associative memories are brain-inspired computing models designed to store a set of prototype patterns such that the stored patterns can be retrieved with recalling probes containing sufficient information about the contents of the patterns. In associative memories, for any given probe (e.g., a noisy or corrupted version of a prototype pattern), the retrieval dynamics converge to an equilibrium point representing the prototype pattern. Associative memories are commonly divided into two classes: autoassociative memories and heteroassociative memories [12,13]. Two types of methods are usually used for the analysis and synthesis of associative memories. In the first, the neurodynamic system is multistable: system states may converge to locally stable equilibrium points, and these equilibrium points are encoded as the memorized patterns. In the second, the neurodynamic system is globally monostable, and the memorized patterns are associated with the external input patterns.

As is well known, the human brain is believed to be a most powerful information processor because of its structure of synapses. There has long been a demand for a single device that can emulate artificial synaptic functions. Using a memristor, essential synaptic plasticity and learning behaviors can be mimicked. By integrating memristors into a large-scale neuromorphic circuit, neuromorphic circuits based on memristive neural networks demonstrate spike-timing-dependent plasticity (a form of the Hebbian learning rule) [17-21]. From the viewpoint of theory and experiment, a memristive neural network model is a state-dependent switching system. Moreover, this class of dynamical systems shows many of the characteristic features of magnetic tunnel junctions [19, 20]. For these reasons, system analysis and integration for memristive neural networks are challenging. At present, there are two major methods for the qualitative analysis of memristive neural networks: the differential inclusions approach [17-20] and the fuzzy uncertainty approach [21]. In the differential inclusions approach, the core idea is to analyze a cluster of switched subnetworks as a whole. In the fuzzy uncertainty approach, the central idea is to decompose the switched network flows into a continuous subsystem and a bounded subsystem. The two analysis frameworks differ in viewpoint rather than in merit. From the perspective of control theory, however, many reported techniques are refinements of existing system theory rather than substantive breakthroughs. How to develop more effective analysis methods for memristive neural networks is therefore worth studying.

In the past few years, hybrid dynamic systems have remained one of the most active fields of research in the control community [22-30]. For instance, to describe the stationary distribution of temperature along the length of a bent wire, a nonlinear dynamic model with deviating argument is often used. The right-hand side of a nonlinear system with deviating argument combines continuous and discrete dynamics; thus, nonlinear systems with deviating argument unify advanced and retarded systems. Because of this hybrid character, results on control strategies for such systems are relatively rare. Viewed as systems involving this interplay, nonlinear systems with deviating argument include both differential equations and difference equations [28-30]. Many basic issues concerning nonlinear systems with deviating argument, however, remain to be addressed, including nonlinear dynamics, system design, and analysis.

Inspired by the above discussions, the goal of this paper is to formulate the analysis and design of associative memories for a class of memristive neural networks with deviating argument. On the whole, the highlights of this paper can be outlined as follows:

(1) Sufficient conditions are derived to ascertain the existence, uniqueness, and global exponential stability of the solution for memristive neural networks with deviating argument.

(2) The synthetic mechanism of the analysis and design of associative memories for a general class of memristive neural networks with deviating argument is revealed.

(3) A uniform associative memories model, which unites autoassociative memories and heteroassociative memories, is proposed for memristive neural networks with deviating argument.

In addition, it should be noted that when special and strict conditions are imposed, the associative-memory performance of neural networks is usually limited. Therefore, the methods applicable to conventional stability analysis of neural networks cannot be directly employed to investigate associative memories for memristive neural networks. Moreover, studying the dynamics of memristive neural networks with deviating argument is not easy, since conditions for the dynamics of ordinary neural networks cannot simply be reused for hybrid dynamic systems with deviating argument. In this paper, using the fuzzy uncertainty approach in combination with the theories of set-valued maps and differential inclusions, the analysis and design of associative memories for memristive neural networks with deviating argument are described in detail.

The rest of the paper is arranged as follows. Design problem and preliminaries are given in Section 2. Main results are stated in Section 3. In Section 4, several numerical examples are presented. Finally, concluding remarks are given in Section 5.

2. Design Problem and Preliminaries

2.1. Problem Description. Let [OMEGA] = {[beta] = [([[beta].sub.1], [[beta].sub.2], ..., [[beta].sub.n]).sup.T] | [[beta].sub.i] [member of] {-p, p}, i = 1,2, ..., n} be the set of n-dimensional bipolar vectors, where p is a positive constant and T denotes the transpose of a vector or matrix. The problem considered is described as follows.

Given l (l [less than or equal to] [2.sup.n]) pairs of bipolar vectors ([u.sup.(1)], [q.sup.(1)]), ([u.sup.(2)], [q.sup.(2)]), ..., ([u.sup.(l)], [q.sup.(l)]), where [u.sup.(r)], [q.sup.(r)] [member of] [OMEGA] for r = 1,2, ..., l, design an associative memory model which satisfies the notion that the output of the model will converge to the corresponding pattern [q.sup.(r)] when [u.sup.(r)] is fed to the model input as a memory pattern.

Definition 1. The neural network is said to be an autoassociative memory if [u.sup.(r)] = [q.sup.(r)] and a heteroassociative memory if [u.sup.(r)] [not equal to] [q.sup.(r)], for r = 1,2, ..., l.

In this paper, let N be the natural number set; the norm of vector x = [([x.sub.1], [x.sub.2], ..., [x.sub.n]).sup.T] is defined as [parallel]x[parallel] = [[summation].sup.n.sub.i=1] [absolute value of [x.sub.i]]. Denote [[Real part].sup.n] as the n-dimensional Euclidean space. Fix two real number sequences {[[eta].sub.k]}, {[[??].sub.k]}, k [member of] N, satisfying [[eta].sub.k] < [[eta].sub.k+1], [[eta].sub.k] [less than or equal to] [[??].sub.k] [less than or equal to] [[eta].sub.k+1], and [lim.sub.k[right arrow]+[infinity]][[eta].sub.k] = + [infinity].
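For concreteness, the piecewise-constant deviating argument [gamma](t) introduced below can be sketched in Python. The grid used here is hypothetical, subject only to the ordering conditions above; note that on [[[eta].sub.k], [[eta].sub.k+1]) the value [[??].sub.k] lies ahead of t for t < [[??].sub.k] (advanced) and behind it for t > [[??].sub.k] (retarded):

```python
import bisect

def gamma(t, eta, zeta):
    """Piecewise-constant deviating argument: gamma(t) = zeta_k
    for t in [eta_k, eta_{k+1})."""
    k = bisect.bisect_right(eta, t) - 1   # locate eta_k <= t < eta_{k+1}
    return zeta[k]

# Hypothetical grid: eta_k = k/3 with zeta_k the interval midpoint,
# which satisfies eta_k <= zeta_k <= eta_{k+1}
eta = [k / 3 for k in range(10)]
zeta = [(eta[k] + eta[k + 1]) / 2 for k in range(9)]

print(gamma(0.1, eta, zeta))   # zeta_0 = 1/6
print(gamma(0.5, eta, zeta))   # zeta_1 = 1/2
```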

2.2. Model. Consider the following neural network model:

[??](t) = -Ax(t) + B(x(t)) f(x(t)) + C(x(t)) f(x([gamma](t))) + E[u.sup.(r)],

w(t) = f(x(t)), r = 1,2, ..., l, (1)

where x(t) = [([x.sub.1](t), [x.sub.2](t), ..., [x.sub.n](t)).sup.T] [member of] [[Real part].sup.n] is the neuron state, A = diag([a.sub.1], [a.sub.2], ..., [a.sub.n]), which is a constant diagonal matrix with [a.sub.i] > 0, i [member of] {1,2, ..., n}, indicates the self-inhibition matrix, B(x(t)) = [([b.sub.ij]([x.sub.j](t))).sub.n x n] and C(x(t)) = [([c.sub.ij]([x.sub.j](t))).sub.n x n] are connection weight matrices at time t and [gamma](t), respectively, and [b.sub.ij]([x.sub.j](t)) and [c.sub.ij]([x.sub.j](t)) are defined by

[mathematical expression not reproducible], (2)

for i, j = 1,2, ..., n, and [mathematical expression not reproducible], where [T.sub.j] > 0 represents the switching jump and the switching levels of [b.sub.ij] and [c.sub.ij] are all constants. f(x(t)) = [([f.sub.1]([x.sub.1](t)), [f.sub.2]([x.sub.2](t)), ..., [f.sub.n]([x.sub.n](t))).sup.T] is the activation function. The deviating function is [gamma](t) = [[??].sub.k] when t [member of] [[[eta].sub.k], [[eta].sub.k+1]) for any k [member of] N. E stands for a transform matrix of order n satisfying E[u.sup.(r)] = [q.sup.(r)] for r = 1,2, ..., l; [u.sup.(r)] and [q.sup.(r)] indicate the external input pattern and the corresponding memorized pattern, respectively. w(t) = [([w.sub.1](t), [w.sub.2](t), ..., [w.sub.n](t)).sup.T] is the output of (1) corresponding to [u.sup.(r)].
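Although (2) is not reproducible here, memristive connection weights of this kind are typically two-valued, switching at the jump [T.sub.j]. A minimal Python sketch under that assumption:

```python
def memristive_weight(x_j, T_j, w_low, w_high):
    """Two-valued state-dependent weight with switching jump at T_j:
    one constant level when |x_j| <= T_j, the other when |x_j| > T_j.
    The two-level form is an assumption, since (2) is not
    reproducible in this copy of the paper."""
    return w_low if abs(x_j) <= T_j else w_high

print(memristive_weight(0.05, 0.1, 0.3, -0.2))   # inside threshold: 0.3
print(memristive_weight(0.50, 0.1, 0.3, -0.2))   # past threshold: -0.2
```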

Neural network model (1) is called an associative memory if the output w(t) converges to [q.sup.(r)].

Obviously, neural network model (1) is of hybrid type: switching and mixed. When [absolute value of [x.sub.j](t)] [less than or equal to] [T.sub.j], one set of connection weights in (2) is active, and when [absolute value of [x.sub.j](t)] > [T.sub.j], the other set is active. From this perspective, (1) is a switching system. On the other hand, for any fixed interval [[[eta].sub.k], [[eta].sub.k+1]), when [gamma](t) = [[??].sub.k] < t < [[eta].sub.k+1], neural network model (1) is a retarded system. When [[eta].sub.k] [less than or equal to] t < [gamma](t) = [[??].sub.k], neural network model (1) is an advanced system. Then, from this point of view, (1) is a mixed system.

For neural network model (1), the conventional definition of solution for differential equations does not apply. To tackle this problem, the solution concept for differential equations with deviating argument is introduced [24-30]. According to this theory, a solution x(t) = [([x.sub.1](t), [x.sub.2](t), ..., [x.sub.n](t)).sup.T] of (1) is a continuous function such that (1) [??](t) exists on [0, +[infinity]), except at the points [[eta].sub.k], k [member of] N, where a one-sided derivative exists, and (2) x(t) satisfies (1) within each interval ([[eta].sub.k], [[eta].sub.k+1]), k [member of] N.

In this paper, the activation functions are selected as

[mathematical expression not reproducible], (3)

where p > 0 and d > 0 are used to adjust the slope and corner points of the activation functions and [z.sup.(i).sub.1] = [T.sub.i], [z.sup.(i).sub.2] = [T.sub.i] + d.

It is easy to see that [f.sub.i](x) satisfies [f.sub.i]([+ or -][T.sub.i]) = 0 and

[absolute value of [f.sub.i]([l.sub.1]) - [f.sub.i]([l.sub.2])] [less than or equal to] [alpha] [absolute value of [l.sub.1] - [l.sub.2]], i = 1,2, ..., n, (4)

for any [l.sub.1], [l.sub.2] [member of] (-[infinity], +[infinity]), where [alpha] = p/d.
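Since expression (3) is not reproducible here, the following sketch assumes a standard piecewise-linear form consistent with the stated properties ([f.sub.i]([+ or -][T.sub.i]) = 0, corner points [T.sub.i] and [T.sub.i] + d, saturation at [+ or -]p) and checks the Lipschitz bound (4) numerically:

```python
import random

T_i, p, d = 0.1, 1.0, 1.0      # threshold, saturation level, ramp width

def f(x, T=T_i):
    """Assumed piecewise-linear activation: f(+/-T) = 0, corner points
    z1 = T and z2 = T + d, slope alpha = p/d on the ramps, saturation
    at +/-p.  The exact expression (3) is not reproducible."""
    if x >= T + d:
        return p
    if x > T:
        return p * (x - T) / d
    if x >= -T:
        return 0.0
    if x > -T - d:
        return p * (x + T) / d
    return -p

# Numerical check of the global Lipschitz bound (4): |f(a)-f(b)| <= (p/d)|a-b|
alpha = p / d
for _ in range(1000):
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    assert abs(f(a) - f(b)) <= alpha * abs(a - b) + 1e-12

print(f(0.1), f(0.6), f(1.1))  # 0.0 0.5 1.0
```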

For technical convenience, denote [mathematical expression not reproducible], where K[*] denotes the convex closure of the set constituted by the real numbers [??] and [??].

2.3. Autoassociative Memories. From Definition 1, matrix E is selected as an identity matrix when designing autoassociative memories (i.e., E[u.sup.(r)] = [u.sup.(r)] = [q.sup.(r)] for r = 1,2, ..., l). For the convenience of analysis and formulation in autoassociative memories, neural network model (1) can be rewritten in component form:

[mathematical expression not reproducible]. (5)

2.4. Properties. For neural network model (1), we consider the following set-valued maps:

[mathematical expression not reproducible], (6)

[mathematical expression not reproducible], (7)

for i, j = 1,2, ..., n.

Based on the theory of differential inclusions [17-20], from (5), we can get

[mathematical expression not reproducible], (8)

or, equivalently, there exist [b.sup.*.sub.ij]([x.sub.j](t)) [member of] K[[b.sub.ij]([x.sub.j](t))] and [c.sup.*.sub.ij]([x.sub.j](t)) [member of] K[[c.sub.ij]([x.sub.j](t))] such that

[mathematical expression not reproducible], (9)

for almost all t [member of] [0, +[infinity]).

Employing the fuzzy uncertainty approach [21], neural network model (5) can be expressed as follows:

[mathematical expression not reproducible], (10)

where

[mathematical expression not reproducible], (11)

and sgn(x) is the sign function.

Definition 2. A constant vector [x.sup.*] = [([x.sup.*.sub.1], [x.sup.*.sub.2], ..., [x.sup.*.sub.n]).sup.T] is said to be an equilibrium point of neural network model (5) if and only if

[mathematical expression not reproducible], (12)

or, equivalently, there exist [mathematical expression not reproducible] such that

[mathematical expression not reproducible]. (13)

Definition 3. The equilibrium point [x.sup.*] = [([x.sup.*.sub.1], [x.sup.*.sub.2], ..., [x.sup.*.sub.n]).sup.T] of neural network model (5) is globally exponentially stable if there exist constants M > 0 and [epsilon] > 0 such that

[mathematical expression not reproducible], (14)

for any t [greater than or equal to] 0, where x(t) is the state vector of neural network model (5) with initial condition [x.sup.0].

Lemma 4. For neural network model (5), one has

[mathematical expression not reproducible], (15)

for i, j = 1,2, ..., n, [x.sub.j], [y.sub.j] [member of] (-[infinity], +[infinity]), where K[[b.sub.ij](x)] and K[[c.sub.ij](x)] are defined as those in (6) and (7), respectively.

Lemma 4 is a standard result; for details, see [Neural Networks, vol. 51, pp. 1-8, 2014] or [Information Sciences, vol. 279, pp. 358-373, 2014], among others.

In the following, we end this section with some basic assumptions:

(A1) There exists a positive constant [eta] satisfying

[[eta].sub.k+1] - [[eta].sub.k] < [eta], k [member of] N. (16)

(A2) [eta]([S.sub.b] + 2[S.sub.c]) exp{[S.sub.b][eta]} < 1.

(A3) [eta]([S.sub.c] + [S.sub.b])(1 + [S.sub.c][eta]) exp{[S.sub.b][eta]} < 1.

3. Main Results

In this section, we first present the conditions ensuring the existence and uniqueness of solution for neural network model (5) and then analyze its global exponential stability; finally, we discuss how to design the procedures of autoassociative memories and heteroassociative memories.

3.1. Existence and Uniqueness of Solution

Theorem 5. Let (A1) and (A2) hold. Then, there exists a unique solution x(t) = x(t, [t.sub.0], [x.sup.0]), t [greater than or equal to] [t.sub.0], of (5) for every ([t.sub.0], [x.sup.0]) [member of] [[Real part].sup.+] x [[Real part].sup.n].

Proof. The proof of the assertion is divided into two steps. Firstly, we discuss the existence of solution.

For a fixed k [member of] N, assume that [[eta].sub.k] [less than or equal to] [t.sub.0] [less than or equal to] [[??].sub.k] = [gamma](t) < t < [[eta].sub.k+1]; the other case [[eta].sub.k] [less than or equal to] [t.sub.0] < t < [[??].sub.k] = [gamma](t) < [[eta].sub.k+1] can be discussed in a similar way.

Let v(t) = x(t, [t.sub.0], [x.sup.0]), and consider the following equivalent equation:

[mathematical expression not reproducible]. (17)

Construct the following sequence {[v.sup.m.sub.i](t)}, m [member of] N, i = 1,2, ..., n:

[mathematical expression not reproducible], (18)

and [v.sup.0.sub.i](t) = [x.sup.0.sub.i].

Define a norm [mathematical expression not reproducible], and we can get

[mathematical expression not reproducible], (19)

where [mathematical expression not reproducible].

From (A2), we have ([S.sub.b] + [S.sub.c])[eta] < 1, and hence

[mathematical expression not reproducible]. (20)

Therefore, there exists a solution v(t) = x(t, [t.sub.0], [x.sup.0]) on the interval [[t.sub.0], [[??].sub.k]] for (17). From (4), the solution can be continued to [[eta].sub.k+1]. Utilizing a similar method, x(t) can be continued to [[??].sub.k+1] and then to [[eta].sub.k+2]. Applying mathematical induction, the proof of existence of solution for (5) is completed.

Next, we analyze the uniqueness of solution.

Fix k [member of] N and choose [t.sub.0] [member of] [[[eta].sub.k], [[eta].sub.k+1]], [x.sup.1] [member of] [[Real part].sup.n], [x.sup.2] [member of] [[Real part].sup.n]. Denote [x.sup.1](t) = x(t, [t.sub.0], [x.sup.1]) and [x.sup.2](t) = x(t, [t.sub.0], [x.sup.2]) as two solutions of (5) with different initial conditions ([t.sub.0], [x.sup.1]) and ([t.sub.0], [x.sup.2]), respectively.

Then, we get

[mathematical expression not reproducible]. (21)

Based on the Gronwall-Bellman inequality,

[mathematical expression not reproducible], (22)

and, particularly,

[mathematical expression not reproducible]. (23)

Hence,

[mathematical expression not reproducible]. (24)

Suppose that there exists some [??] [member of] [[[eta].sub.k], [[eta].sub.k+1]] satisfying [x.sup.1]([??]) = [x.sup.2]([??]); then,

[mathematical expression not reproducible]. (25)

From (A2), we get

[mathematical expression not reproducible]; (26)

that is,

[mathematical expression not reproducible]. (27)

Substituting (24) into (25), from (27), we obtain

[mathematical expression not reproducible]. (28)

This poses a contradiction. Therefore, the uniqueness of solution is true. Taken together, the proof of the existence and uniqueness of solution for (5) is completed.

3.2. Invariant Manifolds

Lemma 6. The solution x(t) of neural network model (5) will fall into [[PI].sup.n.sub.i=1] [d + [T.sub.i], +[infinity]) ultimately if, for i = 1,2, ..., n, the following conditions hold:

[mathematical expression not reproducible]. (29)

Proof. We distinguish four cases to prove the lemma.

(1) If, for some t, [x.sub.i](t) [member of] [[T.sub.i], d + [T.sub.i]), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (30)

That is to say, [[??].sub.i](t) > 0, when [x.sub.i](t) [member of] [[T.sub.i], d + [T.sub.i]).

(2) If, for some t, [x.sub.i](t) [member of] (-[T.sub.i], [T.sub.i]) i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (31)

That is to say, [[??].sub.i](t) > 0, when [x.sub.i](t) [member of] (-[T.sub.i], [T.sub.i]).

(3) If, for some t, [x.sub.i](t) [member of] (-[T.sub.i] - d, -[T.sub.i]), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (32)

That is to say, [[??].sub.i](t) > 0, when [x.sub.i](t) [member of] (-[T.sub.i] - d, -[T.sub.i]).

(4) If, for some t, [x.sub.i](t) [member of] (-[infinity], -[T.sub.i] - d), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (33)

That is to say, [[??].sub.i](t) > 0, when [x.sub.i](t) [member of] (-[infinity], -[T.sub.i] - d).

From the above analysis, we can get that any component [x.sub.i](t) (i = 1,2, ..., n) of the solution x(t) of neural network model (5) will fall into [d + [T.sub.i], +[infinity]) ultimately. Hence, the proof is completed.

Lemma 7. The solution x(t) of neural network model (5) will fall into [[PI].sup.n.sub.i=1](-[infinity], -d - [T.sub.i]] ultimately if, for i = 1,2, ..., n, the following conditions hold:

[mathematical expression not reproducible]. (34)

Proof. We distinguish four cases to prove the lemma.

(1) If, for some t, [x.sub.i](t) [member of] [d + [T.sub.i], +[infinity]), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (35)

That is to say, [[??].sub.i](t) < 0, when [x.sub.i](t) [member of] [d + [T.sub.i], +[infinity]).

(2) If, for some t, [x.sub.i](t) [member of] ([T.sub.i], d + [T.sub.i]), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (36)

That is to say, [[??].sub.i](t) < 0, when [x.sub.i](t) [member of] ([T.sub.i], d + [T.sub.i]).

(3) If, for some t, [x.sub.i](t) [member of] (-[T.sub.i], [T.sub.i]), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (37)

That is to say, [[??].sub.i](t) < 0, when [x.sub.i](t) [member of] (-[T.sub.i], [T.sub.i]).

(4) If, for some t, [x.sub.i](t) [member of] (-[T.sub.i] - d, -[T.sub.i]), i [member of] {1,2, ..., n}, then

[mathematical expression not reproducible]. (38)

That is to say, [[??].sub.i](t) < 0, when [x.sub.i](t) [member of] (-[T.sub.i] - d, -[T.sub.i]).

From the above analysis, we can get that any component [x.sub.i](t) (i = 1,2, ..., n) of the solution x(t) of neural network model (5) will fall into (-[infinity], -d - [T.sub.i]] ultimately. Hence, the proof is completed.

3.3. Global Exponential Stability Analysis. Denote

[mathematical expression not reproducible]. (39)

In order to guarantee the existence of equilibrium point for neural network model (5), the following assumption is needed:

(A4) For any i [member of] {1,2, ..., n}, i [member of] [[DELTA].sub.1] [union] [[DELTA].sub.2].

Theorem 8. Let (A4) hold. Then, neural network model (5) has at least one equilibrium point [x.sup.*] [member of] [THETA], where

[mathematical expression not reproducible]. (40)

Proof. Define a mapping

F(x) = ([F.sub.1] (x), [F.sub.2] (x), ..., [F.sub.n] [(x)).sup.T], (41)

where

[mathematical expression not reproducible]. (42)

It is obvious that F(x) is continuous.

When i [member of] [[DELTA].sub.1], there exists a small enough positive number [[epsilon].sub.1] satisfying

[mathematical expression not reproducible], (43)

and when i [member of] [[DELTA].sub.2], there exists a small enough positive number [[epsilon].sub.2] satisfying

[mathematical expression not reproducible]. (44)

Take 0 < [epsilon] < min([[epsilon].sub.1], [[epsilon].sub.2]) and denote

[mathematical expression not reproducible], (45)

which is a bounded and closed set.

When [x.sub.i] [member of] [d + [T.sub.i] + [epsilon], (d + [T.sub.i])/[epsilon]], i [member of] [[DELTA].sub.1],

[mathematical expression not reproducible]. (46)

Similarly, we can obtain

[F.sub.i](x) < -[T.sub.i] - d - [epsilon] (47)

for [x.sub.i] [member of] [-(d + [T.sub.i])/[epsilon], -d - [T.sub.i] - [epsilon]], i [member of] [[DELTA].sub.2].

From the above discussion, we can get F(x) [member of] [THETA] for x [member of] [THETA]. Based on the generalized Brouwer's fixed point theorem, there exists at least one equilibrium [x.sup.*] [member of] [THETA]. This completes the proof.

Recalling (10), for i = 1,2, ..., n, we have

[mathematical expression not reproducible]. (48)

[mathematical expression not reproducible], (49)

where [mathematical expression not reproducible], or there exists [mathematical expression not reproducible], such that

[mathematical expression not reproducible]. (50)

According to Lemma 4,

[mathematical expression not reproducible]. (51)

Lemma 9. Let (A1)-(A3) hold and y(t) = [([y.sub.1](t), [y.sub.2](t), ..., [y.sub.n](t)).sup.T] be a solution of (49) or (50). Then,

[mathematical expression not reproducible]. (52)

Proof. For any t > 0, there exists a unique k [member of] N, such that t [member of] [[[eta].sub.k], [[eta].sub.k+1]); then,

[mathematical expression not reproducible]. (53)

Taking the absolute value on both sides of (53),

[mathematical expression not reproducible], (54)

and then

[mathematical expression not reproducible]. (55)

Applying Lemma 4,

[mathematical expression not reproducible]; (56)

that is

[mathematical expression not reproducible]; (57)

then

[mathematical expression not reproducible]. (58)

According to the Gronwall-Bellman inequality,

[mathematical expression not reproducible]. (59)

Exchanging the location of [y.sub.i](t) and [y.sub.i]([[??].sub.k]) in (53), we get

[mathematical expression not reproducible], (60)

and then

[parallel]y([[??].sub.k])[parallel] [less than or equal to] [mu] [parallel]y(t)[parallel] (61)

holds for t [mathematical expression not reproducible]. This completes the proof.

Remark 10. Neural network model (49) is also of a hybrid type. The difficulty in investigating this class of neural networks lies in the existence of a switching jump and deviating argument. For the switching jump, we introduce the differential inclusions and fuzzy uncertainty approach to compensate state-jump uncertainty. For the deviating argument, by the aid of effective computational analysis, we estimate the norm of deviating state y([gamma](t)) by the corresponding state y(t).

In the following, the criterion to guarantee the global exponential stability of neural network model (5) based on the Lyapunov method is established.

Theorem 11. Let (A1)-(A4) hold. The origin of neural network model (49) is globally exponentially stable, which implies that the equilibrium point [x.sup.*] = [([x.sup.*.sub.1], [x.sup.*.sub.2], ..., [x.sup.*.sub.n]).sup.T] of neural network model (5) is globally exponentially stable, if the following condition is satisfied:

[mu][S.sub.c] - [S.sup.-.sub.b] < 0. (62)

Proof. Denote

[z.sub.i](t) = exp {[lambda]t} [absolute value of [y.sub.i](t)], i = 1,2, ..., n, (63)

where [lambda] = -(1/2)([mu][S.sub.c] - [S.sup.-.sub.b]) > 0 is a positive constant satisfying

[lambda] + [mu][S.sub.c] - [S.sup.-.sub.b] = 1/2 ([mu][S.sub.c] - [S.sup.-.sub.b]) < 0. (64)

Then, for t [not equal to] [[eta].sub.k], k [member of] N, calculate the derivative of [z.sub.i](t) along the trajectory of (49) or (50):

[mathematical expression not reproducible]. (65)

Applying Lemma 4,

[mathematical expression not reproducible]. (66)

Let

V (t) = [n.summation over (i=1)][z.sub.i](t); (67)

then, for t [not equal to] [[eta].sub.k], k [member of] N,

[mathematical expression not reproducible]. (68)

Based on Lemma 9,

[??](t) [less than or equal to] exp {[lambda]t} ([lambda] - [S.sup.-.sub.b] + [mu][S.sub.c]) [parallel]y(t)[parallel]. (69)

From (64),

[lambda] - [S.sup.-.sub.b] + [mu][S.sub.c] < 0; (70)

then,

[??](t) [less than or equal to] exp {[lambda]t} ([lambda] - [S.sup.-.sub.b] + [mu][S.sub.c]) [parallel]y(t)[parallel] [less than or equal to] 0. (71)

It is easy to see that

[mathematical expression not reproducible]; (72)

hence

[parallel]y(t)[parallel] [less than or equal to] [parallel]y(0)[parallel] exp {-[lambda]t}; (73)

that is

[parallel]x(t) - [x.sup.*][parallel] [less than or equal to] [parallel]x(0) - [x.sup.*][parallel] exp {-[lambda]t}, (74)

for t [greater than or equal to] 0. This completes the proof.

3.4. Design Procedure of Autoassociative Memories. Given l (l [less than or equal to] [2.sup.n]) vectors [u.sup.(1)], [u.sup.(2)], ..., [u.sup.(l)] to be memorized, where [u.sup.(k)] [member of] [OMEGA], for k = 1,2, ..., l, the design procedure of autoassociative memories is stated as follows:

(1) Select matrix E as an identity matrix.

(2) According to (29) and (34), choose appropriate [S.sub.i], [a.sub.i], [T.sub.i], [[bar.b].sub.ij], and [[bar.c].sub.ij].

(3) Based on [mathematical expression not reproducible], the values of [mathematical expression not reproducible] can be selected.

(4) Calculate [S.sub.b], [S.sup.-.sub.b], and [S.sub.c]. Adjust the values of [mathematical expression not reproducible] to make sure that [S.sub.c] - [S.sup.-.sub.b] < 0.

(5) Solve the inequalities [eta]([S.sub.b] + 2[S.sub.c]) exp{[S.sub.b][eta]} < 1 and [eta]([S.sub.c] + [S.sub.b])(1 + [S.sub.c][eta]) exp{[S.sub.b][eta]} < 1 to get 0 < [eta] < [bar.[eta]].

(6) Take a proper [eta] [member of] (0, [bar.[eta]]) such that [mu][S.sub.c] - [S.sup.-.sub.b] < 0.

The output w(t) will converge to the related memory pattern when matrices A, B, C, and E and scalar [eta] are chosen as above.
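Steps (5) and (6) require the largest admissible [eta]. Since both left-hand sides of (A2) and (A3) are increasing in [eta], the bound [bar.[eta]] can be found by bisection; the values of [S.sub.b] and [S.sub.c] below are hypothetical:

```python
import math

def eta_bar(S_b, S_c, tol=1e-10):
    """Largest eta for which both design conditions hold strictly:
      (A2)  eta*(S_b + 2*S_c)*exp(S_b*eta) < 1
      (A3)  eta*(S_c + S_b)*(1 + S_c*eta)*exp(S_b*eta) < 1
    Both left-hand sides increase monotonically in eta >= 0,
    so the boundary can be located by bisection."""
    def ok(eta):
        e = math.exp(S_b * eta)
        return (eta * (S_b + 2 * S_c) * e < 1
                and eta * (S_c + S_b) * (1 + S_c * eta) * e < 1)
    lo, hi = 0.0, 1.0
    while ok(hi):              # grow until a violating eta is found
        hi *= 2
    while hi - lo > tol:       # bisect down to the boundary
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

# Hypothetical values of S_b and S_c for illustration
print(eta_bar(1.5, 0.5))
```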

3.5. Design Procedure of Heteroassociative Memories. Given l (l [less than or equal to] [2.sup.n]) vectors to be memorized as [q.sup.(1)], [q.sup.(2)], ..., [q.sup.(l)], which correspond to the external input patterns [u.sup.(1)], [u.sup.(2)], ..., [u.sup.(l)], where [q.sup.(k)] [member of] [OMEGA], [u.sup.(k)] [member of] [OMEGA], k = 1,2, ..., l, set EU = Q, where U = [[u.sup.(1)], [u.sup.(2)], ..., [u.sup.(l)]] and Q = [[q.sup.(1)], [q.sup.(2)], ..., [q.sup.(l)]]. It is clear that heteroassociative memories can be treated as autoassociative memories once the transform matrix E is obtained. Matrices A, B, and C and scalar [eta] can be selected by the method used for autoassociative memories. The transform matrix E is obtained by the following steps:

(1) When l = n, that is, when Q and U are square matrices, we can compute the inverse matrix [U.sup.-1] (e.g., using MATLAB), and E can be obtained as E = Q[U.sup.-1].

(2) When l < n, add n - l proper column vectors to U and Q to construct new matrices [bar.U] and [bar.Q], respectively, such that [bar.U] and [bar.Q] have full rank. The transform matrix E can then be obtained as E = [bar.Q][[bar.U].sup.-1].

(3) When n < l, add l - n proper row vectors to U and Q to construct new matrices [??] and [??] such that [??] and [??] have full rank. The transform matrix E can then be obtained as E = [??][[??].sup.-1]. Note that, in this circumstance, the dimensionality of each input and output pattern increases from n to l, and only the first n components of each external input pattern take part in the associative memory.

The output w(t) will converge to the related memory pattern when matrices A, B, C, and E and scalar [eta] are chosen as above.
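The square case in step (1) can be sketched with NumPy; the 4-dimensional bipolar patterns below are hypothetical, chosen so that U is invertible:

```python
import numpy as np

# Columns of U are the input patterns u^(r); columns of Q are the
# corresponding memorized patterns q^(r).  Hypothetical data.
U = np.array([[ 1,  1, -1,  1],
              [ 1, -1,  1,  1],
              [-1,  1,  1, -1],
              [ 1,  1,  1, -1]], dtype=float)
Q = np.array([[ 1, -1,  1,  1],
              [-1,  1,  1, -1],
              [ 1,  1, -1,  1],
              [ 1,  1,  1,  1]], dtype=float)

E = Q @ np.linalg.inv(U)       # E U = Q, so E u^(r) = q^(r) for each r
print(np.allclose(E @ U, Q))   # True
```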

4. Experimental Verification

In this section, in order to verify the effectiveness of the proposed results, several numerical examples are provided.

Example 12. The first example shows the state response of an autoassociative memory.

Consider neural network model (5) with n = 12, where p = d = 1, [eta] = 2/9, [T.sub.i] = 0.1, [a.sub.i] = 0.4, and

[mathematical expression not reproducible], (75)

with the activation functions

[mathematical expression not reproducible] (76)

for i, j = 1,2, ..., 12. Denote u = q = [(1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1).sup.T] as the external input vector and the memorized vector, respectively.

It follows from Theorem 11 that neural network model (5) has a unique equilibrium point, which is globally exponentially stable. The simulation result is shown in Figure 1.

It can be calculated easily that the equilibrium point in Figure 1 is

[x.sup.*] = [(2.71,2.71,2.71,2.71,-2.31,-2.31,2.71, -2.31,-2.31,2.71,2.71,2.71).sup.T]. (77)

Clearly, the output pattern is

[w.sup.*] = [(1,1,1,1,-1,-1,1,-1,-1,1,1,1).sup.T], (78)

which is equivalent to the memorized pattern q. Hence, neural network model (5) can be implemented effectively as an autoassociative memory.
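Since the parameters in (75) are not reproducible here, the experiment can only be rerun in miniature. The following sketch uses n = 2 with hypothetical two-valued weights and a hypothetical scaled input pattern q = (1, -1)^T, integrates (5) with a forward-Euler scheme (the advanced part of the deviation is approximated by the most recent sample), and recovers the memorized bipolar pattern at the output:

```python
T, p, d, a = 0.1, 1.0, 1.0, 1.0            # threshold, slope/corner, leak rate
eta_step = 0.2                             # eta_k = 0.2k; zeta_k = midpoint

def f(x):                                  # piecewise-linear activation (cf. (3))
    if x >= T + d:
        return p
    if x > T:
        return p * (x - T) / d
    if x >= -T:
        return 0.0
    if x > -T - d:
        return p * (x + T) / d
    return -p

def w(x, low, high):                       # two-valued memristive weight
    return low if abs(x) <= T else high

# Hypothetical weight levels: (value for |x_j| <= T, value for |x_j| > T)
B = [[(0.20, 0.15), (-0.10, -0.05)],
     [(0.10, 0.05), (0.20, 0.15)]]
C = [[(0.05, 0.05), (0.05, 0.05)],
     [(0.05, 0.05), (0.05, 0.05)]]
inp = [2.0, -2.0]                          # E u = scaled pattern q = (1, -1)^T

dt, x, hist = 0.01, [0.0, 0.0], [[0.0, 0.0]]
for step in range(int(20.0 / dt)):
    t = step * dt
    zeta = (int(t / eta_step) + 0.5) * eta_step   # gamma(t) on [eta_k, eta_{k+1})
    # state at gamma(t); when zeta > t (advanced), clamp to latest sample
    xg = hist[min(int(zeta / dt), len(hist) - 1)]
    x = [x[i] + dt * (-a * x[i] + inp[i]
                      + sum(w(x[j], *B[i][j]) * f(x[j])
                            + w(x[j], *C[i][j]) * f(xg[j]) for j in range(2)))
         for i in range(2)]
    hist.append(x[:])

print([round(f(xi)) for xi in x])          # recovered bipolar pattern
```

With the leak rate dominating the weight levels, the state settles near (2.2, -2.1)^T, beyond the saturation corners, so the output reads off the stored pattern (1, -1)^T.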

Remark 13. Because the equilibrium point is globally exponentially stable, the initial values can be selected randomly, and there is no influence on the performance of associative memories.

Example 14. The second example concerns the influences of [a.sub.i], d, and p on the location of equilibrium points.

The influences are divided into two cases to discuss.

Case 1. Let p = 1 be a constant and denote u = q = [(1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1).sup.T] as the external input vector and the memorized vector, respectively. Consider [a.sub.i] and d as variables in neural network model (5), taking [eta] = 1/9; the rest of the parameters are the same as those in Example 12. The results are shown in Table 1, from which we can see that the smaller the value of [a.sub.i], the larger the range of d that preserves the memorized pattern, and vice versa.

Case 2. Let d = 0.25 be a constant and denote u = q = [(p, p, p, p, -p, -p, p, -p, -p, p, p, p).sup.T] as the external input vector and the memorized vector, respectively, where p > 0. Consider [a.sub.i] and p (or [alpha] = p/d) as variables in neural network model (5), taking [eta] = 1/9; the rest of the parameters are the same as those in Example 12. The results are shown in Table 2, from which we can see that [alpha] (or p) should increase when [a.sub.i] increases in order to preserve the memorized pattern, and vice versa.

Remark 15. From Tables 1 and 2, it can be concluded that the associative memory performance improves as [alpha] = p/d increases when [a.sub.i] is fixed, but not vice versa.

Example 16. The third example concerns the influences of [a.sub.i] and a perturbed input u on the recall success rate of memorized patterns.

Let $u = (1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1)^T$, $\eta = 1/9$, and consider $a_i$ ($i = 1, 2, \ldots, 12$) as variables in neural network model (5). The other parameters are the same as those in Example 12. In this example, the influences of $a_i$ and the perturbed input u on the recall success rate of memorized vectors are analyzed. Experimental results are given in Table 3, which makes it clear that the robustness of neural network model (5) against a perturbed input u becomes stronger as $a_i$ decreases.

Remark 17. Table 3 illustrates that the robustness of the associative memory against a perturbed input u becomes stronger as $a_i$ decreases.

Example 18. The fourth example demonstrates an autoassociative memory application without input perturbation.

According to neural network model (5), design an autoassociative memory to memorize the pattern "T", which is represented by a 4 x 3-pixel image (black pixel = 1; white pixel = -1); the corresponding input pattern is $u = (1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1)^T$. The parameters are the same as those in Example 12.
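The correspondence between the 12-component pattern vector and the 4 x 3-pixel image can be sketched as follows; the row-major ordering of the pixels is an assumption, though it does reproduce the "T" shape:

```python
import numpy as np

# Input pattern for "T" (black pixel = 1, white pixel = -1)
u = np.array([1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1])

# Assumed row-major mapping of the 12 components onto a 4 x 3 grid
image = u.reshape(4, 3)
for row in image:
    print("".join("#" if px == 1 else "." for px in row))
# ###
# .#.
# .#.
# .#.
```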

Simulation results with three random initial values are depicted in Figure 2, where a dark grey block indicates that the value of the corresponding component of the output w(t) lies in (0, 1) and a light grey block indicates that it lies in (-1, 0). Clearly, the output pattern w(t) converges correctly to the memorized pattern.

Example 19. The fifth example demonstrates an autoassociative memory application with input perturbation.

The aim of this experiment is to examine the influence of input pattern perturbation on the output pattern. The parameters are the same as those in Example 12. Choose the pattern "T" to be memorized, and select the perturbed vector as $u = kq$, where $q = (1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1)^T$ is the corresponding memorized vector and k is a perturbation coefficient.

In what follows, the perturbed vector u is applied to neural network model (5) as the input vector with k = 0.52 and k = 0.41, respectively. Simulation results are shown in Figures 3 and 4. In Figure 3, the output pattern is exactly the same as the memorized pattern; in Figure 4, the output pattern differs from the memorized pattern, although the two can still be distinguished by their outlines.
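A minimal numerical sketch of this recall-under-perturbation experiment is given below. It uses a simplified, fully decoupled surrogate $\dot{x} = -dx + af(x) + u$ in place of neural network model (5); the memristive weights, interconnections, and deviating argument are all omitted, so the surrogate only illustrates the mechanism and does not reproduce the recall failure at k = 0.41 shown in Figure 4. The parameter values d = 0.25 and a = 0.5 are assumptions chosen for illustration.

```python
import numpy as np

# Saturating piecewise-linear activation (assumed output function)
def f(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def recall(u, d=0.25, a=0.5, dt=0.01, steps=5000):
    """Forward-Euler integration of the decoupled surrogate
    dx/dt = -d*x + a*f(x) + u from x(0) = 0; returns the bipolar output."""
    x = np.zeros_like(u)
    for _ in range(steps):
        x += dt * (-d * x + a * f(x) + u)
    return np.sign(f(x))

# Memorized pattern "T" and its scaled (perturbed) input u = k*q
q = np.array([1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1], dtype=float)
w = recall(0.52 * q)
print(np.array_equal(w, q))  # True: the scaled input still recalls "T"
```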

Remark 20. From the analysis above, we can see that the output pattern converges to the memorized pattern provided the perturbation coefficient is not too small. In fact, neural network model (5) works well as an associative memory if the coefficient satisfies $k \geq 0.52$ and the other parameters are the same as those in Example 19.

Example 21. The sixth example demonstrates a heteroassociative memory application.

Consider neural network model (1); the parameters are the same as those in Example 12 except for the transform matrix E, which will be given later. The aim of this experiment is to design a heteroassociative memory that memorizes the pattern "U" when the input pattern is "C". Clearly, the input vector is $u = (1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1)^T$ and the corresponding output vector is $q = (1, -1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1)^T$.

According to the heteroassociative memory design procedure, the matrices $\bar{U}$ and $\bar{Q}$ are constructed as

[mathematical expression not reproducible], (79)

respectively.

Using MATLAB, we can obtain

[mathematical expression not reproducible]. (80)

The simulation results are depicted in Figure 5, which shows that the output pattern "U" is recalled when the input pattern is "C" and the transform matrix E is selected as above. This indicates that neural network model (1) can perform well as a heteroassociative memory under a proper transform matrix E, which is easily obtained using MATLAB.
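Since the matrices in (79)-(80) are not reproduced here, the sketch below shows one common way such a transform matrix E can be computed: a least-squares fit via the Moore-Penrose pseudoinverse (MATLAB's pinv; numpy.linalg.pinv below). Whether this matches the paper's exact design procedure is an assumption, and only the single stored pair ("C" to "U") from this example is used.

```python
import numpy as np

# Input pattern "C" and desired output pattern "U" as column vectors
u = np.array([[1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1]], dtype=float).T
q = np.array([[1, -1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1]], dtype=float).T

# Least-squares transform: solve E @ u ~= q, i.e. E = q @ pinv(u).
# With a single stored pattern this maps u to q exactly.
E = q @ np.linalg.pinv(u)
print(np.allclose(E @ u, q))  # True
```

With several stored pairs, the columns of u and q would be stacked into matrices (as in $\bar{U}$ and $\bar{Q}$) and the same pseudoinverse formula applied.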

Remark 22. The simulation results in Figure 5 show that the associative memory model presented in this paper can be implemented effectively as both an autoassociative memory and a heteroassociative memory. The model is robust against external input perturbations and can be regarded as an extension and complement of the relevant results [10, 11, 16].

5. Concluding Remarks

Analysis and design of associative memories for recurrent neural networks have drawn considerable attention, whereas analysis and design of associative memories for memristive neural networks with deviating argument have not been well investigated. This paper presents a general class of memristive neural networks with deviating argument and discusses the analysis and design of autoassociative memories and heteroassociative memories. Meanwhile, the robustness of this class of neural networks is demonstrated when external input patterns are seriously perturbed. To the best of the author's knowledge, this is the first theoretical result on associative memories for memristive neural networks with deviating argument. Future works may aim at (1) studying associative memories based on multistability of memristive neural networks with deviating argument; (2) analyzing associative memories based on discrete-time memristive neural networks with deviating argument; (3) exploring high-capacity associative memories for memristive neural networks with deviating argument.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the Research Project of Hubei Provincial Department of Education of China under Grant T201412.

https://doi.org/10.1155/2017/1057909

References

[1] J. Tian and S. Zhong, "Improved delay-dependent stability criterion for neural networks with time-varying delay," Applied Mathematics and Computation, vol. 217, no. 24, pp. 10278-10288, 2011.

[2] Y. Du, S. Zhong, and N. Zhou, "Global asymptotic stability of Markovian jumping stochastic Cohen-Grossberg BAM neural networks with discrete and distributed time-varying delays," Applied Mathematics and Computation, vol. 243, pp. 624-636, 2014.

[3] W.-H. Chen, X. Lu, and W. X. Zheng, "Impulsive stabilization and impulsive synchronization of discrete-time delayed neural networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 4, pp. 734-748, 2015.

[4] H. Zhang, F. Yang, X. Liu, and Q. Zhang, "Stability analysis for neural networks with time-varying delay based on quadratic convex combination," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 4, pp. 513-521, 2013.

[5] S. Wen, T. Huang, Z. Zeng, Y. Chen, and P. Li, "Circuit design and exponential stabilization of memristive neural networks," Neural Networks, vol. 63, pp. 48-56, 2015.

[6] A. L. Wu, L. Liu, T. W. Huang, and Z. G. Zeng, "Mittag-Leffler stability of fractional-order neural networks in the presence of generalized piecewise constant arguments," Neural Networks, vol. 85, pp. 118-127, 2017.

[7] C. Hu, J. Yu, Z. H. Chen, H. J. Jiang, and T. W. Huang, "Fixed-time stability of dynamical systems and fixed-time synchronization of coupled discontinuous neural networks," Neural Networks, vol. 89, pp. 74-83, 2017.

[8] T. W. Huang, "Robust stability of delayed fuzzy Cohen-Grossberg neural networks," Computers & Mathematics with Applications, vol. 61, no. 8, pp. 2247-2250, 2011.

[9] G. Grassi, "On discrete-time cellular neural networks for associative memories," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 48, no. 1, pp. 107-111, 2001.

[10] Q. Han, X. Liao, T. Huang, J. Peng, C. Li, and H. Huang, "Analysis and design of associative memories based on stability of cellular neural networks," Neurocomputing, vol. 97, pp. 192-200, 2012.

[11] M. S. Tarkov, "Oscillatory neural associative memories with synapses based on memristor bridges," Optical Memory and Neural Networks, vol. 25, no. 4, pp. 219-227, 2016.

[12] M. Baghelani, A. Ebrahimi, and H. B. Ghavifekr, "Design of RF MEMS based oscillatory neural network for ultra high speed associative memories," Neural Processing Letters, vol. 40, no. 1, pp. 93-102, 2014.

[13] M. Kobayashi, "Gradient descent learning rule for complex-valued associative memories with large constant terms," IEEJ Transactions on Electrical and Electronic Engineering, vol. 11, no. 3, pp. 357-363, 2016.

[14] A. R. Trivedi, S. Datta, and S. Mukhopadhyay, "Application of silicon-germanium source tunnel-fet to enable ultralow power cellular neural network-based associative memory," IEEE Transactions on Electron Devices, vol. 61, no. 11, pp. 3707-3715, 2014.

[15] Y. Wu and S. N. Batalama, "Improved one-shot learning for feedforward associative memories with application to composite pattern association," IEEE Transactions on Systems, Man, and Cybernetics, PartB: Cybernetics, vol. 31, no. 1, pp. 119-125, 2001.

[16] C. Zhou, X. Zeng, J. Yu, and H. Jiang, "A unified associative memory model based on external inputs of continuous recurrent neural networks," Neurocomputing, vol. 186, pp. 44-53, 2016.

[17] R. Zhang, D. Zeng, S. Zhong, and Y. Yu, "Event-triggered sampling control for stability and stabilization of memristive neural networks with communication delays," Applied Mathematics and Computation, vol. 310, pp. 57-74, 2017.

[18] G. Zhang and Y. Shen, "Exponential stabilization of memristor-based chaotic neural networks with time-varying delays via intermittent control," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1431-1441, 2015.

[19] S. Wen, Z. Zeng, T. Huang, and Y. Zhang, "Exponential adaptive lag synchronization of memristive neural networks via fuzzy method and applications in pseudo random number generators," IEEE Transactions on Fuzzy Systems, 2013.

[20] A. Wu and Z. Zeng, "Lagrange stability of memristive neural networks with discrete and distributed delays," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 4, pp. 690-703, 2014.

[21] S. B. Ding, Z. S. Wang, J. D. Wang, and H. G. Zhang, "$H_\infty$ state estimation for memristive neural networks with time-varying delays: the discrete-time case," Neural Networks, vol. 84, pp. 47-56, 2016.

[22] Q. Zhong, J. Cheng, Y. Zhao, J. Ma, and B. Huang, "Finite-time $H_\infty$ filtering for a class of discrete-time Markovian jump systems with switching transition probabilities subject to average dwell time switching," Applied Mathematics and Computation, vol. 225, pp. 278-294, 2013.

[23] K. Shi, X. Liu, H. Zhu, S. Zhong, Y. Zeng, and C. Yin, "Novel delay-dependent master-slave synchronization criteria of chaotic Lur'e systems with time-varying-delay feedback control," Applied Mathematics and Computation, vol. 282, pp. 137-154, 2016.

[24] M. U. Akhmet, D. Arugaslan, and E. Yilmaz, "Stability analysis of recurrent neural networks with piecewise constant argument of generalized type," Neural Networks, vol. 23, no. 7, pp. 805-811, 2010.

[25] M. U. Akhmet, D. Arugaslan, and E. Yilmaz, "Stability in cellular neural networks with a piecewise constant argument," Journal of Computational and Applied Mathematics, vol. 233, no. 9, pp. 2365-2373, 2010.

[26] M. U. Akhmet and E. Yilmaz, "Impulsive Hopfield-type neural network system with piecewise constant argument," Nonlinear Analysis: Real World Applications, vol. 11, no. 4, pp. 2584-2593, 2010.

[27] K.-S. Chiu, M. Pinto, and J.-C. Jeng, "Existence and global convergence of periodic solutions in recurrent neural network models with a general piecewise alternately advanced and retarded argument," Acta Applicandae Mathematicae, vol. 133, pp. 133-152, 2014.

[28] L. G. Wan and A. L. Wu, "Stabilization control of generalized type neural networks with piecewise constant argument," Journal of Nonlinear Science and Applications, vol. 9, no. 6, pp. 3580-3599, 2016.

[29] L. Liu, A. L. Wu, Z. G. Zeng, and T. W. Huang, "Global mean square exponential stability of stochastic neural networks with retarded and advanced argument," Neurocomputing, vol. 247, pp. 156-164, 2017.

[30] A. Wu and Z. Zeng, "Output convergence of fuzzy neurodynamic system with piecewise constant argument of generalized type and time-varying input," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 12, pp. 1689-1702, 2016.

Jin-E Zhang

Hubei Normal University, Hubei 435002, China

Correspondence should be addressed to Jin-E Zhang; zhang86021205@163.com

Received 19 April 2017; Revised 20 June 2017; Accepted 2 July 2017; Published 9 August 2017

Academic Editor: Radek Matusu

Caption: Figure 1: State response in neural network model (5) with external input u.

Caption: Figure 2: Typical evolutions of output pattern with input vector u = q under three random initial values.

Caption: Figure 3: Typical evolutions of output pattern with input vector u = 0.52 * q under three random initial values.

Caption: Figure 4: Typical evolutions of output pattern with input vector u = 0.41 * q under three random initial values.

Caption: Figure 5: Typical evolutions of pattern "U" with input pattern "C" under three random initial values.

Table 1: The influences of $a_i$ and d on the location of equilibrium points when p = 1 is a constant.

                       d
$a_i$      1.2   1.4   1.6   2.0   2.4   3.0
0.3         v     v     v     v     v    --
0.4         v     v     v     v    --    --
0.5         v     v     v    --    --    --
0.6         v     v    --    --    --    --
0.7         v    --    --    --    --    --

"v" ("--") denotes that the equilibrium points are located (not located) in the saturation regions.

Table 2: The influences of $a_i$ and $\alpha = p/d$ on the location of equilibrium points when d = 0.25 is a constant.

                   $\alpha = p/d$
$a_i$      1/2   2/3   3/4    1    1.2   1.5
0.40       --     v     v     v     v     v
0.45       --    --     v     v     v     v
0.60       --    --    --     v     v     v
0.70       --    --    --    --     v     v
0.90       --    --    --    --    --     v

"v" ("--") denotes that the equilibrium points are located (not located) in the saturation region.

Table 3: The influences of $a_i$ and the perturbed input u on the recall success rate.

                      $a_i$
u          0.3      0.4      0.5      0.6      0.7
0.9u      100%     100%     100%     100%     100%
0.84u     100%     100%     100%     100%     66.67%
0.63u     100%     100%     100%     66.67%   0
0.52u     100%     100%     66.67%   0
0.41u     100%     66.67%   0
0.25u     66.67%   0
0.24u     0

Publication: Mathematical Problems in Engineering, 2017.