# Identification of Coupled Map Lattice Based on Compressed Sensing

1. Introduction

The coupled map lattice (CML), developed by Kaneko [1-3], is a dynamical system that can be described as an array of finite-dimensional subsystems endowed with local interactions. The evolution of the subsystem at one lattice element depends not only on the state of the element itself but also on the states of the other elements. If the coupling interaction is a function of all the elements, the CML is called a globally coupled map lattice. A large number of remarkable results on this topic have been obtained by researchers working in physics, biology, mathematics, and engineering. However, most papers have concentrated on analysing the evolutionary behaviours and properties of CML [4-6], while the identification or reconstruction of CML parameters from measured data has received relatively little attention. Over the past fifteen years, several methods for the identification problem have been presented [7-11]. Billings and Coca [8] introduced a novel approach to the identification of CML that exploits the regularity of the CML model. Subsequently, Billings et al. [10] pointed out some drawbacks of the methods in [7, 9] and presented a multiresolution approximation of the underlying spatiotemporal dynamics using B-spline wavelet and scaling functions. Ning et al. [11] first transformed the nonlinear spatiotemporal system into a class of multi-input multi-output (MIMO) partially linear systems and then proposed an effective online identification algorithm for it. Later, Ning et al. [12] proposed a kernel-based learning algorithm that achieves accurate derivative estimation for nonlinear functions in the time domain. Kohar et al. [13] studied coupled redundancy as a mechanism for reducing local noise in coupled map lattices and investigated the roles of network topology, coupling strength, and iteration number in this mechanism.

To improve the efficiency of parameter identification, however, we sometimes need to reduce the number of observations (samplings) while preserving identification accuracy. In addition, from an engineering perspective, we cannot guarantee that the received data are truly generated by the CML system, because the sampled data may be contaminated with noise during indispensable tasks such as data sampling and transmission. To the best of our knowledge, most papers on this topic (including those mentioned above) do not consider these two significant problems. Identifying the parameters of a CML from very few observations is still regarded as an intractable problem, and so is accurately identifying all the weighted parameters when the observed data are contaminated with noise.

Compressed sensing, introduced in [14-17], breaks the limitation of the Nyquist-Shannon sampling theorem and is widely used in fields such as information theory, medical imaging, earth sciences, and astronomy. The standard model of compressed sensing is an underdetermined linear system $y = Ax$, where $A \in \mathbb{R}^{M \times N}$, $x \in \mathbb{R}^N$, $y \in \mathbb{R}^M$, and $M < N$. Clearly, given a coefficient matrix $A$ and an observation vector $y$, there exist infinitely many $x$ such that $y = Ax$. To recover a unique vector $x$ from the observed vector $y$, two conditions are needed: the vector $x$ must be sparse, and the matrix $A$ must satisfy special properties such as the restricted isometry property (RIP) or low mutual coherence [18]. In other words, if the matrix $A$ satisfies the RIP, then we can recover the $N$-dimensional sparse vector $x$ from the $M$-dimensional observed vector $y$. Furthermore, the RIP is sufficient for a variety of algorithms to successfully recover a sparse vector from noisy samplings. These two theoretical results provide insights for solving the above stubborn problems in the identification of CML.

In contrast to existing methods for parameter identification of CML systems, we present a new approach based on compressed sensing theory. Specifically, we first transform the parameter identification problem into the reconstruction problem of an underdetermined linear system through a simple mathematical transformation. We then give a lower bound on the mutual coherence of the sensing matrix using the inverse formula of the Cauchy-Schwarz inequality and prove that the matrix satisfies the RIP with suitable parameters. Finally, we exploit the classic Orthogonal Matching Pursuit (OMP) algorithm from compressed sensing to recover the weighted parameters of the CML system. The result shows that if the weighted vector of every lattice element has at most $s$ nonzero entries, then we need only $M$ observations, far fewer than $N$, to identify the weighted vector accurately. Another critical advantage of our approach is that the reconstruction algorithm still works if the sampled data are contaminated with certain types of noise (e.g., Gaussian or uniform noise). We investigate the effects of the coupling parameter and of noise on the recovery rate in our simulations. Interestingly, when the sampled data are contaminated with uniform noise, the recovery is better than with noise-free observations.

The remainder of this paper is organized as follows. We recall the standard CML model in Section 2. Section 3 introduces background on compressed sensing theory. We present the theoretical results of our identification method based on compressed sensing in Section 4 and show our simulations in Section 5. Finally, we draw conclusions in Section 6.

2. The CML Model

We give a brief review of the standard CML model in this section. Generally speaking, the widely used CMLs can be divided into two categories: the diffusive (DCML) model and the global (GCML) model. The difference between the two is whether the evolution of the subsystem at any lattice element depends on the states of all the other elements. If we set some weighted parameters of a standard GCML model to 0, it reduces to a DCML model. Without loss of generality, we consider the following GCML model:

$$x_{t+1}(i) = (1 - \epsilon)\, f(x_t(i)) + \frac{\epsilon}{N} \sum_{j=1}^{N} c_{ij}\, g(x_t(j)), \quad (1)$$

where $x_t(i)$ denotes the state of lattice element $i$ ($1 \le i \le N$) at discrete time step $t$ ($1 \le t \le M$), $\epsilon \in (0, 1)$ is the coupling parameter, $f$ and $g$ are the maps governing the local dynamics and the nonlocal coupling, and $c_i = (c_{i1}, c_{i2}, \ldots, c_{iN}) \in \{0, 1\}^{1 \times N}$ is the unknown weighted vector of lattice element $i$. The two widely used forms of $g(x)$, namely $g(x) = f(x)$ and $g(x) = x$, correspond to synchronous and asynchronous updates of the lattice, respectively. For the sake of simplicity, we consider only the synchronous case, $g(x) = f(x)$. In many scenarios, $f(x)$ is the standard logistic map, written as

$$f(x) = 4x(1 - x), \quad (2)$$

where $x \in (0, 1)$; the iterates remain in the interval $(0, 1)$.
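As an illustration, the synchronous update (1) with the logistic map (2) takes a few lines of NumPy. This is a minimal sketch: the lattice size, coupling strength, and ring-shaped weight matrix below are toy choices for demonstration, not values from the paper.

```python
import numpy as np

def logistic(x):
    """Local map f(x) = 4x(1 - x), Eq. (2)."""
    return 4.0 * x * (1.0 - x)

def gcml_step(x, C, eps):
    """One synchronous update of Eq. (1) with g = f.

    x   : state vector of the N lattice elements at time t
    C   : N x N weight matrix with C[i, j] = c_ij
    eps : coupling parameter in (0, 1)
    """
    fx = logistic(x)
    return (1.0 - eps) * fx + (eps / len(x)) * (C @ fx)

# Hypothetical toy lattice: N = 4 elements, each coupled to its right neighbour.
rng = np.random.default_rng(0)
N = 4
C = np.roll(np.eye(N), 1, axis=1)   # c_ij = 1 iff j is the right neighbour of i
x = rng.uniform(0.1, 0.9, size=N)
for _ in range(10):
    x = gcml_step(x, C, eps=0.01)
```

Since $f$ maps $(0,1)$ into $(0,1]$ and the update is a convex-like combination of such values, the trajectory stays inside the unit interval.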

3. Compressed Sensing

In the field of compressed sensing, researchers mainly concentrate on designing good sensing matrices and efficient reconstruction algorithms. Compressed sensing provides a method to obtain a unique solution of an underdetermined linear system by taking advantage of the prior knowledge that the true solution is sparse. In mathematical terms, we consider the following underdetermined linear system:

$$y = Ax, \quad (3)$$

where $A \in \mathbb{R}^{M \times N}$ ($M < N$) and $x \in \mathbb{R}^N$. This equation depicts the process by which the original $N$-dimensional data $x$ is compressed into $M$-dimensional data $y$. Here $A$ denotes the sensing matrix, and $y$ is called the observed vector (or the observed values). A vector $x$ is said to be $s$-sparse when it has at most $s$ nonzero entries. For an $s$-sparse original vector $x \in \mathbb{R}^N$, if the sensing matrix satisfies suitable conditions, then there exist algorithms that recover $x$ from the observed vector $y$. Next, we recall some fundamental results about sensing matrices and reconstruction algorithms in the following two subsections.

3.1. Sensing Matrix. In this subsection we address two significant concepts frequently used in compressed sensing theory: the RIP and mutual coherence. The RIP was first presented in [14] and is defined as follows.

Definition 1. A matrix $A$ satisfies the RIP of order $s$ if there exists a $\delta_s \in (0, 1)$ such that

$$(1 - \delta_s)\,\|v\|_2^2 \le \|Av\|_2^2 \le (1 + \delta_s)\,\|v\|_2^2 \quad (4)$$

holds for all s-sparse vectors v.

In fact, the RIP guarantees that the original vector can be exactly recovered from the observed values. Furthermore, if the matrix $A$ satisfies the RIP, a variety of algorithms can recover the sparse vector even from observed data contaminated with noise. The following lemma [19] concerns noiseless recovery when $A$ satisfies the RIP.

Lemma 2. Let $A$ be a sensing matrix in (3) that satisfies the RIP of order $2s$ with $\delta_{2s} < \sqrt{2} - 1$. Then the original sparse vector can be exactly recovered.

Baraniuk et al. [20] proved that a random sensing matrix whose entries are independently and identically sampled from a Gaussian or sub-Gaussian distribution satisfies the RIP with overwhelming probability. Similarly, a Bernoulli matrix or a random discrete Fourier transform (RDFT) matrix can also be used as a sensing matrix. However, given a specific matrix $A$, it is very hard to verify whether $A$ satisfies the RIP. The concept of mutual coherence provides a simpler criterion [18, 21].

Definition 3. Given a matrix $A \in \mathbb{R}^{M \times N}$ ($M < N$), let $a_i$ denote the $i$th column of $A$. The mutual coherence $\mu(A)$ of the matrix $A$ is the largest absolute normalized inner product between any two distinct columns of $A$; namely,

$$\mu(A) = \max_{1 \le i, j \le N,\; i \ne j} \frac{|\langle a_i, a_j \rangle|}{\|a_i\|_2 \, \|a_j\|_2}. \quad (5)$$

By the Cauchy-Schwarz inequality, $|\langle a_i, a_j \rangle| \le \|a_i\|_2 \|a_j\|_2$, so the mutual coherence of any matrix $A$ is bounded by 1. To complete our proof later, we also review the Gershgorin circle theorem [22].
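Mutual coherence is straightforward to compute numerically. Below is a minimal sketch of Definition 3; the Gaussian test matrix and its dimensions are arbitrary illustrative choices.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns, Eq. (5)."""
    G = A.T @ A                                  # all pairwise inner products
    norms = np.linalg.norm(A, axis=0)
    G_norm = np.abs(G) / np.outer(norms, norms)  # normalize by column norms
    np.fill_diagonal(G_norm, 0.0)                # exclude the i = j terms
    return G_norm.max()

# Example: a random Gaussian matrix with M = 64 rows and N = 256 columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
mu = mutual_coherence(A)
```

By the Cauchy-Schwarz bound discussed above, the returned value always lies in $[0, 1]$, and an orthonormal matrix has coherence 0.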

Algorithm 1: Orthogonal Matching Pursuit.
Require: $y \in \mathbb{R}^M$ (the observed vector), sensing matrix $A$, tolerance EPS.
Ensure: $x \in \mathbb{R}^N$ (the sparse vector).
  $x[0] := 0$; $r[0] := y$; $\Lambda := \emptyset$; $k := 0$.
  repeat
    $j := \arg\max_{1 \le j \le N} |\langle a_j, r[k] \rangle|$,
    $\Lambda := \Lambda \cup \{j\}$,
    $x[k+1] := \arg\min_{z:\, \operatorname{supp}(z) \subseteq \Lambda} \|Az - y\|_2$,
    $r[k+1] := y - A x[k+1]$,
    $k := k + 1$
  until $\|r[k]\|_2 \le$ EPS.
  return $x := x[k]$.

Lemma 4. The eigenvalues of an $n \times n$ matrix $A$ with entries $a_{ij}$, $1 \le i, j \le n$, lie in the union of $n$ discs $d_i = d_i(e_i, f_i)$ centered at $e_i = a_{ii}$ and with radius $f_i = \sum_{j \ne i} |a_{ij}|$.

3.2. Recovery Algorithm. A straightforward approach to obtaining the original $s$-sparse vector $x$ from the underdetermined system (3) is the optimization problem

$$\min_{x} \|x\|_0 \quad \text{subject to} \quad y = Ax, \quad (6)$$

which is called the $\ell_0$ optimization problem. This problem is very difficult to solve and is NP-hard in general [14]. Owing to the convexity of the $\ell_1$ norm, a classic method used in compressed sensing is to replace $\|x\|_0$ with $\|x\|_1$; that is,

$$\min_{x} \|x\|_1 \quad \text{subject to} \quad y = Ax. \quad (7)$$

In fact, Candes [19] has shown that the solution of (7) coincides with that of (6) when $A$ satisfies the RIP with $\delta_{2s} < \sqrt{2} - 1$. Although convex optimization techniques are powerful for sparse vector reconstruction, there also exists a variety of greedy algorithms [23-25] for this problem. We describe the classic Orthogonal Matching Pursuit (OMP) algorithm (Algorithm 1), which our simulations use.

In Algorithm 1, $\Lambda$ denotes the support set and $x_\Lambda$ is the vector obtained by collecting the entries of $x$ corresponding to $\Lambda$. $A_\Lambda$ is the submatrix of $A$ composed of the columns of $A$ indexed by $\Lambda$. During each pass through the loop, the algorithm first chooses the optimal index $j$ and updates the support set $\Lambda$, and then computes the vector $x$ minimizing $\|Ax - y\|_2^2$ subject to $\operatorname{supp}(x) = \Lambda$.
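A minimal NumPy sketch of Algorithm 1 follows; the tolerance, dimensions, and test vector are illustrative choices, not values from the paper.

```python
import numpy as np

def omp(A, y, eps=1e-10, max_iter=None):
    """Orthogonal Matching Pursuit (Algorithm 1), stopped on the residual norm."""
    M, N = A.shape
    support = []
    x = np.zeros(N)
    residual = y.copy()
    for _ in range(max_iter or M):
        if np.linalg.norm(residual) <= eps:
            break
        j = int(np.argmax(np.abs(A.T @ residual)))      # best-matching column
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef
        residual = y - A @ x
    return x

# Recover a 3-sparse vector from 30 noiseless Gaussian measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true)
```

Because each least-squares step makes the residual orthogonal to the selected columns, the algorithm never re-selects a column, and in the noiseless case it terminates with an exact fit.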

4. Parameter Identification of GCML Based on Compressed Sensing

In this section, we first transform the parameter identification problem of the GCML into a reconstruction problem in compressed sensing. We then give a lower bound on the mutual coherence of the corresponding sensing matrix using the inverse formula of the Cauchy-Schwarz inequality. Finally, we prove that the sensing matrix satisfies the RIP.

Theorem 5. Given a GCML model described by (1) and $M$ observed values $x_t(i)$ of every lattice element $i$, where $1 \le t \le M$ and $1 \le i \le N$, the identification problem of (1) can be transformed into the reconstruction problem of an underdetermined linear system.

Proof. From (1), we can obtain the following equation:

$$\frac{N}{\epsilon}\left[x_{t+1}(i) - (1 - \epsilon)\, f(x_t(i))\right] = \sum_{j=1}^{N} c_{ij}\, g(x_t(j)). \quad (8)$$

Denote $y_t(i) = (N/\epsilon)\left[x_{t+1}(i) - (1 - \epsilon) f(x_t(i))\right]$; then we get

$$y_t(i) = \sum_{j=1}^{N} c_{ij}\, g(x_t(j)). \quad (9)$$

If we sample $M$ times for every element $i$ in the lattice, then we have

$$\begin{pmatrix} y_1(i) \\ y_2(i) \\ \vdots \\ y_M(i) \end{pmatrix} = \begin{pmatrix} g(x_1(1)) & g(x_1(2)) & \cdots & g(x_1(N)) \\ g(x_2(1)) & g(x_2(2)) & \cdots & g(x_2(N)) \\ \vdots & \vdots & \ddots & \vdots \\ g(x_M(1)) & g(x_M(2)) & \cdots & g(x_M(N)) \end{pmatrix} \begin{pmatrix} c_{i1} \\ c_{i2} \\ \vdots \\ c_{iN} \end{pmatrix}, \quad (10)$$

where $i = 1, 2, \ldots, N$. So

$$\begin{pmatrix} y_1(1) & \cdots & y_1(N) \\ \vdots & \ddots & \vdots \\ y_M(1) & \cdots & y_M(N) \end{pmatrix} = \begin{pmatrix} g(x_1(1)) & \cdots & g(x_1(N)) \\ \vdots & \ddots & \vdots \\ g(x_M(1)) & \cdots & g(x_M(N)) \end{pmatrix} \begin{pmatrix} c_{11} & \cdots & c_{N1} \\ \vdots & \ddots & \vdots \\ c_{1N} & \cdots & c_{NN} \end{pmatrix}. \quad (11)$$

Denote

$$Y = \begin{pmatrix} y_1(1) & \cdots & y_1(N) \\ \vdots & \ddots & \vdots \\ y_M(1) & \cdots & y_M(N) \end{pmatrix}, \quad B = \begin{pmatrix} g(x_1(1)) & \cdots & g(x_1(N)) \\ \vdots & \ddots & \vdots \\ g(x_M(1)) & \cdots & g(x_M(N)) \end{pmatrix}, \quad C = \begin{pmatrix} c_1^T & c_2^T & \cdots & c_N^T \end{pmatrix}. \quad (12)$$

Then (11) can be expressed as the underdetermined linear system $Y = BC$; its $i$th column is $B c_i^T$, an instance of (3). So the identification of the GCML is transformed into the reconstruction problem of an underdetermined linear system.

Let $b_i$ be the $i$th column of the matrix $B$. We define the sensing matrix related to the GCML as $A = (1/\alpha) B$, where $\alpha = \max_{1 \le i \le N} \|b_i\|_2^2$. Note that the scalar $1/\alpha$ is used for normalization. In what follows, we assume that each column of $C$ has at most $s$ nonzero entries. In order to give a lower bound on the mutual coherence of $A$, we need the following two useful lemmas.
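The construction of Theorem 5 can be checked numerically. The sketch below simulates a small GCML, forms $Y$ and $B$ from the samples, and verifies the linear relation. Note one index convention: with the weights stored as a matrix `C` with `C[i, j]` $= c_{ij}$, the system reads $Y = B C^{T}$, since the paper's $C$ stacks the weight vectors as columns. All sizes here are toy choices.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def build_linear_system(X, eps):
    """Turn M+1 lattice snapshots into the linear system of Theorem 5.

    X : (M+1) x N array with X[t, i] = x_t(i).
    Returns Y (M x N) with Y[t, i] = (N/eps) * (x_{t+1}(i) - (1 - eps) f(x_t(i)))
    and B (M x N) with B[t, j] = g(x_t(j)) = f(x_t(j)) (synchronous case).
    """
    N = X.shape[1]
    B = logistic(X[:-1])
    Y = (N / eps) * (X[1:] - (1.0 - eps) * B)
    return Y, B

# Simulate a toy GCML and confirm the relation holds.
rng = np.random.default_rng(3)
N, M, eps = 8, 5, 0.1
C = (rng.random((N, N)) < 0.3).astype(float)   # weights c_ij in {0, 1}
X = np.empty((M + 1, N))
X[0] = rng.uniform(0.1, 0.9, N)
for t in range(M):
    f = logistic(X[t])
    X[t + 1] = (1 - eps) * f + (eps / N) * (C @ f)
Y, B = build_linear_system(X, eps)
```

Up to floating-point roundoff, `Y` equals `B @ C.T` exactly, because no noise enters the simulated data.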

Lemma 6. Let $a_i$ and $b_i$, $1 \le i \le n$, be positive real numbers with $a_i \in [A_1, A_2]$ and $b_i \in [B_1, B_2]$. Then

$$\sum_{i=1}^{n} b_i^2 + \frac{B_1 B_2}{A_1 A_2} \sum_{i=1}^{n} a_i^2 \le \left( \frac{B_2}{A_1} + \frac{B_1}{A_2} \right) \sum_{i=1}^{n} a_i b_i. \quad (13)$$

Proof. For any i, we have

$$\frac{B_1}{A_2} \le \frac{b_i}{a_i} \le \frac{B_2}{A_1}. \quad (14)$$

Namely,

$$\left( \frac{b_i}{a_i} - \frac{B_1}{A_2} \right) \left( \frac{B_2}{A_1} - \frac{b_i}{a_i} \right) \ge 0. \quad (15)$$

That is,

$$\left( \frac{B_2}{A_1} + \frac{B_1}{A_2} \right) a_i b_i - b_i^2 - \frac{B_1 B_2}{A_1 A_2}\, a_i^2 \ge 0. \quad (16)$$

Thus,

$$b_i^2 + \frac{B_1 B_2}{A_1 A_2}\, a_i^2 \le \left( \frac{B_2}{A_1} + \frac{B_1}{A_2} \right) a_i b_i. \quad (17)$$

Summing over $i$ from 1 to $n$, we obtain the result.

Lemma 7 (the inverse formula of the Cauchy-Schwarz inequality). Let $a_i$ and $b_i$, $1 \le i \le n$, be positive real numbers with $a_i \in [A_1, A_2]$ and $b_i \in [B_1, B_2]$. Then

$$\left( \sum_{i=1}^{n} a_i b_i \right)^2 \ge \frac{4 A_1 A_2 B_1 B_2}{(A_1 B_1 + A_2 B_2)^2} \sum_{i=1}^{n} a_i^2 \sum_{i=1}^{n} b_i^2. \quad (18)$$

Proof. By the AM-GM inequality,

$$\sum_{i=1}^{n} b_i^2 + \frac{B_1 B_2}{A_1 A_2} \sum_{i=1}^{n} a_i^2 \ge 2 \sqrt{\frac{B_1 B_2}{A_1 A_2}} \sqrt{\sum_{i=1}^{n} a_i^2 \sum_{i=1}^{n} b_i^2}. \quad (19)$$

Thus, combining this with Lemma 6,

$$2 \sqrt{\frac{B_1 B_2}{A_1 A_2}} \sqrt{\sum_{i=1}^{n} a_i^2 \sum_{i=1}^{n} b_i^2} \le \left( \frac{B_2}{A_1} + \frac{B_1}{A_2} \right) \sum_{i=1}^{n} a_i b_i. \quad (20)$$

Since $B_2/A_1 + B_1/A_2 = (A_1 B_1 + A_2 B_2)/(A_1 A_2)$, we obtain

$$\sum_{i=1}^{n} a_i b_i \ge \frac{2 \sqrt{A_1 A_2 B_1 B_2}}{A_1 B_1 + A_2 B_2} \sqrt{\sum_{i=1}^{n} a_i^2 \sum_{i=1}^{n} b_i^2}. \quad (21)$$

Squaring both sides, we complete the proof.
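The bound of Lemma 7 is easy to sanity-check numerically. The sketch below draws random positive sequences from fixed intervals and verifies inequality (18); the interval endpoints are arbitrary illustrative choices.

```python
import numpy as np

# Check (sum a_i b_i)^2 >= 4 A1 A2 B1 B2 / (A1 B1 + A2 B2)^2 * sum a_i^2 * sum b_i^2
# for random a_i in [A1, A2] and b_i in [B1, B2], all positive.
rng = np.random.default_rng(4)
A1, A2, B1, B2 = 0.2, 0.9, 0.1, 0.8      # hypothetical interval endpoints
ok = True
for _ in range(1000):
    a = rng.uniform(A1, A2, size=20)
    b = rng.uniform(B1, B2, size=20)
    lhs = np.dot(a, b) ** 2
    rhs = 4 * A1 * A2 * B1 * B2 / (A1 * B1 + A2 * B2) ** 2 * (a @ a) * (b @ b)
    ok = ok and (lhs >= rhs)
```

Such a random check is of course no proof, but it catches sign or constant errors in a transcribed inequality quickly.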

Theorem 8. Let $\max_A$ be the maximal element of $A$ and $\min_A$ the minimal element of $A$. Then

$$\mu(A) \ge \frac{2 \min{}_A \sqrt{\min{}_A \max{}_A}}{\max{}_A (\max{}_A + \min{}_A)}. \quad (22)$$

Proof. Let $a_i$ be the $i$th column of the matrix $A$. Since the normalization factor $1/\alpha$ cancels in the quotient,

$$\frac{\langle a_i, a_j \rangle}{\|a_i\|_2 \, \|a_j\|_2} = \frac{\sum_{k=1}^{M} g(x_k(i))\, g(x_k(j))}{\sqrt{\sum_{k=1}^{M} g(x_k(i))^2} \, \sqrt{\sum_{k=1}^{M} g(x_k(j))^2}}. \quad (23)$$

From (1), each element $g(x_k(i))$ of the matrix $A$ is positive; Figure 1 also provides experimental evidence. For fixed $i$ and $j$, let $C_1 = \min_{1 \le k \le M} g(x_k(i))$, $C_2 = \max_{1 \le k \le M} g(x_k(i))$, $D_1 = \min_{1 \le k \le M} g(x_k(j))$, and $D_2 = \max_{1 \le k \le M} g(x_k(j))$. From (21), we obtain

$$\frac{\langle a_i, a_j \rangle}{\|a_i\|_2 \, \|a_j\|_2} \ge \frac{2 \sqrt{C_1 C_2 D_1 D_2}}{C_1 D_1 + C_2 D_2}. \quad (24)$$

Suppose the $i$th column contains the maximal element $\max_A$ of $A$ and the $j$th column contains the minimal element $\min_A$, that is, $C_2 = \max_A$ and $D_1 = \min_A$ with $i \ne j$. Then

$$\frac{\langle a_i, a_j \rangle}{\|a_i\|_2 \, \|a_j\|_2} \ge \frac{2 \sqrt{C_1 C_2 D_1 D_2}}{C_1 D_1 + C_2 D_2} \ge \frac{2 \min{}_A \sqrt{\min{}_A \max{}_A}}{\max{}_A (\max{}_A + \min{}_A)},$$

since $C_1, D_2 \ge \min_A$ in the numerator and $C_1 D_1 + C_2 D_2 \le \max_A \min_A + \max_A^2 = \max_A(\max_A + \min_A)$ in the denominator. If instead the $i$th column contains the minimal element and the $j$th column the maximal element, the result holds by symmetry. Finally, if the maximal and minimal elements lie in the same column of $A$, then for that column $i$ and any other column $j$ the quotient $\langle a_i, a_j \rangle / (\|a_i\|_2 \|a_j\|_2)$ is also bounded below by $2 \min_A \sqrt{\min_A \max_A} / (\max_A (\max_A + \min_A))$. Thus, we complete the proof.

Theorem 9. Let the sensing matrix $A$ be defined as above. Then $A$ satisfies the RIP of order $s$, with

$$\frac{2 \min{}_A \sqrt{\min{}_A \max{}_A}}{\max{}_A (\max{}_A + \min{}_A)} (s - 1) \le \delta_s < 1,$$

for all $s < 1/\mu(A) + 1$.

Proof. For any $s$-sparse vector $v$, let $\Lambda$ be its support set with $|\Lambda| = s$. $A_\Lambda$ denotes the submatrix of $A$ composed of the columns of $A$ indexed by $\Lambda$, and $v_\Lambda$ is the vector obtained by collecting the entries of $v$ corresponding to $\Lambda$. Thus,

$$\|Av\|_2^2 = \langle A_\Lambda v_\Lambda, A_\Lambda v_\Lambda \rangle = v_\Lambda^T A_\Lambda^T A_\Lambda v_\Lambda. \quad (25)$$

We consider the Gram matrix $G = A_\Lambda^T A_\Lambda$, which is evidently positive semidefinite. Denote its minimal eigenvalue by $\lambda_{\min}$ and its maximal eigenvalue by $\lambda_{\max}$. From Lemma 4, every eigenvalue $\lambda$ of $G$ satisfies, for some $h \in \Lambda$,

$$|\lambda - G_{hh}| \le \sum_{k \in \Lambda,\, k \ne h} |G_{hk}| = \sum_{k \in \Lambda,\, k \ne h} |\langle a_h, a_k \rangle|. \quad (26)$$

Therefore,

$$\lambda_{\min} \|v_\Lambda\|_2^2 \le \|Av\|_2^2 \le \lambda_{\max} \|v_\Lambda\|_2^2. \quad (27)$$

Because $\alpha = \max_{1 \le i \le N} \|b_i\|_2^2$, $\mu(B) = \mu(A)$, and $|\Lambda \setminus \{h\}| = s - 1$, we can get

$$1 - (s - 1)\,\mu(A) \le \lambda_{\min} \le \lambda_{\max} \le 1 + (s - 1)\,\mu(A). \quad (28)$$

If $(s - 1)\mu(A) < 1$, then we let $\delta_s = (s - 1)\mu(A)$. Therefore, when $s < 1/\mu(A) + 1$, there exists $\delta_s$ such that $(1 - \delta_s)\|v\|_2^2 \le \|Av\|_2^2 \le (1 + \delta_s)\|v\|_2^2$. Note that, from Theorem 8, $\frac{2 \min_A \sqrt{\min_A \max_A}}{\max_A(\max_A + \min_A)}(s - 1) \le \delta_s < 1$. Thus, we complete the proof.

5. Simulation Results

If $s < 1/\mu(A) + 1$, Theorem 9 shows that the sensing matrix of the GCML satisfies the RIP of order $s$, and thus the weighted vector $c_i$ of every element $i$ can be exactly recovered. In this section, we give experimental results for the parameter identification of this type of CML using the OMP algorithm above. The weighted parameter matrix $C$ is $s$-sparse in our model; that is, every column of $C$ contains at most $s$ nonzero entries, where $s$ is much smaller than $N$. The simulations fall into two types: noiseless recovery and noisy recovery.

5.1. Noiseless Recovery. In this subsection we consider the noiseless model; that is, the observed data are truly produced by (1). Figure 1 investigates the distribution of $x_t(i)$, where the weighted matrix is 25-sparse and all its nonzero entries are independently drawn from a normal distribution with mean 0 and variance 0.5. The result shows that the distribution can be roughly regarded as uniform on $(0, 1)$. In Figure 2, we randomly choose two lattice elements and study the identification accuracy of their weighted vectors. Figures 2(a) and 2(b) depict the identification results for the 35th and the 162nd lattice elements of our GCML model, respectively. The experimental results show that the reconstruction algorithm from compressed sensing identifies the parameters accurately. Figure 3 depicts the overall relationship among the sparsity, the required number of samplings, and the recovery rate. In our simulations, the recovery rate is defined as $\mathrm{SE}/N^2$, where SE is the number of entries on which the original and the reconstructed weighted matrices agree. In Figure 3, the recovery rate is close to 1 owing to the sparsity of the weighted matrix. However, when $M < 2s$, the recovery rate drops to 0. This is reasonable because if $M < 2s$, the underdetermined linear system $y = Ax$ does not have a unique sparse solution. In Figure 4, we study the effects of the sparsity, the number of samplings, and the coupling parameter on the recovery rate. Figure 4(a) shows that, whatever coupling parameter we choose, fewer than 100 samplings suffice to recover the whole weighted matrix. Similarly, Figure 4(b) shows that the sparsity has a negative effect on the recovery rate. In general, the smaller the coupling parameter, the higher the recovery rate.
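A toy end-to-end version of this noiseless experiment can be sketched as follows; it is much smaller than the paper's settings ($N$, $M$, $s$, and the seed are illustrative choices), with OMP stopped on the residual norm as in Algorithm 1, and the recovery rate computed as the fraction of matching entries.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def omp(A, y, eps=1e-10):
    """OMP (Algorithm 1) with a residual-based stopping rule."""
    M, N = A.shape
    support, x, residual = [], np.zeros(N), y.copy()
    for _ in range(M):
        if np.linalg.norm(residual) <= eps:
            break
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(5)
N, M, eps_c, s = 32, 60, 0.01, 3
C = np.zeros((N, N))
for i in range(N):                                   # s-sparse weights per element
    C[i, rng.choice(N, size=s, replace=False)] = 1.0
X = np.empty((M + 1, N))
X[0] = rng.uniform(0.1, 0.9, N)
for t in range(M):                                   # iterate Eq. (1) with g = f
    f = logistic(X[t])
    X[t + 1] = (1 - eps_c) * f + (eps_c / N) * (C @ f)
B = logistic(X[:-1])                                 # sensing matrix entries g(x_t(j))
Y = (N / eps_c) * (X[1:] - (1 - eps_c) * B)
C_hat = np.vstack([omp(B, Y[:, i]) for i in range(N)])
recovery_rate = np.mean(np.abs(C_hat - C) < 1e-3)    # fraction of matching entries
```

Each lattice element's weight vector is recovered independently from the same sensing matrix $B$, which is why the per-column OMP calls can share the precomputed $B$ and $Y$.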

5.2. Noisy Recovery. Accurate observations are difficult to obtain in practice, so we consider the case in which the observed data are contaminated with noise. We mainly consider two classic types of noise: Gaussian and uniform. Figure 5 shows that if the observed data are contaminated with Gaussian noise, the recovery rate is lower than with accurate observations. However, if the data are contaminated with uniform noise, the recovery rate is surprisingly higher than with accurate observations. From a statistical point of view, this counterintuitive phenomenon arises because the data contaminated with uniform noise are more uniform and random; the sensing matrix produced by these mixed data has a stronger restricted isometry property.

6. Conclusions

In this paper we establish a bridge between the parameter identification of CML and compressed sensing theory. Specifically, we first transform the identification problem into the sparse reconstruction problem of an underdetermined linear system. We then prove that the sensing matrix $A$ of the CML satisfies the RIP if the sparsity of the original signals is smaller than $1/\mu(A) + 1$. In our simulations, we consider the effects of various factors on the recovery rate, such as the number of samplings, the sparsity, the coupling parameter, and noise. Our proposed approach has the following advantages: compared with existing identification approaches, it requires only very few observations for accurate identification; in addition, it still works if the sampled data are contaminated with certain types of noise.

http://dx.doi.org/10.1155/2016/6435320

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 61472045 and 61573067) and the Scientific Research Project of Beijing Municipal Commission of Education (Grant nos. KZ20150015015 and KM201510015009).

References

[1] K. Kaneko, "Spatiotemporal intermittency in coupled map lattices," Progress of Theoretical Physics, vol. 74, no. 5, pp. 1033-1044, 1985.

[2] K. Kaneko, "Turbulence in coupled map lattices," Physica D, vol. 18, no. 1-3, pp. 475-476, 1986.

[3] K. Kaneko, "Spatiotemporal chaos in one- and two-dimensional coupled map lattices," Physica D: Nonlinear Phenomena, vol. 37, no. 1-3, pp. 60-82, 1989.

[4] P. M. Gade and C.-K. Hu, "Synchronous chaos in coupled map lattices with small-world interactions," Physical Review E, vol. 62, no. 5 B, pp. 6409-6413, 2000.

[5] A. M. Hagerstrom, T. E. Murphy, R. Roy, P. Hovel, I. Omelchenko, and E. Scholl, "Experimental observation of chimeras in coupled-map lattices," Nature Physics, vol. 8, no. 9, pp. 658-661, 2012.

[6] O. Alvarez-Llamoza and M. G. Cosenza, "Synchronization and phase ordering in globally coupled chaotic maps," in Nonlinear Maps and Their Applications, R. Lopez-Ruiz, D. FournierPrunaret, Y. Nishio, and C. Grocio, Eds., vol. 112 of Springer Proceedings in Mathematics & Statistics, pp. 227-239, 2015.

[7] D. Coca and S. A. Billings, "Identification of coupled map lattice models of complex spatio-temporal patterns," Physics Letters A, vol. 287, no. 1-2, pp. 65-73, 2001.

[8] S. A. Billings and D. Coca, "Identification of coupled map lattice models of deterministic distributed parameter systems," International Journal of Systems Science, vol. 33, no. 8, pp. 623-634, 2002.

[9] P. Marcos-Nikolaus, J. M. Martin-Gonzolez, and R. V. Sole, "Spatial forecasting: detecting determinism from single snapshots," International Journal of Bifurcation and Chaos, vol. 12, no. 2, pp. 369-376, 2002.

[10] S. A. Billings, L. Z. Guo, and H. L. Wei, "Identification of coupled map lattice models for spatio-temporal patterns using wavelets," International Journal of Systems Science, vol. 37, no. 14, pp. 1021-1038, 2006.

[11] H. Ning, X. Jing, and L. Cheng, "Online identification of nonlinear spatiotemporal systems using kernel learning approach," IEEE Transactions on Neural Networks, vol. 22, no. 9, pp. 1381-1394, 2011.

[12] H. Ning, G. Qing, and X. Jing, "Identification of nonlinear spatiotemporal dynamical systems with nonuniform observations using reproducing-kernel-based integral least square regulation," IEEE Transactions on Neural Networks and Learning Systems, 2015.

[13] V. Kohar, S. Kia, B. Kia, J. F. Lindner, and W. L. Ditto, "Role of network topology in noise reduction using coupled dynamics," Nonlinear Dynamics, pp. 1-8, 2016.

[14] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.

[15] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.

[16] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.

[17] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.

[18] D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, no. 5, pp. 2197-2202, 2003.

[19] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus de l'Academie des Sciences--Serie I, vol. 346, no. 9-10, pp. 589-592, 2008.

[20] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253-263, 2008.

[21] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.

[22] R. S. Varga, Gersgorin and His Circles, vol. 36 of Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2004.

[23] S. G. Mallat and Z. Zhang, "Matching pursuits with time frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, 1993.

[24] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231-2242, 2004.

[25] T. Blumensath and M. E. Davies, "Gradient pursuits," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2370-2382, 2008.

Dong Xie, (1,2) Lixiang Li, (1,2) Xinxin Niu, (1,2) and Yixian Yang (1,2)

(1) Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China

(2) National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876, China

Correspondence should be addressed to Lixiang Li; li_lixiang2006@163.com

Received 25 January 2016; Accepted 29 February 2016

Academic Editor: Yang Tang

Figure 1: The distribution of observed values $x_t(i)$. $\epsilon = 0.01$, $N = 256$, $M = 64$, and $K = 25$.

Figure 2: The identification results based on compressed sensing. $\epsilon = 0.01$, $N = 256$, $M = 100$, and $s = 15$.

Figure 3: The relationship among the sparsity, the number of samplings, and the recovery rate. $\epsilon = 0.01$ and $N = 128$.

Figure 4: Recovery rate for different numbers of samplings, sparsity levels, and coupling parameters. (a) $N = 160$ and $s = 20$; (b) $N = 256$ and $M = 60$.

Figure 5: Recovery rate for different numbers of samplings, sparsity levels, and noise types. (a) $\epsilon = 0.01$, $N = 160$, and $s = 20$; (b) $\epsilon = 0.01$, $N = 256$, and $M = 60$.

Publication: Mathematical Problems in Engineering (Research Article), January 2016.