
Compressive Sensing of Multichannel EEG Signals via $\ell_q$ Norm and Schatten-$p$ Norm Regularization (Research Article)

1. Introduction

The electroencephalogram (EEG) is one of the most frequently used biomedical signals [1, 2]. EEG signals are important health indicators for stroke and trauma, and recent studies indicate that they can also be used to study dementia and Alzheimer's disease. Monitoring these signals is therefore of great importance. However, continuous EEG monitoring produces a volume of data that is too large to be sampled and transmitted in many applications [3-5]. To overcome this issue, prior studies have proposed compressive sensing (CS) [6, 7].

Compressive sensing (CS) shows that a signal can be recovered from fewer measurements than the Nyquist theory suggests, provided that the signal is sparse in some transform domain. The sampling model is formulated as

$y = \Phi x$, (1)

where $y$ is the vector of random measurements and $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) is the sampling matrix. CS assumes that the signal $x \in \mathbb{R}^{N}$ can be represented as $x = \Psi s$, where $\Psi$ is the transform basis and $s$ contains only a small number of nonzero elements. The synthesis-based $\ell_0$-minimization model is then formulated as

$\min_{s} \|s\|_0 \quad \text{s.t.} \quad y = \Phi \Psi s$, (2)

where $\|s\|_0$ counts the number of nonzero elements in $s$. Many methods have been proposed to solve problem (2), such as BP [6], OMP [8], and IHT [9].
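As a minimal illustration of the synthesis sampling model above, the following Python sketch builds a signal that is sparse in a placeholder basis, compresses it with a random Gaussian sampling matrix, and reports the number of measurements. The dimensions, the identity "basis," and the Gaussian $\Phi$ are assumptions chosen for illustration, not the paper's experimental setup.

```python
import numpy as np

# Sketch of the synthesis CS sampling model y = Phi x with x = Psi s.
rng = np.random.default_rng(0)
N, M, k = 256, 64, 8            # signal length, measurements, sparsity (assumed)

Psi = np.eye(N)                 # placeholder sparsifying basis (assumption)
s = np.zeros(N)
s[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse coefficients

x = Psi @ s                                      # signal, sparse in the Psi domain
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sampling matrix
y = Phi @ x                                      # compressed measurements

print(M, "measurements instead of", N, "samples")
```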

Different from the traditional sparse or block-sparse signal models, the cosparse signal model [10, 11] assumes that multiplying a signal by an analysis operator yields a sparse vector. The analysis-based $\ell_0$-minimization problem can be formulated as

$\min_{x} \|\Omega x\|_0 \quad \text{s.t.} \quad y = \Phi x$, (3)

where $\Omega \in \mathbb{R}^{K \times N}$ ($K \geq N$) is the analysis operator and $a = \Omega x$ is the cosparse vector. This minimization problem can be solved efficiently by many methods, including GAP [10], ABS [12], AIHT [13], and ACoSaMP [14].

The cosparse analysis method has a number of advantages for multichannel EEG signals, as demonstrated in [15]. First, whereas the sparse synthesis model restricts the coherence of the sampling matrix, the cosparse analysis model allows the columns of the analysis operator to be coherent, which leads to better recovery results. Second, the sparse synthesis model first estimates the sparse vector $s$ and then reconstructs the signal, whereas the cosparse analysis model estimates the EEG signal directly. In short, the cosparse analysis method is better suited than the sparse synthesis approach to CS recovery of multichannel EEG signals.
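To make the cosparse analysis model concrete, the sketch below constructs the second-order difference operator (the analysis operator used later in the paper) and applies it to a synthetic, approximately piecewise-linear channel; the signal itself and its breakpoints are assumptions for illustration only.

```python
import numpy as np

def second_order_diff(N: int) -> np.ndarray:
    """(N-2) x N second-order difference analysis operator."""
    D = np.zeros((N - 2, N))
    for i in range(N - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

N = 200
t = np.arange(N, dtype=float)
# Synthetic piecewise-linear channel with breakpoints at t = 80 and t = 150 (assumption).
x = np.piecewise(t, [t < 80, (t >= 80) & (t < 150), t >= 150],
                 [lambda t: 0.5 * t,
                  lambda t: 40.0 - 0.2 * (t - 80),
                  lambda t: 26.0 + 0.8 * (t - 150)])

Omega = second_order_diff(N)
a = Omega @ x                            # cosparse vector
print(int(np.sum(np.abs(a) > 1e-6)))     # only a couple of nonzeros, at the breakpoints
```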

Since the EEG signals from multiple channels are correlated with one another, this motivates recovering multichannel EEG signals via low-rank regularization [16-18]. Recently, a simultaneous cosparsity and low-rank (SCLR) optimization model [15] has shown state-of-the-art performance in CS recovery of multichannel EEG signals. SCLR chooses the second-order difference matrix as the analysis operator to enforce an approximately piecewise-linear structure, and it uses the $\ell_1$ norm and the nuclear norm as convex surrogates for the $\ell_0$ norm and the rank function. However, SCLR may yield suboptimal results in practice, because the $\ell_1$ norm and the nuclear norm are not always good surrogates: there are irreparable gaps between the $\ell_0$ norm and the $\ell_1$ norm, and between the true rank and the nuclear norm. Solutions obtained with these convex surrogates can therefore deviate from the true solution of the original minimization problem.

We are motivated by the fact that the $\ell_q$ norm yields more accurate results in the sparse synthesis model [19, 20] and that the Schatten-$p$ norm can efficiently recover low-rank matrices in image denoising [21, 22]. It has been rigorously proved that the $\ell_q$ norm and the Schatten-$p$ norm approach the $\ell_0$ norm and the rank function, respectively, as $q$ and $p$ tend to 0. It is therefore desirable to combine them to better exploit the cosparsity and low-rank properties of multichannel EEG signals.

In this paper, a novel CS model based on the $\ell_q$ norm and the Schatten-$p$ norm (LQSP) is proposed for CS recovery of multichannel EEG signals. We use the $\ell_q$ norm in place of the $\ell_0$ norm to enforce the cosparsity prior and the Schatten-$p$ norm in place of the matrix rank to enforce the low-rank prior. In addition, the alternating direction method of multipliers (ADMM) is used to solve the resulting nonconvex optimization problem efficiently.

The rest of the paper is organized as follows. In Section 2, we present the proposed LQSP model in detail and show how it exploits cosparsity and the low-rank property. In Section 3, we show that the optimization problem can be solved efficiently by ADMM. We present numerical experiments in Section 4, and Section 5 provides concluding remarks.

2. $\ell_q$ Norm and Schatten-$p$ Norm for CS Recovery of Multichannel EEG Signals

The cosparse recovery model for multichannel EEG signals can be represented as [10]

$\min_{X} \|r(\Omega X)\|_0 \quad \text{s.t.} \quad Y = \Phi X$, (4)

where $X \in \mathbb{R}^{N \times C}$, $C$ is the number of channels, and $r(\cdot)$ stacks the columns of its matrix argument sequentially into a single column vector.

In Figure 1, we select the recording chb01_31.edf, which is used in our experiments, as the test data and take the second-order difference matrix as the cosparse (analysis) operator. Figure 1 shows that most entries of the cosparse vector are nearly zero and that many singular values are close to 0, so the test data naturally possess both cosparsity and the low-rank property. We therefore exploit these two priors simultaneously when recovering the multichannel EEG signal from the compressed measurements. The optimization model can then be reformulated as [15]

$\min_{X} \|r(\Omega X)\|_0 + \lambda\, \mathrm{rank}(X) \quad \text{s.t.} \quad Y = \Phi X$. (5)

This optimization problem, which contains the $\ell_0$ norm and the matrix rank function, cannot be solved directly; it is NP-hard. To obtain an approximate solution, SCLR employs the $\ell_1$ norm and the nuclear norm as surrogates for the $\ell_0$ norm and the matrix rank, where the $\ell_1$ norm sums the absolute values of the entries and the nuclear norm sums the singular values of the matrix. However, these convex surrogates may lead SCLR to suboptimal results.

Motivated by the fact that the $\ell_q$ norm yields more accurate results in the sparse synthesis model [19, 20] and that the Schatten-$p$ norm can efficiently recover low-rank matrices in image denoising [21, 22], we propose to use the $\ell_q$ norm and the Schatten-$p$ norm as nonconvex surrogates for the $\ell_0$ norm and the rank function. The problem can then be reformulated as

$\min_{X} \|r(\Omega X)\|_q^q + \lambda \|X\|_{S_p}^p \quad \text{s.t.} \quad Y = \Phi X$, (6)

where the $\ell_q$ norm term $\|r(\Omega X)\|_q^q$ sums the absolute values of the entries raised to the power $q$, and the Schatten-$p$ norm term $\|X\|_{S_p}^p$ sums the singular values of $X$ raised to the power $p$.
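For concreteness, the following sketch evaluates the two nonconvex penalties in (6) for an arbitrary matrix, using an SVD for the Schatten-$p$ term; the matrix size and the values of $q$ and $p$ are illustrative assumptions.

```python
import numpy as np

def lq_penalty(v: np.ndarray, q: float) -> float:
    """Sum of |v_i|^q, i.e. the l_q 'norm' raised to the power q (0 < q <= 1)."""
    return float(np.sum(np.abs(v) ** q))

def schatten_p_penalty(X: np.ndarray, p: float) -> float:
    """Sum of the singular values of X raised to the power p."""
    sv = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(sv ** p))

rng = np.random.default_rng(1)
X = rng.standard_normal((256, 23))            # e.g. 23 EEG channels (illustrative size)
Omega = np.diff(np.eye(256), n=2, axis=0)     # second-order difference operator
print(lq_penalty((Omega @ X).ravel(order="F"), q=0.4))   # cosparsity penalty
print(schatten_p_penalty(X, p=0.5))                      # low-rank penalty
```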

3. Optimization Algorithm

The constrained optimization problem above is difficult to solve directly, so we employ ADMM, which has been widely used in compressive sensing [23-25], to split this complicated problem into simpler subproblems that are addressed iteratively. Figure 2 gives the flow chart of the proposed approach.

By adding a set of auxiliary variables {A, B}, the recovery problem can be reformulated as

$\min_{X, A, B} \|r(A)\|_q^q + \lambda \|B\|_{S_p}^p \quad \text{s.t.} \quad Y = \Phi X,\; A = \Omega X,\; B = X$. (7)

The corresponding augmented Lagrangian function is

$\mathcal{L}(X, A, B) = \|r(A)\|_q^q + \lambda \|B\|_{S_p}^p - \langle f_1, Y - \Phi X\rangle + \frac{\beta_1}{2}\|Y - \Phi X\|_F^2 - \langle f_2, A - \Omega X\rangle + \frac{\beta_2}{2}\|A - \Omega X\|_F^2 - \langle f_3, B - X\rangle + \frac{\beta_3}{2}\|B - X\|_F^2$, (8)

where $\{f_1, f_2, f_3\}$ is the set of Lagrangian multipliers. Problem (8) is split into the following three subproblems, which are solved alternately:

$X^{k+1} = \arg\min_{X} \mathcal{L}(X, A^k, B^k), \quad A^{k+1} = \arg\min_{A} \mathcal{L}(X^{k+1}, A, B^k), \quad B^{k+1} = \arg\min_{B} \mathcal{L}(X^{k+1}, A^{k+1}, B)$. (9)
Algorithm 1: Compressive sensing recovery via $\ell_q$ norm and Schatten-$p$ norm regularization.

Input: $\Phi$; $\Omega \in \mathbb{R}^{K \times N}$ ($K = 2N$); $\beta_1 = \beta_2 = \beta_3 = 1$; $\lambda = 1$; $\tau = 0.05$; $f_1 = \mathrm{zeros}(M, C)$, $f_2 = \mathrm{zeros}(K, C)$, $f_3 = \mathrm{zeros}(N, C)$.
while stopping criteria unsatisfied do
    (a) Solve the $X$ subproblem by computing (10);
    (b) Solve the $A$ subproblem by computing (12);
    (c) Solve the $B$ subproblem by computing (13);
    (d) Update the Lagrangian multipliers:
        $f_1 \leftarrow f_1 - \tau\beta_1 (Y - \Phi X^{k+1})$
        $f_2 \leftarrow f_2 - \tau\beta_2 (A^{k+1} - \Omega X^{k+1})$
        $f_3 \leftarrow f_3 - \tau\beta_3 (B^{k+1} - X^{k+1})$
end while
Output: final reconstructed signal $\hat{X}$.


Next, we present the details for solving each subproblem.

3.1. X Subproblem. The X subproblem is a quadratic optimization problem that admits the closed-form solution

$X^{k+1} = \left(\beta_1 \Phi^T \Phi + \beta_2 \Omega^T \Omega + \beta_3 I\right)^{-1} \left(\beta_1 \Phi^T Y + \beta_2 \Omega^T A^k + \beta_3 B^k - \Phi^T f_1 - \Omega^T f_2 - f_3\right)$, (10)

where $I \in \mathbb{R}^{N \times N}$ is the identity matrix.
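A minimal sketch of the closed-form update (10) is given below, assuming NumPy and small illustrative dimensions; in practice the system matrix would be factorized once and reused across iterations.

```python
import numpy as np

def update_X(Phi, Omega, Y, A, B, f1, f2, f3, b1=1.0, b2=1.0, b3=1.0):
    """X update of (10): solve the regularized normal equations."""
    N = Phi.shape[1]
    lhs = b1 * Phi.T @ Phi + b2 * Omega.T @ Omega + b3 * np.eye(N)
    rhs = (b1 * Phi.T @ Y + b2 * Omega.T @ A + b3 * B
           - Phi.T @ f1 - Omega.T @ f2 - f3)
    return np.linalg.solve(lhs, rhs)   # numerically preferable to an explicit inverse

# Tiny usage example with assumed dimensions: N samples, C channels, M measurements.
rng = np.random.default_rng(2)
N, C, M = 64, 4, 24
Phi = rng.standard_normal((M, N))
Omega = np.diff(np.eye(N), n=2, axis=0)
Y = rng.standard_normal((M, C))
A, B = np.zeros((N - 2, C)), np.zeros((N, C))
f1, f2, f3 = np.zeros((M, C)), np.zeros((N - 2, C)), np.zeros((N, C))
print(update_X(Phi, Omega, Y, A, B, f1, f2, f3).shape)   # (64, 4)
```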

3.2. A Subproblem. The A subproblem is nonconvex, so a global minimizer cannot be guaranteed, but it can be solved with an iteratively reweighted approach. At inner iteration $t$, approximate

$\|r(A)\|_q^q \approx \sum_{i} u_i^t \left| a_i \right|$, (11)

where $u_i^t = q\left(|a_i^t|\right)^{q-1}$ is a weight computed from the previous inner iterate $A^{k+1,t}$ and $a_i$ is the $i$th entry of $r(A)$. The weighted problem then admits the closed-form solution [26]

$a^{k+1,t+1} = \max\left( r\!\left(\Omega X^{k+1} + \frac{f_2}{\beta_2}\right) - \frac{1}{\beta_2} u^t,\; 0 \right)$. (12)

When $a^{k+1,t+1}$ satisfies the convergence condition, we set $a^{k+1} = a^{k+1,t+1}$.
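The sketch below performs one reweighted shrinkage pass in the spirit of (11)-(12). The elementwise sign factor and the small epsilon that keeps the weights finite are implementation assumptions added for robustness, not details taken from the paper.

```python
import numpy as np

def update_A(Omega_X, f2, beta2, q, A_prev, eps=1e-8):
    """One reweighted shrinkage pass for the A subproblem (cf. (11)-(12)).

    Omega_X is Omega @ X^{k+1}; A_prev is the previous inner iterate A^{k+1,t}.
    """
    V = Omega_X + f2 / beta2                     # point being shrunk
    u = q * (np.abs(A_prev) + eps) ** (q - 1)    # weights u_i^t = q |a_i^t|^(q-1)
    return np.sign(V) * np.maximum(np.abs(V) - u / beta2, 0.0)

# Tiny usage example with illustrative values.
rng = np.random.default_rng(3)
Omega_X = rng.standard_normal((48, 4))
A = update_A(Omega_X, f2=np.zeros_like(Omega_X), beta2=1.0, q=0.4, A_prev=Omega_X)
print(float(np.mean(A == 0.0)))   # fraction of entries driven exactly to zero
```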

3.3. B Subproblem. Unfortunately, the B subproblem is also nonconvex. An algorithm for solving it, called weighted singular value shrinkage, was derived in [23]:

$B^{k+1,c+1} = U \max\left\{\Delta - \frac{\lambda}{\beta_3}\,\mathrm{diag}\left(w^c\right),\; 0\right\} V^T$, (13)

where $U\Delta V^T$ is the SVD of $\left(X^{k+1} + f_3/\beta_3\right)$ and $w_j^c = p\,\delta_j^{p-1}\!\left(B^{k+1,c}\right)$, with $\delta_j\!\left(B^{k+1,c}\right)$ denoting the $j$th singular value of $B^{k+1,c}$. When $B^{k+1,c+1}$ satisfies the convergence condition, we set $B^{k+1} = B^{k+1,c+1}$ (see Algorithm 1).
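The following sketch implements the weighted singular value shrinkage step (13); the epsilon regularizing the weights is an added assumption, and only a single inner pass is shown, matching the t = c = 1 setting used in the experiments.

```python
import numpy as np

def update_B(X_new, f3, beta3, lam, p, B_prev, eps=1e-8):
    """Weighted singular value shrinkage (cf. (13)).

    Shrinks the singular values of (X^{k+1} + f3/beta3) using weights
    w_j = p * delta_j^(p-1) computed from the previous iterate B^{k+1,c}.
    """
    U, delta, Vt = np.linalg.svd(X_new + f3 / beta3, full_matrices=False)
    w = p * (np.linalg.svd(B_prev, compute_uv=False) + eps) ** (p - 1)
    shrunk = np.maximum(delta - (lam / beta3) * w, 0.0)
    return (U * shrunk) @ Vt             # equals U @ diag(shrunk) @ Vt

# Tiny usage example with illustrative values.
rng = np.random.default_rng(4)
X_new = rng.standard_normal((64, 4))
B = update_B(X_new, f3=np.zeros_like(X_new), beta3=1.0, lam=1.0, p=0.5, B_prev=X_new)
print(np.linalg.svd(B, compute_uv=False))   # singular values after shrinkage
```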

4. Numerical Experimental Results

In this section, extensive experiments are conducted to verify the performance of the proposed LQSP approach. We compare our method with SCLR based on the interior point method (SCLR-I) [15], ADMM-based SCLR (SCLR-A) [15], simultaneous orthogonal matching pursuit (SOMP) [27], BSBL [5], and simultaneous greedy analysis pursuit (SGAP) [28]. The experiments are carried out on the CHB-MIT scalp EEG database, which is available online in the PhysioBank database (http://www.physionet.org/cgi-bin/atm/ATM) [29]. The EEG recording chb01_31.edf is used to demonstrate the advantages of our approach. To quantify the difference between the estimates and the original data, we use the MSE and the MCC. The MSE, $\mathrm{MSE} = \sum_{g=1}^{G} \|\hat{X} - X\|_F^2 / (GNC)$, measures the average squared error, where $G$ is the number of experiments. The MCC, $\mathrm{MCC} = \sum_{g=1}^{G} \mathrm{vec}(X)^T \mathrm{vec}(\hat{X}) / \left(G \|X\|_F \|\hat{X}\|_F\right)$, is equivalent to the structural similarity index and measures the similarity of two signals [30].
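Both metrics can be computed directly from their definitions; the sketch below handles a single trial (G = 1) with synthetic data standing in for a reconstruction.

```python
import numpy as np

def mse(X_hat: np.ndarray, X: np.ndarray) -> float:
    """Mean squared error over all N x C entries (single trial, G = 1)."""
    return float(np.linalg.norm(X_hat - X, "fro") ** 2 / X.size)

def mcc(X_hat: np.ndarray, X: np.ndarray) -> float:
    """Normalized correlation between the vectorized signals (single trial)."""
    num = float(X.ravel() @ X_hat.ravel())
    return num / float(np.linalg.norm(X, "fro") * np.linalg.norm(X_hat, "fro"))

rng = np.random.default_rng(5)
X = rng.standard_normal((256, 23))                 # stand-in for the true EEG block
X_hat = X + 0.05 * rng.standard_normal(X.shape)    # stand-in for a reconstruction
print(mse(X_hat, X), mcc(X_hat, X))                # small MSE, MCC close to 1
```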

The parameter settings of the proposed LQSP approach are as follows: the second-order difference matrix is chosen as the analysis operator; the sampling matrix is a Gaussian matrix; the sensing rate is defined as rate = $M/N$; and $q$ and $p$ are selected from 0.1 to 0.5 in steps of 0.1. To reduce the computational cost, the inner iteration numbers $t$ and $c$ are set to 1. In addition, the sparse dictionary used for SOMP is a Daubechies wavelet basis.

Figures 3-5 display the MSE, MCC, and CPU time of the different approaches for CS recovery of the multichannel signals at different sensing rates. SCLR-I and SCLR-A obtain better results than BSBL, which has been reported as the best candidate for EEG signal recovery based on the sparse synthesis model; this was also demonstrated in [15]. LQSP outperforms the other methods in accuracy. LQSP is faster than SCLR-I and about as fast as SCLR-A. The greedy algorithms SOMP and SGAP are much faster than the rest, but their accuracy is much worse and not acceptable. These results demonstrate that the proposed LQSP approach is a better candidate for multichannel EEG signal recovery than the other methods.

Choosing proper values of $q$ and $p$ is very important for LQSP, so we analyze the influence of $q$ and $p$ on the recovery results. Table 1 lists the optimal values of $q$ and $p$ at each sensing rate; the optimal values differ from case to case. Figure 6 shows the MCC versus $q$ with rate = 0.3 and $p$ = 0.5, and Figure 7 shows the MCC versus $p$ with rate = 0.3 and $q$ = 0.4. Figures 6 and 7 show that the choice of $p$ and $q$ is crucial to the proposed approach.

5. Conclusion

In this paper, we have presented a new approach to CS recovery of multichannel EEG signals that exploits cosparsity and the low-rank property simultaneously in a unified manner. The $\ell_q$ norm and the Schatten-$p$ norm are used as nonconvex surrogates for the $\ell_0$ norm and the matrix rank, respectively, and ADMM is applied to solve the resulting nonconvex optimization problem efficiently. Experiments have shown that LQSP achieves superior performance compared with other competitive reconstruction algorithms using the same number of measurements.

http://dx.doi.org/10.1155/2016/2189563

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the Natural Science Fund for Colleges and Universities in Jiangsu Province, Grants no. 16KJB520014 and no. 14KJB520012, the Jiangsu Key Laboratory of Image and Video Understanding for Social Safety (Nanjing University of Science and Technology), Grant no. 30916014107, the National Natural Science Foundation of China, no. 61375121, and the Doctoral Scientific Research Foundation of Jinling Institute of Technology, no. Jit-b201508.

References

[1] A. Hooda and S. Sharma, "Wireless body area network," Proc International Symposium on Medical Information & Communication Technology, vol. 3, no. 3, pp. 203-210, 2013.

[2] A. J. Casson, D. C. Yates, S. J. M. Smith, J. S. Duncan, and E. Rodriguez-Villegas, "Wearable electroencephalography," IEEE Engineering in Medicine and Biology Magazine, vol. 29, no. 3, pp. 44-56, 2010.

[3] M. H. Kamal, M. Shoaran, Y. Leblebici, A. Schmid, and P. Vandergheynst, "Compressive multichannel cortical signal recording," in Proceedings of the 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '13), pp. 4305-4309, Vancouver, Canada, May 2013.

[4] A. M. Abdulghani, A. J. Casson, and E. Rodriguez-Villegas, "Compressive sensing scalp EEG signals: implementations and practical performance," Medical & Biological Engineering & Computing, vol. 50, no. 11, pp. 1137-1145, 2012.

[5] Z. Zhang, T.-P. Jung, S. Makeig, and B. D. Rao, "Compressed sensing of EEG for wireless telemonitoring with low energy consumption and inexpensive hardware," IEEE Transactions on Biomedical Engineering, vol. 60, no. 1, pp. 221-224, 2013.

[6] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.

[7] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008.

[8] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.

[9] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.

[10] S. Nam, M. E. Davies, M. Elad, and R. Gribonval, "The cosparse analysis model and algorithms," Applied and Computational Harmonic Analysis, vol. 34, no. 1, pp. 30-56, 2013.

[11] J. Wormann, S. Hawe, and M. Kleinsteuber, "Analysis based blind compressive sensing," IEEE Signal Processing Letters, vol. 20, no. 5, pp. 491-494, 2013.

[12] N. Cleju, M. G. Jafari, and M. D. Plumbley, "Analysis-based sparse reconstruction with synthesis-based solvers," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '12), pp. 5401-5404, Kyoto, Japan, March 2012.

[13] S. Nam, M. E. Davies, M. Elad, and R. Gribonval, "Cosparse analysis modeling: uniqueness and algorithms," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '11), pp. 5804-5807, 2011.

[14] Y. Liu, M. De Vos, I. Gligorijevic, V. Matic, Y. Li, and S. Van Huffel, "Multi-structural signal recovery for biomedical compressive sensing," IEEE Transactions on Biomedical Engineering, vol. 60, no. 10, pp. 2794-2805, 2013.

[15] Y. Liu, M. De Vos, and S. Van Huffel, "Compressed sensing of multichannel EEG signals: the simultaneous cosparsity and low-rank optimization," IEEE Transactions on Biomedical Engineering, vol. 62, no. 8, pp. 2055-2061, 2015.

[16] A. Majumdar, A. Gogna, and R. Ward, "A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network," Sensors, vol. 14, no. 9, pp. 15729-15748, 2014.

[17] W. Singh, A. Shukla, S. Deb, and A. Majumdar, "Energy efficient acquisition and reconstruction of EEG signals," in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '14), pp. 1274-1277, August 2014.

[18] A. Majumdar, A. Shukla, and R. Ward, "Combining sparsity with rank-deficiency for energy efficient EEG sensing and transmission over Wireless Body Area Network," in Proceedings of the 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '15), pp. 837-841, Brisbane, Australia, April 2015.

[19] B. D. Rao and K. Kreutz-Delgado, "An affine scaling methodology for best basis selection," IEEE Transactions on Signal Processing, vol. 47, no. 1, pp. 187-200, 1999.

[20] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3869-3872, Las Vegas, Nev, USA, March 2008.

[21] J. Wang, M. Wang, X. Hu, and S. Yan, "Visual data denoising with a unified Schatten-p norm and [l.sub.q] norm regularized principal component pursuit," Pattern Recognition, vol. 48, no. 10, pp. 3135-3144, 2015.

[22] Y. Xie, "Weighted Schatten p-norm minimization for image denoising with local and nonlocal regularization," https://arxiv.org/abs/1501.01372.

[23] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang, "Compressive sensing via nonlocal low-rank regularization," IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3618-3632, 2014.

[24] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, 2010.

[25] Z. Lin, M. Chen, and Y. Ma, "The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices," https://arxiv.org/abs/1009.5055.

[26] X. Shu, J. Yang, and N. Ahuja, "Non-local compressive sampling recovery," in Proceedings of the 6th IEEE International Conference on Computational Photography (ICCP '14), pp. 1-8, IEEE Computer Society, Santa Clara, Calif, USA, May 2014.

[27] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572-588, 2006.

[28] Y. Avonds, Y. Liu, and S. Van Huffel, "Simultaneous Greedy Analysis Pursuit for compressive sensing of multi-channel ECG signals," in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '14), pp. 6385-6388, Chicago, Ill, USA, August 2014.

[29] A. L. Goldberger, L. A. Amaral, L. Glass et al., "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals," Circulation, vol. 101, no. 23, pp. E215-220, 2000.

[30] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it? A new look at signal fidelity measures," IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98-117, 2009.

Jun Zhu, (1) Changwei Chen, (2) Shoubao Su, (3) and Zinan Chang (3)

(1) School of Computer Engineering, Jinling Institute of Technology, Nanjing 211169, China

(2) College of Computer and Information Engineering, Nanjing Xiaozhuang University, Nanjing 210017, China

(3) Jinling Institute of Technology, Nanjing, China

Correspondence should be addressed to Jun Zhu; xiaobaabcabc@126.com

Received 11 June 2016; Revised 25 September 2016; Accepted 19 October 2016

Academic Editor: Raffaele Solimene

Caption: Figure 1: The cosparsity and low-rank property in 23-channel EEG data (chb01_31.edf).

Caption: Figure 2: Illustration of the cosparsity and low-rank regularization based CS approach for multichannel EEG signals. First, an estimated EEG signal is obtained from the sensing matrix and the measurements. Second, cosparsity and low-rank constraints are applied to the estimate to obtain an improved EEG signal.

Caption: Figure 3: The MSE comparison of different methods at different rates.

Caption: Figure 4: The MCC comparison of different methods at different rates.

Caption: Figure 5: The CPU time comparison of different methods with different rates.

Caption: Figure 6: MCC versus q value with rate = 0.3 and p = 0.5.

Caption: Figure 7: MCC versus p value with rate = 0.3 and q = 0.4.
Table 1: The optimal values of p and q at each sensing rate.

Sensing rate   0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8
p              0.1  0.1  0.5  0.4  0.2  0.4  0.5  0.1
q              0.1  0.1  0.4  0.5  0.3  0.5  0.4  0.3