
Robust Adaptive Algorithm by an Adaptive Zero Attractor Controller of ZA-LMS Algorithm.

1. Introduction

A sparse system is characterized by an impulse response with a large number of zero or near-zero magnitude coefficients and only a few large ones. In other words, a large fraction of the impulse response energy is concentrated in a small region. For example, in underwater acoustics, the channel impulse response has a sparse multipath structure in which most of the energy is concentrated in a few small regions [1]. Another example of a sparse system is the acoustic echo path measured in a loudspeaker-enclosure-microphone (LEM) system. Here, only 9-12% of the echo path coefficients are active because of the large propagation delay, and the path varies with the movement of objects, speakers, temperature, and so forth [2]. Other prominent examples are the network echo channel [3], where only 90-100 filter coefficients out of an impulse response of length 1024 have large magnitude, and wireless multipath channels, which consist of only a few active paths [4]. Such impulse responses are not only sparse but also time varying [1-4]. If the algorithms used for identifying such systems are made to exploit this sparseness, improved performance can be obtained.

Traditional adaptive filters such as the Least Mean Square (LMS), Normalized LMS (NLMS), and Affine Projection Algorithm (APA) do not exploit the sparsity level to improve their performance [5]. The literature reveals several variants developed to exploit sparsity, the best known being the proportionate-type algorithms and their variants [6-8], partial update algorithms [9], [l.sub.1] norm [10-12] and [l.sub.0] norm [13, 14] based algorithms, and exponentiated gradient algorithms [15]. Among these, the [l.sub.1] norm based algorithms are popular due to their convexity, and they provide uniform attraction to all filter taps [16]. They work by including an extra term, called the zero attractor term, in the original cost function; as a result, when the system is sparse, they achieve faster convergence and lower steady state mean square error (MSE) than their conventional counterparts [10, 11]. The chief advantage of ZA-LMS is that its computational complexity is lower than that of proportionate-type adaptive filters [17] and of the ZA-APA and ZA-NLMS algorithms and their variants [16], which is a major criterion when the system is long, as in echo cancellation applications. However, the major drawback of ZA-LMS is that it works well only when the system is highly sparse: the performance deteriorates as the sparsity level decreases and becomes worse than that of LMS under nonsparse conditions [18, 19]. Another difficulty is that both the convergence and the steady state error depend on the value of the zero attractor controller [18], which motivates a proper selection rule.

Several attempts have been made to improve the performance of ZA-LMS under nonsparse conditions. One such approach is the reweighted ZA-LMS (RZA-LMS) [10], in which the zero attractor value is changed so that the attraction applies to near-zero taps only. The RZA-LMS suffers from the difficulty of selecting an appropriate shrinkage factor, especially for time varying sparse systems [16, 20]. A combinational approach is another alternative. In the convex combination of ZA-LMS and LMS proposed in [21], the mixing parameter is updated so that the combination always follows the component filter that provides faster convergence and lower steady state MSE. Computational complexity is the major drawback of this approach. Moreover, its performance still depends on the zero attractor controller, which again necessitates an optimal choice. Several selection rules for the zero attractor controller of ZA-LMS have been proposed, but they are not practically feasible [18, 19].

Thus, this paper proposes an alternate approach to deal with time varying sparse systems. First, the optimal zero attractor controller is found by choosing the value that provides the largest decrease in the mean square deviation (MSD) from one iteration to the next. Then, to adapt to time varying sparsity, a simple update rule for the zero attractor controller is proposed. It follows from [10] that the correlation between the weight error vector and the sign of the filter weights is positive only if the system is highly sparse and becomes negative for a nonsparse system; this quantity is therefore used as the metric for updating the zero attractor controller. Robustness against variable sparsity is thus achieved, as is further demonstrated by simulations.

The rest of the paper is organized as follows. Section 2 reviews ZA-LMS algorithm. This is followed by Section 3, in which the adaptive zero attractor controller based ZA-LMS is proposed. An optimal zero attractor controller based on MSD is obtained. A practical optimal zero attractor controller is also derived. Further, an update rule is proposed in this section. Section 4 deals with simulations and conclusions are provided in Section 5.

2. Review of ZA-LMS Algorithm

Consider an unknown system with input x(n) = [[x(n), x(n - 1), x(n - 2), ..., x(n - N + 1)].sup.T] of length N. The desired response d(n) is modeled by the multiple linear regression model d(n) = [w.sup.T.sub.o] x(n) + v(n), where [w.sub.o] is the optimal weight vector of length N that needs to be estimated and v(n) is the noise source. Let y(n) = [w.sup.T](n)x(n) be the estimated output for the given input x(n) and estimated weights w(n). With e(n) = d(n) - y(n) denoting the error between the desired and estimated responses, ZA-LMS updates the weights by the recursion [10]

w (n + 1) = w (n) + [mu]e (n) x (n) - [rho] sgn (w (n)), (1)

where [mu] is the step size and sgn(w(n)) is the component-wise sign function given by

sgn (x) = x/[absolute value of x] if x [not equal to] 0, and sgn (x) = 0 if x = 0. (2)

From (1), it is found that the update equation consists of three terms. The first two terms are the same as in conventional LMS, and the third is the zero attractor term, which attracts the coefficients toward zero and thereby accelerates convergence; [rho] is the zero attractor controller, which decides the strength of the attraction.
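As a concrete illustration, the recursion in (1)-(2) can be sketched in a few lines of Python. The system, step size, noise level, and zero attractor value below are hypothetical example values chosen for the sketch, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse unknown system: N = 16 taps, one nonzero coefficient.
N = 16
w_o = np.zeros(N)
w_o[4] = 1.0

mu = 0.01      # step size
rho = 5e-4     # fixed zero attractor controller (example value, as in [10, 18])
w = np.zeros(N)

for n in range(5000):
    x = rng.standard_normal(N)             # white input regressor x(n)
    v = 0.03 * rng.standard_normal()       # observation noise v(n)
    d = w_o @ x + v                        # desired response d(n)
    e = d - w @ x                          # error e(n) = d(n) - y(n)
    # ZA-LMS recursion (1): LMS gradient term plus zero attractor term;
    # np.sign implements the component-wise sign function (2), with sgn(0) = 0.
    w = w + mu * e * x - rho * np.sign(w)

print(np.round(w, 2))
```

After convergence, the zero taps are held near zero by the attractor, while the one active tap settles slightly below its true value because the attractor also biases nonzero coefficients, the effect discussed in the introduction.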

Convergence analysis of ZA-LMS [18] indicates that the zero attractor controller plays a major role in the tradeoff between convergence speed and steady state error. For a sparse system, a small value of [rho] lowers the steady state error at the cost of slower convergence; if faster convergence is required, [rho] is increased, but the steady state error then also increases. This calls for an optimal [rho]. It is also evident from [18] that ZA-LMS cannot outperform standard LMS when the system is nonsparse. Besides, [rho] should track the sparsity level when the system changes from sparse to semisparse or nonsparse. Thus, a constant [rho] is not suitable for a time varying sparse system, and a robust algorithm can only be achieved by changing the zero attractor controller according to the level of sparsity. Therefore, an adaptive zero attractor controller based ZA-LMS is proposed in order to improve robustness against variable sparsity levels.

3. Proposed Algorithm

This section proposes an adaptive ZA-LMS algorithm. A theoretical optimal zero attractor controller is first deduced based on the largest decrease in MSD. A practical optimal zero attractor controller is then obtained, and a simple update rule for the proposed algorithm is given.

The proposed algorithm is based on varying zero attractor controller. Thus, by replacing [rho] by a time varying function, the update recursion of adaptive ZA-LMS becomes

w (n + 1) = w (n) + [mu]e (n) x (n) - [rho](n + 1) sgn (w (n)). (3)

Here, [mu] is assumed to be constant in order to have stable operation [18].

3.1. Assumptions

The following are the assumptions used in this work:

(A.1) The input is assumed to be independent and identically distributed (i.i.d.) and is white with zero mean and variance [[sigma].sup.2.sub.x].

(A.2) The noise is also i.i.d. and is assumed to be white with zero mean and variance [[sigma].sup.2.sub.v].

(A.3) The weight error vector [??](n) is independent of the input.

These assumptions are commonly used in analyzing all adaptive filters [22]. Using these assumptions, the optimal zero attractor controller is derived.

3.2. Optimal Zero Attractor Controller

The optimal value is based on the objective of

[[rho].sub.o] = arg [min.sub.[rho](n+1)] E {[[parallel][??] (n + 1)[parallel].sup.2] - [[parallel][??] (n)[parallel].sup.2]}, (4)

where [??](n) is the weight error vector given by [??](n) = [w.sub.o] - w(n). The update recursion in terms of weight error vector can be written as

[??] (n + 1) = [??] (n) - [mu]e(n) x (n) + [rho](n + 1) sgn (w (n)). (5)
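The weight error recursion (5) is the recursion (1) rewritten in terms of [??](n) = [w.sub.o] - w(n). This equivalence, and the identity e(n) = [[??].sup.T](n)x(n) + v(n), can be verified numerically for one step; all numbers below are arbitrary hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, mu, rho = 8, 0.05, 1e-3

w_o = rng.standard_normal(N)          # optimal weights (arbitrary example)
w = rng.standard_normal(N)            # current estimate w(n)
x = rng.standard_normal(N)            # input regressor x(n)
v = 0.1 * rng.standard_normal()       # noise v(n)

d = w_o @ x + v
e = d - w @ x                         # error e(n)
wt = w_o - w                          # weight error vector w_tilde(n)

# One step of (1) on the weights ...
w_next = w + mu * e * x - rho * np.sign(w)
# ... and one step of (5) on the weight error vector.
wt_next = wt - mu * e * x + rho * np.sign(w)

# Both routes give the same weight error at time n + 1,
# and e(n) equals w_tilde^T(n) x(n) + v(n).
assert np.allclose(w_o - w_next, wt_next)
assert np.isclose(e, wt @ x + v)
```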

Squaring both sides of (5), taking expectation, and substituting e(n) = [[??].sup.T](n) x(n) + v(n), we obtain

E [[[parallel][??] (n + 1)[parallel].sup.2]] = E [[[parallel][??] (n)[parallel].sup.2]] - 2[mu]E [e (n) [[??].sup.T] (n) x (n)] + [[mu].sup.2] E [[e.sup.2] (n) [[parallel]x (n)[parallel].sup.2]] + 2[rho] (n + 1) (1 - [mu][[sigma].sup.2.sub.x]) E [[[??].sup.T] (n) sgn (w (n))] + [[rho].sup.2] (n + 1) E [sgn [(w (n)).sup.T] sgn (w (n))]. (6)

As [mu] is a constant chosen in the range 0 < [mu] < 2/(N + 2)[[sigma].sup.2.sub.x] [18, 22], the optimal zero attractor controller is obtained by differentiating (6) with respect to [rho](n + 1) and equating the result to zero. Thus, the optimal zero attractor controller is given by

[[rho].sub.o] (n + 1) = -E [[[??].sup.T] (n) sgn (w (n))]/[(1 - [mu][[sigma].sup.2.sub.x]).sup.-1] E [sgn [(w (n)).sup.T] sgn (w (n))]. (7)

The optimal value obtained consists of nonlinear terms. In order to find a feasible solution, let

-E [[[??].sup.T] (n) sgn (w (n))] = [[beta].sub.1] (n), E [sgn [(w (n)).sup.T] sgn (w (n))] = [[beta].sub.2] (n). (8)

The value of [[beta].sub.2](n) is always positive and is equal to N [10, 21]. The step size is chosen such that [mu] < 1/(N + 2)[[sigma].sup.2.sub.x] [18] in order to have stability, so that [mu][[sigma].sup.2.sub.x] is small and [(1 - [mu][[sigma].sup.2.sub.x]).sup.-1] E[sgn [(w(n)).sup.T] sgn(w(n))] [approximately equal to] N. In order to find [[beta].sub.1](n), the filter taps are divided into nonzero (NZ) and zero (Z) sets such that NZ [union] Z = {1, ..., N} and NZ [intersection] Z = [empty set] [10, 18, 21]. Substituting [??](n) = [w.sub.o] - w(n) in [[beta].sub.1](n) and assuming the weights follow a Gaussian distribution, we get

[[beta].sub.1] (n) = -[summation over (i [member of] Z)] E [([w.sub.o,i] - [w.sub.i] (n)) sgn ([w.sub.i] (n))] - [summation over (i [member of] NZ)] E [([w.sub.o,i] - [w.sub.i] (n)) sgn ([w.sub.i] (n))]. (9)

For i [member of] Z, [w.sub.o,i] = 0; applying Price's theorem (E[[w.sub.i](n) sgn ([w.sub.i](n))] = [square root of (2[[sigma].sup.2.sub.w]/[pi])] for a zero mean Gaussian weight), we obtain

[[beta].sub.1] (n) = [summation over (i [member of] Z)] [square root of (2[[sigma].sup.2.sub.w]/[pi])] - [summation over (i [member of] NZ)] E [([w.sub.o,i] - [w.sub.i] (n)) sgn ([w.sub.i] (n))], (10)

where [[sigma].sup.2.sub.w] is the variance of the weights. Hence, the first term depends on the zero filter coefficients and the second term on the nonzero filter coefficients. If the number of nonzero coefficients is high, the value of [[beta].sub.1](n) will be negative, as the ZA-LMS bias acts mainly on the nonzero terms. On the other hand, if the number of zero coefficients is high, a positive value of [[beta].sub.1](n) is obtained, as the first term in (10) dominates the second [10]. Thus, the rule for updating the zero attractor controller of ZA-LMS is given by
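The sign behavior of [[beta].sub.1](n) described above can be illustrated numerically. The sketch below uses hypothetical weight estimates (zero taps perturbed by small noise, nonzero taps shrunk by 10% to mimic the zero attractor bias; all numbers are illustrative, not taken from the paper) and also spot-checks the Price's theorem identity used in (10):

```python
import numpy as np

rng = np.random.default_rng(2)

# Spot-check Price's theorem: E[w sgn(w)] = sqrt(2 sigma_w^2 / pi)
# for a zero-mean Gaussian weight w.
sigma_w = 0.1
w_z = sigma_w * rng.standard_normal(200_000)
empirical = np.mean(w_z * np.sign(w_z))
theoretical = np.sqrt(2 * sigma_w**2 / np.pi)

def beta1(w_o, w):
    """beta_1(n) = -(w_o - w)^T sgn(w), as in (8)."""
    return -(w_o - w) @ np.sign(w)

N = 32
noise = 0.01 * rng.standard_normal(N)   # small estimation noise on every tap

# Sparse system: one nonzero tap. Each zero tap contributes +|noise_i|,
# so the Z-term dominates and beta_1 is positive.
w_o_sparse = np.zeros(N); w_o_sparse[0] = 1.0
b_sparse = beta1(w_o_sparse, 0.9 * w_o_sparse + noise)

# Nonsparse system: every shrunk tap contributes about -0.1,
# so beta_1 is negative.
w_o_dense = np.ones(N)
b_dense = beta1(w_o_dense, 0.9 * w_o_dense + noise)

print(round(empirical, 3), round(theoretical, 3), b_sparse > 0, b_dense < 0)
```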

[rho] (n + 1) = [[beta].sub.1] (n)/N if [[beta].sub.1] (n) > 0, and [rho] (n + 1) = 0 otherwise. (11)

Since the update rule for [rho] includes the nonlinear term [[beta].sub.1](n), which involves the unknown [??](n), a time average is adopted to estimate [[beta].sub.1](n). Thus,

[[beta].sub.1] (n) = [alpha][[beta].sub.1] (n - 1) - (1 - [alpha]) [[??].sup.T] (n) sgn (w (n)), (12)

where [alpha] is the smoothing factor which varies as 0 < [alpha] < 1.
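Putting (3), (11), and (12) together, the proposed adaptive ZA-LMS can be sketched as below. Note that this simulation sketch uses the true [??](n) = [w.sub.o] - w(n) inside (12), which is available only because [w.sub.o] is known in the simulation; the system, step size, and noise level are hypothetical example values:

```python
import numpy as np

rng = np.random.default_rng(3)

N, mu, alpha = 32, 0.01, 0.99
w_o = np.zeros(N); w_o[7] = 1.0        # hypothetical highly sparse system

w = np.zeros(N)
b1 = 0.0                               # time-averaged beta_1(n), eq. (12)
rho = 0.0                              # adaptive zero attractor controller

for n in range(6000):
    x = rng.standard_normal(N)                      # white input (A.1)
    d = w_o @ x + 0.03 * rng.standard_normal()      # desired response plus noise (A.2)
    e = d - w @ x

    # (12): recursive time average of beta_1(n) = -w_tilde^T(n) sgn(w(n)).
    wt = w_o - w
    b1 = alpha * b1 - (1 - alpha) * (wt @ np.sign(w))

    # (11): beta_1 > 0 indicates a sparse regime, so use rho = beta_1 / N;
    # otherwise switch the attractor off and fall back to plain LMS.
    rho = b1 / N if b1 > 0 else 0.0

    # (3): adaptive ZA-LMS weight update.
    w = w + mu * e * x - rho * np.sign(w)

print(round(w[7], 2), rho)
```

During the initial transient the estimated [[beta].sub.1](n) is negative and the filter behaves like LMS; once the active tap is identified, [[beta].sub.1](n) turns positive and the attractor engages on the zero taps.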

3.3. Justification of the Adaptive Rule

The update rule must drive the zero attractor controller to its optimal value for sparse and semisparse systems and to zero for nonsparse systems. Equation (11) implements this rule. For highly sparse and semisparse systems, the value of [[beta].sub.1](n) is positive, as the first term in (10) exceeds the second. The optimal zero attractor controller is then obtained as a function of the variance of the weights: a large [[sigma].sup.2.sub.w] results when the system is highly sparse, due to the presence of many zero and few nonzero coefficients, while a smaller value results for a semisparse system, which has roughly equal numbers of zero and nonzero coefficients. On the other hand, if the nonzero taps dominate, [[beta].sub.1](n) is negative, as the second term exceeds the first; the zero attractor controller then becomes zero, which is the required behavior for a nonsparse system. Thus, the proposed algorithm is robust under variable sparsity conditions.

4. Simulations

The proposed algorithm is further tested through simulations of unknown system identification. The adaptive filter and the unknown system are assumed to have the same length. The input x(n) and noise v(n) are white Gaussian sources with zero mean and variances unity and [[sigma].sup.2.sub.v], respectively, such that the SNR is 30 dB. It is also assumed that the variance of the noise source is known. The results are averaged over 100 independent runs. The normalized MSD, defined as 10 [log.sub.10] ([[parallel][w.sub.o] - w(n)[parallel].sup.2]/[[parallel][w.sub.o][parallel].sup.2]), is used to evaluate the performance of the algorithm.
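The performance metric is straightforward to compute; a minimal helper, assuming the squared-norm form of the normalized MSD and a toy system chosen purely for illustration:

```python
import numpy as np

def normalized_msd_db(w_o, w):
    """Normalized MSD in dB: 10 log10( ||w_o - w||^2 / ||w_o||^2 )."""
    return 10 * np.log10(np.sum((w_o - w) ** 2) / np.sum(w_o ** 2))

w_o = np.array([0.0, 1.0, 0.0, 0.0])          # toy sparse system (hypothetical)
start = normalized_msd_db(w_o, np.zeros(4))   # all-zero initial estimate -> 0 dB
close = normalized_msd_db(w_o, 0.9 * w_o)     # 10% residual deviation -> about -20 dB
print(start, close)
```

An estimate no better than the zero vector sits at 0 dB, and every 10 dB drop corresponds to a tenfold reduction in squared deviation, which is how the curves in Figures 1-3 should be read.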

In the first experiment, an unknown system with 32 coefficients is taken. To evaluate the proposed adaptive ZA-LMS, LMS [5] and ZA-LMS [10] are simulated alongside it. All three conditions, namely, sparse, semisparse, and nonsparse, are analyzed. The sparse system has one nonzero coefficient whose position is chosen randomly, the semisparse system has equal numbers of zero and nonzero filter coefficients, and the nonsparse system has 32 nonzero filter coefficients. The step size is chosen as [mu] = 0.01 for all the algorithms; [rho] is set to 7 x [10.sup.-5] (as per (42) of [18]) for the ZA-LMS algorithm, and [alpha] = 0.99 for the adaptive ZA-LMS algorithm. The simulation results are shown in Figure 1.

For the first 3000 iterations the system is sparse, for the next 3000 iterations it changes to semisparse, and for the last 3000 iterations the nonsparse condition is applied. When the system is highly sparse, during the first 3000 iterations, adaptive ZA-LMS performs better than standard LMS and ZA-LMS with constant [rho]; when the system is semisparse, adaptive ZA-LMS again gives the best performance, with the lowest steady state error of the three filters. After 6000 iterations, when the system is nonsparse, the performance of ZA-LMS degrades while the proposed adaptive ZA-LMS remains comparable with standard LMS. The same experiment is repeated with a step size of [mu] = 0.03 for all the algorithms and [rho] = 6 x [10.sup.-4] for ZA-LMS; the corresponding curves are plotted in Figure 2.

Several interesting findings can be observed from Figures 1 and 2. First, under all of the different environmental conditions, the proposed algorithm gives the lowest steady state error with the fastest convergence, which confirms its robustness against variable sparsity. This is expected since, as per (11), [rho] is changed based on [[beta].sub.1](n). If the present weights yield a positive [[beta].sub.1](n), the optimal value of [rho] is obtained from the variance of the weights. On the other hand, if the system changes to nonsparse, [[beta].sub.1](n) becomes negative, so (11) makes the algorithm work with [rho] [congruent to] 0 and converge like LMS. Thus, the proposed algorithm gives lower steady state error than LMS and ZA-LMS in both sparse and semisparse conditions, as seen in Figures 1 and 2. Secondly, as per (4), the proposed algorithm should behave like the LMS algorithm under the nonsparse condition, and this is confirmed in Figures 1 and 2.

To show that the proposed algorithm works with an optimal value of the zero attractor controller, a second experiment is conducted and the results are plotted in Figure 3. Here, the system and the parameters are chosen to match the ZA-LMS settings discussed in the literature [10, 18]. The unknown system has 16 filter coefficients, with one nonzero coefficient in the sparse case and 10 in the semisparse case; the nonsparse system has all nonzero filter coefficients. The step size is chosen as [mu] = 0.05 and [rho] = 5 x [10.sup.-4] [10, 18], and all other parameters are the same as in the first experiment.

Thus, from Figure 3, it is evident that the proposed adaptive ZA-LMS adapts itself: under the sparse condition it converges to ZA-LMS operated at its optimal setting, and under the semisparse and nonsparse conditions it converges to LMS. This proves the effectiveness of (11) in selecting an optimal zero attractor controller at all sparsity levels, from sparse to nonsparse.

Another interesting way to analyze the proposed algorithm is to plot the time evolution of [rho] for the above three environmental conditions with different sparsity levels. For this, E[[rho](n)] and [[sigma].sup.2.sub.w] are plotted against the sample index in Figures 4 and 5 for the first experiment. It can be observed that E[[rho](n)] and [[sigma].sup.2.sub.w] converge to 0 when the system is nonsparse, as predicted: since ZA-LMS cannot outperform LMS under the nonsparse condition, any nonzero value of [rho] yields higher steady state error. For the highly sparse and semisparse systems, E[[rho](n)] and [[sigma].sup.2.sub.w] converge to an optimal value, with a higher value for the highly sparse system and an intermediate value for the semisparse system, respectively.

Next, the proposed approach is evaluated for different values of SNR. For this, SNRs of 10 dB and 20 dB are chosen, and the MSD analysis is shown in Figures 6 and 7. As expected, the proposed algorithm is robust against different SNRs as well.

The next experiment evaluates the performance of the proposed approach in an echo cancellation application with 512 coefficients [3]. The sparse system consists of 40 nonzero filter coefficients, a realistic situation in echo cancellation [3], and the semisparse system has equal numbers of nonzero and zero filter taps; the nonsparse system has 512 nonzero filter coefficients. The values of [rho], [mu], and SNR are chosen as 1 x [10.sup.-6] (criterion 1 of [18] gives [rho] < 5 x [10.sup.-6]), 0.001, and 30 dB, respectively. As seen in Figure 8, when the system is highly sparse, the performance of the proposed algorithm is similar to that of ZA-LMS operated as per criterion 1 of [18]. However, criterion 1 specifies only an upper limit; there is no procedure to select the optimal value, which can be found only by trial and error. Moreover, for the semisparse and nonsparse systems, the performance of ZA-LMS deteriorates because a constant zero attractor controller is used. These disadvantages are eliminated in the proposed algorithm, which adapts itself to the optimal value at all levels of sparsity and is thus suitable for echo cancellation, an application that is particularly prone to time varying sparsity.

Figure 9 evaluates the tracking capability of the proposed algorithm. For the first 3000 samples, the highly sparse system of the first experiment is used; the system is then suddenly changed from 1 nonzero to 3 nonzero filter taps after 3000 samples and to 5 nonzero taps after 6000 samples. It is found that the algorithm also has good tracking capability.

5. Conclusions

An adaptive ZA-LMS is proposed in this paper. The proposed algorithm has an adaptive zero attractor controller that is changed based on the characteristics of the filter coefficients. A simple update rule is also proposed, which makes the algorithm operate with the optimal zero attractor controller depending on the numbers of zero and nonzero filter coefficients. The algorithm thus performs better than LMS on highly sparse systems by exploiting their sparse nature and behaves like LMS under the nonsparse condition, providing robustness against variable sparsity, as demonstrated through simulations of unknown system identification.

http://dx.doi.org/10.1155/2016/3945895

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] W. Li and J. C. Preisig, "Estimation of rapidly time-varying sparse channels," IEEE Journal of Oceanic Engineering, vol. 32, no. 4, pp. 927-939, 2007.

[2] C. Breining, P. Dreiseitel, E. Hansler et al., "Acoustic echo control, an application of very-high-order adaptive filters," IEEE Signal Processing Magazine, vol. 16, no. 4, pp. 42-69, 1999.

[3] J. Radecki, Z. Zilic, and K. Radecka, "Echo cancellation in IP networks," in Proceedings of the 45th Midwest Symposium on Circuits and Systems (MWSCAS '02), vol. 2, pp. II-219-II-222, Tulsa, Okla, USA, August 2002.

[4] A. F. Molisch, "Ultrawideband propagation channels-theory, measurement, and modeling," IEEE Transactions on Vehicular Technology, vol. 54, no. 5, pp. 1528-1545, 2005.

[5] S. S. Haykin, Adaptive Filter Theory, Pearson Education India, 2008.

[6] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Transactions on Speech and Audio Processing, vol. 8, no. 5, pp. 508-517, 2000.

[7] J. Benesty and S. L. Gay, "An improved PNLMS algorithm," in Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP '02), vol. 2, pp. II-1881-II-1884, IEEE, Orlando, Fla, USA, May 2002.

[8] S. Ciochina, C. Paleologu, J. Benesty, and S. L. Grant, "An optimized proportionate adaptive algorithm for sparse system identification," in Proceedings of the 49th Asilomar Conference on Signals, Systems and Computers, pp. 1546-1550, IEEE, Pacific Grove, Calif, USA, November 2015.

[9] K. Dogancay, Partial-Update Adaptive Signal Processing: Design Analysis and Implementation, Academic Press, New York, NY, USA, 2008.

[10] Y. Chen, Y. Gu, and A. O. Hero III, "Sparse LMS for system identification," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 3125-3128, Taipei, Taiwan, April 2009.

[11] R. Meng, R. C. de Lamare, and V. H. Nascimento, "Sparsity aware affine projection adaptive algorithms for system identification," in Proceedings of the Sensor Signal Processing for Defence (SSPD '11), pp. 1-5, IET, London, UK, September 2011.

[12] A. Gully and R. C. De Lamare, "Sparsity-inducing modified filtered-x affine projection algorithms for active noise control," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '14), pp. 6657-6661, Florence, Italy, May 2014.

[13] Y. Gu, J. Jin, and S. Mei, "[l.sub.0] norm constraint LMS algorithm for sparse system identification," IEEE Signal Processing Letters, vol. 16, no. 9, pp. 774-777, 2009.

[14] F. Albu, A. Gully, and R. De Lamare, "Sparsity-aware pseudo affine projection algorithm for active noise control," in Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA '14), pp. 1-5, IEEE, Siem Reap, Cambodia, December 2014.

[15] J. Benesty and Y. Huang, Eds., Adaptive Signal Processing: Applications to Real-World Problems, Springer Science & Business Media, Berlin, Germany, 2013.

[16] Y. Kopsinis, S. Chouvardas, and S. Theodoridis, "Sparse models in echo cancellation: when the old meets the new," in Trends in Digital Signal Processing: A Festschrift in Honour of A. G. Constantinides, 2015.

[17] B. K. Das, L. A. Azpicueta-Ruiz, M. Chakraborty, and J. Arenas-Garcia, "A comparative study of two popular families of sparsity-aware adaptive filters," in Proceedings of the 4th International Workshop on Cognitive Information Processing (CIP '14), pp. 1-6, IEEE, Copenhagen, Denmark, May 2014.

[18] K. Shi and P. Shi, "Convergence analysis of sparse LMS algorithms with l1-norm penalty based on white input signal," Signal Processing, vol. 90, no. 12, pp. 3289-3293, 2010.

[19] G. Su, J. Jin, Y. Gu, and J. Wang, "Performance analysis of l0 norm constraint least mean square algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2223-2235, 2012.

[20] O. Taheri and S. A. Vorobyov, "Reweighted l1-norm penalized LMS for sparse channel estimation and its analysis," Signal Processing, vol. 104, pp. 70-79, 2014.

[21] B. K. Das and M. Chakraborty, "Sparse adaptive filtering by an adaptive convex combination of the LMS and the ZALMS Algorithms," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 61, no. 5, pp. 1499-1507, 2014.

[22] A. H. Sayed, Fundamentals of Adaptive Filtering, John Wiley & Sons, New York, NY, USA, 2003.

Radhika Sivashanmugam (1) and Sivabalan Arumugam (2)

(1) Faculty of Electrical and Electronics Engineering, Sathyabama University, Chennai 600119, India

(2) NEC Mobile Networks Excellence Centre, Chennai 600 096, India

Correspondence should be addressed to Radhika Sivashanmugam; radhikachandru79@gmail.com

Received 16 May 2016; Revised 7 July 2016; Accepted 18 July 2016

Academic Editor: Stefan Balint

Caption: Figure 1: Normalized MSD analysis of proposed algorithm under different sparsity conditions with [mu] = 0.01 and SNR = 30 dB.

Caption: Figure 2: Normalized MSD analysis of proposed algorithm under different sparsity conditions with [mu] = 0.03 and SNR = 30 dB.

Caption: Figure 3: Normalized MSD analysis of proposed algorithm under different sparsity conditions with [mu] = 0.05 and SNR = 30 dB.

Caption: Figure 4: Evolution of E[[rho](n)] of proposed algorithm for different sparsity levels.

Caption: Figure 5: Variance of weights over one trial for different sparsity level.

Caption: Figure 6: Normalized MSD analysis of proposed algorithm under different sparsity conditions, with SNR = 20 dB.

Caption: Figure 7: Normalized MSD analysis of proposed algorithm under different sparsity conditions with SNR = 10 dB.

Caption: Figure 8: Normalized MSD analysis of proposed algorithm for a 512 length filter coefficients under different sparsity conditions with SNR = 30 dB.

Caption: Figure 9: Tracking analysis of proposed algorithm under different sparse systems with SNR = 30 dB.
Publication: Mathematical Problems in Engineering, Hindawi, 2016.