
A stochastic total least squares solution of adaptive filtering problem.

1. Introduction

Ordinary least squares methods are extensively used in many signal processing applications to extract system parameters from input/output data [1, 2]. These methods yield an unbiased solution of the adaptive least squares problem when there is no interference in either the inputs or the outputs, or when interference is present only in the outputs of the unknown system and the inputs are clean. However, if interference exists in both the input and the output of the unknown system, the ordinary least squares solution becomes biased [3].

The total least squares (TLS) method [4] is an efficient technique for obtaining an unbiased estimate of the system parameters when both input and output are contaminated by noise. Golub and Van Loan [5] provided an analytical procedure for obtaining an unbiased solution of the TLS problem using the singular value decomposition (SVD) of the data matrices. This technique is used extensively in data processing and control applications [4, 6, 7]. However, the application of TLS methods in signal processing remains limited because computing the SVD requires a high complexity of $O(N^3)$ for an $N \times N$ matrix.

TLS solutions of the adaptive filtering problem gained importance after the pioneering work of Pisarenko [8]. He presented an efficient solution of the adaptive TLS problem by adaptively computing the eigenvector corresponding to the smallest eigenvalue of the autocorrelation matrix of the augmented input/output signal. Since then, several algorithms have been proposed based on adaptive implementations of Pisarenko's method. The adaptive TLS algorithms proposed in [9-11] achieve an unbiased TLS solution of the adaptive filtering problem with a complexity of $O(N)$. However, they are sensitive to the correlation properties of the input signals and perform poorly under correlated inputs.

In this paper, an iterative algorithm is presented to find an optimal TLS solution of the adaptive FIR filtering problem. A stochastic technique similar to the least mean squares (LMS) algorithm of adaptive least squares filtering is employed to develop a total least mean squares (TLMS) algorithm for the adaptive total least squares problem. Instead of being based on the minimum mean square error, as the LMS algorithm is, the proposed TLMS algorithm is based on the total mean square error, obtained by minimizing a weighted cost function for the TLS solution of the adaptive filtering problem. The proposed algorithm retains the $O(N)$ complexity of adaptive TLS algorithms while also achieving steady-state convergence under correlated inputs. A convergence analysis is presented to show the global convergence of the proposed algorithm under all kinds of inputs, provided the stepsize parameter is suitably chosen.

This paper is organized as follows. The mathematical formulation of the adaptive total least squares problem is given in Section 2, and the derivation of the TLMS algorithm in Section 3, including its convergence analysis in Section 3.1. The efficiency of the proposed algorithm is then tested in Section 4 by applying it to an unknown system identification problem and comparing the results with the conventional LMS and normalized LMS (NLMS) algorithms. Concluding remarks are given in Section 5.

2. Mathematical Formulation of Adaptive Total Least Squares Problem

Consider an unknown system to be identified by an adaptive FIR filter of length $N$ with response vector $w_n$ at time $n$, under the assumption that both input and output are corrupted by additive white Gaussian noise (AWGN). The noise-free input vector $a_n \in \mathbb{R}^N$ is formed from the input signal $u(n)$ as

$$a_n = [u(n), u(n-1), \ldots, u(n-N+1)]^T \in \mathbb{R}^N. \quad (1)$$

The desired output of the unknown system is then given by

$$\tilde{s}(n) = s(n) + \Delta s(n), \quad (2)$$

where $s(n) = w_n^T a_n$ is the system output and $\Delta s(n)$ is additive white Gaussian noise of zero mean and variance $\sigma^2_{\Delta s}$.

The primary assumption of the adaptive least squares (ALS) problem is that perturbations occur in the output signals only and that the input signals are known exactly. This assumption is often unrealistic, because perturbations due to sampling, modeling, or measurement errors affect the input signals as well. A sensible way to handle such situations is to introduce perturbations in the input signals in addition to those in the output signals. A schematic diagram of an adaptive filter with perturbed input is depicted in Figure 1.

If $\Delta a_n = [\Delta u(n), \Delta u(n-1), \ldots, \Delta u(n-N+1)]^T \in \mathbb{R}^N$ denotes the perturbation in the input vector $a_n$, where $\Delta u(n)$ is additive white Gaussian noise (uncorrelated with the output noise) of zero mean and variance $\sigma^2_{\Delta u}$, then the noisy input vector is

$$\tilde{a}_n = a_n + \Delta a_n. \quad (3)$$

It is clear from Figure 1 that for every input signal $\tilde{u}(n) = u(n) + \Delta u(n)$, the filter produces an estimated output $y(n) = w_n^T \tilde{a}_n$, which is compared with $\tilde{s}(n)$ to produce the least squares error signal $e(n) = y(n) - \tilde{s}(n)$. Define the autocorrelation matrix of the noisy input vector $\tilde{a}_n$ as $\tilde{R}_n = E\{\tilde{a}_n \tilde{a}_n^T\}$ and the cross-correlation vector of the output signal with the noisy input as $\tilde{p}_n = E\{\tilde{a}_n \tilde{s}(n)\}$.
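For concreteness, the following sketch generates data according to the signal model (1)-(3): a clean input sequence is windowed into input vectors, passed through the true system, and both sides are then corrupted by white Gaussian noise. The function name make_noisy_data and its interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_noisy_data(w, u, sigma_du, sigma_ds, rng=None):
    """Generate noisy input vectors (3) and noisy desired outputs (2)
    for a length-N FIR system w driven by a clean input sequence u.
    Names and layout are illustrative, not from the paper."""
    if rng is None:
        rng = np.random.default_rng(1)
    u = np.asarray(u)
    N = len(w)
    T = len(u) - N + 1                  # number of usable time steps
    A = np.empty((T, N))
    for i in range(T):
        t = i + N - 1                   # current time index
        A[i] = u[t::-1][:N]             # a_t = [u(t), ..., u(t-N+1)]^T, cf. (1)
    s = A @ w                           # clean outputs s(t) = w^T a_t
    A_noisy = A + sigma_du * rng.standard_normal(A.shape)   # noisy input, cf. (3)
    s_noisy = s + sigma_ds * rng.standard_normal(T)         # noisy output, cf. (2)
    return A_noisy, s_noisy
```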

At this stage the least squares solution, obtained by minimizing the cost function $J = E\{e^2(n)\}$, gives a poor estimate of the solution of the adaptive filtering problem because of the presence of noise in the filter input. Casting the adaptive filtering problem as a total least squares problem can, however, repair this poor estimate under noisy input [10, 11]. The following definition is made to adopt a more general signal model for ATLS-based filtering.

Definition 1 (augmented data vector). Define the $(N+1) \times 1$ augmented data vector $\tilde{z}_n$ as

$$\tilde{z}_n = \begin{bmatrix} \tilde{a}_n \\ \tilde{s}(n) \end{bmatrix}. \quad (4)$$

An alternative form of $e(n)$, in terms of the augmented data vector of Definition 1, is obtained as follows:

$$e(n) = w_n^T \tilde{a}_n - \tilde{s}(n) = \tilde{w}_n^T \tilde{z}_n, \quad (5)$$

where $\tilde{w}_n = [w_n^T, -1]^T$ denotes the $(N+1) \times 1$ extended parameter vector.
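As a small illustration of Definition 1 and (5), the following hedged sketch builds the augmented data vector and evaluates the error signal; the helper names are ours, not the authors'.

```python
import numpy as np

def augmented_vector(a_noisy, s_noisy_n):
    """z~_n = [a~_n^T, s~(n)]^T of Definition 1, cf. (4)."""
    return np.append(a_noisy, s_noisy_n)

def tls_error(w, z):
    """e(n) = w~_n^T z~_n with w~_n = [w_n^T, -1]^T, cf. (5).
    Equivalent to w^T a~_n - s~(n)."""
    w_ext = np.append(w, -1.0)
    return w_ext @ z
```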

The TLS solution of the adaptive filtering problem is the eigenvector associated with the smallest eigenvalue of the augmented autocorrelation matrix $\bar{R}_n$:

$$\bar{R}_n \tilde{w}_n = \lambda_{\min} \tilde{w}_n, \quad (6)$$

where $\bar{R}_n = E\{\tilde{z}_n \tilde{z}_n^T\}$.

Instead of minimizing the mean square error $E\{e^2(n)\}$, the adaptive total least squares problem is concerned with minimizing the total mean square error $E\{\eta^2(n)\}$, where the total error $\eta(n)$ is given by

$$\eta(n) = \frac{e(n)}{\|\tilde{w}_n\|} = \frac{\tilde{w}_n^T \tilde{z}_n}{\|\tilde{w}_n\|}. \quad (7)$$

The TLS cost function $\tilde{J}(\tilde{w}_n)$ is then defined in terms of the total error as

$$\tilde{J}(\tilde{w}_n) = E\{\eta^2(n)\} = \frac{E\{e^2(n)\}}{\|\tilde{w}_n\|^2} = \frac{\tilde{w}_n^T \bar{R}_n \tilde{w}_n}{\tilde{w}_n^T \tilde{w}_n}. \quad (8)$$

The adaptive total least squares problem is then the minimization problem [10, 11]

$$\min_{\tilde{w}_n} \tilde{J}(\tilde{w}_n) = \min_{\tilde{w}_n} \frac{\tilde{w}_n^T \bar{R}_n \tilde{w}_n}{\tilde{w}_n^T \tilde{w}_n}. \quad (9)$$

Note that an optimal solution $w_{opt}$ of the TLS problem (9) is the eigenvector corresponding to the smallest eigenvalue of $\bar{R}_n$. In practice the SVD technique is used to solve TLS problems since it offers lower sensitivity to computational errors; however, it is computationally expensive [5]. An alternative is to estimate the eigenvector corresponding to the smallest eigenvalue with an adaptive algorithm [1, 2].
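As a point of reference, the batch TLS solution described above can be sketched in a few lines of NumPy: estimate the augmented autocorrelation matrix, take the eigenvector of its smallest eigenvalue, and rescale so that the last entry equals $-1$, as in (14) below. This is the $O(N^3)$ baseline that the adaptive algorithm of Section 3 is designed to avoid; the function name and interface are illustrative.

```python
import numpy as np

def tls_solution_eig(Z):
    """Batch TLS reference: smallest eigenvector of the augmented
    autocorrelation matrix estimated from the rows of Z (each row is
    one augmented vector z~_n^T). A hedged sketch, not the paper's code."""
    R = Z.T @ Z / Z.shape[0]              # sample estimate of E{z~ z~^T}, cf. (6)
    eigvals, eigvecs = np.linalg.eigh(R)  # eigh returns ascending eigenvalues
    v = eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue
    return -v[:-1] / v[-1]                # rescale so last entry is -1, cf. (14)
```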

3. Derivation of Total LMS Algorithm for Adaptive Filtering Problem

In the adaptive least squares problem, the conventional LMS algorithm is a steepest descent method that uses the instantaneous cost function $J = e^2(n)$ to compute the gradient vector [1]. Using a similar approach for the TLS problem, the total LMS (TLMS) algorithm is obtained from the instantaneous estimate of the cost function (8), $\tilde{J} = \eta^2(n)$. The recursive update equation of the TLMS algorithm is then given as

$$\tilde{w}_{n+1} = \tilde{w}_n - \mu \nabla_{\tilde{w}_n} \tilde{J}, \quad (10)$$

where $\mu$ is the stepsize, or convergence, parameter. Note that

$$\nabla_{\tilde{w}_n} \tilde{J} = \nabla_{\tilde{w}_n}\left(\frac{e^2(n)}{\|\tilde{w}_n\|^2}\right) = \frac{2 e(n) \|\tilde{w}_n\|^2 \nabla_{\tilde{w}_n} e(n) - 2 e^2(n) \tilde{w}_n}{\|\tilde{w}_n\|^4}. \quad (11)$$

Using $\nabla_{\tilde{w}_n} e(n) = \tilde{z}_n$ and $\eta(n) = e(n)/\|\tilde{w}_n\|$, the above equation becomes

$$\nabla_{\tilde{w}_n} \tilde{J} = \frac{2}{\|\tilde{w}_n\|^2}\left[e(n) \tilde{z}_n - \eta^2(n) \tilde{w}_n\right]. \quad (12)$$

Substituting (12) into (10), the update equation of the TLMS algorithm becomes

$$\tilde{w}_{n+1} = \tilde{w}_n - \frac{2\mu}{\|\tilde{w}_n\|^2}\left[e(n) \tilde{z}_n - \eta^2(n) \tilde{w}_n\right]. \quad (13)$$

Once $\tilde{w}_{n+1}$ is computed using (13), the TLS solution update $w_{n+1}$ is obtained from

$$w_{n+1} = -\frac{\tilde{w}_{n+1}(1{:}N)}{\tilde{w}_{n+1}(N+1)}. \quad (14)$$

The detailed TLMS algorithm is summarized in Table 1. A complexity count shows that the algorithm is computationally linear, requiring a total of $6N + 9$ multiplications/divisions per iteration. This computational simplicity makes the adaptive TLMS algorithm a better choice than the computationally expensive SVD-based TLS algorithm, which requires $6N^3$ computations per iteration [5].
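The following sketch implements the iteration of Table 1, that is, updates (13) and (14); the variable names and data layout (one noisy input vector per row) are our own conventions, not the authors' code.

```python
import numpy as np

def tlms(A_noisy, s_noisy, N, mu=0.25):
    """Total LMS iteration following (13)-(14) and Table 1.
    A_noisy: rows are noisy input vectors a~_n; s_noisy: noisy outputs.
    A sketch under the paper's equations; interface is illustrative."""
    w_ext = np.zeros(N + 1)
    w_ext[-1] = -1.0                       # w~_0 = [0, ..., 0, -1]^T
    w_hist = []
    for a, s in zip(A_noisy, s_noisy):
        z = np.append(a, s)                # augmented data vector z~_n, cf. (4)
        e = w_ext @ z                      # e(n) = w~_n^T z~_n, cf. (5)
        norm_sq = w_ext @ w_ext            # ||w~_n||^2
        eta_sq = e * e / norm_sq           # eta^2(n), cf. (7)
        # w~_{n+1} = w~_n - (2 mu / ||w~_n||^2) [e(n) z~_n - eta^2(n) w~_n], cf. (13)
        w_ext = w_ext - (2.0 * mu / norm_sq) * (e * z - eta_sq * w_ext)
        # TLS filter estimate: rescale so the last entry equals -1, cf. (14)
        w_hist.append(-w_ext[:N] / w_ext[N])
    return np.array(w_hist)
```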

3.1. Convergence Analysis. Taking the inner product of (13) with $\tilde{z}_n$ yields

$$\tilde{z}_n^T \tilde{w}_{n+1} = e(n) - \frac{2\mu}{\|\tilde{w}_n\|^2}\left[e(n) \|\tilde{z}_n\|^2 - \eta^2(n) e(n)\right]. \quad (15)$$

Since $e(n) = \tilde{w}_n^T \tilde{z}_n$, the Cauchy-Schwarz inequality [12] gives

$$|e(n)|^2 \le \|\tilde{w}_n\|^2 \|\tilde{z}_n\|^2; \quad (16)$$

that is,

$$\eta^2(n) = \frac{e^2(n)}{\|\tilde{w}_n\|^2} \le \|\tilde{z}_n\|^2. \quad (17)$$

Let $\delta_n = \|\tilde{z}_n\|^2 - |\eta(n)|^2 \ge 0$; then (15) becomes

$$\tilde{z}_n^T \tilde{w}_{n+1} = e(n) - \frac{2\mu \delta_n}{\|\tilde{w}_n\|^2} e(n), \quad (18)$$

or

$$\tilde{z}_n^T \tilde{w}_{n+1} = \left(1 - \frac{2\mu \delta_n}{\|\tilde{w}_n\|^2}\right) e(n). \quad (19)$$

Since $\tilde{z}_n \ne 0$, the a posteriori error $e_p(n) = \tilde{z}_n^T \tilde{w}_{n+1}$ satisfies

$$\frac{e_p(n)}{e(n)} = 1 - \frac{2\mu \delta_n}{\|\tilde{w}_n\|^2}, \quad (20)$$

which shows that the sequence of error signals $\{e(n)\}$ forms a geometric progression. It converges to an optimal solution if

$$\left|1 - \frac{2\mu \delta_n}{\|\tilde{w}_n\|^2}\right| < 1 \quad (21)$$

or

$$0 < \frac{\mu}{\|\tilde{w}_n\|^2} < \frac{1}{\delta_n}. \quad (22)$$

This shows that the proposed algorithm is a variable stepsize algorithm, with effective stepsize $\tilde{\mu} = \mu / \|\tilde{w}_n\|^2$. An appropriate way to choose $\mu$ is to initialize the algorithm such that $\|w_0 - w_{opt}\|$ is less than $2\|w_{opt}\|$ [13]. With $w_0 = 0$ this gives $\|\tilde{w}_0\| = 1$ and $\tilde{\mu} = \mu$, while $\delta_0 = \|\tilde{z}_0\|^2 - \eta^2(0) > 0$.
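The geometric factor of (19)-(21) is easy to monitor numerically. The following helper computes it for one iterate and checks the stepsize condition (22); it is a diagnostic sketch only, not part of the algorithm, and all names are ours.

```python
import numpy as np

def contraction_factor(w_ext, z, mu):
    """Compute the factor (1 - 2*mu*delta_n/||w~_n||^2) from (19)-(20);
    the a posteriori error z~_n^T w~_{n+1} equals this factor times the
    a priori error e(n)."""
    e = w_ext @ z                          # e(n) = w~_n^T z~_n, cf. (5)
    norm_sq = w_ext @ w_ext                # ||w~_n||^2
    delta = z @ z - e * e / norm_sq        # delta_n >= 0 by Cauchy-Schwarz, cf. (17)
    factor = 1.0 - 2.0 * mu * delta / norm_sq
    # Stepsize condition (22): 0 < mu/||w~_n||^2 < 1/delta_n  <=>  |factor| < 1
    if delta > 0 and not abs(factor) < 1.0:
        print("warning: mu violates the bound (22) at this iterate")
    return factor
```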

4. Application of TLMS Algorithm in System Identification

To examine the performance of the proposed TLMS algorithm, the unknown system identification model shown in Figure 2 is used.

A white Gaussian input signal of variance [[sigma].sup.2] = 1 is passed through a coloring filter with frequency response [1]:

$$H(z) = \frac{\sqrt{1 - \alpha^2}}{1 - \alpha z^{-1}}, \quad (23)$$

where $|\alpha| < 1$ is a correlation parameter that controls the eigenvalue spread of the input signals: $\alpha = 0$ corresponds to an eigenvalue spread close to 1, and the spread increases with the value of $\alpha$.
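A correlated input of this kind can be generated, for example, with scipy.signal.lfilter, which realizes the first-order recursion behind (23); the function below is an illustrative sketch under that choice.

```python
import numpy as np
from scipy.signal import lfilter

def colored_input(n_samples, alpha, rng=None):
    """Pass unit-variance white Gaussian noise through the coloring
    filter H(z) = sqrt(1 - alpha^2) / (1 - alpha z^{-1}) of (23).
    The sqrt(1 - alpha^2) gain keeps the output variance at 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    white = rng.standard_normal(n_samples)
    return lfilter([np.sqrt(1.0 - alpha**2)], [1.0, -alpha], white)
```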

White Gaussian noise at SNR = 30 dB is added to the input signal $u(n)$ to obtain the noisy signal $\tilde{u}(n)$, and the output signal $\tilde{s}(n)$ is obtained by corrupting the output $s(n)$ with additive white Gaussian noise, also at SNR = 30 dB. The proposed TLMS algorithm is compared with the LMS and NLMS algorithms of [1] in estimating the coefficient vector of an FIR filter of length $N = 10$. The least squares misalignment $\|w_n - w_{opt}\|$ is compared with the total least squares misalignment $\|w_n - w_{opt}\| / \sqrt{\tilde{w}_n^T \tilde{w}_n}$, and simulation results are recorded over 2000 iterations with an ensemble average of 1000 independent runs.
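A minimal driver in the spirit of this setup, reusing the sketches above (colored_input, make_noisy_data, tlms), might look as follows; the paper's 1000-run ensemble averaging and plotting are simplified away, and the noise scaling is approximate since the colored input is not exactly white.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10
w_true = rng.standard_normal(N)
w_true /= np.linalg.norm(w_true)        # unit-norm system for simple SNR math

u = colored_input(2000 + N - 1, alpha=0.3, rng=rng)   # correlated input, cf. (23)

sigma = 10.0 ** (-30.0 / 20.0)          # 30 dB SNR on (approximately) unit-power signals
A_noisy, s_noisy = make_noisy_data(w_true, u, sigma_du=sigma,
                                   sigma_ds=sigma, rng=rng)

w_hist = tlms(A_noisy, s_noisy, N, mu=0.25)
misalignment = np.linalg.norm(w_hist - w_true, axis=1)
print("misalignment after 2000 iterations:", misalignment[-1])
```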

4.1. Convergence Behavior Corresponding to the Stepsize Parameter $\mu$. Although the TLMS algorithm converges for all values of $\mu$ satisfying (22), steady-state convergence of the TLMS algorithm is observed when the stepsize parameter $\mu$ is a power of 2. Figures 3(b)-3(d) show learning curves of the total misalignment of the TLMS algorithm corresponding to $\mu = 0.5$, 0.25, 0.125, and 0.0625, and it is observed that robustness increases uniformly with the value of $\mu$. On the other hand, if $\mu$ is chosen randomly, the change in convergence behavior is also random, though Figure 3(a) shows that the algorithm still converges.

4.2. Convergence Behavior Corresponding to the Correlation Parameter $\alpha$. To check the effect of changes in the correlation parameter $\alpha$ on the steady-state convergence behavior of the TLMS algorithm, several simulations are presented in Figure 3, each showing four learning curves of the total misalignment of the TLMS algorithm corresponding to $\mu = 0.5$, 0.25, 0.125, and 0.0625. In the first two simulations, Figures 3(a) and 3(b), $\alpha = 0.3$; it is 0.6 in Figure 3(c) and 0.9 in Figure 3(d). These curves show that increasing the correlation of the data signals does not affect the steady-state performance of the algorithm: although the convergence speed slows down, all the curves converge to the optimal solution.

4.3. Comparison. Figure 4 compares the misalignment of the three algorithms, that is, the LMS, NLMS, and TLMS algorithms. The first two compute a least squares solution of the adaptive filtering problem, while the third computes the TLS solution of the adaptive total least squares problem. Taking $\alpha = 0.3$, the stepsize parameter is chosen as 0.015 for the LMS algorithm, 0.3 for the NLMS algorithm, and 0.25 for the TLMS algorithm. The results of this simulation show that the TLMS algorithm continues to improve as the iterations proceed and yields a better solution of the adaptive TLS problem.

5. Conclusion

In this paper, an efficient TLMS algorithm is presented for the total least squares solution of the adaptive filtering problem. The proposed algorithm is derived using a cost function of weighted instantaneous error signals and an efficient computation of misalignment in terms of mean square deviation. The TLMS algorithm is better able to handle perturbations of both input and output signals, because it is derived specifically for that setting. Since in real-life problems both input and output signals are contaminated by noise, the TLMS algorithm has wide applicability. Convergence analysis shows that the proposed algorithm converges globally, provided that the stepsize parameter is chosen appropriately. Furthermore, it is computationally simple, requiring only $O(N)$ complexity, while other algorithms for TLS problems either require higher complexity or are sensitive to the correlation properties of the data signals.

http://dx.doi.org/10.1155/2014/625280

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous referees for their valuable comments, appreciation, and recommendation to publish this paper. This work is supported by School of Mathematical Sciences, Universiti Sains Malaysia.

References

[1] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, John Wiley & Sons, New York, NY, USA, 1998.

[2] N. A. Ahmad, "Comparative study of iterative search method for adaptive filtering problems," in Proceedings of the International Conference on Applied Mathematics (ICAM '05), Bandung, Indonesia, August 2005.

[3] S. Haykin, Adaptive Filter Theory, Prentice-Hall, 2nd edition, 1991.

[4] S. Van Huffel and J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis, SIAM, Philadelphia, Pa, USA, 1991.

[5] G. H. Golub and C. F. Van Loan, "An analysis of the total least squares problem," SIAM Journal on Numerical Analysis, vol. 17, no. 6, pp. 883-893, 1980.

[6] M. A. Rahman and K.-B. Yu, "Total least squares approach for frequency estimation using linear prediction," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 35, no. 10, pp. 1440-1454, 1987.

[7] Y. Hua and T. K. Sarkar, "On the total least squares linear prediction method for frequency estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 12, pp. 2186-2189, 1990.

[8] V. F. Pisarenko, "The retrieval of harmonics from a covariance function," Geophysical Journal of the Royal Astronomical Society, vol. 33, no. 3, pp. 347-366, 1973.

[9] P. A. Thompson, "An adaptive spectral analysis technique for unbiased frequency estimation in the presence of white noise," in Proceedings of the 13th Asilomar Conference on Circuits, Systems & Computers, pp. 529-533, 1980.

[10] D.-Z. Feng, "Efficient recursive total least squares algorithm for FIR adaptive filtering," IEEE Transactions on Signal Processing, vol. 42, no. 2, pp. 268-280, 1994.

[11] D.-Z. Feng, X.-D. Zhang, D.-X. Chang, and W. X. Zheng, "A fast recursive total least squares algorithm for adaptive FIR filtering," IEEE Transactions on Signal Processing, vol. 52, no. 10, pp. 2729-2737, 2004.

[12] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, New York, NY, USA, 1989.

[13] B. E. Dunne and G. A. Williamson, "Stable simplified gradient algorithms for total least squares filtering," in Proceedings of the 34th Asilomar Conference on Signals, Systems and Computers, pp. 1762-1766, November 2000.

Shazia Javed (1,2) and Noor Atinah Ahmad (1)

(1) School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Penang, Malaysia

(2) Lahore College for Women University, Lahore 54000, Pakistan

Correspondence should be addressed to Shazia Javed; shaziafateh@hotmail.com

Received 30 August 2013; Accepted 5 December 2013; Published 3 February 2014

Academic Editors: S. Bourennane and J. Marot

TABLE 1: Total LMS (TLMS) algorithm for adaptive filtering.

Algorithm                                                                ×/÷       +/−

Initialization
    $w_0 = 0$                                                            ...       ...
    $\tilde{w}_0 = [w_0^T, -1]^T$                                        ...       ...

Update, for $n = 0, 1, 2, \ldots$
    $\tilde{z}_n = [\tilde{a}_n^T, \tilde{s}(n)]^T$                      ...       ...
    $e(n) = \tilde{w}_n^T \tilde{z}_n$                                   N + 1     N
    $\mathrm{normsq} = \tilde{w}_n^T \tilde{w}_n$                        N + 1     N
    $\tilde{w}_{n+1} = \tilde{w}_n - \frac{2\mu}{\mathrm{normsq}}
      \left[e(n)\tilde{z}_n - \frac{e^2(n)}{\mathrm{normsq}}
      \tilde{w}_n\right]$                                                3N + 7    2N + 2
    $w_{n+1} = -\tilde{w}_{n+1}(1{:}N)/\tilde{w}_{n+1}(N+1)$             N         ...

Total                                                                    6N + 9    4N + 2