#### Adaptive Signal Processing (matias.ppt, text version)

##### Title

S-88.4221 Postgraduate Seminar on Signal Processing, 23.11.2005
Adaptive Signal Processing
Matias With, Signal Processing Laboratory, HUT

This presentation is based on: Jacob Benesty and Yiteng Huang (eds.), "Adaptive Signal Processing: Applications to Real-World Problems", Springer, 2003.

##### Outline

1. Introduction
2. Basic concepts of adaptive filters
3. Wiener filter
4. LMS and NLMS algorithms
5. RLS algorithm
6. An example application
7. Conclusions

##### Introduction

In many applications requiring filtering:

- the necessary frequency response is not known beforehand
- the necessary frequency response varies with time

An adaptive filter is a digital filter that automatically changes its characteristics (e.g. its frequency response) by optimizing its internal parameters.

In wireless communications, adaptive signal processing is used in many ways, for example:

- Channel equalization
- Channel estimation
- Voice coding
- Interference cancelling
##### Basics of adaptive filters

Two major classes of adaptive filter realizations:

- Finite Impulse Response (FIR) filters: typically a non-recursive structure
- Infinite Impulse Response (IIR) filters: a recursive structure

Another classification:

- Linear adaptive filters
- Nonlinear adaptive filters

This lecture concentrates on adaptive linear FIR filters, because:

- They are very popular
- They are easy to understand

##### Adaptive FIR filter

- Output signal: y(k) = wᴴ(k) x(k)
- Input signal vector: x(k) = [x(k) x(k−1) ... x(k−N)]ᵀ
- Desired signal d(k) (the choice depends on the application)
- Error signal: e(k) = d(k) − y(k)

The coefficient vector w(k) is updated on each iteration. The output signal y(k) is an estimate of d(k).

##### How to choose the desired signal, example 1

System identification:

- The desired signal is the output of the unknown system
- When the output MSE is minimized, the adaptive filter represents a model of the unknown system

Figure from [1]

##### How to choose the desired signal, example 2

Channel equalization:

- The desired signal is a delayed version of the original signal: initially a training signal, later the received data (feedback after decision)
- When the output MSE is minimized, the adaptive filter represents an inverse model (equalizer) of the channel

Figure from [1]

##### How to choose the desired signal, example 3

Signal enhancement:

- Desired signal: a signal x(k) corrupted by noise n1(k)
- Input signal: another noise signal n2(k) that correlates with n1(k)
- The error signal e(k) contains the part of the desired signal that does not correlate with the input signal, i.e. an enhanced version of x(k)

Figure from [1]

##### The goal of an adaptive filter

Consider a two-tap filter with weights w = [w0 w1]. The goal is to find w0 and w1 such that the "output error" is minimized in some way. The objective function F[e(k)] is an exact formulation of the output error.

Figure from [1]

##### Updating the coefficient vector

An adaptive algorithm updates the coefficient vector w(k) such that the objective function F[e(k)] is minimized. Typical objective functions:

- Mean-Square Error (MSE): F[e(k)] = E[|e(k)|²]
- Least Squares (LS): F[e(k)] = (1/(k+1)) Σᵢ₌₀ᵏ e²(k−i)
- Weighted Least Squares (WLS): F[e(k)] = Σᵢ₌₀ᵏ λⁱ e²(k−i), λ < 1
- Instantaneous Squared Value (ISV): F[e(k)] = e²(k)

Objective functions are always non-negative. Optimality: when y(k) = d(k), F[e(k)] = F[0] = 0.
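As a small illustration (my addition, not from the slides), the four objective functions above can be evaluated on a stored error sequence; the weighting factor λ below is an arbitrary example value:

```python
import numpy as np

def mse(e):
    # Mean-Square Error: the expectation approximated by a sample mean
    return np.mean(np.abs(e) ** 2)

def ls(e, k):
    # Least Squares at time k: (1/(k+1)) * sum_{i=0..k} e^2(k - i)
    return np.sum(e[: k + 1] ** 2) / (k + 1)

def wls(e, k, lam=0.95):
    # Weighted Least Squares: sum_{i=0..k} lam^i * e^2(k - i), recent errors count more
    past = e[: k + 1][::-1]            # past[i] = e(k - i)
    return np.sum(lam ** np.arange(k + 1) * past ** 2)

def isv(e, k):
    # Instantaneous Squared Value: only the current squared error
    return e[k] ** 2
```

All four are non-negative and reach zero exactly when the error sequence is zero, matching the optimality condition above.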
##### Wiener filter

The optimal weight vector w₀ for the MSE objective function is called the (FIR) Wiener filter. It minimizes the MSE between y(k) and d(k). Solving for the Wiener filter w₀ is easy:

- The MSE objective function for a filter with fixed coefficients w is
  F[e(k)] = E[e²(k)] = E[d²(k)] − 2wᵀp + wᵀRw
- To minimize it, we need the gradient vector g_w:
  g_w = ∂F[e(k)]/∂w = −2p + 2Rw
- Set the gradient vector to zero and solve for w:
  w₀ = R⁻¹p

Here R = E[x(k) xᵀ(k)] is the input signal correlation matrix and p = E[d(k) x(k)] is the cross-correlation vector between the desired and input signals.
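A minimal numerical sketch of the result above (my addition; sample averages stand in for the expectations, and the unknown system h_true, the signal length, and the noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                   # the filter has N + 1 taps
n_samples = 5000
x = rng.standard_normal(n_samples)

# Unknown system to identify: d(k) is its output plus a little noise
h_true = np.array([0.8, -0.4, 0.2, 0.1, 0.05])
d = np.convolve(x, h_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

# Stack delayed copies so column k is x(k) = [x(k) x(k-1) ... x(k-N)]^T
X = np.stack([np.concatenate([np.zeros(i), x[: n_samples - i]]) for i in range(N + 1)])

R = X @ X.T / n_samples                 # sample estimate of E[x x^T]
p = X @ d / n_samples                   # sample estimate of E[d x]
w_o = np.linalg.solve(R, p)             # Wiener solution w_o = R^{-1} p
print(np.round(w_o, 2))                 # close to h_true
```

In practice R can be ill-conditioned, so `np.linalg.lstsq` is often preferred over an explicit solve.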
##### LMS algorithm

Idea: approach the Wiener solution by searching in the direction opposite to the gradient vector g_w.

- A "steepest descent" based algorithm
- A step size µ is used to control the rate of convergence

Remember, the gradient of the MSE function is
g_w = ∂F[e(k)]/∂w = −2p + 2Rw

Use instantaneous estimates of R = E[x(k) xᵀ(k)] and p = E[d(k) x(k)]. The resulting algorithm is the least mean-square (LMS) algorithm:

- Update equation: w(k+1) = w(k) + 2µ e(k) x(k)
- Very simple, computational complexity O[N]
- Probably the most used adaptive algorithm

##### LMS, convergence path example

Two-tap filter, system identification problem, weights w = [w0 w1]. Convergence follows the gradient of the MSE surface.

Figure from [1]

##### LMS step size

If the step size µ is too large, there is no convergence. The maximum µ that provides convergence depends on the properties of the input signal.
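The LMS update can be sketched as follows (my illustration, not from the slides; the toy identification setup and the step size are arbitrary choices):

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """Run the LMS update w(k+1) = w(k) + 2*mu*e(k)*x(k); return w and e."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    xbuf = np.zeros(n_taps)             # x(k) = [x(k) x(k-1) ... x(k-N)]^T
    for k in range(len(x)):
        xbuf = np.concatenate(([x[k]], xbuf[:-1]))
        y = w @ xbuf                    # filter output y(k)
        e[k] = d[k] - y                 # error e(k) = d(k) - y(k)
        w = w + 2 * mu * e[k] * xbuf    # LMS coefficient update
    return w, e

# Toy system identification: recover a 3-tap unknown system
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
h = np.array([0.5, -0.3, 0.1])
d = np.convolve(x, h)[: len(x)]
w, e = lms(x, d, n_taps=3, mu=0.01)
print(np.round(w, 3))                   # approaches h
```

Because the toy data here are noiseless, the weights converge essentially exactly; with noisy data they would fluctuate around the Wiener solution with a misadjustment set by µ.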
When properly selected, µ controls the trade-off between convergence speed and misadjustment:

- Small µ: slower convergence, less misadjustment
- Large µ: faster convergence, more misadjustment

Figure from [1]

##### The NLMS algorithm

Idea: adjust the LMS step size on each iteration. In the normalized LMS (NLMS), the step size µ is normalized by the energy of the data vector:

µ_NLMS = α / (xᵀx + ε)

- 0 < α < 2 is a fixed convergence factor; it controls misadjustment and convergence speed
- ε is a small number used to avoid very large step sizes

Effects of the normalization:

- Makes the algorithm independent of signal scaling
- Usually converges much faster than the LMS
- Computational complexity slightly higher than the LMS

##### The RLS algorithm

Remember the optimal MMSE solution, the Wiener filter: w₀ = R⁻¹p. The LMS approaches the Wiener solution along its gradient. How about solving it directly?

- We need estimates of R and p
- Taking the inverse of R on each iteration is costly

The Recursive Least Squares (RLS) algorithm solves these issues:

- Recursive estimates are used for R and p
- The convergence speed is controlled by the forgetting factor λ, 0 < λ < 1:
  R(k) = x(k) xᵀ(k) + λ R(k−1)
  p(k) = x(k) d(k) + λ p(k−1)
- Using these estimates, the weight vector update becomes
  w(k) = w(k−1) + e(k) R⁻¹(k) x(k)
- The matrix inversion lemma is used to update the inverse of R, resulting in lower computational complexity

##### Properties of the RLS algorithm

Provides a (weighted) least-squares solution; in other words, it finds the minimum of the WLS objective function.

Fast convergence:

- An order of magnitude faster than the LMS

High computational complexity:

- Basic version: O[N²]
- When the input consists of delayed versions of the same signal (such as our x), the computational complexity can drop to O[N]
- In the fast O[N] algorithms the weight vector w is typically not available, i.e. you only get e(k). This is a problem in certain applications (e.g. system identification)

##### RLS, convergence path example

Two-tap filter, channel equalization problem, weights w = [w0 w1]. Convergence heads straight towards the Wiener solution (noise causes fluctuations). Large steps are taken when the coefficients are far from the optimum solution.

Figure from [1]

##### Parallel/pipelined RLS implementations

The problem is divided into several parallel subproblems:

- Lower clock frequency
- More surface area
- Lower power consumption

The overall complexity is still O[N²].

Figure from [2]

##### An example: adaptive noise cancellation

A signal enhancement example from the book "Adaptive Signal Processing". Assumptions:

- The noise signals v1(n) and v(n) are uncorrelated with the speech signal s(n)
- The auxiliary microphone recording v(n) is well isolated from the speech source
- v1(n) and v(n) are at least partially coherent

##### Adaptive noise cancellation

The theoretical performance of adaptive noise cancellation depends on:

- The SNR at the primary microphone
- The coherence between the noise at the primary and secondary microphones

##### Adaptive noise cancellation: convergence

- LMS: slow convergence
- NLMS: faster convergence
- RLS: fast convergence

##### Adaptive noise cancellation: performance

Less than 0 dB of noise reduction translates to noise enhancement. RLS is very close to optimal.
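The convergence ranking above can be reproduced in a small simulation (my sketch, not the book's experiment; the filter length, α, λ, and the noise path are arbitrary choices). The RLS routine propagates P(k) = R⁻¹(k) with the matrix inversion lemma, as described earlier:

```python
import numpy as np

def nlms(x, d, n_taps, alpha=0.1, eps=1e-6):
    # NLMS: LMS with the step size normalized by the data-vector energy
    w, e, xbuf = np.zeros(n_taps), np.zeros(len(x)), np.zeros(n_taps)
    for k in range(len(x)):
        xbuf = np.concatenate(([x[k]], xbuf[:-1]))
        e[k] = d[k] - w @ xbuf
        w = w + (alpha / (xbuf @ xbuf + eps)) * e[k] * xbuf
    return w, e

def rls(x, d, n_taps, lam=0.999, delta=100.0):
    # RLS: P(k) tracks the inverse correlation matrix via the inversion lemma
    w, e, xbuf = np.zeros(n_taps), np.zeros(len(x)), np.zeros(n_taps)
    P = delta * np.eye(n_taps)                    # large initial P: weak prior on R
    for k in range(len(x)):
        xbuf = np.concatenate(([x[k]], xbuf[:-1]))
        e[k] = d[k] - w @ xbuf                    # a priori error
        g = P @ xbuf / (lam + xbuf @ P @ xbuf)    # gain vector
        w = w + g * e[k]
        P = (P - np.outer(g, xbuf @ P)) / lam     # inversion-lemma update of P
    return w, e

# Noise cancellation: primary mic d(n) = s(n) + v1(n), reference mic input v(n)
rng = np.random.default_rng(2)
n = 4000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))       # stand-in for the speech signal
v = rng.standard_normal(n)                        # reference noise v(n)
v1 = np.convolve(v, [0.7, -0.5, 0.3])[:n]         # coherent noise at the primary mic
d = s + v1
for name, algo in [("NLMS", nlms), ("RLS", rls)]:
    _, e = algo(v, d, 8)
    # After convergence e(n) ~ s(n); report the residual noise power
    print(name, np.round(np.mean((e[n // 2:] - s[n // 2:]) ** 2), 4))
```

With these settings RLS typically settles within a few times n_taps samples, while NLMS needs roughly n_taps/α iterations, consistent with the slow/faster/fast ranking on the slide.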
##### Adaptive noise cancellation: complexity

In the example, filters of order N = 10 were used. Number of multiplications needed by the algorithms (M = N + 1 is the number of filter taps):

| Algorithm | #mult       | #mult, M = 11 |
|-----------|-------------|---------------|
| LMS       | 2M + 1      | 23            |
| NLMS      | 3M          | 33            |
| RLS       | 3M(3 + M)/2 | 231           |
| FQR-RLS   | 20M + 5     | 225           |

FQR-RLS is an example of the fast RLS algorithms with computational complexity of order O[N]. In adaptive noise cancellation, fast RLS algorithms can be used since we do not need to know the coefficient vector w.

##### Another adaptive noise cancellation example

An exercise I made on the course "RLS adaptive filtering". More or less the same scenario as presented before. Filter order N = 15. Two adaptive algorithms were used:

- LMS
- RLS

##### Another adaptive noise cancellation example (results)

The starting point: the original signal in noise, x(n) = s(n) + v1(n), together with a noise measurement v(n) from another location.

[Plots: amplitude versus sample index (0 to 10000) for the noisy input signal, the signal after adaptive noise cancellation with the LMS algorithm, and the signal after adaptive noise cancellation with the RLS algorithm]

##### Conclusions

Three adaptive filter algorithms were reviewed:

- LMS: fairly slow convergence, low computational complexity
- NLMS: slightly faster convergence than the LMS, 50% higher computational complexity than the LMS
- RLS: fast convergence, high computational complexity

##### Homework

Show that the MSE objective function for the Wiener filter is

F[e(k)] = E[e²(k)] = E[d²(k)] − 2wᵀp + wᵀRw

Hint: see slides 6 (adaptive FIR filter) and 13 (Wiener filter).

##### References

[1] Paulo S. R. Diniz, "Adaptive Filtering: Algorithms and Practical Implementation", second edition, Kluwer Academic Publishers, 2002.
[2] L. Gao and K. K. Parhi, "Hierarchical pipelining and folding of QRD-RLS adaptive filters", International Conference on Acoustics, Speech, and Signal Processing (ICASSP), June 2000, vol. 6, pp. 3283-3286.
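As a numeric sanity check on the homework identity (my addition; the toy signal model and the fixed coefficient vector below are arbitrary), the two sides can be compared with sample averages standing in for the expectations:

```python
import numpy as np

rng = np.random.default_rng(3)
n, taps = 200000, 3
X = rng.standard_normal((n, taps))            # rows are input vectors x(k)
d = X @ np.array([0.6, -0.2, 0.3]) + 0.1 * rng.standard_normal(n)
w = np.array([0.4, 0.1, -0.2])                # arbitrary fixed coefficients

e = d - X @ w                                 # e(k) = d(k) - w^T x(k)
lhs = np.mean(e ** 2)                         # E[e^2(k)]

R = X.T @ X / n                               # sample E[x x^T]
p = X.T @ d / n                               # sample E[d x]
rhs = np.mean(d ** 2) - 2 * w @ p + w @ R @ w # E[d^2] - 2 w^T p + w^T R w
print(np.isclose(lhs, rhs))                   # → True
```

With sample averages the identity holds exactly, not just asymptotically: expanding E[(d − wᵀx)²] into the three terms is pure algebra.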
