
S-88.4221 Postgraduate Seminar on Signal Processing 23.11.2005

Adaptive Signal Processing

Matias With, Signal Processing Laboratory, HUT

This presentation is based on: Jacob Benesty and Yifeng Huang, "Adaptive Signal Processing: Applications to Real-World Problems", Springer, 2003

Outline

1. Introduction
2. Basic concepts of adaptive filters
3. Wiener filter
4. LMS and NLMS algorithms
5. RLS algorithm
6. An example application
7. Conclusions


Introduction

In many applications requiring filtering:

the necessary frequency response is not known beforehand the necessary frequency response varies with time An adaptive filter is a digital filter that automatically changes its characteristics (e.g. frequency response) by optimizing the internal parameters In wireless communications, adaptive signal processing is used in many ways, for example: Channel equalization Channel estimation Voice coding Interference cancelling



Basics of adaptive filters

Two major classes of adaptive filter realizations:

Finite Impulse Response (FIR) filters
» Typically a non-recursive structure
Infinite Impulse Response (IIR) filters
» Recursive structure

Another classification:

Linear adaptive filters Nonlinear adaptive filters

In this lecture we will concentrate on adaptive linear FIR filters, because:

They are very popular
They are easy to understand


Adaptive FIR filter

Output signal y(k) = wH(k)x(k)
Input signal vector x(k) = [x(k) x(k-1) ... x(k-N)]T
Desired signal d(k) (choice depends on the application)

Error signal e(k)=d(k)-y(k)

Coefficient vector w(k) is updated on each iteration
The output signal y(k) is an estimate of d(k)
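As a concrete illustration of these relations, a minimal single-iteration sketch in Python, assuming real-valued signals so that wH reduces to wT (all numeric values are hypothetical):

```python
import numpy as np

# One iteration of an adaptive FIR filter, real-valued signals assumed.
w = np.array([0.2, -0.1, 0.05])   # coefficient vector w(k), 3 taps
xk = np.array([1.0, 0.5, -0.3])   # input vector x(k) = [x(k) x(k-1) x(k-2)]^T
dk = 0.12                         # desired sample d(k)

yk = w @ xk                       # output y(k) = w^T(k) x(k)
ek = dk - yk                      # error e(k) = d(k) - y(k)
```

An adaptive algorithm would now use e(k) to update w(k) before the next sample arrives.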


How to choose the desired signal - example 1

System identification:

Desired signal is the output of the unknown system
When the output MSE is minimized, the adaptive filter represents a model for the unknown system

Figure from [1]


How to choose the desired signal - example 2

Channel equalization:

Desired signal is a delayed version of the original signal:
» Initially a training signal
» Later the received data (feedback after decision)
When the output MSE is minimized, the adaptive filter represents an inverse model (equalizer) of the channel

Figure from [1]


How to choose the desired signal - example 3

Signal enhancement:

Desired signal: signal x(k) corrupted by noise n1(k)
Input signal: another noise signal n2(k), which correlates with n1(k)
Error signal e(k) contains the part of the desired signal that does not correlate with the input signal, i.e. an enhanced version of x(k)

Figure from [1]


The goal of an adaptive filter

Two-tap filter, weights w = [w0 w1]
Find w0 and w1 such that the "output error" is minimized in some way:

The objective function F[e(k)] is an exact formulation of the output error

Figure from [1]


Updating the coefficient vector

An adaptive algorithm updates the coefficient vector w(k) such that the objective function F[e(k)] is minimized. Typical objective functions:

Mean-Square Error (MSE): $F[e(k)] = E[|e(k)|^2]$

Least Squares (LS): $F[e(k)] = \frac{1}{k+1} \sum_{i=0}^{k} |e(k-i)|^2$

Weighted Least Squares (WLS): $F[e(k)] = \sum_{i=0}^{k} \lambda^i |e(k-i)|^2, \quad \lambda < 1$

Instantaneous Squared Value (ISV): $F[e(k)] = |e(k)|^2$

Objective functions are always non-negative. Optimality: when y(k) = d(k), F[e(k)] = F[0] = 0.



Wiener filter

An optimal weight vector wo for the MSE objective function is called the (FIR) Wiener filter. It minimizes the MSE between y(k) and d(k). Solving for the Wiener filter wo is easy. The MSE objective function for a filter with fixed coefficients w is

$F[e(k)] = E[e^2(k)] = E[d^2(k)] - 2w^T p + w^T R w$

To minimize, we need the gradient vector gw:

$g_w = \nabla_w F[e(k)] = -2p + 2Rw$

Set the gradient vector to zero and solve for w:

$w_o = R^{-1} p$

R = E[x(k) x^T(k)] is the input signal correlation matrix
p = E[d(k) x(k)] is the cross-correlation vector between the desired and input signals
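The closed-form solution above can be sketched numerically. The 2-tap "unknown" system h = [0.5, -0.3] below is an illustrative assumption, not from the slides:

```python
import numpy as np

# Sketch: estimate R and p from data and solve the Wiener equation w_o = R^{-1} p.
rng = np.random.default_rng(0)
x = rng.standard_normal(10000)              # white input signal x(k)
h = np.array([0.5, -0.3])                   # hypothetical system to model
d = np.convolve(x, h)[:len(x)]              # desired signal d(k)

# Tap-input matrix: row k is x(k) = [x(k), x(k-1)]
X = np.stack([x, np.concatenate(([0.0], x[:-1]))], axis=1)

R = X.T @ X / len(x)                        # sample estimate of E[x(k) x^T(k)]
p = X.T @ d / len(x)                        # sample estimate of E[d(k) x(k)]
w_o = np.linalg.solve(R, p)                 # Wiener solution w_o = R^{-1} p
```

Because the data are noiseless here, w_o recovers h; with observation noise, w_o would still be the MSE-optimal filter.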



LMS algorithm

Idea: approach the Wiener solution by searching in the opposite direction of the gradient vector gw
A "steepest-descent" based algorithm
Step size µ is used to control the rate of convergence
Remember, the gradient of the MSE function is given as

$g_w = \nabla_w F[e(k)] = -2p + 2Rw$

Use instantaneous estimates of R and p: $\hat{R}(k) = x(k)x^T(k)$, $\hat{p}(k) = d(k)x(k)$
The resulting algorithm: the least mean-square (LMS) algorithm
Update equation: $w(k+1) = w(k) + 2\mu e(k) x(k)$
Very simple, computational complexity O[N]
Probably the most used adaptive algorithm
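A minimal sketch of the LMS loop on a toy system-identification problem; the system h = [0.5, -0.3] and the step size are illustrative assumptions:

```python
import numpy as np

# LMS update w(k+1) = w(k) + 2*mu*e(k)*x(k) identifying a hypothetical 2-tap system.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)               # white input signal
h = np.array([0.5, -0.3])                   # "unknown" system
d = np.convolve(x, h)[:len(x)]              # desired signal d(k)

mu = 0.01                                   # step size
w = np.zeros(2)                             # coefficient vector w(k)
xv = np.zeros(2)                            # tap-delay line x(k)
for k in range(len(x)):
    xv = np.concatenate(([x[k]], xv[:-1]))  # shift in the newest sample
    e = d[k] - w @ xv                       # e(k) = d(k) - w^T(k) x(k)
    w = w + 2 * mu * e * xv                 # LMS coefficient update
```

After the loop, w has converged to a model of h; with a larger mu it would converge faster but fluctuate more around the optimum.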


LMS convergence path example

Two-tap filter, system identification problem, weights w = [w0 w1]
Convergence along the gradient of the MSE surface

Figure from [1]


LMS step size

If the step size µ is too large, there is no convergence. The maximum µ that provides convergence depends on the input signal's properties. When properly selected, µ controls the convergence speed and the misadjustment:

Small µ: slower convergence, less misadjustment

Large µ: faster convergence, bigger misadjustment

Figure from [1]


The NLMS algorithm

Idea: adjust the LMS step size on each iteration
Normalized LMS (NLMS): the step size is normalized by the energy of the data vector:

$\mu_{\mathrm{NLMS}}(k) = \frac{\alpha}{x^T(k)\,x(k) + \varepsilon}$

$0 < \alpha < 2$ is a fixed convergence factor; it controls misadjustment and convergence speed
$\varepsilon$ is a small number used to avoid very large step sizes
Effects of the normalization:
Makes the algorithm independent of signal scaling
Usually converges much faster than the LMS
Computational complexity slightly higher than the LMS
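A sketch of the NLMS loop; the deliberately scaled input illustrates the independence of signal scaling (the values of alpha, eps and the toy system are illustrative assumptions):

```python
import numpy as np

# NLMS: step size alpha / (x^T x + eps), identifying a hypothetical 2-tap system.
rng = np.random.default_rng(2)
x = 5.0 * rng.standard_normal(3000)         # scaled input: NLMS copes unchanged
h = np.array([0.5, -0.3])                   # "unknown" system
d = np.convolve(x, h)[:len(x)]

alpha, eps = 0.5, 1e-6                      # 0 < alpha < 2; small eps
w = np.zeros(2)
xv = np.zeros(2)
for k in range(len(x)):
    xv = np.concatenate(([x[k]], xv[:-1]))
    e = d[k] - w @ xv                       # a priori error e(k)
    w = w + (alpha / (xv @ xv + eps)) * e * xv   # normalized step
```

With a fixed-step LMS, the 5x input scaling would force a much smaller mu to stay stable; NLMS absorbs it in the normalization.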



The RLS algorithm

Remember the optimal MMSE solution, the Wiener filter: $w_o = R^{-1}p$
The LMS approaches the Wiener solution along its gradient
How about solving it directly? We need estimates of R and p
Taking the inverse of R on each iteration is costly
The Recursive Least Squares (RLS) algorithm solves these issues: recursive estimates are used for R and p, and the convergence speed is controlled by the forgetting factor $0 < \lambda < 1$:

$\hat{R}(k) = x(k)\,x^T(k) + \lambda \hat{R}(k-1)$
$\hat{p}(k) = x(k)\,d(k) + \lambda \hat{p}(k-1)$

By using these estimates, the weight vector update becomes

$w(k) = w(k-1) + e(k)\,\hat{R}^{-1}(k)\,x(k)$

The matrix inversion lemma is used to update the inverse of R, resulting in a lower computational complexity
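The recursion with the matrix-inversion-lemma update of P(k), the running inverse of the correlation estimate, can be sketched as follows (the forgetting factor, the initialization of P and the toy 2-tap system are illustrative choices):

```python
import numpy as np

# RLS with the matrix inversion lemma: P(k) tracks the inverse correlation estimate.
rng = np.random.default_rng(3)
x = rng.standard_normal(500)
h = np.array([0.5, -0.3])                   # hypothetical "unknown" system
d = np.convolve(x, h)[:len(x)]

lam = 0.99                                  # forgetting factor
P = 100.0 * np.eye(2)                       # P(0): large initial value
w = np.zeros(2)
xv = np.zeros(2)
for k in range(len(x)):
    xv = np.concatenate(([x[k]], xv[:-1]))
    e = d[k] - w @ xv                       # a priori error e(k)
    g = P @ xv / (lam + xv @ P @ xv)        # gain vector
    w = w + g * e                           # weight update
    P = (P - np.outer(g, xv @ P)) / lam     # matrix inversion lemma update of P
```

This is the basic O[N2]-per-iteration form; the fast O[N] variants mentioned on the next slide reorganize this recursion.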


Properties of the RLS algorithm

Provides a (weighted) least-squares solution; in other words, finds the minimum of the WLS objective function
Fast convergence

by an order of magnitude faster than LMS

High computational complexity

Basic version: O[N2]
When the input consists of delayed versions of the same signal (such as our x), the computational complexity can drop to O[N]
In the fast O[N] algorithms, the weight vector w is typically not available, i.e. you only get e(k). This is a problem in certain applications (e.g. system identification)


RLS convergence path example

Two-tap filter, channel equalization problem, weights w = [w0 w1]
Convergence straight towards the Wiener solution (noise causes fluctuations)
Large steps when the coefficients are far from the optimum solution

Figure from [1]


Parallel/pipelined RLS implementations

The problem is divided into several parallel subproblems:

Lower clock frequency, but more surface area
Lower power consumption

Overall complexity still O[N2]

Figure from [2]



An example: adaptive noise cancellation

A signal enhancement example from the book "Adaptive Signal Processing"
Assumptions:
Noise [v1(n) and v(n)] is uncorrelated with the speech signal s(n)
The auxiliary microphone recording v(n) is well isolated from the speech source
v1(n) and v(n) are at least partially coherent
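This setup can be imitated with synthetic signals. Everything below (the sinusoidal stand-in for speech, the 2-tap filter creating the coherent noise, the NLMS parameters) is an illustrative assumption, not the book's actual experiment:

```python
import numpy as np

# Noise cancellation sketch: primary mic = s(n) + v1(n), reference mic = v(n),
# where v1 is a filtered (hence partially coherent) copy of v.
rng = np.random.default_rng(4)
n = 8000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))     # stand-in for the speech s(n)
v = rng.standard_normal(n)                      # reference noise v(n)
v1 = np.convolve(v, [0.7, 0.3])[:n]             # coherent noise at the primary mic
primary = s + v1                                # primary microphone signal

# NLMS canceller: input v(n), desired signal = primary microphone
N, alpha, eps = 8, 0.05, 1e-6
w = np.zeros(N)
xv = np.zeros(N)
e_hist = np.empty(n)
for k in range(n):
    xv = np.concatenate(([v[k]], xv[:-1]))
    e = primary[k] - w @ xv                     # e(n): enhanced speech estimate
    w = w + (alpha / (xv @ xv + eps)) * e * xv
    e_hist[k] = e

# After convergence the residual e(n) should be close to s(n)
mse_after = np.mean((e_hist[n // 2:] - s[n // 2:]) ** 2)
mse_before = np.mean((primary - s) ** 2)        # noise power at the primary mic
```

As on the slide, the canceller works because the filter can only remove the part of the primary signal that correlates with the reference; the speech, being uncorrelated with v(n), survives in e(n).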


Adaptive noise cancellation

Theoretical performance of adaptive noise cancellation depends on:

SNR at the primary microphone
Coherence between the noise at the primary and secondary microphones

[Figure: theoretical noise reduction as a function of SNR and coherence]


Adaptive noise cancellation - convergence

LMS: slow convergence
NLMS: faster convergence
RLS: fast convergence


Adaptive noise cancellation - performance

Less than 0 dB noise reduction translates to noise enhancement

RLS very close to optimal


Adaptive noise cancellation - complexity

In the example, filters of order N = 10 were used
Number of multiplications needed by the algorithms (M = N + 1 is the number of filter taps):

Algorithm   #mult        #mult, M = 11
LMS         2M+1         23
NLMS        3M           33
RLS         3M(3+M)/2    231
FQR-RLS     20M+5        225

FQR-RLS is an example of the fast RLS algorithms, with computational complexity O[N]. In adaptive noise cancellation, fast RLS algorithms can be used since we don't need to know the coefficient vector w


Another adaptive noise cancellation example

An exercise I did in the course "RLS adaptive filtering"
More or less the same scenario as presented before
Filter order N = 15
Two adaptive algorithms used: LMS and RLS


Another adaptive noise cancellation example

The starting point: the original signal in noise, x(n) = s(n) + v1(n)
Noise measurement from another location: v(n)

[Figure: three time-domain plots (amplitude vs. sample index, 0 to 10000): the noisy signal x(n), the signal after adaptive noise cancellation with the LMS algorithm, and the signal after adaptive noise cancellation with the RLS algorithm]


Conclusions

Three adaptive filter algorithms were reviewed

LMS
» Fairly slow convergence
» Low computational complexity
NLMS
» Slightly faster convergence w.r.t. LMS
» 50% higher computational complexity w.r.t. LMS
RLS
» Fast convergence
» High computational complexity


Homework

Show that the MSE objective function for the Wiener filter is:

$F[e(k)] = E[e^2(k)] = E[d^2(k)] - 2w^T p + w^T R w$

Hint: see the adaptive FIR filter slide (slide 6) and the Wiener filter slide (slide 13)
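One route to the result (an outline only, not the full homework write-up) expands e(k) = d(k) - wTx(k) and uses the linearity of expectation, assuming real-valued signals:

```latex
\begin{aligned}
F[e(k)] = E\!\left[e^2(k)\right]
  &= E\!\left[\bigl(d(k) - \mathbf{w}^T \mathbf{x}(k)\bigr)^2\right] \\
  &= E\!\left[d^2(k)\right]
     - 2\,\mathbf{w}^T E\!\left[d(k)\,\mathbf{x}(k)\right]
     + \mathbf{w}^T E\!\left[\mathbf{x}(k)\,\mathbf{x}^T(k)\right]\mathbf{w} \\
  &= E\!\left[d^2(k)\right] - 2\,\mathbf{w}^T \mathbf{p} + \mathbf{w}^T \mathbf{R}\,\mathbf{w}.
\end{aligned}
```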


References

[1] Paulo S. R. Diniz, "Adaptive Filtering: Algorithms and Practical Implementation", 2nd edition, Kluwer Academic Publishers, 2002
[2] L. Gao and K. K. Parhi, "Hierarchical pipelining and folding of QRD-RLS adaptive filters", Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), June 2000, vol. 6, pp. 3283-3286

