
On robustness of Linear Prediction based blind identification

Luc Deneire and Dirk T.M. Slock
Institut EURECOM, 2229 route des Crêtes, B.P. 193,
F-06904 Sophia Antipolis Cedex, FRANCE
{deneire, slock}@eurecom.fr

Abstract

Linear prediction based algorithms have been applied to the multi-channel FIR identification problem. In [11], it was shown that oversampled and/or multiple-antenna received signals may be modeled as low rank AR processes as well as low rank MA processes. Indeed, taking the FIR nature and the singularity of the MA process into account (due to the fact that the number of channels is larger than the number of sources) leads to a finite-order prediction filter (i.e. AR(L < ∞) modeling), which is automatically identified by, e.g., a singular multichannel Levinson algorithm, and can be shown to be robust to AR order overestimation. On the other hand, K. Abed-Meraim and A. Gorokhov derive other robustness properties based on the equation P(z)H(z) = h(0), where P(z) is the prediction filter, H(z) is the channel and h(0) its first coefficient. Although using a P(z) of overestimated order, clever use of this equation leads to robustness of the estimation of H(z) to channel length overestimation. This paper investigates these robustness issues, comparing both methods (and derived methods) to order estimation algorithms for, e.g., subspace-fitting methods. An important point developed hereunder is the implicit order estimation schemes present in linear prediction based methods and their influence on identification performance. Furthermore, we develop a new order estimation method of low computational cost, which gives the channel estimate as a by-product.

1 Introduction

Many batch multichannel identification algorithms based on Second Order Statistics have been developed recently [3, 1]. Among these, the Linear Prediction (LP) based algorithms have the following advantages:

- They are robust to order overestimation. We will study the mechanisms which lead to this robustness and to what extent it holds.
- They are computationally efficient, as will be demonstrated hereunder.
- The weighted LP can be shown to be asymptotically statistically equivalent to Weighted Noise Subspace Fitting with proper parameterization (see companion paper in this conference).
- They are able to identify minimum-phase common zeros among the different channels [12].

All these advantages lead us to think that LP methods are good candidates for blind channel identification, at least as a startup method.

LP methods consist of two main parts: the first identifies the noiseless equivalent AR model, the second deduces the channel from the LP filter coefficients. The nice property of LP is that both steps are robust, the first to AR model order overestimation and the second to channel order overestimation. We first present the different approaches to LP for channel estimation and identify the robustness features, and then propose a global approach where the first three advantages cited above are combined. The first part of the algorithm consists of a multichannel singular Levinson algorithm with AR order estimation, and the second part of the Weighted LP approach introduced in [4].

2 Data Model

Consider linear digital modulation over a linear channel with additive Gaussian noise. Assume that we have p transmitters at a certain carrier frequency and m antennas receiving mixtures of the signals. We shall assume that m > p. The received signals can be written in baseband as

y_i(t) = \sum_{j=1}^{p} \sum_{k} a_j(k)\, h_{ji}(t - kT) + v_i(t)    (1)

The work of Luc Deneire is supported by the EC by a Marie-Curie Fellowship (TMR program) under contract No ERBFMBICT950155


where the a_j(k) are the transmitted symbols from source j, T is the common symbol period and h_{ji}(t) is the (overall) channel impulse response from transmitter j to receiver antenna i. Assuming the {a_j(k)} and {v_i(t)} to be jointly (wide-sense) stationary, the processes {y_i(t)} are (wide-sense) cyclostationary with period T. If {y_i(t)} is sampled with period T, the sampled process is (wide-sense) stationary. Sampling in this way leads to an equivalent discrete-time representation. We could also obtain multiple channels in the discrete-time domain by oversampling the continuous-time received signals, see [7], [11].

We assume the channels to be FIR. In particular, after sampling we assume the (vector) impulse response from source j to be of length N_j. Without loss of generality, we assume the first non-zero vector impulse response sample to occur at discrete time zero. Let N = \sum_{j=1}^{p} N_j and N_1 = \max_j N_j. The discrete-time received signal can be represented in vector form as

y(k) = \sum_{j=1}^{p} \sum_{i=0}^{N_j-1} h_j(i)\, a_j(k-i) + v(k)
     = \sum_{i=0}^{N_1-1} h(i)\, a(k-i) + v(k)
     = \sum_{j=1}^{p} H_j\, A^j_{N_j}(k) + v(k) \;=\; H\, A_N(k) + v(k)    (2)

y(k) = [\,y_1^H(k) \cdots y_m^H(k)\,]^H,    v(k) = [\,v_1^H(k) \cdots v_m^H(k)\,]^H,
h_j(k) = [\,h_{j1}^H(k) \cdots h_{jm}^H(k)\,]^H,    H_j = [\,h_j(N_j-1) \cdots h_j(0)\,],    H = [\,H_1 \cdots H_p\,],
h(k) = [\,h_1(k) \cdots h_p(k)\,],    H_{ji} = \text{line } i \text{ of } H_j,
a(k) = [\,a_1^H(k) \cdots a_p^H(k)\,]^H,    A^j_n(k) = [\,a_j^H(k-n+1) \cdots a_j^H(k)\,]^H,
A_N(k) = [\,A^{1\,H}_{N_1}(k) \cdots A^{p\,H}_{N_p}(k)\,]^H    (3)

where superscript H denotes Hermitian transpose. We consider additive, temporally and spatially white, Gaussian circular noise v(k) with R_{vv}(k-i) = E\{v(k)\, v^H(i)\} = \sigma_v^2 I_m \delta_{ki}. Assume we receive M samples:

Y_M(k) = \mathcal{T}_M(H)\, A_{N+p(M-1)}(k+M-1) + V_M(k)    (4)

where Y_M(k) = [\,y^H(k) \cdots y^H(k+M-1)\,]^H and V_M(k) is defined similarly, whereas \mathcal{T}_M(H) is the multichannel multiuser convolution matrix of H, with M block lines. Therefore, the structure of the covariance matrix of the received signal Y_M(k) is

R_{YY} = \mathcal{T}_M(H)\, R_{AA}\, \mathcal{T}_M^H(H) + \sigma_v^2\, I_{mM}    (5)

where R_{AA} = E\{ A_{N+p(M-1)}(k)\, A^H_{N+p(M-1)}(k) \}. From here on, we will assume white sources with power \sigma_a^2, i.e. R_{AA} = \sigma_a^2 I.
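As an illustration of the data model (2)-(5), the short sketch below (an assumption for illustration only, not code from the paper; the single-user case p = 1 and the variable names and values are ours) builds the block-Toeplitz convolution matrix T_M(H) and the corresponding noisy covariance matrix R_YY for white sources and white noise.

```python
# Minimal sketch of (2)-(5) for p = 1: block-Toeplitz convolution matrix and covariance.
import numpy as np

rng = np.random.default_rng(0)
m, N, M = 4, 6, 6                      # antennas, channel length, smoothing window (assumed values)
H = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))   # [h(N-1) ... h(0)]

def conv_matrix(H, M):
    """T_M(H): M block lines of m rows, acting on the N + M - 1 stacked symbols."""
    m, N = H.shape
    T = np.zeros((m * M, N + M - 1), dtype=complex)
    for blk in range(M):
        T[blk * m:(blk + 1) * m, blk:blk + N] = H
    return T

T = conv_matrix(H, M)
sigma_a2, sigma_v2 = 1.0, 0.1
R_YY = sigma_a2 * (T @ T.conj().T) + sigma_v2 * np.eye(m * M)        # eq. (5) with R_AA = sigma_a^2 I
```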

3 FIR Zero-Forcing Equalization

We consider an equalizer F(z) such that F(z)H(z) = diag\{z^{-n_1}, \ldots, z^{-n_p}\}, which can be written in the time domain as

F\, \mathcal{T}_L(H) =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 & 0 \\
  &   &   & \ddots &   &   \\
0 & 0 & \cdots & 0 & 1 & 0
\end{bmatrix}    (6)

F(z) is a p × m filter of order L. Equation (6) is a system of p(N + p(L-1)) equations in Lmp unknowns. The minimum length \underline{L} of the FIR equalizer is such that the system (6) is exactly or under-determined. Hence

L \ge \underline{L} = \left\lceil \frac{N-p}{m-p} \right\rceil.    (7)

We assume that H has full rank if N \ge m; otherwise, there is a lack of channel diversity (space or time diversity, according to the manner in which the channels were obtained) and only a subset of the channels is relevant.
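The sketch below (our own illustration, under the assumptions p = 1, a generic random channel, and variable names chosen here) computes the minimum equalizer length of (7) and solves the zero-forcing system (6) for one admissible delay by least squares.

```python
# Hedged sketch of Section 3 for p = 1: minimum-length FIR ZF equalization.
import numpy as np

rng = np.random.default_rng(1)
m, p, N = 4, 1, 6
H = rng.standard_normal((m, N))                          # [h(N-1) ... h(0)], real toy channel
L_min = int(np.ceil((N - p) / (m - p)))                  # eq. (7)

def conv_matrix(H, L):
    m, N = H.shape
    T = np.zeros((m * L, N + L - 1))
    for blk in range(L):
        T[blk * m:(blk + 1) * m, blk:blk + N] = H
    return T

T = conv_matrix(H, L_min)                                # mL x (N + L - 1)
target = np.zeros(N + L_min - 1)
target[N - 1] = 1.0                                      # pick one admissible delay
F, *_ = np.linalg.lstsq(T.T, target, rcond=None)         # F T_L(H) = target, solved via T^T F^T = target
print(np.allclose(F @ T, target, atol=1e-6))             # exact for a generic channel (system is underdetermined)
```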

4 LP and Equalization

4.1 Noise-free Linear Prediction

Consider the problem of predicting y(k) from Y_L(k-1), where Y_L(k-1) is considered noiseless (in "real life", we will use R_{YY} - \sigma_v^2 I). The prediction error can be written as

\tilde{y}(k)|_{Y_L(k-1)} = y(k) - \hat{y}(k)|_{Y_L(k-1)} = P_L\, Y_{L+1}(k)    (8)

with P_L = [\,P_{L,L} \cdots P_{L,1}\ P_{L,0}\,],\ P_{L,0} = I_m. Minimizing the prediction error variance leads to the following optimization problem

\min_{P_L : P_{L,0} = I_m} P_L\, R_{YY}\, P_L^H = \sigma^2_{\tilde{y},L},    (9)

hence

P_L\, R_{YY} = [\,0 \cdots 0\ \ \sigma^2_{\tilde{y},L}\,].    (10)

All this holds for L \ge \underline{L}. As a function of L, the rank profile of \sigma^2_{\tilde{y},L} behaves like

\mathrm{rank}\big(\sigma^2_{\tilde{y},L}\big) =
\begin{cases}
p, & L \ge \underline{L} \\
m - \underline{m} \in \{p+1, \ldots, m\}, & L = \underline{L} - 1 \\
m, & L < \underline{L} - 1
\end{cases}    (11)

where \underline{m} = m\underline{L} - (\underline{L} + N - 1) \in \{0, 1, \ldots, m-1-p\} represents the degree of singularity of R_{YY,\underline{L}}.
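For concreteness, the sketch below (an assumed helper, not from the paper) computes the prediction filter and prediction error covariance of (8)-(10) from a block-partitioned covariance matrix, using a pseudo-inverse so that the singular case is also covered.

```python
# Hedged sketch of (8)-(10): prediction filter from the normal equations.
import numpy as np

def prediction_filter(R, m, L):
    """R is the (L+1)m x (L+1)m covariance of Y_{L+1}(k) = [y(k-L); ...; y(k)].
    Returns P_L = [P_{L,L} ... P_{L,1} | I_m] and sigma^2_{~y,L}."""
    n_past = m * L
    R_past = R[:n_past, :n_past]                 # R_{YY,L}, covariance of the past Y_L(k-1)
    r = R[n_past:, :n_past]                      # cross-covariance of y(k) with the past
    r0 = R[n_past:, n_past:]                     # covariance of y(k)
    Q = r @ np.linalg.pinv(R_past)               # minimum-norm choice if R_{YY,L} is singular
    sigma2 = r0 - Q @ r.conj().T                 # prediction error covariance
    return np.hstack([-Q, np.eye(m)]), sigma2
```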

(3)

4.2 LP and inverse of R_YY

Note that multichannel linear prediction corresponds to a block triangular factorization of (some generalized) inverse of R_{YY}. Indeed,

L_L\, R_{YY}\, L_L^H = D_L,\qquad (L_L)_{i,j} = P_{i-1,\, i-j},\qquad (D_L)_{i,i} = \sigma^2_{\tilde{y},\, i-1}    (12)

where L_L is block lower triangular and D_L is block diagonal. (A slight generalization to the singular case of) the multichannel Levinson algorithm can be used to compute the prediction quantities, and hence the triangular factorization above, in a fast way. In the case where R_{YY,L} is singular, some precaution is necessary in the determination of the last block coefficient P_{L,L}, which is not unique (see [8]). Similar singularities will then arise at higher orders.
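The structure of (12) can be made concrete with the direct (slow) construction below. This is our own illustration under the assumption of a block-Toeplitz R_YY; it is not the fast (singular) Levinson recursion itself, which obtains the same quantities order-recursively.

```python
# Direct construction of L_L and D_L of (12) from successive prediction filters (slow, for illustration).
import numpy as np

def block_ldl_from_prediction(R, m, L):
    """R: (L+1)m x (L+1)m block-Toeplitz covariance (stationary case); returns L_L, D_L."""
    n = m * (L + 1)
    LL = np.zeros((n, n), dtype=R.dtype)
    LL[:m, :m] = np.eye(m)                                # order-0 predictor: P_0 = I_m
    for i in range(1, L + 1):
        Ri = R[:m * (i + 1), :m * (i + 1)]                # equals R_{YY,i+1} by the Toeplitz structure
        R_past, r = Ri[:m * i, :m * i], Ri[m * i:, :m * i]
        Q = r @ np.linalg.pinv(R_past)                    # order-i prediction coefficients
        LL[m * i:m * (i + 1), :m * (i + 1)] = np.hstack([-Q, np.eye(m)])
    D = LL @ R @ LL.conj().T                              # block diagonal up to numerical error
    return LL, D
```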

4.3 Other LP algorithms

Rewriting equation (10) at the correct order as

[\,-Q_L \mid I_m\,]
\begin{bmatrix}
R_{YY,L} & r^H \\
r & r_0
\end{bmatrix}
= [\,0 \cdots 0 \mid \sigma_y^2\,]    (13)

gives

\sigma_y^2 = r_0 - r\, (R_{YY,L})^{-1}\, r^H,\qquad Q_L = r\, (R_{YY,L})^{-1}.    (14)

In this method, the inverse is replaced by a pseudo-inverse in the overestimated case, which gives slightly different results than the singular multichannel Levinson algorithm (this is another choice for the non-unique P(z) in the case of a singular correlation matrix). The drawback of this method is that the pseudo-inverse resorts to a computationally intensive SVD.

The main advantage of this method, besides its robustness to order overestimation, is that it allows the use of a correlation matrix over a smoothing window K (i.e. a correlation matrix of size mK × mK) bigger than L, as opposed to the Levinson method. The correct use of the pseudo-inverse, as will be shown hereunder, corresponds to the use of its signal subspace part and can be seen as resorting to the correlation matrix cleaned from its noise subspace (the whole algorithm has strong connections with [13]).

To get this averaging effect, but without the cost of the SVD, we propose the following "simplified" method which, when the order is correctly estimated, relies on:

[\,0 \cdots 0 \mid -Q_L \mid I_m\,]
\begin{bmatrix}
\vdots \\
R_{\mathrm{rect}} \\
\vdots \\
r
\end{bmatrix}
= [\,0 \cdots 0 \mid \sigma_y^2\,]    (15)

which, solved in a least-squares manner, gives:

Q_L = r\, R_{\mathrm{rect}}^H \big( R_{\mathrm{rect}}\, R_{\mathrm{rect}}^H \big)^{-1}.    (16)

Further investigation should lead to a weighted least-squares solution.
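Under one possible reading of (15)-(16) (an assumption on our part: Q_L is the least-squares solution of Q_L R_rect ≈ r, with R_rect and r taken from a rectangular correlation block), the corresponding solve can be sketched as:

```python
# Hedged sketch of the "simplified" method: least-squares prediction coefficients without an SVD.
import numpy as np

def simplified_lp(R_rect, r):
    """Q_L = argmin_Q || Q R_rect - r ||_F^2, i.e. Q_L = r R_rect^H (R_rect R_rect^H)^{-1}."""
    Q_H, *_ = np.linalg.lstsq(R_rect.conj().T, r.conj().T, rcond=None)   # solves R_rect^H Q^H = r^H
    return Q_H.conj().T
```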

4.4 LP filter as ZF equalizer

Consider the noise-free received signal, which is a singular multivariate MA process; then for L = \underline{L} we have

y(k) + \sum_{i=1}^{L} P_{L,i}\, y(k-i) = \tilde{y}_L(k) = h(0)\, a(k),    (17)

so that the prediction error is a singular white noise. This means that the noise-free received signal y(k) is also a singular multivariate AR process. Hence

P_L = [\,0\ \ P_{\underline{L}}\,],\qquad \sigma^2_{\tilde{y},L} = \sigma^2_{\tilde{y},\underline{L}},\qquad L > \underline{L}.    (18)

Hence the factors L_L and D_L in the factorization (12) become block Toeplitz after \underline{L} lines.

For L = \underline{L}, \sigma_y^2 = \sigma_a^2\, h(0)\, h^H(0) allows us to find h(0) up to a unitary matrix. We see from (8) and from \tilde{y}(k)|_{Y_{\underline{L}}(k-1)} = h(0)\, a(k) that \big(h^H(0)\, h(0)\big)^{-1} h^H(0)\, P_{\underline{L}} is a zero-delay ZF equalizer. Along with the preceding section, this gives us a ZF equalizer of minimum length.
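A tiny sketch (ours, under the assumption that h(0) has full column rank; function and variable names are illustrative) of the zero-delay ZF equalizer built from the prediction quantities:

```python
# Hedged sketch of Section 4.4: zero-delay ZF equalizer from P_L and h(0).
import numpy as np

def zero_delay_zf(P_L, h0):
    """F = (h(0)^H h(0))^{-1} h(0)^H P_L; applied to Y_{L+1}(k), it returns a(k) in the noise-free case."""
    return np.linalg.pinv(h0) @ P_L        # pinv(h0) = (h0^H h0)^{-1} h0^H for full column rank h0
```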

5 LP and Identification

5.1 Identifiability

The channel can be found from

P_{\underline{L}}\, E\big\{ Y_{\underline{L}+1}(k)\, Y_N^H(k+N-1) \big\} = \sigma_a^2\, h(0)\, [\,h^H(0) \cdots h^H(N-1)\,]    (19)

or from P_{\underline{L}}(z)\, H(z) = h(0) \Rightarrow H(z) = P_{\underline{L}}^{-1}(z)\, h(0), using the lattice parameterization for P_{\underline{L}}(z) obtained with the Levinson algorithm.

Consider for a moment that we do not have channel overestimation problems (and that the singularities are properly handled); then P(z) is consistently estimated and the fundamental equation is

P(z)\, H(z) = h(0)    (20)

where h(0) is computed from \sigma_y^2 = \sigma_a^2\, h(0)\, h^H(0) up to a unitary matrix (say U). Obviously, H'(z) = H(z)\,U fulfills (20), which is a fundamental limitation of second order methods. Identification of the unitary matrix must be done by resorting to higher order statistics, by finding the innovations of the AR process and applying a source separation to these. Taking the whiteness of the sources into account then allows proper identification of h(0). Some refinements appear when the orders of the channels of the different users are different [4].
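Since P(z) = I + P_1 z^{-1} + ... + P_L z^{-L}, relation (20) yields the channel taps through the recursion h(n) = -\sum_{i=1}^{\min(n,L)} P_i h(n-i) for n ≥ 1. The sketch below (our illustration; the function and variable names are assumptions) implements that recursion, truncated at a possibly overestimated length N.

```python
# Hedged sketch of H(z) = P^{-1}(z) h(0): recover the channel taps by polynomial division.
import numpy as np

def channel_from_prediction(P_blocks, h0, N):
    """P_blocks = [P_1, ..., P_L] (each m x m), h0 = h(0) (m x p); returns [h(0), ..., h(N-1)]."""
    h = [h0]
    for n in range(1, N):
        acc = np.zeros_like(h0)
        for i, Pi in enumerate(P_blocks, start=1):
            if n - i >= 0:
                acc = acc + Pi @ h[n - i]
        h.append(-acc)                      # h(n) = -sum_{i=1}^{min(n,L)} P_i h(n-i)
    return h
```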

5.2 Weighted Linear Prediction

Alternatively, given h(0) and P_{\underline{L}}, we can solve for the channel impulse response H from P(z)H(z) = h(0), using a weighted least squares procedure [4]:

\hat{H} = \arg\min_{H} \big\| W^{1/2} \big( \mathcal{T}^H(P)\, H - [\,\hat{h}^H(0)\ \ 0 \cdots 0\,]^H \big) \big\|^2    (21)

where \mathcal{T}(P) is the convolution matrix of P(z). We retain here the "practical" algorithm proposed by this author, where the weighting matrix is

W = I \otimes \big( \hat{\sigma}^2_{\tilde{y}} + \hat{\sigma}_v^2\, I_m \big)^{-1},    (22)

which is some weighting between the signal innovations subspace, which would be sufficient if the order were known, and the noise innovations subspace, which yields some robustness through the regularization of the LS equation system.
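A hedged sketch of one way to carry out the WLS solve in (21) follows; the block stacking of the convolution matrix and all names below are our assumptions, not necessarily the exact parameterization of [4].

```python
# Hedged sketch of the weighted least-squares channel estimate of (21).
import numpy as np

def wls_channel(P_blocks, h0, N, W):
    """P_blocks = [P_0 = I_m, P_1, ..., P_L]; W: (L+N)m x (L+N)m Hermitian weighting matrix.
    Stacks the coefficient equations of P(z)H(z) = h(0) as C h = [h(0); 0; ...; 0] and solves in WLS."""
    m = h0.shape[0]
    L = len(P_blocks) - 1
    C = np.zeros(((L + N) * m, N * m), dtype=complex)
    for n in range(L + N):                                 # coefficient of z^{-n} in P(z)H(z)
        for j in range(N):
            if 0 <= n - j <= L:
                C[n * m:(n + 1) * m, j * m:(j + 1) * m] = P_blocks[n - j]
    target = np.zeros(((L + N) * m, h0.shape[1]), dtype=complex)
    target[:m] = h0
    A = C.conj().T @ W @ C                                 # weighted normal equations
    b = C.conj().T @ W @ target
    return np.linalg.solve(A, b)                           # stacked taps [h(0); h(1); ...; h(N-1)]
```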

6 LP Order overestimation

6.1 “Pseudo-inverse” method

In this method, equation (14) becomes:

\sigma_y^2 = r_0 - r\, (R_{YY,L})^{\#}\, r^H,\qquad Q_L = r\, (R_{YY,L})^{\#}    (23)

where # denotes the Moore-Penrose pseudo-inverse.

This pseudo-inverse relies on the correct separation of the signal and noise subspaces of the correlation matrix. When working with the true value of the noise power (or the ML estimate, which can be computed as the mean value of the noise eigenvalues), simulations and some theoretical considerations (e.g. appendix F of [4]) show that an incorrect separation of these subspaces does not worsen the performance of the channel estimation. More precisely, this is due to the following conjecture:

\hat{P} = P + \hat{P}^{\perp} + O(1/\sqrt{L}),\qquad \hat{P}^{\perp}\, R_{YY} = 0.    (24)

Unfortunately, in "practical" situations, simulations do not agree with this. Indeed, when overestimating the channel length (N' > N), the noise power is underestimated:

\hat{\sigma}_v^2 = \frac{1}{mL-(N'+L-1)} \sum_{i=N'+L}^{mL} \lambda_i \;<\; \frac{1}{mL-(N+L-1)} \sum_{i=N+L}^{mL} \lambda_i = \hat{\sigma}^2_{v,ML}    (25)

where the \lambda_i are the eigenvalues of R_{YY} (sorted in decreasing order), which leads to:

\hat{P} = P + \hat{P}^{\perp} + \hat{\sigma}_v^2\, I + O(1/\sqrt{L}),\qquad \hat{P}^{\perp}\, R_{YY} = 0.    (26)

Hence

\hat{\sigma}^2_{\tilde{y}} = \sigma_a^2\, h(0)\, h^H(0) + \hat{\sigma}_v^2\, r\, r^H,    (27)

which will introduce an estimation error on h(0). This error is a function of the channel itself via r and can hardly be specified statistically. Moreover, the error on the prediction filter will also affect the channel estimate via the WLS estimator.
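The noise-power estimate discussed above can be sketched as follows (our illustration; the signal subspace dimension passed in, e.g. N' + L - 1 for a single user, is an assumption on the partitioning):

```python
# Hedged sketch: ML-style noise power estimate as the mean of the "noise" eigenvalues of R_YY.
import numpy as np

def noise_power_estimate(R_YY, signal_dim):
    """Overestimating the channel length increases signal_dim, leaves out the smallest signal
    eigenvalues, and hence underestimates sigma_v^2, as in (25)."""
    eigvals = np.linalg.eigvalsh(R_YY)[::-1]       # eigenvalues in decreasing order
    return eigvals[signal_dim:].mean()
```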

6.2 “Levinson” method

In the Levinson method, in a noiseless context, the prediction coefficients of overestimated order become zero. Without going into the details of the algorithm, the calculations rely on the backward and forward prediction error powers (the latter corresponding to \sigma^2_{\tilde{y}}), and on some pseudo-inverse and the rank of these powers. To get the correct prediction filter, detection of this rank is necessary, even if the type of pseudo-inverse has no influence on the values of the prediction error variances. When this rank is not correctly estimated, there is no result as in the "pseudo-inverse" method (one can consider that the use of the "Levinson" method leads to a minimum-length generalized inverse, while the "pseudo-inverse" method leads to a minimum-norm generalized inverse, which has more robustness virtues). Nevertheless, the values of the prediction error variances will give us a good method for determination of the AR order.

7 Order estimation

7.1 Channel order estimation based on the \lambda_i

In a subspace fitting algorithm, it is natural to try to estimate the order by examining the eigenvalues of the correlation matrix. In the Direction of Arrival context, Wax developed a method described in [14] and based on Information Theoretic criteria. Unfortunately, this method does not apply here. He further worked out a general method in [15], but this method resorts to non-linear minimization and its complexity does not suit our purposes.


7.2 AR order estimation

Order estimation for vector AR(L < ∞) processes is usually based either on test statistics of the ideally zero prediction coefficients or on the prediction error variances. Another class of estimation procedures relies on Information Theoretic considerations and was initiated by Akaike, Rissanen, and Hannan and Quinn (see [9] and references therein). These methods use the prediction error variances. We will use these latter results and adapt them to the singular case.

The classical Information Theoretic Criteria try to minimize:

\mathrm{AIC}(k) = \log\big|\sigma^2_{\tilde{y},k}\big| + \frac{2km^2}{L}
\mathrm{HQ}(k)  = \log\big|\sigma^2_{\tilde{y},k}\big| + \frac{2km^2 \log\log L}{L}
\mathrm{MDL}(k) = \log\big|\sigma^2_{\tilde{y},k}\big| + \frac{km^2 \log L}{2L}    (28)

where k is the candidate order and |·| denotes the determinant. These expressions are based on the maximized log-likelihood of the prediction error, with different bias correction terms based on the number of free parameters (km²) and the length of the data burst used (L). Rissanen's MDL (Minimum Description Length) and Hannan and Quinn's HQ criteria give strongly consistent estimates for true AR(L) processes, while Akaike's AIC criterion has a tendency to overestimate the order. As we will use a subsequent procedure which is robust to order overestimation, we will use the latter in the simulations.

In the singular case, using classic results concerning singular normal multivariate distributions, these criteria extend to our case, but gave poor results, so we propose the following modification.

Recall that \tilde{y}(n) = h(0)\, a(n) where R_{aa} = \sigma_a^2 I. This means that we can reason on the equivalent lower-dimensional process h^H(0)\,\tilde{y}(n) = h^H(0)\,h(0)\, a(n) and its prediction error. Since the sources have been considered uncorrelated, we can further consider the uncorrelated prediction error powers, the total prediction power being the sum of the individual prediction error powers, i.e. the trace of this matrix. Hence, we replace |\sigma^2_{\tilde{y},k}| by \mathrm{trace}(\sigma^2_{\tilde{y},k}).

More involved and heavier related methods are presented (for the non-singular case) in [5].
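The modified criteria can be sketched as follows (our illustration; the function name, the list-of-covariances interface and the use of the data length L as the penalty scale are assumptions):

```python
# Hedged sketch of the order selection (28) with the trace modification for the singular case.
import numpy as np

def ar_order_select(sigma2_list, m, L_data, criterion="AIC"):
    """sigma2_list[k-1] is the prediction error covariance at candidate AR order k."""
    costs = []
    for k, S in enumerate(sigma2_list, start=1):
        fit = np.log(np.trace(S).real)                    # trace instead of determinant
        n_par = k * m * m                                 # number of free prediction coefficients
        if criterion == "AIC":
            penalty = 2.0 * n_par / L_data
        elif criterion == "HQ":
            penalty = 2.0 * n_par * np.log(np.log(L_data)) / L_data
        else:                                             # MDL
            penalty = n_par * np.log(L_data) / (2.0 * L_data)
        costs.append(fit + penalty)
    return int(np.argmin(costs)) + 1                      # selected AR order
```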

8 Overall algorithm

The overall algorithm(s) proposed simply use the AR order selection plus an inspection of the eigenvalues of the next-to-last \sigma^2_{\tilde{y}} to get a channel length estimate. Once we have these order estimates, we proceed with the various classical WLP algorithms, where the prediction coefficients are estimated according to the various methods described. The ultimate choice will depend on the price we are willing to pay for estimation accuracy.

9 Simulations

9.1 Channel estimation

In order to characterize the robustness of the WLP to channel length overestimation, we made various simulations illustrating the different effects of channel length estimation errors and of the overall algorithm.

The performance measure is the Normalized Root MSE (NRMSE), which is computed over 100 Monte Carlo runs as

\mathrm{NRMSE} = \sqrt{ \frac{1}{100} \sum_{i=1}^{100} \frac{ h^H P^{\perp}_{\hat{h}(i)}\, h }{ \|h\|^2 } }

where h^H P^{\perp}_{\hat{h}}\, h = \min_{\alpha} \|\alpha\hat{h} - h\|^2. We use the real channel h with N = 6, m = 4 and p = 1 which was used in [2]. The symbols are i.i.d. QPSK, and the data length is L = 250. The SNR is defined as \big(\|h\|^2 \sigma_a^2\big) / \big(mM\sigma_v^2\big).
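A possible implementation of this performance measure (an assumed helper; the channel estimates are passed as a list of stacked vectors):

```python
# Hedged sketch of the NRMSE of Section 9.1.
import numpy as np

def nrmse(h_true, h_estimates):
    """Projection residual of h onto the orthogonal complement of each estimate, averaged."""
    h = h_true.ravel()
    res = []
    for h_hat in h_estimates:
        hh = h_hat.ravel()
        alpha = (hh.conj() @ h) / (hh.conj() @ hh)      # argmin_alpha ||alpha*h_hat - h||^2
        res.append(np.linalg.norm(alpha * hh - h) ** 2)
    return np.sqrt(np.mean(res) / np.linalg.norm(h) ** 2)
```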

The eigenvalue profile of the correlation matrix is reproduced hereunder, which gives an idea of how easy it should be to determine the order at the different SNRs.

[Figure 1: Eigenvalues of R_YY, in dB versus eigenvalue index, for SNR = -5, 0, 5, 10, 15, 20 and 25 dB.]

Use of \hat{\sigma}^2_{v,ML}. The simulations agree with the conjecture that, using the ML estimate of the noise power, there is no loss of performance due to overestimation of the channel length. The smoothing window is K = 6 (i.e. R_YY of size mK × mK).


[Figure 2: WLP estimation performance using \hat{\sigma}^2_{v,ML}: NRMSE in dB versus SNR, for channel order overestimation 0..5; circles: SSF performance.]

Use of \hat{\sigma}_v^2. Here, the simulations clearly show the influence of \hat{\sigma}_v^2, mostly at high SNR, where order estimation is the easiest to perform. We first show the results with the minimum smoothing window and then for a smoothing window K = \hat{N}. Comparison of these clearly favors the non-minimum smoothing window.

[Figure 3: WLP without order estimation or knowledge, minimum smoothing window K: NRMSE in dB versus SNR, order overestimation 0..5; circles: SSF performance.]

[Figure 4: WLP without order estimation or knowledge, K = \hat{N}: NRMSE in dB versus SNR, order overestimation 0..5; circles: SSF performance.]

Use of the prior order estimation. This final figure compares the different practical WLP algorithms, namely the WLP without order estimation, WLP with AR order estimation, and WLP with channel order estimation followed by a 'pseudo-inverse' LP modeling method or a 'Levinson' LP modeling method. In this latter case, we use a two-step procedure where the first step consists of the channel length estimation and the second step is again LP modeling by Levinson, based on the covariance matrix of minimum size (R_{YY,\underline{L}}) and \hat{\sigma}_v^2 = \lambda_{\min}(R_{YY,\underline{L}}). The results agree with what we expected, namely the channel order estimation greatly improves the performance and the 'pseudo-inverse' LP method is far better than the Levinson method on the raw correlation matrix.

[Figure 5: WLP with order estimation: NRMSE in dB versus SNR for the various algorithms (no order estimation, order estimation + simplified, order estimation + Levinson, AR order estimation only, order estimation + pseudo-inverse, SSF).]

9.2 LP order estimation

Hereunder, we reproduce the tests for a channel with m = 4 sub-channels, one user and length N = 6, leading to an AR process of true order 2. The results are rather convincing, even at relatively low SNR. The noise power is computed under the assumption that we have an AR(6) process.

AIC                          SNR
ord.     25    20    15    10     5     0    -5
  1       0     0     0     0     0     1    69
  2      99    99    82    11     2    23    15
  3       0     0    18    87    86    61     3
  4       0     0     0     0     1     2     0
  5       0     0     0     0     1     1     2
  6       1     1     0     2    10    12    11

MDL                          SNR
ord.     25    20    15    10     5     0    -5
  1       0     0     0     0     0     2    99
  2     100   100   100    56    33    79     1
  3       0     0     0    44    67    19     0
  4       0     0     0     0     0     0     0
  5       0     0     0     0     0     0     0
  6       0     0     0     0     0     0     0

HQ                           SNR
ord.     25    20    15    10     5     0    -5
  1       0     0     0     0     0    17   100
  2     100   100   100    86    78    81     0
  3       0     0     0    14    22     2     0
  4       0     0     0     0     0     0     0
  5       0     0     0     0     0     0     0
  6       0     0     0     0     0     0     0

These are the results of the tests for a channel with m = 4 sub-channels, 2 users and length N = 12, leading to an AR process of true order 5. The results are rather convincing for high to moderate SNR. The noise power is computed under the assumption that we have an AR(10) process.

AIC                          SNR
ord.     25    20    15    10     5     0    -5
  1       0     0     0     0     0    70   100
  2       0     0     0     0     0     1     0
  3       0     0     0     0     1     9     0
  4       0     0     0     0    57    20     0
  5      87    97   100   100    39     0     0
  6      13     3     0     0     0     0     0
  7       0     0     0     0     0     0     0
  8       0     0     0     0     0     0     0
  9       0     0     0     0     0     0     0
 10       0     0     0     0     3     0     0

MDL                          SNR
ord.     25    20    15    10     5     0    -5
  1       0     0     0     0    30   100   100
  2       0     0     0     0     0     0     0
  3       0     0     0     0    18     0     0
  4       0     0     0     3    49     0     0
  5     100   100   100    97     3     0     0
  6       0     0     0     0     0     0     0
  7       0     0     0     0     0     0     0
  8       0     0     0     0     0     0     0
  9       0     0     0     0     0     0     0
 10       0     0     0     0     0     0     0

HQ                           SNR
ord.     25    20    15    10     5     0    -5
  1       0     0     0     3    92   100   100
  2       0     0     0     0     0     0     0
  3       0     0     0     0     6     0     0
  4       0     0     0    17     2     0     0
  5     100   100   100    72     0     0     0
  6       0     0     0     0     0     0     0
  7       0     0     0     0     0     0     0
  8       0     0     0     0     0     0     0
  9       0     0     0     0     0     0     0
 10       0     0     0     0     0     0     0

10 Conclusions

We have investigated Linear Prediction robustness characteristics, comparing various methods yielding the prediction coefficients of a singular AR(L < ∞) process. The "pseudo-inverse" method is robust, but its performance critically relies on the noise power estimation and on the use of a big enough smoothing window. The "Levinson" method needs to perform order estimation to prove robust. This led to the development of effective order estimation algorithms at almost no cost, yielding the prediction coefficients as a by-product. Although computationally far less demanding than the first method, it does not use a non-minimum smoothing window, yielding poorer channel estimates. In order to combine low computational complexity and better performance, we propose a solution where a bigger smoothing window is used but without the need of an SVD; this method should be further refined with the use of a weighting matrix and could thus attain performance comparable to the best method.

References

[1] "Special Issue on Signal Processing for Advanced Communications", volume 45. IEEE Transactions on Signal Processing, January 1997.

[2] K. Abed-Meraim, J.-F. Cardoso, A. Y. Gorokhov, P. Loubaton, and E. Moulines. "On Subspace Methods for Blind Identification of Single-Input Multiple-Output FIR Systems". IEEE Transactions on Signal Processing, 45(1):42–55, January 1997.

[3] Karim Abed-Meraim, Wanzhi Qiu, and Yingbo Hua. "Blind System Identification". Proceedings of the IEEE, 85(8):1310–1322, August 1997.

[4] Alexei Gorokhov. "Séparation autodidacte des mélanges convolutifs : méthodes du second ordre". PhD thesis, École Nationale Supérieure des Télécommunications, 1997.

[5] Changhua Chen, Richard A. Davis, and Peter J. Brockwell. "Order Determination for Multivariate Autoregressive Processes Using Resampling Methods". Journal of Multivariate Analysis, 57(2):175–190, May 1996.

[6] Steven D. Halford and Georgios B. Giannakis. "Channel Order Determination based on Sample Cyclic Correlations". In Twenty-Eighth Asilomar Conference on Signals, Systems & Computers, October 1994.

[7] E. Moulines, P. Duhamel, J.-F. Cardoso, and S. Mayrargue. "Subspace Methods for the Blind Identification of Multichannel FIR Filters". IEEE Transactions on Signal Processing, 43(2):516–525, February 1995.

[8] Boaz Porat. "Contributions to the Theory and Applications of Lattice Filters". PhD thesis, Stanford University, August 1982.

[9] Gregory C. Reinsel. "Elements of Multivariate Time Series Analysis". Springer Series in Statistics. Springer-Verlag, 1993.

[10] Dirk T.M. Slock. "Blind Joint Equalization of Multiple Synchronous Mobile Users Using Oversampling and/or Multiple Antennas". In Twenty-Eighth Asilomar Conference on Signals, Systems & Computers, October 1994.

[11] D.T.M. Slock. "Blind Fractionally-Spaced Equalization, Perfect-Reconstruction Filter Banks and Multichannel Linear Prediction". In Proc. ICASSP 94 Conf., Adelaide, Australia, April 1994.

[12] A. Touzni and I. Fijalkow. "Robustness of Blind Fractionally-Spaced Identification/Equalization to Loss of Channel Disparity". In Proc. ICASSP, volume 5, pages 3937–3940, Munich, Germany, April 1997.

[13] D.W. Tufts and R. Kumaresan. "Estimation of frequencies of multiple sinusoids: making linear prediction perform like maximum likelihood". Proceedings of the IEEE, 70:975–989, 1982.

[14] M. Wax and T. Kailath. "Detection of signals by information theoretic criteria". IEEE Trans. Acoust., Speech, Signal Processing, 33:387–392, 1985.

[15] M. Wax and I. Ziskind. "Detection of the number of coherent signals by the MDL principle". IEEE Trans. Acoust., Speech, Signal Processing, 37:1190–1196, 1989.
