SUBSPACE FITTING WITHOUT EIGENDECOMPOSITION

Luc Deneire¹, Jaouhar Ayadi and Dirk T.M. Slock

Institut EURECOM, 2229 route des Crêtes, B.P. 193, F-06904 Sophia Antipolis Cedex, FRANCE

{deneire, ayadi, slock}@eurecom.fr

Abstract: Subspace fitting has become a well-known method to identify FIR Single Input Multiple Output (SIMO) systems, resorting only to second-order statistics. The main drawback of this method is its computational cost, due to the eigendecomposition of the sample covariance matrix. We propose a scheme that solves the subspace fitting problem without using the eigendecomposition of this matrix. The approach is based on the observation that the signal subspace is also the column space of the noise-free covariance matrix. We suggest a two-step procedure. In the first step, the column space is generated by arbitrary combinations of the columns. In the second step, this column space estimate is refined by optimally combining the columns using the channel estimate resulting from the first step. Our method only requires the computation of two eigenvectors of a small matrix and of two projection matrices, while yielding the same performance as the usual subspace fitting.

1. INTRODUCTION

Subspace fitting algorithms have been applied to the multi-channel identification problem. In [8], it was shown that oversampled and/or multiple-antenna received signals may be modeled as low rank processes and thus lend themselves to subspace methods, exploiting the orthogonality between the noise subspace of the covariance matrix and the convolution matrix of the channel. Recent papers [1] [10] provide performance analyses of these methods. The huge majority of algorithms recently proposed to perform subspace fitting resort to the SVD (singular value decomposition), which makes them of little use for real-time implementations. On the other hand, the literature on other, computationally less demanding rank revealing decompositions [3] has led to some fast subspace estimation methods (see a.o. [5]). The usual scheme is to use a rank revealing decomposition (the SVD being the best, but the least cost-effective one) to determine the so-called signal subspace. Possible candidates for this decomposition are the URV decomposition, the rank revealing QR decomposition and the HQR factorization in a Schur-type method (see [5] and references therein).

Recently, using the circularity property of the noise in a communication system based on a real symbol constellation, Kristensson, Ottersten and Slock proposed in [6] an alternative subspace fitting algorithm. In this paper, we show that this method can be used in the general case, leading to a consistent estimate, and that the performance is similar to that of the usual subspace fitting algorithms. Our method does not require the eigendecomposition of the covariance matrix. Nevertheless, as in the usual subspace fitting method, the computation of a projection matrix is required, which may remain computationally demanding. Although we do not discuss this topic in detail here, fast algorithms can be used to alleviate this problem. In this paper, we consider the channel identification problem, but the ideas presented here apply to any subspace fitting problem.

2. DATA MODEL

We consider a communication system with one emitter and a receiver consisting of an array of $M$ antennas. The received signals are oversampled by a factor $m$ w.r.t. the symbol rate. We furthermore consider linear digital modulation over a linear channel with additive noise, so that the received signal $\mathbf{y}(t) = [y_1(t) \cdots y_M(t)]^T$ has the following form

$$\mathbf{y}(t) = \sum_k \mathbf{h}(t - kT)\, a(k) + \mathbf{v}(t)$$

where $a(k)$ are the transmitted symbols, $T$ is the symbol period and $\mathbf{h}(t) = [h_1(t) \cdots h_M(t)]^T$ is the channel impulse response. The channel is assumed to be FIR with duration $NT$. If the received signals are oversampled at the rate $\frac{m}{T}$, the discrete input-output relationship can be written as:

$$\mathbf{y}(k) = \sum_{i=0}^{N-1} \mathbf{h}(i)\, a(k-i) + \mathbf{v}(k) = H A_N(k) + \mathbf{v}(k)$$

where

$$\mathbf{y}(k) = \left[\mathbf{y}^H(kT)\;\; \mathbf{y}^H\!\left(kT + \tfrac{T}{m}\right) \cdots \mathbf{y}^H\!\left(kT + \tfrac{(m-1)T}{m}\right)\right]^H,$$
$$\mathbf{h}(k) = \left[\mathbf{h}^H(kT)\;\; \mathbf{h}^H\!\left(kT + \tfrac{T}{m}\right) \cdots \mathbf{h}^H\!\left(kT + \tfrac{(m-1)T}{m}\right)\right]^H,$$
$$\mathbf{v}(k) = \left[\mathbf{v}^H(kT)\;\; \mathbf{v}^H\!\left(kT + \tfrac{T}{m}\right) \cdots \mathbf{v}^H\!\left(kT + \tfrac{(m-1)T}{m}\right)\right]^H,$$

$H = [\mathbf{h}(0), \ldots, \mathbf{h}(N-1)]$ and $A_N(k) = [a(kT) \cdots a((k-N+1)T)]^T$, superscript $H$ denoting conjugate transpose. So we get a SIMO system with $Mm$ channels. We consider additive temporally and spatially white Gaussian circular noise $\mathbf{v}(k)$ with $r_{\mathbf{vv}}(k-i) = \mathrm{E}\,\mathbf{v}(k)\mathbf{v}^H(i) = \sigma_v^2 I_{Mm}\,\delta_{ki}$. Assume we receive $L$ samples:

$$Y_L(k) = \mathcal{T}_L(H)\, A_{L+N-1}(k) + V_L(k)$$

where $\mathcal{T}_L(H)$ is the convolution matrix of $H$, $Y_L(k) = [\mathbf{y}^H(k) \cdots \mathbf{y}^H(k-L+1)]^H$ and similarly for $V_L(k)$. In an obvious shorthand notation, we will use the following expression:

$$Y = \mathcal{H} A + V, \qquad \mathcal{H} = \mathcal{T}_L(H).$$

¹ The work of Luc Deneire is supported by the EC through a Marie-Curie Fellowship (TMR program) under contract No. ERBFMBICT950155.


We assume that $mML > L + N - 1$, so that the convolution matrix $\mathcal{H}$ is "tall", and we assume $\mathcal{H}$ to have full column rank (which leads to the usual identifiability conditions). The covariance matrix of $Y$ is

$$R_{YY} = \mathrm{E}\, YY^H = \mathcal{H} R_{AA} \mathcal{H}^H + \sigma_v^2 I$$
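To make this data model concrete, here is a minimal numerical sketch (not part of the original paper; the helper name `conv_matrix` and all parameter values are our own choices) that builds the block-Toeplitz convolution matrix $\mathcal{T}_L(H)$ for a random FIR channel, simulates a burst of unit-variance QAM-4 symbols, and checks the sample covariance against the model $R_{YY} = \mathcal{H} R_{AA} \mathcal{H}^H + \sigma_v^2 I$:

```python
import numpy as np

def conv_matrix(H, L):
    """Block-Toeplitz convolution matrix T_L(H) of size (mM*L) x (L+N-1).

    H has shape (mM, N): column i holds the i-th tap h(i) of the mM subchannels.
    """
    mM, N = H.shape
    T = np.zeros((mM * L, L + N - 1), dtype=H.dtype)
    for row in range(L):
        T[row * mM:(row + 1) * mM, row:row + N] = H
    return T

rng = np.random.default_rng(0)
M, m, N, L = 3, 1, 6, 10            # antennas, oversampling, channel length, stacking depth
mM = M * m
H = rng.standard_normal((mM, N))    # random real channel, as in the simulations of Section 6
T = conv_matrix(H, L)
assert T.shape[0] > T.shape[1]      # "tall" convolution matrix: mML > L + N - 1

# Simulated burst: unit-variance QAM-4 symbols through the channel plus white circular noise
K = 1000
sigma_v = 0.1
a = (rng.integers(0, 2, K + N + L) * 2 - 1
     + 1j * (rng.integers(0, 2, K + N + L) * 2 - 1)) / np.sqrt(2)
Y = np.stack([T @ a[k:k + L + N - 1][::-1] for k in range(K)], axis=1)
Y += sigma_v * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape)) / np.sqrt(2)

R_hat = Y @ Y.conj().T / K                                 # sample covariance R^_YY
R_model = T @ T.conj().T + sigma_v**2 * np.eye(mM * L)     # H R_AA H^H + sigma_v^2 I, R_AA = I
print(np.linalg.norm(R_hat - R_model) / np.linalg.norm(R_model))   # small for large K
```

The relative error printed at the end decreases with the burst length, illustrating that the sample covariance is a consistent estimate of the model covariance.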

3. SUBSPACE FITTING

3.1. Signal Subspace Fitting

One can write the eigendecomposition of the covariance matrix

$$R_{YY} = \mathrm{E}\, YY^H = V_S \Lambda_S V_S^H + V_N \Lambda_N V_N^H$$

in which $V_S$ has the same dimensions as $\mathcal{H}$ and $\Lambda_N = \sigma_v^2 I$. The signal subspace can be expressed as:

$$\mathrm{range}\{V_S\} = \mathrm{range}\{\mathcal{H}\}$$

We can then formulate the classical subspace fitting problem:

$$\min_{\mathcal{H}, Q}\; \|\mathcal{H} - V_S Q\|_F^2$$

Since $V_N$ spans the noise subspace, this leads to

$$\min_{H}\; H^{t}\left[\sum_{i=D_\perp}^{LMm} \mathcal{T}_N(V_i^{Ht})\, \mathcal{T}_N^H(V_i^{Ht})\right] H^{tH}$$

where $V_i$ is column $i$ of $V = [V_S\;\; V_N]$, $D_\perp = N + L$ and superscript $t$ denotes the transposition of the blocks of a block matrix. Under the constraint $\|H\| = 1$, $\hat{H}^t$ is then the eigenvector corresponding to the minimum eigenvalue of the matrix between the brackets. One can lower the computational burden by using $D_\perp > N + L$, losing some performance (see a.o. [7], [9]).

Obviously, the projection on the noise subspace satisfies:

$$P_{V_N} = P_{V_S}^{\perp} = I - P_{V_S} = I - V_S (V_S^H V_S)^{-1} V_S^H$$

which leads to the equivalent maximization:

$$\max_{H}\; H^{t}\left[\sum_{i=1}^{D_\perp - 1} \mathcal{T}_N(V_i^{Ht})\, \mathcal{T}_N^H(V_i^{Ht})\right] H^{tH}$$
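For reference, a direct (non-optimized) implementation of this classical signal subspace fitting estimator is sketched below. It is our own illustration, not the paper's computation: it exploits the linearity of $H \mapsto \mathcal{T}_L(H)$ instead of the block-transposition identity above, and the helper names (`classical_ssf`, `conv_matrix`, repeated here for self-containment) are hypothetical.

```python
import numpy as np

def conv_matrix(H, L):
    """Block-Toeplitz convolution matrix T_L(H); H has shape (mM, N)."""
    mM, N = H.shape
    T = np.zeros((mM * L, L + N - 1), dtype=complex)
    for row in range(L):
        T[row * mM:(row + 1) * mM, row:row + N] = H
    return T

def classical_ssf(R_hat, mM, N, L):
    """Classical SSF: minimize ||P_{V_S}^perp T_L(H)||_F^2 over ||H||_F = 1."""
    D = L + N - 1                                   # signal subspace dimension
    _, V = np.linalg.eigh(R_hat)                    # eigenvalues in ascending order
    Vs = V[:, -D:]                                  # signal eigenvectors
    P_perp = np.eye(mM * L) - Vs @ Vs.conj().T      # projection onto the noise subspace
    # Build the linear map h -> vec(P_perp T_L(H)), one unit channel at a time
    cols = []
    for idx in range(mM * N):
        E = np.zeros((mM, N))
        E.flat[idx] = 1.0
        cols.append((P_perp @ conv_matrix(E, L)).ravel())
    Mlin = np.stack(cols, axis=1)
    # The constrained minimizer is the smallest right singular vector of Mlin
    _, _, Vh = np.linalg.svd(Mlin, full_matrices=False)
    return Vh[-1].conj().reshape(mM, N)             # channel estimate, up to a scalar factor
```

With the quantities of the data-model sketch above, `classical_ssf(R_hat, mM, N, L)` should recover `H` up to a complex scalar.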

3.2. Noise Subspace Fitting

Similarly to signal subspace fitting, $V_N$ spans the noise subspace and $\mathcal{T}^H(H^{\perp})$ spans most of it, where $H^{\perp}$ is such that $H^{\perp\dagger}(z)\,\mathbf{y}(k) = 0$ in the absence of noise and $H^{\dagger}(z) = H^{H}(1/z^{*})$ (for more details, see [2]). Hence, the following noise subspace fitting (NSF) problem can be introduced:

$$\min_{H, Q}\; \left\| \mathcal{T}^H(H^{\perp}) - V_N Q \right\|_F^2 \qquad (1)$$

4. ALTERNATIVE SIGNAL SUBSPACE FITTING

4.1. The method

In the absence of noise, we have:

$$R_{YY} = \mathcal{H} R_{AA} \mathcal{H}^H = R = V_S \Lambda'_S V_S^H + V_N \Lambda'_N V_N^H$$

where $\Lambda'_S = \Lambda_S - \sigma_v^2 I$ and $\Lambda'_N = 0$. From this expression, we observe that the column spaces of $\mathcal{H}$ and $R$ are the same, leading us to introduce the following subspace fitting criterion:

$$\min_{\mathcal{H}, Q}\; \|\mathcal{H} - \hat{R} B Q\|_F^2 \qquad (2)$$

where $\|\cdot\|_F$ denotes the Frobenius norm and $\hat{R}$ is a consistent estimate of $R$. The matrix $B$ has the same dimensions as $\mathcal{H}$ and is fixed; we will see later how its choice influences the performance. Note that the range of $F = \hat{R} B$ provides an estimate for the signal subspace. We can take $\hat{R}$ as $\hat{R} = \hat{R}_{YY} - \hat{\sigma}_v^2 I = \hat{R}_{YY} - \lambda_{\min}(\hat{R}_{YY})\, I$, where $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue (a rank revealing decomposition of $\hat{R}_{YY} - \lambda_{\min}(\hat{R}_{YY})\, I$ would lead to a better estimate of $R$). We note that the simulations below show that even simply $\hat{R} = \hat{R}_{YY}$ can also work well. The criterion (2) is separable in $\mathcal{H}$ and $Q$. Minimizing w.r.t. $Q$ first yields

$$Q = (F^H F)^{-1} F^H \mathcal{H}.$$

Substitution in (2) yields:

$$\min_{\mathcal{H}}\; \|P_F^{\perp} \mathcal{H}\|_F^2 = \min_{\mathcal{H}}\; \mathrm{trace}\!\left(\mathcal{H}^H P_F^{\perp} \mathcal{H}\right).$$

With the constraint $\|H\| = 1$, we get:

$$\hat{H} = \arg\max_{\|H\|=1}\; \mathrm{trace}\!\left(\mathcal{H}^H P_F \mathcal{H}\right) = \arg\max_{\|H\|=1}\; H^{t}\, \mathbf{F}\, H^{tH}$$

where $\mathbf{F}$ is a sum of submatrices of block size $N$ of $P_F$. The solution is thus $V_{\max}(\mathbf{F})$, the eigenvector of $\mathbf{F}$ corresponding to $\lambda_{\max}(\mathbf{F})$. Given that $R = \mathcal{H} R_{AA} \mathcal{H}^H$, not every choice for $B$ is acceptable. For instance, if the columns of $B$ are in the noise subspace, then $F = 0$ for $\hat{R} = R$. Intuitively, the best choice for $B$ should be $B = \mathcal{H}$, which corresponds to matched filtering with $\mathcal{H}^H$ (postmultiplication of $B$ with a square non-singular matrix does not change anything since that matrix can be absorbed in $Q$). These considerations lead to the following two-step procedure:

Step 1: at first, $B$ is chosen to be a fairly arbitrary selection matrix. The first step yields a consistent channel estimate (if $\mathcal{H}^H B$ is non-singular).

Step 2: in this step, the consistent channel estimate of the first step is used to form $\hat{\mathcal{H}}$ and we solve (2) again, but now with $B = \hat{\mathcal{H}}$.

For the first step, for instance the choice $B^H = [I\;\; 0]$ leads to something that is quite closely related to the "rectangular Pisarenko" method of Fuchs [4]. We found however that a $B$ of the same block-Toeplitz form as $\mathcal{H}$ but filled with a randomly generated channel works fairly well (this is the choice used in the simulations).
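A minimal sketch of this two-step procedure is given below. It follows the same direct (non-fast) formulation as the earlier classical-SSF sketch rather than the block-matrix $\mathbf{F}$ construction of the paper, and all helper names are ours; `conv_matrix` is repeated so the block is self-contained.

```python
import numpy as np

def conv_matrix(H, L):
    """Block-Toeplitz convolution matrix T_L(H); H has shape (mM, N)."""
    mM, N = H.shape
    T = np.zeros((mM * L, L + N - 1), dtype=complex)
    for row in range(L):
        T[row * mM:(row + 1) * mM, row:row + N] = H
    return T

def ssf_step(R, B, mM, N, L):
    """One pass of criterion (2): minimize ||P_F^perp T_L(H)||_F^2 with F = R B."""
    F = R @ B
    P_F = F @ np.linalg.solve(F.conj().T @ F, F.conj().T)   # projection onto range{F}
    P_perp = np.eye(mM * L) - P_F
    cols = []
    for idx in range(mM * N):
        E = np.zeros((mM, N))
        E.flat[idx] = 1.0
        cols.append((P_perp @ conv_matrix(E, L)).ravel())
    Mlin = np.stack(cols, axis=1)                            # only mM*N columns: a small problem
    _, _, Vh = np.linalg.svd(Mlin, full_matrices=False)
    return Vh[-1].conj().reshape(mM, N)                      # estimate up to a scalar factor

def two_step_ssf(R_hat_YY, mM, N, L, rng=None):
    if rng is None:
        rng = np.random.default_rng(1)
    # Optional "denoising": subtract an estimate of sigma_v^2 (here the smallest eigenvalue;
    # this single extreme eigenvalue can be obtained cheaply, or the term can be dropped)
    R = R_hat_YY - np.linalg.eigvalsh(R_hat_YY)[0] * np.eye(R_hat_YY.shape[0])
    # Step 1: B of the same block-Toeplitz form as T_L(H), filled with a random channel
    H1 = ssf_step(R, conv_matrix(rng.standard_normal((mM, N)), L), mM, N, L)
    # Step 2: refine with B built from the step-1 channel estimate
    H2 = ssf_step(R, conv_matrix(H1, L), mM, N, L)
    return H1, H2
```

On the data of the data-model sketch, both steps should recover the channel up to a scalar factor, with the second step refining the first.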

4.2. Asymptotics: exact estimation

Asymptotically, $\hat{R} = R$. We get $F = R B = \mathcal{H} R_{AA} \mathcal{H}^H B$. Assuming $R_{AA} > 0$, then if $\mathcal{H}^H B$ is non-singular, we get

$$P_F = P_{\mathcal{H}} = P_{V_S}. \qquad (3)$$

If furthermore we have a consistent channel estimate, then we can take asymptotically $B = \mathcal{H}$. In that case, the use of $R = R_{YY}$, and hence $F = R_{YY}\mathcal{H}$, also leads to (3). Pursuing this issue further, and applying a perturbation analysis similar to the one hereunder, we will have a consistent estimate of the channel with $\hat{R} = \hat{R}_{YY}$ as SNR $\to \infty$.

4.3. Perturbation analysis

For the first step of the algorithm, we get a consistent channel estimate if $\mathcal{H}^H B$ is non-singular. We can furthermore pursue the following asymptotic (first-order perturbation) analysis. This analysis is based on the perturbation analysis of a projection matrix. If $\hat{F} = F + \delta F$, then up to first order in $\delta F$, $P_{\hat{F}} = P_F + \delta P_F$ where

$$\delta P_F = 2\,\mathrm{Sym}\!\left\{ P_F^{\perp}\, \delta F\, (F^H F)^{-1} F^H \right\} \qquad (4)$$

where $2\,\mathrm{Sym}(X) = X + X^H$.
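As a quick numerical sanity check of (4) (our own illustration, not from the paper), one can compare $P_{F+\delta F}$ with $P_F + \delta P_F$ for a small random perturbation and verify that the residual is of second order:

```python
import numpy as np

rng = np.random.default_rng(2)

def proj(F):
    """Orthogonal projection onto the column space of F."""
    return F @ np.linalg.solve(F.conj().T @ F, F.conj().T)

n, r = 30, 5
F = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
dF = 1e-4 * (rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))

P = proj(F)
P_perp = np.eye(n) - P
# First-order term: dP = 2 Sym{ P_perp dF (F^H F)^{-1} F^H }, with 2 Sym(X) = X + X^H
X = P_perp @ dF @ np.linalg.solve(F.conj().T @ F, F.conj().T)
dP = X + X.conj().T

err_first_order = np.linalg.norm(proj(F + dF) - (P + dP))   # O(||dF||^2)
err_zeroth_order = np.linalg.norm(proj(F + dF) - P)         # O(||dF||)
print(err_first_order, err_zeroth_order)
```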

a. Optimality of $B = \mathcal{H}$

Let $\hat{R} = R + \delta R$; then, using the eigendecomposition of $R$, we get up to first order

$$\delta R = \delta V_S\, \Lambda'_S V_S^H + V_S\, \delta\Lambda'_S\, V_S^H + V_S \Lambda'_S\, \delta V_S^H + \delta V_N\, \Lambda'_N V_N^H + V_N\, \delta\Lambda'_N\, V_N^H + V_N \Lambda'_N\, \delta V_N^H$$

Let $B = V_S B_S + V_N B_N$, where we assume $B_S$ non-singular. Then using $\hat{F} = \hat{R} B$ leads to

$$\delta P_F = 2\,\mathrm{Sym}\!\left\{ P_{V_N}\left[\, \delta V_S\, \Lambda'_S + V_N\, \delta\Lambda'_N\, B_N B_S^{-1} \right] (\mathcal{H}^H V_S)^{-1} R_{AA}^{-1} (\mathcal{H}^H \mathcal{H})^{-1} \mathcal{H}^H \right\}$$

This shows that $B_N = 0$ is optimal.

b. Asymptotic equivalence of the two SSF

Using $\hat{F} = \left(\hat{R}_{YY} - \lambda_{\min}(\hat{R}_{YY})\, I\right) \hat{\mathcal{H}}$ with $\hat{R}_{YY}$ and $\hat{\mathcal{H}}$ consistent estimates, one can show that $\delta P_F$ is the same as with $\hat{F} = \hat{V}_S$. Hence we get up to first order

$$P_{\left(\hat{R}_{YY} - \lambda_{\min}(\hat{R}_{YY}) I\right)\hat{\mathcal{H}}} = P_{\left(\hat{R}_{YY} - \sigma_v^2 I\right)\mathcal{H}} = P_{\hat{V}_S}.$$

This shows that the alternative signal subspace fitting method gives asymptotically exactly the same performance as the original SSF method. Furthermore, as long as consistent estimates are used for $\sigma_v^2$ and $\mathcal{H}$, the corresponding estimation errors have no influence up to first order.

c. Simplified method

When we use simply $\hat{F} = \hat{R}_{YY} \hat{\mathcal{H}}$, then we get

$$\delta P_F = 2\,\mathrm{Sym}\!\left\{ P_{V_N}\left( \delta V_S + \mathcal{H}\,(V_S^H \mathcal{H})^{-1} \Lambda_N \Lambda'^{-1}_S \right) V_S^H \right\}$$

The use of $\hat{R}_{YY}$ instead of $\hat{R}$ leads to the appearance of the second term, the relative importance of which is proportional to $\Lambda_N \Lambda'^{-1}_S$. Hence this term is negligible at high SNR.

5. ALTERNATIVE NOISE SUBSPACE FITTING

Another possibility to formulate the NSF criterion is:

$$\min_{H}\; \left\| \mathcal{T}(H^{\perp})\, R_{YY}^{\frac{1}{2}} \right\|_F^2 \qquad (5)$$

where the matrix square-root is of the form $R_{YY}^{\frac{1}{2}} = V_S \Lambda_S^{\frac{1}{2}} Q$ for some unitary $Q$. This NSF criterion can be written as

$$\min_{H}\; \mathrm{tr}\!\left\{ \mathcal{T}(H^{\perp})\, R_{YY}^{\frac{1}{2}} R_{YY}^{\frac{H}{2}}\, \mathcal{T}^H(H^{\perp}) \right\} = \mathrm{tr}\!\left\{ \mathcal{T}(H^{\perp})\, R_{YY}\, \mathcal{T}^H(H^{\perp}) \right\}$$
$$= \mathrm{tr}\!\left\{ H^{\perp} \sum_{k=-(N-1)}^{N-1} R_{YY}(k)\, H^{\perp H} \right\} = (M - N + 1)\, \mathrm{tr}\!\left\{ H^{\perp} \hat{R}_{YY}\, H^{\perp H} \right\} \qquad (6)$$

Furthermore, it is possible to write this NSF criterion as

$$\min_{H}\; H^H \mathcal{B}\, H.$$

This coincides with the Subchannel Response Matching (SRM) formula in [2]. This proves that the noise subspace fitting problem given by (5) is nearly equivalent to the SRM problem, apart from a weighting matrix $\Lambda_N^{\frac{1}{2}}$.
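For comparison, the sketch below implements SRM in its common pairwise cross-relation form for $m = 1$ and $M$ subchannels. This is a standard formulation we assume here for illustration; it is not the weighted criterion (5) itself, and the helper name `srm_estimate` is ours.

```python
import numpy as np

def srm_estimate(Y, N):
    """Pairwise cross-relation (SRM-type) channel estimate.

    Y: (M, K) array of received subchannel samples, N: channel length.
    Minimizes sum_{i<j} ||h_j * y_i - h_i * y_j||^2 over ||h|| = 1,
    using the fact that y_i = h_i * a implies h_j * y_i = h_i * y_j.
    """
    M, K = Y.shape

    def conv_rows(y):
        # 'valid' convolution with an order-N filter, written as a matrix acting on h
        return np.stack([y[k:k + N][::-1] for k in range(K - N + 1)])

    C = [conv_rows(Y[i]) for i in range(M)]
    blocks = []
    for i in range(M):
        for j in range(i + 1, M):
            blk = np.zeros((K - N + 1, M * N), dtype=Y.dtype)
            blk[:, j * N:(j + 1) * N] = C[i]      # + h_j * y_i
            blk[:, i * N:(i + 1) * N] = -C[j]     # - h_i * y_j
            blocks.append(blk)
    A = np.concatenate(blocks, axis=0)
    _, _, Vh = np.linalg.svd(A, full_matrices=False)
    return Vh[-1].conj().reshape(M, N)            # stacked [h_1; ...; h_M], up to a scalar
```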

6. SIMULATIONS

6.1. Signal Subspace Fitting

In our simulations, we use a randomly generated real channel of length $6T$, an oversampling factor of $m = 1$ and $M = 3$ antennas. We plot the NRMSE of the channel, defined as

$$\mathrm{NRMSE} = \sqrt{ \frac{1}{100} \sum_{l=1}^{100} \|\hat{H}^{(l)} - H\|_F^2 \,/\, \|H\|_F^2 }$$

where $\hat{H}^{(l)}$ is the estimated channel in the $l$-th trial. The SNR is defined as $\left(\|H\|^2 \sigma_a^2\right) / \left(mM \sigma_v^2\right)$. The correlation matrix is calculated from a burst of 100 QAM-4 symbols. For these simulations, we used 100 Monte-Carlo runs.
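As an illustration of this performance measure, a small helper of our own is given below. It makes one assumption the paper does not spell out: since blind estimates are only determined up to a scalar, each trial is rescaled by a least-squares fit before the error is computed.

```python
import numpy as np

def nrmse_db(H_true, H_estimates):
    """Channel NRMSE in dB over a list of Monte-Carlo estimates."""
    num = 0.0
    for H_est in H_estimates:
        alpha = np.vdot(H_est, H_true) / np.vdot(H_est, H_est)   # resolve the blind scale ambiguity
        num += np.linalg.norm(alpha * H_est - H_true) ** 2        # Frobenius norm of the error
    nrmse = np.sqrt(num / len(H_estimates)) / np.linalg.norm(H_true)
    return 20 * np.log10(nrmse)
```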

We plot the NRMSE for the first step of the algorithm, the second step and the subspace fitting with eigendecomposition.

These curves show that the proposed algorithm yields the same performance as the subspace fitting algorithm with eigendecomposition (even slightly better at low SNR, but this is not significant). Note that we ran the simulations using both $\hat{R}_{YY}$ and $\hat{R}_{YY} - \lambda_{\min}(\hat{R}_{YY})\, I$, which give the same performance.

Figure 1: Subspace fitting performance: channel NRMSE (in dB) versus SNR with the estimated covariance matrix, for the first step, the second step and classical subspace fitting (based on $\hat{R}_{YY}$ and on $\hat{R}_{YY} - \sigma_v^2 I$).


Furthermore, we also include the NRMSE of the channel estimate when a perfectly estimated covariance matrix is used for the two steps. Comparison of the two graphs shows that the covariance estimation error dominates the channel estimation error.

Figure 2: Subspace fitting performance: channel NRMSE (in dB) versus SNR with perfect estimation of the covariance matrix (first step and second step).

6.2. Noise Subspace Fitting

In these simulations, we use a randomly generated complex channel of length $3T$, an oversampling factor of $m = 1$, $M = 3$ antennas and 300 Monte-Carlo runs.

In the figure hereunder, we plot the NMSE versus SNR using the NSF technique with $R_{YY}^{\frac{1}{2}}$ replaced by $R_{YY}$. We also plot the normalized deterministic Cramér-Rao bound [9]. The obtained curves are identical to the SRM curves. In order to study the influence of noise on NSF performance, we consider the NMSE computed with $\hat{R}_{YY}$, $\hat{R}_{YY} - \lambda_{\min}(\hat{R}_{YY})\, I$ and $\hat{R}_{YY} - \sigma_v^2 I$: the obtained curves are superimposed, which shows that noise has little influence on NSF (note that this was to be expected since the considered NSF is equivalent to SRM and it is well known that using $\hat{R}_{YY}$ or $\hat{R}_{YY} - \lambda I$ leads to the same performance for SRM).

Figure 3: NSF performance: NMSE versus SNR based on $\hat{R}_{YY}$, $\hat{R}_{YY} - \lambda_{\min} I$ and $\hat{R}_{YY} - \sigma_v^2 I$, compared with the normalized deterministic Cramér-Rao bound $\mathrm{tr}\{\mathrm{CRB}_{\hat{H}}\}/\|H\|^2$.

7. CONCLUSIONS

We have proposed a new two-step algorithm for solving the signal subspace fitting problem in the channel identification context, which is computationally less demanding than the usual algorithms. Perturbation analysis shows the asymptotic equivalence of the eigendecomposition-free approach to the original method. This equivalence was confirmed by simulation results. Further work should explore different rank revealing decompositions that can improve the finite sample performance, as well as fast algorithms for the calculation of the projection matrix and the associated performance degradation, if any. Continuing our investigations on the noise subspace fitting, we found a tight link between noise subspace fitting and SRM.

References

[1] K. Abed-Meraim, J.-F. Cardoso, A. Y. Gorokhov, P. Loubaton, and E. Moulines. On Subspace Methods for Blind Identification of Single-Input Multiple-Output FIR Systems. IEEE Transactions on Signal Processing, 45(1):42–55, January 1997.

[2] Jaouhar Ayadi and Dirk T.M. Slock. Cramer-Rao Bounds and Methods for Knowledge Based Estimation of Multiple FIR Channels. In Proc. IEEE SPAWC Workshop, Paris, France, April 1997.

[3] T.F. Chan. Rank revealing QR factorizations. Linear Algebra and its Applications, 88–89:67–82, 1987.

[4] J.J. Fuchs. Rectangular Pisarenko Method Applied to Source Localization. IEEE Transactions on Signal Processing, 44(10):2377–2383, October 1996.

[5] Jürgen Götze and Alle-Jan van der Veen. On-Line Subspace Estimation Using a Schur-Type Method. IEEE Transactions on Signal Processing, 44(6):1585–1589, June 1996.

[6] Martin Kristensson, Björn Ottersten, and Dirk Slock. Blind Subspace Identification of a BPSK Communication Channel. In Proc. Asilomar Conf., Asilomar, CA, USA, November 1996.

[7] E. Moulines, P. Duhamel, J.-F. Cardoso, and S. Mayrargue. Subspace Methods for the Blind Identification of Multichannel FIR Filters. IEEE Transactions on Signal Processing, 43(2):516–525, February 1995.

[8] D.T.M. Slock. Blind Fractionally-Spaced Equalization, Perfect-Reconstruction Filter Banks and Multichannel Linear Prediction. In Proc. ICASSP 94 Conf., Adelaide, Australia, April 1994.

[9] D.T.M. Slock and C.B. Papadias. Blind Fractionally-Spaced Equalization Based on Cyclostationarity. In Proc. Vehicular Technology Conf., pages 1286–1290, Stockholm, Sweden, June 1994.

[10] Hanks H. Zeng and Lang Tong. Blind Channel Estimation Using the Second-Order Statistics: Asymptotic Performance and Limitations. Submitted to IEEE Transactions on Signal Processing, January 1997.
