
This is an author's version published in: https://oatao.univ-toulouse.fr/22750
https://doi.org/10.1016/j.sigpro.2018.03.016

Vincent, François and Chaumette, Eric. Recursive linearly constrained minimum variance estimator in linear models with non-stationary constraints. (2018) Signal Processing, 149, 229-235. ISSN 0165-1684.


Short communication

Recursive linearly constrained minimum variance estimator in linear models with non-stationary constraints

François Vincent, Eric Chaumette

Department of Electronics, Optronics and Signal, ISAE-SUPAERO, University of Toulouse, 10 Avenue Edouard Belin, Toulouse 31055, France

Article history: Received 11 August 2017; Revised 18 March 2018; Accepted 21 March 2018; Available online 22 March 2018.

Keywords:

Parameter estimation

Linearly constrained minimum variance estimator

Robust adaptive beamforming

Abstract

In parameter estimation, it is commonplace to design a linearly constrained minimum variance estimator (LCMVE) to tackle the problem of estimating an unknown parameter vector in a linear regression model. So far, the LCMVE has been mainly studied in the context of stationary constraints in stationary or non-stationary environments, giving rise to well-established recursive adaptive implementations when multiple observations are available. In this communication, provided that the additive noise sequence is temporally uncorrelated, we determine the family of non-stationary constraints leading to LCMVEs which can be computed according to a predictor/corrector recursion similar to the Kalman Filter. A particularly noteworthy feature of the recursive formulation introduced is to be fully adaptive in the context of sequential estimation, as it allows at each new observation to incorporate or not new constraints.

© 2018 Elsevier B.V. All rights reserved.

1. Introduction

In the signal processing literature dealing with parameter estimation, one of the most studied estimation problems is that of identifying the components of an $N$-dimensional observation vector ($\mathbf{y}$) formed from a linear superposition of $P$ individual signals ($\mathbf{x}$) in noisy data ($\mathbf{v}$): $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{v}$,¹ a.k.a. the linear regression problem, where $\mathbf{H}$ is an $N$-by-$P$ matrix and $\mathbf{v}$ is an $N$-dimensional vector. The importance of this problem stems from the fact that a wide range of problems in communications, array processing, and many other areas can be cast in this form [1,2]. As in [3, Section 5.1], we adopt a joint proper complex signals assumption for $\mathbf{x}$ and $\mathbf{v}$, which allows us to resort to standard estimation in the mean squared error (MSE) sense defined on the Hilbert space of complex random variables with finite second-order moment. A proper complex random variable is uncorrelated with its complex conjugate. Any result derived with joint proper complex random vectors is valid for real random vectors provided that one substitutes the matrix/vector transpose for the matrix/vector transpose conjugate. Additionally, it is assumed that: (a) $\mathbf{v}$ is zero mean, (b) $\mathbf{x}$ is uncorrelated with $\mathbf{v}$, (c) the model matrix $\mathbf{H}$ and the noise covariance matrix $\mathbf{C}_{\mathbf{v}}$ are either known or specified according to known parametric models.

Corresponding author. E-mail addresses: francois.vincent@isae.fr (F. Vincent), eric.chaumette@isae.fr (E. Chaumette).

¹ Throughout the present communication, scalars, vectors and matrices are represented, respectively, by italic, bold lowercase and bold uppercase characters. The scalar/matrix/vector transpose conjugate is indicated by the superscript $H$. $[\mathbf{A}\ \mathbf{B}]$ and $\left[\begin{smallmatrix}\mathbf{A}\\ \mathbf{B}\end{smallmatrix}\right]$ denote, respectively, the matrices resulting from the horizontal and the vertical concatenation of $\mathbf{A}$ and $\mathbf{B}$. $\mathrm{E}[\,\cdot\,]$ denotes the expectation operator.

² The superscript $b$ is used to remind the reader that the value under consideration is the "best" one according to a given criterion.

In this setting, the weighted least squares estimator of $\mathbf{x}$ [4] is:²

$$\hat{\mathbf{x}}^b = \arg\min_{\mathbf{x}}\left\{\left(\mathbf{y} - \mathbf{H}\mathbf{x}\right)^H \mathbf{C}_{\mathbf{v}}^{-1}\left(\mathbf{y} - \mathbf{H}\mathbf{x}\right)\right\} \tag{1a}$$

$$= \left(\mathbf{H}^H \mathbf{C}_{\mathbf{v}}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^H \mathbf{C}_{\mathbf{v}}^{-1}\mathbf{y}, \tag{1b}$$

coincides with the maximum-likelihood estimator [5] if $\mathbf{x}$ is deterministic and $\mathbf{v}$ is Gaussian, and is known to minimize the MSE matrix among all linear unbiased estimators of $\mathbf{x}$, that is $\hat{\mathbf{x}}^b = \mathbf{W}^{bH}\mathbf{y}$ where [6]:

$$\mathbf{W}^b = \arg\min_{\mathbf{W}} \mathrm{E}\left[\left(\mathbf{W}^H\mathbf{y} - \mathbf{x}\right)\left(\mathbf{W}^H\mathbf{y} - \mathbf{x}\right)^H\right] \text{ s.t. } \mathbf{W}^H\mathbf{H} = \mathbf{I} \tag{2a}$$

$$= \mathbf{C}_{\mathbf{v}}^{-1}\mathbf{H}\left(\mathbf{H}^H\mathbf{C}_{\mathbf{v}}^{-1}\mathbf{H}\right)^{-1}, \tag{2b}$$

whether $\mathbf{x}$ is deterministic or random. Furthermore, since the matrix $\mathbf{W}^b$ is as well the solution of [2,6]:

$$\mathbf{W}^b = \arg\min_{\mathbf{W}}\left\{\mathbf{W}^H\mathbf{C}_{\mathbf{v}}\mathbf{W}\right\} \text{ s.t. } \mathbf{W}^H\mathbf{H} = \mathbf{I}, \tag{2c}$$

$\hat{\mathbf{x}}^b$ is also known as the minimum variance distortionless response estimator (MVDRE) [1,2,6]. However, it is well known that the performance achievable by the MVDRE strongly depends on accurate knowledge of the parametric model of the observations, that is on $\mathbf{H}$ and $\mathbf{C}_{\mathbf{v}}$, and it is not particularly robust in the presence of various types of differences between the model and the actual environment [1, Section 6.6], [7, Section 1], [8]. Thus linearly constrained minimum variance estimators (LCMVEs) [6,9,10] have been developed, in which additional linear constraints are imposed to make the MVDRE more robust [1, Section 6.7], [7, Section 1], [8]:

$$\mathbf{W}^b = \arg\min_{\mathbf{W}}\left\{\mathbf{W}^H\mathbf{C}_{\mathbf{v}}\mathbf{W}\right\} \text{ s.t. } \mathbf{W}^H\boldsymbol{\Lambda} = \boldsymbol{\Gamma}, \quad \boldsymbol{\Lambda} = \left[\mathbf{H}\ \ \boldsymbol{\Xi}\right], \quad \boldsymbol{\Gamma} = \left[\mathbf{I}\ \ \boldsymbol{\Upsilon}\right], \tag{3a}$$

$$= \mathbf{C}_{\mathbf{v}}^{-1}\boldsymbol{\Lambda}\left(\boldsymbol{\Lambda}^H\mathbf{C}_{\mathbf{v}}^{-1}\boldsymbol{\Lambda}\right)^{-1}\boldsymbol{\Gamma}^H, \tag{3b}$$

where $\boldsymbol{\Xi}$ and $\boldsymbol{\Upsilon}$ are known matrices of the appropriate dimensions, at the expense of an increase of the minimum MSE achieved, since additional degrees of freedom are used by the LCMVE in order to satisfy these constraints. However, firstly, the closed-form solution of the LCMVE (3b) requires the inversion of $\mathbf{C}_{\mathbf{v}}$, which can be too computationally complex for numerous real-world applications. Secondly, $\mathbf{C}_{\mathbf{v}}$ may be unknown and must be learned by an adaptive technique. Interestingly enough, if $\mathbf{x}$ and $\mathbf{v}$ are uncorrelated, $\mathbf{C}_{\mathbf{v}}$ can be replaced by $\mathbf{C}_{\mathbf{y}}$ in (1b), (2b) and (3b), which means that either $\mathbf{C}_{\mathbf{v}}$ can be learned from auxiliary data containing noise only, if available, or $\mathbf{C}_{\mathbf{y}}$ can be used instead and learned from the observations. Therefore, when several observations $\{\mathbf{y}_1,\dots,\mathbf{y}_k\}$ are available, adaptive implementations of the LCMVE have been developed resorting to constrained stochastic gradient [6,11], constrained recursive least squares [12,13] and constrained Kalman-type [14,15] algorithms. The known equivalence between the LCMVE and the generalized sidelobe canceller processor [9,10,16] also allows resorting to standard (unconstrained) stochastic gradient or recursive least squares algorithms [2]. These recursive algorithms belong to the set of sequential estimation algorithms compatible with applications where the observations become available sequentially and, immediately upon receipt of new observations, it is desirable to determine new estimates based upon all previous observations (including the current ones). This is an attractive formulation for embedded systems in which computational time and memory are at a premium, since it does not require that all observations be available for simultaneous ("batch") processing. Last, this can be computationally beneficial in cases in which the number of observations is much larger than the number of signals [17].
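To make the batch form concrete, here is a minimal numerical sketch (Python/NumPy; the helper name `lcmve_weights` is ours, not the authors') of the closed-form weights (3b), with the MVDRE recovered as the special case $\boldsymbol{\Lambda} = \mathbf{H}$, $\boldsymbol{\Gamma} = \mathbf{I}$. As noted above, a sample covariance estimate of $\mathbf{C}_{\mathbf{y}}$ may stand in for $\mathbf{C}_{\mathbf{v}}$ when $\mathbf{x}$ and $\mathbf{v}$ are uncorrelated.

```python
import numpy as np

def lcmve_weights(Lam, Gam, C):
    """Batch LCMVE weights (3b): W = C^{-1} Lam (Lam^H C^{-1} Lam)^{-1} Gam^H.

    Lam: (N, P+Q) constraint matrix [H Xi]; Gam: (P, P+Q) response matrix [I Ups];
    C: (N, N) noise covariance (or an estimate of the observation covariance C_y).
    """
    CiLam = np.linalg.solve(C, Lam)                  # C^{-1} Lam, no explicit inverse
    G = Lam.conj().T @ CiLam                         # Lam^H C^{-1} Lam
    return CiLam @ np.linalg.solve(G, Gam.conj().T)  # (N, P) weight matrix

# Sanity check on the MVDRE special case (Lam = H, Gam = I): distortionless response.
rng = np.random.default_rng(0)
N, P = 8, 2
H = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))
W = lcmve_weights(H, np.eye(P), np.eye(N))           # white noise for the example
print(np.allclose(W.conj().T @ H, np.eye(P)))        # True: W^H H = I
```

Using `solve` rather than forming $\mathbf{C}_{\mathbf{v}}^{-1}$ explicitly avoids the numerically fragile inversion, but the cost still scales with the full observation size, which is the bottleneck discussed above.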

However, the aforementioned recursive algorithms can only update the LCMVE (3b) sequentially in non-stationary environments, i.e. when the observation model changes over time ($\mathbf{y}_l = \mathbf{H}_l\mathbf{x} + \mathbf{v}_l$, $1 \le l \le k$), for a given set of linear constraints $\mathbf{W}^H\boldsymbol{\Lambda} = \boldsymbol{\Gamma}$ [2,6,11-15], which defines the set of recursive LCMVEs for stationary constraints. An example of a recursive LCMVE for non-stationary constraints in non-stationary environments is given by the MVDRE $\hat{\mathbf{x}}^b_k$ of $\mathbf{x}$, based on observations up to and including time $k$. Indeed, provided that the additive noise sequence $\{\mathbf{v}_1,\dots,\mathbf{v}_k\}$ is temporally uncorrelated, $\hat{\mathbf{x}}^b_k$ follows a predictor/corrector recursion similar to the Kalman Filter [17, Section 1], [18]:

$$\hat{\mathbf{x}}^b_k = \hat{\mathbf{x}}^b_{k-1} + \mathbf{W}^{bH}_k\left(\mathbf{y}_k - \mathbf{H}_k\hat{\mathbf{x}}^b_{k-1}\right), \qquad \hat{\mathbf{x}}^b_1 = \left(\mathbf{H}_1^H\mathbf{C}_{\mathbf{v}_1}^{-1}\mathbf{H}_1\right)^{-1}\mathbf{H}_1^H\mathbf{C}_{\mathbf{v}_1}^{-1}\mathbf{y}_1, \tag{4}$$

where $\mathbf{W}^b_k$ is analogous to a Kalman gain at time $k$. In this case, the set of constraints (2c) is non-stationary since it is defined as $\mathbf{W}^H\overline{\mathbf{H}}_k = \mathbf{I}$, where $\overline{\mathbf{H}}_k$ is the matrix resulting from the vertical concatenation of the $k$ matrices $\mathbf{H}_1,\dots,\mathbf{H}_k$, and $\mathbf{W}$ is an unknown matrix of the appropriate dimensions. Of course, from a theoretical point of view, if all the observations $\{\mathbf{y}_1,\dots,\mathbf{y}_k\}$ are stacked into a single vector $\overline{\mathbf{y}}_k^T = \left[\mathbf{y}_1^T,\dots,\mathbf{y}_k^T\right]$, the "batch form" (3b) obtained from $\overline{\mathbf{y}}_k$ allows one to implement LCMVEs with non-stationary constraints, which are, unfortunately, hardly likely to remain computable as the size of $\overline{\mathbf{y}}_k$ increases. Therefore the novel contribution of the present communication is to introduce, provided that the additive noise sequence $\{\mathbf{v}_1,\dots,\mathbf{v}_k\}$ is temporally uncorrelated, the family of linear constraints yielding a LCMVE which can be computed recursively in the form of (4) in place of the "batch form" (3b). It appears that this family only contains non-stationary constraints, including the aforementioned MVDRE. A particularly noteworthy feature of the recursive formulation introduced is to be fully adaptive in the context of sequential estimation, as it allows at each new observation to incorporate or not new constraints. The relevance of the proposed recursive formulation of the LCMVE is exemplified in Section 3 in the context of array processing.
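As a preview of what such a recursion looks like in practice, the sketch below implements the MVDRE recursion (4). It is a minimal NumPy illustration under the stated temporal-uncorrelation assumption, not the authors' code: the function name `mvdre_recursive` is hypothetical, and the information-form gain $\mathbf{W}^{bH}_k = \mathbf{J}_k^{-1}\mathbf{H}_k^H\mathbf{C}_{\mathbf{v}_k}^{-1}$ with $\mathbf{J}_k = \mathbf{J}_{k-1} + \mathbf{H}_k^H\mathbf{C}_{\mathbf{v}_k}^{-1}\mathbf{H}_k$ is one standard way to realize it.

```python
import numpy as np

def mvdre_recursive(Hs, Cs, ys):
    """MVDRE via the predictor/corrector recursion (4), information form.

    Hs, Cs, ys: lists of H_k (N_k x P), C_{v_k} (N_k x N_k) and y_k (N_k,),
    with the noise sequence assumed temporally uncorrelated.
    """
    # Initialization (k = 1): x_1 = (H_1^H C_1^{-1} H_1)^{-1} H_1^H C_1^{-1} y_1.
    J = Hs[0].conj().T @ np.linalg.solve(Cs[0], Hs[0])
    x = np.linalg.solve(J, Hs[0].conj().T @ np.linalg.solve(Cs[0], ys[0]))
    for Hk, Ck, yk in zip(Hs[1:], Cs[1:], ys[1:]):
        J += Hk.conj().T @ np.linalg.solve(Ck, Hk)   # J_k = J_{k-1} + H_k^H C_k^{-1} H_k
        innov = yk - Hk @ x                          # predictor: innovation y_k - H_k x_{k-1}
        x = x + np.linalg.solve(J, Hk.conj().T @ np.linalg.solve(Ck, innov))  # corrector
    return x
```

The per-step cost is fixed (it depends on $P$ and $N_k$ only), rather than growing with the size of the stacked observation vector as the batch form does.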

2. Recursive linearly constrained minimum variance estimators

In the following: a) the vector space of complex matrices with $N$ rows and $P$ columns is denoted $\mathcal{M}_{\mathbb{C}}(N,P)$; b) the matrix resulting from the vertical concatenation of $k$ matrices $\mathbf{A}_1,\dots,\mathbf{A}_k$ with the same number of columns is denoted $\overline{\mathbf{A}}_k$. We consider the linear measurement/observation model:

$$\mathbf{y}_k = \mathbf{H}_k\mathbf{x} + \mathbf{v}_k, \quad k \ge 1, \tag{5a}$$

where $\mathbf{x}$ is a $P$-dimensional complex unknown vector, $\mathbf{y}_k$ is an $N_k$-dimensional complex measurement/observation vector, $\mathbf{H}_k \in \mathcal{M}_{\mathbb{C}}(N_k,P)$ and the complex noise sequence $\{\mathbf{v}_k\}_{k \ge 1}$ is zero-mean and temporally uncorrelated. Then (5a) can be extended on a horizon of $k$ points from the first observation as:

$$\overline{\mathbf{y}}_k = \begin{bmatrix}\mathbf{y}_1\\ \vdots\\ \mathbf{y}_k\end{bmatrix} = \begin{bmatrix}\mathbf{H}_1\\ \vdots\\ \mathbf{H}_k\end{bmatrix}\mathbf{x} + \begin{bmatrix}\mathbf{v}_1\\ \vdots\\ \mathbf{v}_k\end{bmatrix} = \overline{\mathbf{H}}_k\mathbf{x} + \overline{\mathbf{v}}_k, \qquad \overline{\mathbf{y}}_k,\ \overline{\mathbf{v}}_k \in \mathcal{M}_{\mathbb{C}}\left(\overline{N}_k,1\right),\ \ \overline{\mathbf{H}}_k \in \mathcal{M}_{\mathbb{C}}\left(\overline{N}_k,P\right),\ \ \overline{N}_k = \sum_{l=1}^{k} N_l. \tag{5b}$$

Let $\overline{\mathbf{W}}_k = \left[\begin{smallmatrix}\mathbf{D}_{k-1}\\ \mathbf{W}_k\end{smallmatrix}\right]$ where $\mathbf{D}_{k-1} \in \mathcal{M}_{\mathbb{C}}\left(\overline{N}_{k-1},P\right)$ and $\mathbf{W}_k \in \mathcal{M}_{\mathbb{C}}(N_k,P)$. The aim is to look for the family of linear constraints:

$$\overline{\mathbf{W}}_k^H\overline{\boldsymbol{\Lambda}}_k = \boldsymbol{\Gamma}_k, \quad \overline{\boldsymbol{\Lambda}}_k = \left[\overline{\mathbf{H}}_k\ \ \overline{\boldsymbol{\Xi}}_k\right], \quad \boldsymbol{\Gamma}_k = \left[\mathbf{I}\ \ \boldsymbol{\Upsilon}_k\right], \tag{6}$$

where $\overline{\boldsymbol{\Xi}}_k$ and $\boldsymbol{\Upsilon}_k$ are known matrices of the appropriate dimensions, yielding a LCMVE $\hat{\mathbf{x}}^b_k = \overline{\mathbf{W}}^{bH}_k\overline{\mathbf{y}}_k$ where, as in (3a) and (3b):

$$\overline{\mathbf{W}}^b_k = \arg\min_{\overline{\mathbf{W}}_k}\left\{\overline{\mathbf{W}}_k^H\mathbf{C}_{\overline{\mathbf{v}}_k}\overline{\mathbf{W}}_k\right\} \text{ s.t. } \overline{\mathbf{W}}_k^H\overline{\boldsymbol{\Lambda}}_k = \boldsymbol{\Gamma}_k \tag{7a}$$

$$= \mathbf{C}_{\overline{\mathbf{v}}_k}^{-1}\overline{\boldsymbol{\Lambda}}_k\left(\overline{\boldsymbol{\Lambda}}_k^H\mathbf{C}_{\overline{\mathbf{v}}_k}^{-1}\overline{\boldsymbol{\Lambda}}_k\right)^{-1}\boldsymbol{\Gamma}_k^H, \tag{7b}$$

which can be computed according to a predictor/corrector recursion of the form, $k \ge 2$:

$$\hat{\mathbf{x}}^b_k = \hat{\mathbf{x}}^b_{k-1} + \mathbf{W}^{bH}_k\left(\mathbf{y}_k - \mathbf{H}_k\hat{\mathbf{x}}^b_{k-1}\right) = \left(\mathbf{I} - \mathbf{W}^{bH}_k\mathbf{H}_k\right)\hat{\mathbf{x}}^b_{k-1} + \mathbf{W}^{bH}_k\mathbf{y}_k. \tag{8}$$
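As a quick numerical consistency check (our own sketch, not from the paper), one can stack a small model as in (5b), evaluate the batch estimator for the MVDRE constraints (the case $\overline{\boldsymbol{\Lambda}}_k = \overline{\mathbf{H}}_k$, $\boldsymbol{\Gamma}_k = \mathbf{I}$ of (7b)), and verify that it matches the recursion (8) with the information-form gain, as expected when the noise blocks are temporally uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(1)
P, k, N = 3, 5, 4
x = rng.standard_normal(P)                                  # real data for simplicity
Hs = [rng.standard_normal((N, P)) for _ in range(k)]
Cs = [np.diag(rng.uniform(0.5, 2.0, N)) for _ in range(k)]  # uncorrelated noise blocks
ys = [H @ x + rng.multivariate_normal(np.zeros(N), C) for H, C in zip(Hs, Cs)]

# Batch MVDRE from the stacked model (5b), with block-diagonal C_{vbar_k}.
Hbar, ybar = np.vstack(Hs), np.concatenate(ys)
Cbar = np.zeros((k * N, k * N))
for l, C in enumerate(Cs):
    Cbar[l * N:(l + 1) * N, l * N:(l + 1) * N] = C
J = Hbar.T @ np.linalg.solve(Cbar, Hbar)
x_batch = np.linalg.solve(J, Hbar.T @ np.linalg.solve(Cbar, ybar))

# Recursive form (8): information update, then correction by the innovation.
Jr = Hs[0].T @ np.linalg.solve(Cs[0], Hs[0])
x_rec = np.linalg.solve(Jr, Hs[0].T @ np.linalg.solve(Cs[0], ys[0]))
for H, C, y in zip(Hs[1:], Cs[1:], ys[1:]):
    Jr += H.T @ np.linalg.solve(C, H)
    x_rec = x_rec + np.linalg.solve(Jr, H.T @ np.linalg.solve(C, y - H @ x_rec))

print(np.allclose(x_batch, x_rec))                          # expected: True
```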

A key point to solve the problem at hand is to notice that, since $\mathbf{C}_{\mathbf{v}_l,\mathbf{v}_k} = \mathbf{C}_{\mathbf{v}_k}\delta_{kl}$, then for any $\overline{\mathbf{W}}_k$ satisfying (6):

$$\mathbf{P}_k\left(\overline{\mathbf{W}}_k\right) = \overline{\mathbf{W}}_k^H\mathbf{C}_{\overline{\mathbf{v}}_k}\overline{\mathbf{W}}_k = \mathbf{D}_{k-1}^H\mathbf{C}_{\overline{\mathbf{v}}_{k-1}}\mathbf{D}_{k-1} + \mathbf{W}_k^H\mathbf{C}_{\mathbf{v}_k}\mathbf{W}_k = \mathbf{P}_k\left(\mathbf{D}_{k-1},\mathbf{W}_k\right), \tag{9}$$

which suggests that some ad hoc linear constraints (6) could yield separable solutions for $\mathbf{D}_{k-1}$ and $\mathbf{W}_k$, which is investigated in a first step.

First step. If we recast $\overline{\boldsymbol{\Lambda}}_k = \left[\overline{\mathbf{H}}_k\ \ \overline{\boldsymbol{\Xi}}_k\right]$ as $\overline{\boldsymbol{\Lambda}}_k = \left[\begin{smallmatrix}\overline{\boldsymbol{\Lambda}}_{k-1}\\ \boldsymbol{\Lambda}_k\end{smallmatrix}\right]$, where $\overline{\boldsymbol{\Lambda}}_{k-1} = \left[\overline{\mathbf{H}}_{k-1}\ \ \overline{\boldsymbol{\Xi}}_{k-1}\right]$ and $\boldsymbol{\Lambda}_k = \left[\mathbf{H}_k\ \ \boldsymbol{\Xi}_k\right]$, then an equivalent form of (6) is:


4. Conclusions

In this communication, for multiple noisy linear observations whose noises are uncorrelated, we have introduced the family of linear constraints yielding a LCMVE computable via a predictor/corrector recursion similar to the Kalman Filter, in place of the "batch form". It appears that this family only contains non-stationary constraints. A noteworthy feature of the recursive formulation introduced is that it is fully adaptive in the context of sequential estimation, as it allows optional constraint additions that can be triggered by a preprocessing of each new observation or by external information on the environment.

Acknowledgment

This work has been partially supported by the DGA/MRIS (2015.60.0090.00.470.75.01).

Supplementary material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.sigpro.2018.03.016.

References

[1] H.L. Van Trees, Optimum Array Processing, Wiley-Interscience, 2002.

[2] P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation (4th ed.), Springer Science+Business Media, 2013.

[3] P.J. Schreier, L.L. Scharf, Statistical Signal Processing of Complex-Valued Data, Cambridge University Press, 2010.

[4] R.L. Plackett, Some theorems in least squares, Biometrika 37 (1950) 149-157.

[5] F.C. Schweppe, Sensor array data processing for multiple signal sources, IEEE Trans. Inf. Theory 14 (1968) 294-305.

[6] O.L. Frost, An algorithm for linearly constrained adaptive array processing, Proc. IEEE 60 (8) (1972) 926-935.

[7] J. Li, P. Stoica, Robust Adaptive Beamforming, Wiley-Interscience, 2006.

[8] S.A. Vorobyov, Principles of minimum variance robust adaptive beamforming design, Signal Process. 93 (2013) 3264-3277.

[9] L.J. Griffiths, C.W. Jim, An alternative approach to linearly constrained adaptive beamforming, IEEE Trans. Antennas Propag. 30 (1) (1982) 27-34.

[10] S. Werner, J.A. Apolinario Jr., M.L.R. de Campos, On the equivalence of RLS implementations of LCMV and GSC processors, IEEE Signal Process. Lett. 10 (12) (2003) 356-359.

[11] X. Guo, L. Chu, B. Li, Robust adaptive LCMV beamformer based on an iterative suboptimal solution, Radioengineering 24 (2) (2015) 572-582.

[12] L.S. Resende, J.M.T. Romano, M.G. Bellanger, A fast least squares algorithm for linearly constrained adaptive filtering, IEEE Trans. Signal Process. 44 (5) (1996) 1168-1174.

[13] R.C. de Lamare, Adaptive reduced-rank LCMV beamforming algorithms based on joint iterative optimisation of filters, Electron. Lett. 44 (8) (2008) 565-566.

[14] Y.H. Chen, C.T. Chiang, Adaptive beamforming using the constrained Kalman filter, IEEE Trans. Antennas Propag. 41 (11) (1993) 1576-1580.

[15] D. Cherkassky, S. Gannot, New insights into the Kalman filter beamformer: applications to speech and robustness, IEEE Signal Process. Lett. 23 (3) (2016) 376-380.

[16] B.R. Breed, J. Strauss, A short proof of the equivalence of LCMV and GSC beamforming, IEEE Signal Process. Lett. 9 (6) (2002) 168-169.

[17] J.L. Crassidis, J.L. Junkins, Optimal Estimation of Dynamic Systems (2nd ed.), CRC Press, Taylor & Francis Group, 2012.

[18] A.H. Sayed, T. Kailath, A state-space approach to adaptive RLS filtering, IEEE Signal Process. Mag. 11 (3) (1994) 18-60.
