
In the document The DART-Europe E-theses Portal (Page 96-100)

Adaptive EM-Kalman Filter

3.5 State Space Model Formulation

It is noteworthy that the size N of the matrix F_{22,k} should be chosen carefully. In fact, N must be greater than the maximum possible value of the periods, so that the period can be tracked. It can be noticed that the coefficients (1−α_k) b_k and α_k b_k are located respectively in the ⌊τ_k⌋-th and (⌊τ_k⌋+1)-th columns of F_{22,k} and F_{12,k}.

In order to perform the separation jointly for the K sources, we introduce the vector x_t, defined as the concatenation of the vectors {x_{k,t}}_{k=1:K}:

x_t = [x_{1,t}^T  x_{2,t}^T  · · ·  x_{K,t}^T]^T

which results in the time update equation (3.12). Moreover, by reformulating the expression of {y_t}, we obtain the observation equation (3.13).

We obtain the following state space model:

x_t = F x_{t−1} + G e_t      (3.12)

y_t = h^T x_t + v_t      (3.13)

where

• e_t = [e_{1,t}  e_{2,t}  · · ·  e_{K,t}]^T is the K×1 column vector resulting from the concatenation of the K innovations at time t. Its covariance matrix is the K×K diagonal matrix Q = diag(σ_1, · · · , σ_K).

• F is the Σ_{k=1}^K (p_k+N+2) × Σ_{k=1}^K (p_k+N+2) block diagonal matrix given by F = blockdiag(F_1, · · · , F_K). See Figure 3.1 for a two-source example.

• G is the Σ_{k=1}^K (p_k+N+2) × K matrix given by G = blockdiag(g_1, · · · , g_K).

• h is the Σ_{k=1}^K (p_k+N+2) × 1 column vector given by h = [h_1^T · · · h_K^T]^T, where h_k = [1 0 · · · 0]^T is of length (p_k+N+2).
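The block-diagonal assembly described above can be sketched as follows. This is a minimal illustration assuming the per-source matrices F_k and vectors g_k are already built; only the layout of F, G and h is taken from the text, the function and argument names are hypothetical:

```python
import numpy as np

def build_global_model(F_blocks, g_blocks):
    """Assemble the global F, G and h of (3.12)-(3.13) from per-source
    blocks. F_blocks[k] is the (p_k+N+2) x (p_k+N+2) matrix F_k and
    g_blocks[k] the matching column vector g_k."""
    sizes = [Fk.shape[0] for Fk in F_blocks]
    total = sum(sizes)
    K = len(F_blocks)
    F = np.zeros((total, total))
    G = np.zeros((total, K))
    h = np.zeros(total)
    start = 0
    for k, (Fk, gk) in enumerate(zip(F_blocks, g_blocks)):
        n = sizes[k]
        F[start:start + n, start:start + n] = Fk   # block-diagonal placement
        G[start:start + n, k] = gk                 # one column per source
        h[start] = 1.0                             # h_k = [1 0 ... 0]^T
        start += n
    return F, G, h
```

Because each source keeps its own block, the per-source models remain decoupled in the time update and interact only through the scalar observation y_t.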

Figure 3.1 shows a two-source example, representing both the state vector and the transition matrix F. The state vector consists of the concatenation of two per-source state vectors (each itself the concatenation of the source signal and of its short-term prediction error signal), one per source k. The transition matrix F is block diagonal (one block per source k); each source block F_k is composed of the three sub-matrices defined in (3.11).

It is obvious that the linear dynamic system derived previously depends on unknown parameters, gathered in the variable

θ = { {a_{k,n}}_{k∈{1,...,K}, n∈{1,...,p_k}},  {b_k}_{k∈{1,...,K}},  {σ_k}_{k∈{1,...,K}},  σ_v² }      (3.14)

Hence, a joint estimation of the sources (the state) and of θ is required. In the literature (see e.g. [70, 72, 118]), the EM-Kalman algorithm provides an efficient approach for iteratively estimating the parameters, and its convergence to the Maximum Likelihood solution is proven [10]. In the next section, the application of this algorithm to our case is developed.


Figure 3.1: State Space Model and State Vector for 2 sources.

3.6 Algorithm

The EM-Kalman algorithm estimates the parameters and the sources iteratively by alternating two steps, the E-step and the M-step [10]. In the M-step, an estimate θ̂ of the parameters is computed. In our problem there are two types of parameters. The first type comprises the parameters of the time update equation (3.12), namely the short-term and long-term coefficients and the innovation variances of the K sources. The second type is the parameter of the observation equation (3.13), the additive noise variance. From the state space model presented in the first part, and for each source k, the relation between the innovation process at time t−1 and the LT plus ST coefficients can be written as

e_{k,t−1} = v_k^T x̆_{k,t−1}      (3.15)

where v_k = [1, a_{k,1}, · · · , a_{k,p_k}, −(1−α_k) b_k, −α_k b_k]^T is a (p_k+3)×1 column vector and

x̆_{k,t−1} = [x_k(t−1, θ) · · · x_k(t−p_k−1, θ)  x̃_k(t−⌊τ_k⌋−1, θ)  x̃_k(t−⌊τ_k⌋−2, θ)]^T

is called the partial state, deduced from the full state x_t with the help of a selection matrix S_k. The relation between the partial state at time t−1 and the full state at time t is x̆_{k,t−1} = S_k x_t. The lag of one time sample between the full and partial states will be justified later and will lead to the fixed-lag smoothing.
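A selection matrix simply picks entries of the full state. The sketch below shows a generic construction; which indices to pass depends on the per-source state layout, which is not fully fixed in this excerpt, so the index list is an assumption of the example:

```python
import numpy as np

def selection_matrix(indices, full_dim):
    """Build a selection matrix S such that (S @ x)[i] = x[indices[i]].
    Each row contains a single 1 at the position of the selected entry."""
    S = np.zeros((len(indices), full_dim))
    for row, idx in enumerate(indices):
        S[row, idx] = 1.0
    return S
```

With such an S_k, the partial state is obtained as x̆_{k,t−1} = S_k x_t, and a partial covariance as S_k C S_k^T for any full-state covariance C.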

3.6.1 Partial states discussion

As we have just mentioned, the selection matrix and the extended form of the transition matrix allow the filtering to be turned into fixed-lag smoothing. This selection matrix also extracts from the source state the quantities needed to estimate the short-term plus long-term parameters. But note that this formulation allows the parameters (ST and LT) to be extracted either jointly or separately:


• The joint relation between the partial state and the innovation is the one defined before:

x̆_{k,t−1} = [x_k(t−1, θ) · · · x_k(t−p_k−1, θ)  x̃_k(t−⌊τ_k⌋−1, θ)  x̃_k(t−⌊τ_k⌋−2, θ)]^T
v_k = [1, a_{k,1}, · · · , a_{k,p_k}, −(1−α_k) b_k, −α_k b_k]^T
e_{k,t−1} = v_k^T x̆_{k,t−1}

• The alternative relation consists in decoupling the ST and LT parameters; its interest lies in the algorithm design possibilities that it offers:

x̆^ST_{k,t−1} = [x_k(t−1, θ) · · · x_k(t−p_k−1, θ)]^T
v^ST_k = [1, a_{k,1}, · · · , a_{k,p_k}]^T
x̃_{k,t−1} = (v^ST_k)^T x̆^ST_{k,t−1}

and

x̆^LT_{k,t−1} = [x̃_k(t−1, θ)  x̃_k(t−⌊τ_k⌋−1, θ)  x̃_k(t−⌊τ_k⌋−2, θ)]^T
v^LT_k = [1, −(1−α_k) b_k, −α_k b_k]^T
e_{k,t−1} = (v^LT_k)^T x̆^LT_{k,t−1}
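As an illustration, the decoupled relations can be evaluated numerically. The sketch below (argument names are hypothetical) first forms the ST residual x̃_k(t−1) from the AR coefficients, then the innovation e_{k,t−1} from the LT coefficients:

```python
import numpy as np

def innovation_decoupled(x_hist, xtilde_delayed, a, b, alpha):
    """Decoupled ST/LT evaluation for one source (sketch).
    x_hist:         [x_k(t-1), ..., x_k(t-p_k-1)]
    xtilde_delayed: [xtilde_k(t-tau-1), xtilde_k(t-tau-2)]
    a:              AR coefficients [a_{k,1}, ..., a_{k,p_k}]
    b, alpha:       long-term gain and fractional-delay weight."""
    v_st = np.concatenate(([1.0], a))   # v_k^ST = [1, a_1, ..., a_p]
    xtilde_t1 = v_st @ x_hist           # ST residual xtilde_k(t-1)
    # v_k^LT = [1, -(1-alpha) b, -alpha b] applied to the LT partial state
    e = xtilde_t1 - (1.0 - alpha) * b * xtilde_delayed[0] \
                  - alpha * b * xtilde_delayed[1]
    return xtilde_t1, e
```

The cascade makes the decoupling explicit: the ST stage produces the residual that the LT stage consumes, which is what allows the two parameter sets to be estimated with separate covariance recursions.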

This leads us to design two algorithms. The first one, called Joint-EMK, estimates the parameters jointly. The second one performs alternated estimation and is called Alt-EMK. Naturally, algorithms using only one aspect of the speech model are also investigated in the simulations.

3.6.2 Parameter estimation

After multiplying (3.15) by x̆^T_{k,t−1} on both sides, applying the conditional expectation operator E{· | y_{1:t}} and performing a matrix inversion, the following relation between the vector of coefficients and the innovation variance is deduced:

v_k = σ_k R^{−1}_{k,t−1} [1, 0 · · · 0]^T      (3.16)

The vector v_k contains all the parameters we want to estimate for source k. Note that although this was derived for the joint estimation, a similar procedure holds for the separate estimation, except that two covariance matrices are involved and two vectors are estimated. In (3.16) the covariance matrix R_{k,t−1} is defined as E{x̆_{k,t−1} x̆^T_{k,t−1} | y_{1:t}}. It is important to notice that the estimation of R_{k,t−1} uses observations up to time t, which is a fixed-lag smoothing treatment with lag = 1. As mentioned before, the relation between the partial state at time t−1 and the full state at time t is x̆_{k,t−1} = S_k x_t. The following key relation is used in the computation of the partial state covariance matrix:

R_{k,t−1} = S_k E{x_t x_t^T | y_{1:t}} S_k^T      (3.17)
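A minimal sketch of this M-step computation, assuming the full-state second moment E{x_t x_t^T | y_{1:t}} and the selection matrix S_k are available (function and argument names are hypothetical):

```python
import numpy as np

def estimate_coeffs(Ex_xT, S_k):
    """Apply (3.17) then (3.16): R_k = S_k E{x x^T | y} S_k^T, and
    v_k = sigma_k R_k^{-1} [1, 0, ..., 0]^T, with sigma_k fixed so that
    the first entry of v_k equals 1, as required by the model."""
    R = S_k @ Ex_xT @ S_k.T
    Rinv = np.linalg.inv(R)
    sigma_k = 1.0 / Rinv[0, 0]   # innovation variance estimate
    v_k = sigma_k * Rinv[:, 0]   # scaled first column of R^{-1}
    return v_k, sigma_k
```

Since [1, 0 · · · 0]^T selects the first column of R^{−1}_{k,t−1}, the whole coefficient vector comes out of a single matrix inversion per source.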

Notice here the transition from the fixed-lag smoothing with the partial state to the simple filtering with the full state. This fact justifies the selection of the partial state at time t−1 from the full state at time t, a selection made possible by the augmented form of the matrix F_k. In practice, the expectation is computed using a forgetting factor (λ < 1). If the alternative reduced state is used to estimate the ST and LT parameters separately, then different forgetting factors can be used. This point is not investigated here, but we claim that relaxing the covariance matrix of, e.g., the LT parameters when the period is changing can be useful for a quicker adaptation of the system.
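In a sketch, the forgetting-factor expectation amounts to an exponentially weighted recursion over partial-state outer products; the value of λ and the (1−λ) normalization are assumptions of the example:

```python
import numpy as np

def update_partial_cov(R_prev, x_partial, lam=0.99):
    """Exponentially weighted estimate of R_{k,t-1}:
    R_t = lam * R_{t-1} + (1 - lam) * x x^T, with forgetting factor lam < 1."""
    x = np.asarray(x_partial, dtype=float).reshape(-1, 1)
    return lam * R_prev + (1.0 - lam) * (x @ x.T)
```

Running two such recursions with different λ values for the ST and LT partial states is exactly the design freedom the decoupled formulation opens up.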

The innovation variance is simply deduced from the first component of the matrix R^{−1}_{k,t−1}. The estimation of the observation noise power σ_v² is achieved by maximizing the log-likelihood function log P(y_t | x_t, σ_v²) with respect to σ_v². The optimal value can easily be shown to be equal to

σ̂_v²(t) = E[(y_t − h^T x̂_{t|t})²] + h^T P_{t|t} h      (3.18)
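Approximating the expectation by the current squared residual, (3.18) can be sketched as:

```python
import numpy as np

def update_noise_variance(y_t, h, x_hat, P):
    """Observation-noise variance update following (3.18):
    sigma_v^2(t) = (y_t - h^T x_hat)^2 + h^T P h,
    with the expectation replaced by the current squared residual."""
    r = y_t - h @ x_hat
    return r * r + h @ P @ h
```

The second term h^T P_{t|t} h accounts for the uncertainty of the state estimate, so the variance is not underestimated when the filter is still converging.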

The time index (t) in σ̂_v²(t) denotes the iteration number. The computation of the partial covariance matrix R_{k,t−1} is achieved in the E-step. This matrix depends on the quantity E{x_t x_t^T | y_{1:t}}, the definition of which is

E{x_t x_t^T | y_{1:t}} = x̂_{t|t} x̂_{t|t}^T + P_{t|t}      (3.19)

where the quantities x̂_{t|t} and P_{t|t} are respectively the full estimated state and the full estimation error covariance, computed using the Kalman filtering equations. The algorithm needs an accurate initialization, which will be discussed afterwards. Let x̂_{k,t} denote the estimate of source k at time t.

Adaptive EM-Kalman Algorithm

• E-Step. Estimation of the sources covariance, with Kalman gain K_t = P_{t|t−1} h (h^T P_{t|t−1} h + σ̂_v²)^{−1}

• M-Step. Estimation of the AR parameters using linear prediction, for k = 1, . . . , K.
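One iteration of the E-step can be sketched with the standard Kalman filtering equations for model (3.12)-(3.13); the matrix names follow the text, everything else is a plain textbook implementation:

```python
import numpy as np

def kalman_step(x_prev, P_prev, y_t, F, G, Q, h, sigma_v2):
    """One Kalman filtering iteration (scalar observation)."""
    # Time update
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T
    # Measurement update
    s = h @ P_pred @ h + sigma_v2        # innovation variance
    K = P_pred @ h / s                   # Kalman gain K_t
    x_filt = x_pred + K * (y_t - h @ x_pred)
    P_filt = P_pred - np.outer(K, h) @ P_pred
    P_filt = 0.5 * (P_filt + P_filt.T)   # keep the covariance symmetric
    return x_filt, P_filt
```

The resulting x̂_{t|t} and P_{t|t} feed (3.19) and hence the M-step covariances of the next iteration.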

As previously mentioned, it is essential that the symmetry of the covariance matrices be preserved.
