
HAL Id: hal-01188371

https://hal.archives-ouvertes.fr/hal-01188371

Submitted on 29 Aug 2015


Subspace identification of switching model

Komi Pekpe, Gilles Gasso, Gilles Mourot, José Ragot

To cite this version:

Komi Pekpe, Gilles Gasso, Gilles Mourot, José Ragot. Subspace identification of switching model.

13th IFAC Symposium on System Identification, SYSID, Aug 2003, Rotterdam, Netherlands. ⟨hal-01188371⟩


Subspace identification of switching model

Komi Midzodzi PEKPE, Gilles GASSO†, Gilles MOUROT, José RAGOT

Centre de Recherche en Automatique de Nancy, CNRS UMR 7039
2, Avenue de la Forêt de Haye, 54516 Vandœuvre-lès-Nancy Cedex, France
Tel: (33) 3 83 59 57 04, Fax: (33) 3 83 59 56 44
E-mail: {kpekpe, gmourot, jragot}@ensem.inpl-nancy.fr

† Laboratoire PSI, CNRS FRE 2645
Place Émile Blondel, BP 08, F-76131 Mont-Saint-Aignan Cedex, France
Tel: (33) 2 32 95 98 75, Fax: (33) 2 32 95 97 08
E-mail: komi.gasso@insa-rouen.fr

Abstract — Subspace identification of switching models is considered in this paper. The switching model is assumed to be a sum of weighted linear models. The proposed method uses recursive subspace identification to estimate the switching function, and a least squares method to estimate the Markov parameters of the local models. To compute the weighting functions, a two-step algorithm (switching-time determination and model merging) is given. Finally, the local model parameters are estimated from the Markov parameters.

Keywords — switching model, Markov parameters, least squares, weighting function, recursive subspace identification.

I. Introduction

The objective of this paper is to identify switching models. Here a switching model is considered as a sum of weighted linear systems (local models), with only one model active at any time. A weighting function determines which model is active. Our goal is to estimate the weighting function and to identify the local models using only the knowledge of the inputs and outputs; knowing the weighting function then allows one to determine which local model is active. The switching times are estimated by a recursive subspace identification method. The proposed method is not sensitive to changes in the input dynamics, because the recursive subspace method of [4] is insensitive to them; the detected switching times therefore correspond only to changes in the system dynamics. To identify the local models, we use a least squares method: the parameters are obtained by extracting the Markov parameter matrix. The advantage of the proposed method is that no nonlinear optimization stage is required and the parameter estimation is unbiased.

After the formulation of the problem, the notations are introduced and the estimation of the Markov parameters is established. In section 4 the determination of the weighting function is performed, and the identification procedure for the local models is given. Finally an example illustrates the performance of the proposed method.

II. Formulation of the model

The output of the switching model can be modeled as a weighted sum of the outputs of h linear models as follows:

y_k = \sum_{s=1}^{h} \omega_{s,k}\, y_{s,k},   (1)

where y_k \in \mathbb{R}^{\ell}, each weight \omega_{s,k} \in \{0,1\} and \sum_{s=1}^{h} \omega_{s,k} = 1, \forall k \in [1,q] (q being the number of input-output measurements). \omega_{s,k} represents the weighting function and y_{s,k} the output of the s-th local model.

Each local model is assumed to be linear of order n_s and described by the equations:

x_{s,k+1} = A_s x_{s,k} + B_s u_k,
y_{s,k} = C_s x_{s,k} + D_s u_k + e_k   (2)

Here the output error e_k \in \mathbb{R}^{\ell} is assumed to be a zero-mean white noise sequence, uncorrelated with u_k \in \mathbb{R}^{m}, with covariance matrix

E[e_k e_t^T] = R > 0 if k = t, and 0 otherwise.

We suppose each local model is stable.

The inputs u_k and outputs y_k of the switching model are supposed to be known. The objective is to determine:

- the weighting function \omega_{s,k} of each model,
- the order n_s and the parameters of the h linear local models.

The parameters to be determined are A_s, B_s, C_s and D_s. Note that, to obtain the outputs weighted by the g-th weighting function, it suffices to multiply the global outputs y_k by \omega_{g,k}:

\omega_{g,k} y_k = \omega_{g,k} \sum_{s=1}^{h} \omega_{s,k} y_{s,k} = \omega_{g,k} y_{g,k}, \quad \forall k \in [1,q]   (3)

because

(\omega_{g,k})^2 = \omega_{g,k} \quad \text{and} \quad \omega_{g,k}\,\omega_{s,k} = 0 \ (g \neq s), \quad \forall k \in [1,q].   (4)
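The switching structure of equations (1)-(2) can be sketched numerically as follows; a minimal illustration assuming toy first-order local models (the matrices, the `simulate_switching` helper and the `active` schedule are illustrative, not from the paper). Each local model runs on the common input, and the binary weights select which output is observed.

```python
import numpy as np

def simulate_switching(models, active, U):
    """models: list of (A, B, C, D); active[k]: index of the model active at k;
    U: (q, m) inputs. Returns the (q, l) noise-free switched output (eq. 1)."""
    q = U.shape[0]
    # one state per local model, all driven by the same input (eq. 2)
    states = [np.zeros(A.shape[0]) for (A, B, C, D) in models]
    Y = []
    for k in range(q):
        y_k = 0.0
        for s, (A, B, C, D) in enumerate(models):
            y_s = C @ states[s] + D @ U[k]
            states[s] = A @ states[s] + B @ U[k]
            w = 1.0 if active[k] == s else 0.0  # w_{s,k} in {0,1}, sum_s w_{s,k} = 1
            y_k = y_k + w * y_s
        Y.append(y_k)
    return np.array(Y)

m1 = (np.array([[0.5]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))
m2 = (np.array([[-0.2]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]]))
U = np.ones((6, 1))
active = [0, 0, 0, 1, 1, 1]
Y = simulate_switching([m1, m2], active, U)
```

Note that properties (3)-(4) follow directly from the 0/1 weights: multiplying `Y` by one weighting sequence isolates the output samples of the corresponding local model.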


In the following, a recursive subspace identification technique is used to obtain the weighting function \omega_{s,k}, and least squares tools are used to obtain the local Markov parameters. The matrices used later are defined in the next section.

III. The system matrices

For the s-th model, we set the following definitions (note that, for simplicity, we write i instead of i_s throughout this article).

The output weighted block Hankel matrix of the s-th model is defined as (i > n_s):

Y^{\omega}_{s,i} = \begin{pmatrix} \omega_{s,i} y_{s,i} & \omega_{s,i+1} y_{s,i+1} & \cdots & \omega_{s,i+j-1} y_{s,i+j-1} \end{pmatrix}   (5)

The input block Hankel matrix is defined as:

U = \begin{pmatrix} u_1 & u_2 & \cdots & u_j \\ u_2 & u_3 & \cdots & u_{j+1} \\ \vdots & \vdots & \ddots & \vdots \\ u_i & u_{i+1} & \cdots & u_{i+j-1} \end{pmatrix}   (6)

The same definition is used for the block Hankel matrix E of the noise e_k.
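The input block Hankel matrix of equation (6) can be built as follows; a minimal numpy sketch where the function name `build_hankel` and the scalar toy input are illustrative, not from the paper.

```python
import numpy as np

def build_hankel(u, i, j):
    """u: (N, m) input sequence with N >= i+j-1. Returns the (i*m, j) block
    Hankel matrix of eq. (6): block row r holds u_r, u_{r+1}, ..., u_{r+j-1}
    (0-based indexing here)."""
    N, m = u.shape
    assert N >= i + j - 1
    blocks = [np.column_stack([u[r + c] for c in range(j)]) for r in range(i)]
    return np.vstack(blocks)

u = np.arange(10, dtype=float).reshape(-1, 1)  # scalar input u_k = k
U = build_hankel(u, i=3, j=4)
# rows: [0,1,2,3], [1,2,3,4], [2,3,4,5]
```

The noise Hankel matrix E is built the same way from the sequence e_k.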

The state sequence matrix X_s and the weighting matrix \Gamma_s of the s-th model are defined as:

X_s = \begin{pmatrix} x_{s,1} & x_{s,2} & \cdots & x_{s,j} \end{pmatrix} \in \mathbb{R}^{n_s \times j}   (7)

and

\Gamma_s = \begin{pmatrix} \omega_{s,i} & 0 & \cdots & 0 \\ 0 & \omega_{s,i+1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \omega_{s,i+j-1} \end{pmatrix}   (8)

Then we have:

Y^{\omega}_{s,i} = \underbrace{\begin{pmatrix} y_{s,i} & y_{s,i+1} & \cdots & y_{s,i+j-1} \end{pmatrix}}_{Y_{s,i}} \Gamma_s   (9)

The extended observability matrix \mathcal{O}_{i,s} of the s-th model is defined as:

\mathcal{O}_{i,s} = \begin{pmatrix} C_s \\ C_s A_s \\ \vdots \\ C_s A_s^{i-1} \end{pmatrix} \in \mathbb{R}^{\ell i \times n_s}   (10)

The extended controllability matrix \mathcal{C}_{i,s} of the s-th model is defined as:

\mathcal{C}_{i,s} = \begin{pmatrix} B_s & A_s B_s & \cdots & A_s^{i-1} B_s \end{pmatrix}   (11)

The Markov parameter matrices of the deterministic part H^d_{s,i} and of the stochastic part H^{st}_{s,i} of the system are defined as¹:

H^d_{s,i} = \begin{pmatrix} C_s A_s^{i-2} B_s & C_s A_s^{i-3} B_s & \cdots & C_s B_s & D_s \end{pmatrix}   (12)

H^{st}_{s,i} = \begin{pmatrix} 0 & 0 & \cdots & 0 & I \end{pmatrix}   (13)

IV. Estimation of the Markov parameter matrix for the local model

Local matrix input-output equation

It is essential to write the matrix input-output relation that allows the extraction of the Markov parameter matrix. The following theorem summarizes this relation.

Theorem 1:

Y^{\omega}_{s,i} = C_s A_s^{i-1} X_s \Gamma_s + H^d_{s,i} U \Gamma_s + H^{st}_{s,i} E \Gamma_s   (14)

The proof of the theorem is established in appendix 1.

Now the objective is to eliminate the terms depending on the state X_s and on the noise E, in order to obtain the Toeplitz matrix H^d_{s,i}. The proposed method uses least squares and may be summarized in the next theorem.

Theorem 2: Under the assumptions that:

1. the s-th local model is stable,
2. the matrix U\Gamma_s has full rank,
3. i is large enough for A_s^{i-1} to be neglected,

we have:

Y^{\omega}_{s,i} (U\Gamma_s)^T \left( U\Gamma_s (U\Gamma_s)^T \right)^{-1} \xrightarrow{\ j \to \infty\ } H^d_{s,i}   (15)

The proof of the theorem can be found in appendix 2.

From equation (15) we thus have an estimate of the Markov parameter matrix H^d_{s,i}.

Remark 1: to estimate the Markov parameter matrix H^d_{s,i} by equation (15), the weighting matrix \Gamma_s must be known. Its computation by recursive subspace identification is the subject of the next section.
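The least squares estimate of equation (15) can be sketched as follows; a toy check assuming the weighting matrix is known, all weights equal to 1 and the state term absent, so the estimate should return the Markov matrix exactly. The names `estimate_markov`, `Yw`, `G` are illustrative.

```python
import numpy as np

def estimate_markov(Yw, U, G):
    """Eq. (15): Yw (l x j) weighted output Hankel block, U (im x j) input
    Hankel matrix, G (j x j) diagonal weighting matrix Gamma_s."""
    UG = U @ G
    return Yw @ UG.T @ np.linalg.pinv(UG @ UG.T)

# toy check: Yw built exactly as H @ U (state and noise terms absent),
# so least squares must recover H
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 6))   # plays the role of H^d_{s,i}
U = rng.standard_normal((6, 50))
G = np.eye(50)                    # Gamma_s with all w_{s,k} = 1
Yw = H @ U @ G
H_hat = estimate_markov(Yw, U, G)
```

With noisy data the product with the noise Hankel term vanishes as j grows, since e_k is zero-mean and uncorrelated with the inputs; this is what theorem 2 formalizes.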

V. Determination of the weighting function

It is important to note that the previous result involves the state matrix of the system, which depends on the active local model.

The recursive subspace approach is used to determine the switching times between local models. As the local models are stable, recursive subspace identification can be applied. This identification method does not detect changes in the input dynamics; therefore, if a change

¹ The superscripts "d" and "st" stand for "deterministic" and "stochastic" respectively.


is detected, it corresponds only to a change in the system dynamics. We refer the reader to [4] for the details of the recursive subspace identification procedure.

First, the weighting functions are computed with the recursive subspace algorithm. Then the models which are close to each other are merged, and the new weighting functions are recomputed.

A. The first computation of the weighting functions

The switching times are determined by the recursive subspace identification algorithm. Here we summarize the main stages of the algorithm of Oku et al. [4].

Let us define:

\Phi_j = \begin{pmatrix} \varphi_i(1) & \varphi_i(2) & \cdots & \varphi_i(j-i) \end{pmatrix}   (16)

\varphi_i(k) = u_i(k) = \begin{pmatrix} u_k \\ \vdots \\ u_{i+k-1} \end{pmatrix}, \quad k \in [1, j-i]   (17)

U_j = \begin{pmatrix} u_i(i) & u_i(i+1) & \cdots & u_i(j) \end{pmatrix}   (18)

Y_j is defined similarly to (18).

\Pi_j, the product of the extended observability matrix and the extended controllability matrix, can be estimated by the formula:

\hat{\Pi}_j = Y_j \Pi^{\perp}_{U_j} \Phi_j^T M_j \quad \text{with} \quad M_j = \left( \Phi_j \Pi^{\perp}_{U_j} \Phi_j^T \right)^{-1}

and \Pi^{\perp}_F = I - F^T (F F^T)^{-1} F, where I is the identity matrix of appropriate size and F is a matrix.

If we define P_j = (U_j U_j^T)^{-1}, then \hat{\Pi}_j can also be obtained by the usual recursive least squares formulation with a forgetting factor \gamma (\gamma < 1, see [4]):

\hat{\Pi}_j = \hat{\Pi}_{j-1} - \beta_j (e_j + \hat{\Pi}_{j-1} q_j)\, q_j^T M_{j-1},   (19)

M_j = \frac{1}{\gamma} \left( M_{j-1} - \beta_j M_{j-1} q_j q_j^T M_{j-1} \right),   (20)

P_j = \frac{1}{\gamma} \left( P_{j-1} - \alpha_j P_{j-1} u_i(j) u_i(j)^T P_{j-1} \right),   (21)

Y_j U_j^T = \gamma\, Y_{j-1} U_{j-1}^T + y_i(j) u_i(j)^T,   (22)

\Phi_j U_j^T = \gamma\, \Phi_{j-1} U_{j-1}^T + \varphi_i(j-i) u_i(j)^T,   (23)

\alpha_j = \left( \gamma + u_i(j)^T P_{j-1} u_i(j) \right)^{-1},   (24)

\beta_j = \left( \frac{1}{\alpha_j} + q_j^T M_{j-1} q_j \right)^{-1},   (25)

e_j = y_i(j) - Y_{j-1} U_{j-1}^T P_{j-1} u_i(j),   (26)

q_j = \Phi_{j-1} U_{j-1}^T P_{j-1} u_i(j) - \varphi_i(j-i).   (27)

Let \lambda be a threshold designed according to the \chi^2 distribution with \ell i degrees of freedom. The distance D(\hat{\Pi}_j, \Pi_j) from the estimated parameter \hat{\Pi}_j to the true parameter \Pi_j is defined as:

D(\hat{\Pi}_j, \Pi_j) = \mathrm{Trace}\left( (\hat{\Pi}_j - \Pi^{*}_j)\, \frac{1}{\hat{\sigma}^2_{2,j}} \left( \Phi_j \Pi^{\perp}_{U_j} \Phi_j^T \right) (\hat{\Pi}_j - \Pi^{*}_j)^T \right),   (28)

where \varepsilon_{2,j} = Y_j - \hat{Y}_j, \hat{Y}_j being an estimate of Y_j. An estimate of the variance \sigma^2_{2} of the modeling error is given by [4]:

\hat{\sigma}^2_{2,j} = \frac{1}{i\left[(j-i+1) - 2mi\right]} \, \mathrm{Trace}\left( (Y_j - \hat{\Pi}_j \Phi_j)\, \Pi^{\perp}_{U_j}\, (Y_j - \hat{\Pi}_j \Phi_j)^T \right)   (29)

The change test is defined as (see [4]):

if D(\hat{\Pi}_j, \Pi_j) < \lambda: no change has occurred,
if D(\hat{\Pi}_j, \Pi_j) \geq \lambda: a change has occurred.
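The change test can be sketched as follows; a minimal numpy sketch where `Phi_proj` stands for the weighting term \Phi_j \Pi^{\perp}_{U_j} \Phi_j^T, `sigma2` for the modeling-error variance of equation (29), and all names are illustrative, not from [4].

```python
import numpy as np

def change_detected(Pi_hat, Pi_star, Phi_proj, sigma2, lam):
    """Eq. (28): weighted-trace distance between the recursive estimate
    Pi_hat and the sliding-window reference Pi_star, compared with the
    chi-square threshold lam. Returns (decision, distance)."""
    dPi = Pi_hat - Pi_star
    D = np.trace(dPi @ Phi_proj @ dPi.T) / sigma2
    return D > lam, D

Phi_proj = np.eye(3)
# identical estimates: distance 0, no change flagged
same, D0 = change_detected(np.zeros((2, 3)), np.zeros((2, 3)), Phi_proj, 1.0, 9.0)
# large discrepancy: change flagged
diff, D1 = change_detected(10.0 * np.ones((2, 3)), np.zeros((2, 3)), Phi_proj, 1.0, 9.0)
```

In practice the threshold `lam` would be taken from the \chi^2 distribution with \ell i degrees of freedom, as in the paper.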

Note that in the implementation the true parameter \Pi_j is replaced by an approximation \Pi^{*}_j (see [4]), computed as a least squares estimate of \Pi_j over a sliding window of length L:

\Pi^{*}_j = \arg\min_{\Pi^{*}} \mathrm{Trace}\left( (Y^{*}_j - \Pi^{*} \Phi^{*}_j)\, \Pi^{\perp}_{U_j}\, (Y^{*}_j - \Pi^{*} \Phi^{*}_j)^T \right),   (30)

with

Y^{*}_j = \begin{pmatrix} y_i(j-L+2i) & \cdots & y_i(j) \end{pmatrix}   (31)

\Phi^{*}_j = \begin{pmatrix} \varphi_i(j-L+i) & \cdots & \varphi_i(j-i) \end{pmatrix}   (32)

Once a change is detected at time instant t, the recursive update equations are re-initialized at time t; the re-initialization technique is proposed in [4].

The first computation of the weighting functions is described by algorithm 1 below.

Algorithm 1 (first computation of the weighting functions):

Step 1: (initial conditions) set s ← 1 and k ← 0, where s is the local model index and k is the time index².

Step 2: (no change has occurred) if no change occurs (i.e. D(\hat{\Pi}_j, \Pi^{*}_j) < \lambda) then: k ← k+1, go to step 4.

Step 3: (a change has occurred) if a change occurs (i.e. D(\hat{\Pi}_j, \Pi^{*}_j) \geq \lambda) then: s ← s+1 and k ← k+1.

Step 4: (computation of the weighting function) model s is active: \omega_{s,k} ← 1 and \omega_{r,k} ← 0, \forall r \neq s.

Step 5: (stop test) h ← s (h is the number of identified models);
if k < q then go to step 2,
if k = q then stop.
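Algorithm 1 amounts to turning the detected switching instants into binary weighting functions. A minimal sketch, assuming the change detector has already produced the list of switching times; the function name `weights_from_switch_times` and its inputs are illustrative.

```python
import numpy as np

def weights_from_switch_times(switch_times, q):
    """switch_times: sorted detection instants (1-based); q: data length.
    Returns an (h, q) array w with w[s, k] = 1 iff model s+1 is active at
    time k+1, so exactly one weight is 1 per instant (eq. 1 constraints)."""
    bounds = [1] + list(switch_times) + [q + 1]
    h = len(bounds) - 1                     # number of identified models
    w = np.zeros((h, q))
    for s in range(h):
        w[s, bounds[s] - 1: bounds[s + 1] - 1] = 1.0
    return w

# switching times of the simulation example of Section VII
w = weights_from_switch_times([1000, 1800, 2500], q=3500)
```

At this stage each detected segment defines its own local model; segments governed by the same dynamics are merged afterwards (Section V-B).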

B. The fusion of models

After the first computation of the weighting functions by the preceding algorithm, we estimate the Markov parameter matrices H^d_{s,i} of the h local models. Then we seek the models which are "similar". First, we estimate the covariance of the Markov parameter matrix H^d_{s,i}.

² The left arrow "←" denotes the replacement of the value of the left-hand side by that of the right-hand side.


B.1 Covariance estimate

The variance \sigma^2_y of the estimation error can be estimated without bias by:

\hat{\sigma}^2_y = \frac{1}{i(j - im)} \, \mathrm{trace}(\varepsilon \varepsilon^T),   (33)

where \varepsilon = Y^{\omega}_{s,i} - \hat{Y}^{\omega}_{s,i} (see (9)), \hat{Y}^{\omega}_{s,i} being an estimate of Y^{\omega}_{s,i}.

The covariance \Sigma_{H^d_{s,i}} of the estimated Markov parameter matrix H^d_{s,i} is estimated by the following lemma.

Lemma 1: An unbiased estimate of \Sigma_{H^d_{s,i}} is given by the formula:

\hat{\Sigma}_{H^d_{s,i}} = i\, \hat{\sigma}^2_y \left[ U\Gamma_s (U\Gamma_s)^T \right]^{-1}.   (34)

We can now test which models are close to each other.

B.2 The "similar" models

The aim is to find the models close to each other (or "similar"). For that purpose, we define the distance between the estimated Markov parameters of model s and those of model r as:

d(\hat{H}^d_{s,i}, \hat{H}^d_{r,i}) = \mathrm{Trace}\left( (\hat{H}^d_{s,i} - \hat{H}^d_{r,i})\, \hat{\Sigma}^{-1}_{H_{s,r}}\, (\hat{H}^d_{s,i} - \hat{H}^d_{r,i})^T \right),   (35)

with:

\hat{\Sigma}_{H_{s,r}} = \frac{N_s}{N_s+N_r} \hat{\Sigma}_{H^d_{s,i}} + \frac{N_r}{N_s+N_r} \hat{\Sigma}_{H^d_{r,i}},

where N_s = \sum_{k=1}^{q} \omega_{s,k} and N_r = \sum_{k=1}^{q} \omega_{r,k}.

If \delta is a threshold designed according to the \chi^2 distribution with \ell i degrees of freedom, then we perform the following "similarity test".

B.2.1 "Similarity test"

if d(\hat{H}^d_{s,i}, \hat{H}^d_{r,i}) \leq \delta: models s and r are "similar",
if d(\hat{H}^d_{s,i}, \hat{H}^d_{r,i}) > \delta: models s and r are not "similar".
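The similarity test can be sketched as follows; a minimal numpy sketch of the covariance-weighted distance, with illustrative names (`similarity_distance`, `similar`) and toy 1x2 "Markov matrices" standing in for the \hat{H}^d estimates.

```python
import numpy as np

def similarity_distance(Hs, Hr, Sig_s, Sig_r, Ns, Nr):
    """Eq. (35): Mahalanobis-like distance between two estimated Markov
    parameter matrices, weighted by the pooled covariance."""
    Sig = (Ns * Sig_s + Nr * Sig_r) / (Ns + Nr)  # pooled covariance
    d = Hs - Hr
    return np.trace(d @ np.linalg.inv(Sig) @ d.T)

def similar(Hs, Hr, Sig_s, Sig_r, Ns, Nr, delta):
    return similarity_distance(Hs, Hr, Sig_s, Sig_r, Ns, Nr) <= delta

H1 = np.array([[1.0, 2.0]])
H3 = np.array([[1.01, 2.0]])    # nearly identical to H1
H2 = np.array([[5.0, -1.0]])    # far from H1
Sig = 0.01 * np.eye(2)
# with threshold delta = 59 (cf. the example of Section VII),
# H1 and H3 are merged while H1 and H2 are kept apart
```

This mirrors the behaviour observed in the simulation example, where distances between similar models fall well below the threshold while dissimilar pairs exceed it by orders of magnitude.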

B.3 Model merging

IfEs0 is theset of models which are "similar"to s0, the models belonging to this set are replaced by the new model s0². The new weighting function for the new model s0²is:

!²s0;k = X

s2Es 0

!s,k for k=1,...,q; moreover we estimate the Markov parameters of the new model s0².

Remark 2: It is necessary to make a new "similarity test"

(see B.21) for the merging of the new model (after aérst merging), because it can exist two "similar" local models which have not been detected in theérst step, because of

the insu–ciency of data used to identify the local models.

But, by the new computation of the weighting functions and Markov parameters, the number of measurements used to identify the new local models is greater than those which are used to estimate the previous local models.

The merging of the local models is described by algorithm 2.

Algorithm 2 (model merging):

Step 1: (initialization) st ← 0.

Step 2: (computation of the Markov parameters) compute the Markov parameters (by theorem 2).

Step 3: (test) apply the "similarity test" (see B.2.1) and construct the sets of models which are "similar".

Step 4: (replacement of the weighting functions) for each set of models E_s, if the cardinality of E_s is greater than 1 then: 1) st ← 1; 2) \omega_{s,k} ← \omega^{*}_{s,k}, and cancel every model r \in E_s with r \neq s.

Step 5: (renumbering of the local models) reorganize the local model indices.

Step 6: (stop test) if st = 1 then go to step 1; if st = 0 then stop.

Having the final weighting functions, we now estimate the order and the parameters of each local model. The estimation procedure is described in the next section.

VI. Estimation of the system parameters

The goal of this section is to estimate the system order n_s and the matrices A_s, B_s, C_s and D_s of each local model.

The matrix D_s is directly obtained from the Markov parameter matrix H^d_{s,i} (see (12)).

Having the Markov parameters (by theorem 2), one can use the algorithm of Kung [3], Ho and Kalman [1], ERA [2], or Zeiger and McEwen [6] to estimate the system matrices A_s, B_s and C_s.

In this paper we take i > 2(n_s + 1). Let³ \nu = \mathrm{integer}(i/2). We now summarize the ERA algorithm (minimal and balanced realization [2]).

Build the Hankel matrices H^0_{\nu,s} and H^1_{\nu,s}, which contain the Markov parameters and are defined by:

H^k_{\nu,s} = \begin{pmatrix} C_s A_s^{k} B_s & C_s A_s^{k+1} B_s & \cdots & C_s A_s^{k+\nu-1} B_s \\ C_s A_s^{k+1} B_s & C_s A_s^{k+2} B_s & \cdots & C_s A_s^{k+\nu} B_s \\ \vdots & \vdots & \ddots & \vdots \\ C_s A_s^{k+\nu-1} B_s & C_s A_s^{k+\nu} B_s & \cdots & C_s A_s^{k+2\nu-2} B_s \end{pmatrix}   (36)

³ integer(i) is the integer part of i.

Compute a singular value decomposition of the matrix H^0_{\nu,s}:

H^0_{\nu,s} = \begin{pmatrix} U_1 & U_2 \end{pmatrix} \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix} \begin{pmatrix} V_1^T \\ V_2^T \end{pmatrix} \simeq U_1 S_1 V_1^T   (37)

where S_2 contains the neglected singular values.

The system order is equal to the number of singular values kept in S_1.

The extended observability matrix \mathcal{O}_{\nu,s} = U_1 S_1^{1/2} and the extended controllability matrix \mathcal{C}_{\nu,s} = S_1^{1/2} V_1^T are computed; note that H^0_{\nu,s} = \mathcal{O}_{\nu,s} \mathcal{C}_{\nu,s}.

The matrix C_s is given by the first \ell rows of \mathcal{O}_{\nu,s}, and the matrix B_s by the first m columns of \mathcal{C}_{\nu,s}. The matrix A_s is given by the formula:

A_s = S_1^{-1/2} U_1^T H^1_{\nu,s} V_1 S_1^{-1/2}.
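The ERA step above can be sketched in numpy as follows; a minimal single-input single-output toy example where the function name `era` and the true system are illustrative, not from the paper.

```python
import numpy as np

def era(markov, nu, n, l=1, m=1):
    """markov[k] = C A^k B (each l x m); nu: number of block rows/columns;
    n: retained model order. Returns (A, B, C) of a balanced realization,
    following eqs. (36)-(37)."""
    H0 = np.vstack([np.hstack([markov[r + c] for c in range(nu)]) for r in range(nu)])
    H1 = np.vstack([np.hstack([markov[r + c + 1] for c in range(nu)]) for r in range(nu)])
    Uu, s, Vt = np.linalg.svd(H0)
    U1, V1t = Uu[:, :n], Vt[:n, :]
    S1h = np.diag(np.sqrt(s[:n]))          # S_1^{1/2}
    Si = np.linalg.inv(S1h)                # S_1^{-1/2}
    A = Si @ U1.T @ H1 @ V1t.T @ Si        # A = S1^{-1/2} U1^T H1 V1 S1^{-1/2}
    C = (U1 @ S1h)[:l, :]                  # first l rows of O_{nu,s}
    B = (S1h @ V1t)[:, :m]                 # first m columns of C_{nu,s}
    return A, B, C

# true system: x_{k+1} = 0.5 x_k + u_k, y_k = x_k  =>  C A^k B = 0.5**k
markov = [np.array([[0.5 ** k]]) for k in range(8)]
A, B, C = era(markov, nu=3, n=1)
```

Here the true order is recovered by keeping the single dominant singular value, and the realization reproduces the pole at 0.5 and the first Markov parameter C B = 1 (up to the usual sign ambiguity of the SVD, which cancels in the products).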

VII. The simulation example

We consider a third-order system described as:

x_{k+1} = A_s x_k + B_s u_k
y_k = C_s x_k + D_s u_k + K e_k

where s takes the value 1 on the intervals [1, 999] and [1800, 2499], and the value 2 on the intervals [1000, 1799] and [2500, 3500], with:

A_1 = \begin{pmatrix} 0.32 & 0.31 & 0 \\ -0.32 & 0.31 & 0 \\ 0 & 0 & -0.18 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 0.9 & -0.7 \\ 0.71 & -0.5 \\ 0.8 & 0.47 \end{pmatrix}

C_1 = \begin{pmatrix} -0.55 & 0.2 & 0.8 \\ 0.45 & 0.3 & 0.58 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0.97 & 0.63 \\ -0.32 & 0.95 \end{pmatrix}

A_2 = \begin{pmatrix} -0.1 & -0.4 & 0 \\ 0.5 & -0.4 & 0 \\ 0 & 0 & 0.26 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 0.1 & -0.6 \\ 0.32 & -0.66 \\ 0.3 & 0.82 \end{pmatrix}

C_2 = \begin{pmatrix} -0.8 & -0.1 & 0.7 \\ 0.3 & 0.48 & 0.9 \end{pmatrix}, \quad D_2 = \begin{pmatrix} 0.5 & 0.3 \\ -0.2 & -0.5 \end{pmatrix}.

Moreover u_k \in \mathbb{R}^2 and y_k \in \mathbb{R}^2; the noise e_k \in \mathbb{R}^2 is a zero-mean white noise and K = \frac{1}{\sqrt{5}} I_2.

We suppose that we have i + j - 1 = 3500 measurements of the inputs and outputs. The inputs u_k are white noise sequences.

A. The first computation of the weights (algorithm 1)

For the implementation of the recursive subspace method, we take i = 15. Since the dimension of the outputs is \ell = 2, the number of degrees of freedom of the \chi^2 distribution is \ell i = 30. The exponential forgetting factor \gamma is taken as 0.98 (see [4]).

Figure 1: the switching times estimated by the recursive subspace algorithm (threshold at 99.999% of the \chi^2 distribution; *: D(\hat{\Pi}_j, \Pi^{*}_j)).

The recursive subspace algorithm determines the switching times and reveals four local models: the first is active in the time window [1, 999], the second in [1000, 1799], the third in [1800, 2499] and the fourth in [2500, 3500]. The switching times are correctly detected.

To compute the parameters of each local model, we again set the index i equal to 15. From the parameters obtained with the proposed method, we then estimate the poles of each local model (see figure 2). 100 Monte Carlo realizations are carried out in each case.

Figure 2: the poles of the four local models according to \omega_{1,k}, \omega_{2,k}, \omega_{3,k} and \omega_{4,k} respectively.

B. The fusion of the weighting functions (algorithm 2)

The estimated poles show that the local models 1 and 3 are "similar" (figure 2); the same remark holds for local models 2 and 4. This is confirmed by the "similarity test": the distance d(\hat{H}^d_{s,i}, \hat{H}^d_{r,i}) allows the models to be merged without ambiguity. Over one hundred experiments we computed the distances and obtained:

- d(\hat{H}^d_{1,i}, \hat{H}^d_{2,i}) \in [2300, 2800];
- d(\hat{H}^d_{1,i}, \hat{H}^d_{3,i}) \in [0.5, 4];
- d(\hat{H}^d_{1,i}, \hat{H}^d_{4,i}) \in [2000, 2800];
- d(\hat{H}^d_{2,i}, \hat{H}^d_{4,i}) \in [1, 4];
- threshold (99.999%, \chi^2 distribution): \delta = 59.

We therefore merge models 1 and 3, which defines a new weight \omega^{*}_{1,k} = \omega_{1,k} + \omega_{3,k}; we also merge models 2 and 4 and define the new weight \omega^{*}_{2,k} = \omega_{2,k} + \omega_{4,k}.

Figure 3: the estimated poles of the 1st and 2nd new local models according to \omega^{*}_{1,k} and \omega^{*}_{2,k} respectively.

Figure 3 shows the 100 Monte Carlo realizations of the estimated poles of the new models obtained with \omega^{*}_{1,k} and \omega^{*}_{2,k}. As can be seen, the new weighting functions reduce the variance of the estimated poles in each case, and the estimated poles are unbiased.

VIII. Conclusion

Switching model identification has been considered in this paper. The technique is based on a subspace formulation and uses a least squares method for the parameter estimation. The switching model is assumed to be a weighted sum of local models. A recursive subspace identification technique is used to determine the switching times, and a merging algorithm is given to estimate the final weighting functions. Finally, the order and the parameters of the local systems are estimated from the new estimates of the local model Markov parameters. An example illustrates the application of the method.

References

[1] Ho, B. L. and Kalman, R. E., Effective construction of linear state-variable models from input/output functions. Proceedings of the 3rd Annual Allerton Conference on Circuit and System Theory, pp. 449-459, 1966.

[2] Juang, J.-N., Applied System Identification. Prentice-Hall, Englewood Cliffs, NJ, 1994.

[3] Kung, S. Y., A new identification and model reduction algorithm via singular value decompositions. Proceedings of the Twelfth Asilomar Conference on Circuits, Systems and Computers, pp. 705-714, 1978.

[4] Oku, H., Nijsse, G., Verhaegen, M. and Verdult, V., Change detection in the dynamics with recursive subspace identification. Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, Florida, pp. 2297-2302, 2001.

[5] Van Overschee, P. and De Moor, B., Subspace Identification for Linear Systems: Theory, Implementation, Applications. Kluwer Academic Publishers, 1996.

[6] Zeiger, H. P. and McEwen, A. J., Approximate linear realizations of given dimension via Ho's algorithm. IEEE Transactions on Automatic Control, pp. 153-155, 1974.

IX. Appendix 1

Proof: the proof of theorem 1 is established in the noise-free case; the stochastic case is obtained easily.

From equation (5) we have:

Y^{\omega}_{s,i} = \begin{pmatrix} \omega_{s,i} y_{s,i} & \omega_{s,i+1} y_{s,i+1} & \cdots & \omega_{s,i+j-1} y_{s,i+j-1} \end{pmatrix}

\Rightarrow Y^{\omega}_{s,i} = \begin{pmatrix} \omega_{s,i} (C_s A_s^{i-1} x_{s,1} + C_s A_s^{i-2} B_s u_1 + \cdots + D_s u_i) & \cdots & \omega_{s,i+j-1} (C_s A_s^{i-1} x_{s,j} + C_s A_s^{i-2} B_s u_j + \cdots + D_s u_{i+j-1}) \end{pmatrix}

\Rightarrow Y^{\omega}_{s,i} = \begin{pmatrix} \omega_{s,i} C_s A_s^{i-1} x_{s,1} & \cdots & \omega_{s,i+j-1} C_s A_s^{i-1} x_{s,j} \end{pmatrix} + \begin{pmatrix} \omega_{s,i} (C_s A_s^{i-2} B_s u_1 + \cdots + D_s u_i) & \cdots & \omega_{s,i+j-1} (C_s A_s^{i-2} B_s u_j + \cdots + D_s u_{i+j-1}) \end{pmatrix}

\Rightarrow Y^{\omega}_{s,i} = C_s A_s^{i-1} X_s \Gamma_s + H^d_{s,i} \begin{pmatrix} \omega_{s,i} u_1 & \cdots & \omega_{s,i+j-1} u_j \\ \omega_{s,i} u_2 & \cdots & \omega_{s,i+j-1} u_{j+1} \\ \vdots & \ddots & \vdots \\ \omega_{s,i} u_i & \cdots & \omega_{s,i+j-1} u_{i+j-1} \end{pmatrix}

Finally we obtain:

Y^{\omega}_{s,i} = C_s A_s^{i-1} X_s \Gamma_s + H^d_{s,i} U \Gamma_s.

X. Appendix 2

Proof:

Y^{\omega}_{s,i} = C_s A_s^{i-1} X_s \Gamma_s + H^d_{s,i} U \Gamma_s + H^{st}_{s,i} E \Gamma_s.

If A_s^{i-1} is neglected, then:

Y^{\omega}_{s,i} \simeq H^d_{s,i} U \Gamma_s + H^{st}_{s,i} E \Gamma_s.

Since A_s is assumed to be asymptotically stable, the covariance matrix of the state sequence, \lim_{j \to \infty} \frac{1}{j} (X_s \Gamma_s)(X_s \Gamma_s)^T, is bounded; note that the elements of the weighting matrix \Gamma_s take the values 0 and 1.

By the least squares method we can then estimate the Markov parameter matrix H^d_{s,i}:

Y^{\omega}_{s,i} (U\Gamma_s)^T \left( U\Gamma_s (U\Gamma_s)^T \right)^{-1} \simeq H^d_{s,i}.
