Determinant and inverse of a sum of matrices with applications in economics and statistics

Texte intégral


HAL Id: hal-01527161

https://hal.archives-ouvertes.fr/hal-01527161

Submitted on 24 May 2017

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Determinant and inverse of a sum of matrices with applications in economics and statistics

Pietro Balestra

To cite this version:

Pietro Balestra. Determinant and inverse of a sum of matrices with applications in economics and statistics. [Research Report] Laboratoire d'analyse et de techniques économiques (LATEC). 1978, 20 p., bibliographie. ⟨hal-01527161⟩


DETERMINANT AND INVERSE OF A SUM OF MATRICES WITH APPLICATIONS IN ECONOMICS AND STATISTICS

Pietro BALESTRA

April 1978

The purpose of this Collection is to circulate quickly a first version of works in order to stimulate scientific discussion. Readers wishing to get in touch with an author are requested to write to the following address:

INSTITUT DE MATHEMATIQUES ECONOMIQUES

4 Boulevard Gabriel - 21000 DIJON - France.


DETERMINANT AND INVERSE OF A SUM OF MATRICES

WITH APPLICATIONS

IN ECONOMICS AND STATISTICS

Pietro Balestra

In this note we present some useful tools concerning the determinant and the inverse of the sum of two matrices. Some of the results shown here are found in the statistical and econometric literature, but often without proof and without a clear statement of the conditions under which they hold. Some other results, however, are believed to be original.

A number of applications in the fields of economics and statistics are given.

Proposition 1

Let A be a square matrix of order n and let K and H be two (column) vectors of order n. Consider the matrix

$$B = A + KH'\,.$$

Then:

$$|B| = |A| + H'A^{*}K$$

where $A^{*}$ is the adjoint matrix of A.

Proof.¹⁾ Write $B_j$ and $A_j$ for the j-th columns of B and A. Clearly we have:

$$B_j = A_j + h_j K$$

where $h_j$ is the j-th element of H. Using a well-known result concerning the determinant of a matrix in which a column is a sum of two vectors, we obtain directly:

1) An alternative proof is the following:

$$|A + KH'| = \begin{vmatrix} A & K \\ -H' & 1 \end{vmatrix} = |A| - \begin{vmatrix} A & K \\ H' & 0 \end{vmatrix}$$

(the first equality is checked by subtracting from the first n rows the product of the last row by K, which does not affect the determinant). A result in Aitken [1, ch. 4, sect. 31] shows that the determinant of the last matrix above is equal to $-H'A^{*}K$, which completes the proof.

Rao [9, p. 32] gives a similar result for K = H.


$$|B| = |A_1 + h_1K,\; B_2,\; \ldots,\; B_n| = |A_1, B_2, \ldots, B_n| + h_1\,|K, B_2, \ldots, B_n| = |A_1, B_2, \ldots, B_n| + h_1\,|K, A_2, \ldots, A_n|$$

(the last step holds because each column $B_s = A_s + h_sK$ differs from $A_s$ only by a multiple of the column K already present, which leaves the determinant unchanged). We split again the second column of the first matrix in the above expression to get:

$$|B| = |A_1, A_2, B_3, \ldots, B_n| + h_1\,|K, A_2, \ldots, A_n| + h_2\,|A_1, K, A_3, \ldots, A_n|\,.$$

The procedure may now be repeated to obtain finally:

$$|B| = |A| + h_1\,|K, A_2, \ldots, A_n| + \cdots + h_n\,|A_1, \ldots, A_{n-1}, K|\,.$$

Note now that

$$h_j\,|A_1, \ldots, A_{j-1}, K, A_{j+1}, \ldots, A_n| = h_j \sum_{s=1}^{n} k_s A_{sj}$$

where $A_{sj}$ is the cofactor of $a_{sj}$. Collecting terms, we get:

$$|B| = |A| + \sum_{j=1}^{n} h_j \sum_{s=1}^{n} k_s A_{sj} = |A| + H'A^{*}K\,.$$

Corollary 1.1

(i) If $r(A) \le n-2$, then $|B| = 0$.

(ii) If $r(A) = n-1$, then $|B| = 0$ iff $H'A^{*}K = 0$, and $|B| \ne 0$ iff $H'A^{*}K \ne 0$.

(iii) If $r(A) = n$, then

$$|B| = |A|\,(1 + H'A^{-1}K)$$

and therefore B is non-singular iff $H'A^{-1}K \ne -1$.

The proof of part (i) is obvious once it is recognized that the rank of a sum of two matrices is at most equal to the sum of the two ranks. Parts (ii) and (iii) follow immediately from Proposition 1.


Example 1. The intra-class correlation matrix is proportional to the matrix $B = (1-a)I + aLL'$, where a is a scalar, $-1/(n-1) < a < 1$, and L is the sum vector (whose elements are all equal to 1). We put $A = (1-a)I$, $K = aL$ and $H' = L'$. Then:

$$|B| = |A|\left(1 + \frac{a}{1-a}\,L'L\right) = (1-a)^{n-1}\,(1-a+na)\,.$$

The same technique can be used to find the eigenvalues.
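A quick numerical illustration of Proposition 1 and of Example 1 is given below (a sketch assuming NumPy; the order n, the random seed and the value of a are illustrative choices, not from the paper):

```python
# Sketch: check |A + KH'| = |A| + H'A*K and the intra-class determinant.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))                      # generic (non-singular) A
K = rng.normal(size=(n, 1))
H = rng.normal(size=(n, 1))

adjA = np.linalg.det(A) * np.linalg.inv(A)       # adjoint of a non-singular A
lhs = np.linalg.det(A + K @ H.T)                 # |B| = |A + KH'|
rhs = np.linalg.det(A) + (H.T @ adjA @ K).item() # |A| + H'A*K
assert np.isclose(lhs, rhs)

# Example 1: |(1-a)I + a LL'| = (1-a)^(n-1) (1 - a + na)
a = 0.3
L = np.ones((n, 1))
B = (1 - a) * np.eye(n) + a * (L @ L.T)
assert np.isclose(np.linalg.det(B), (1 - a) ** (n - 1) * (1 - a + n * a))
```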

Example 2. T. Amemiya [2], for the regression problem in which the variance of the dependent variable is proportional to the square of its expectation, computes the following asymptotic covariance matrix of the weighted least-squares estimator:

$$V(a^{*}) = \sigma^2 A^{-1}\,.$$

For the ML estimator (normal case) he obtains:

$$V(\hat a) = \frac{\sigma^2}{2\sigma^2 + 1}\,(A^{-1} + 2\sigma^2\,aa')$$

where a is the vector of regression coefficients and A is a positive definite matrix. Using Proposition 1 (together with the fact that in Amemiya's model $a'Aa = 1$), the generalised variance of the ML estimator is:

$$|V(\hat a)| = \frac{\sigma^{2n}}{(2\sigma^2 + 1)^{n-1}}\,|A^{-1}|$$

which is seen to be smaller than the generalised asymptotic variance $\sigma^{2n}|A^{-1}|$ of the weighted least-squares estimator.
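This comparison can be checked numerically, as in the following sketch (NumPy assumed; the positive definite matrix A and the normalisation $a'Aa = 1$ are constructed artificially so as to match the stated conditions of Amemiya's model):

```python
# Sketch: generalised variances of the WLS and ML estimators in Example 2.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 4, 0.8
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)             # positive definite
a = rng.normal(size=(n, 1))
a /= np.sqrt((a.T @ A @ a).item())      # rescale so that a'Aa = 1

Ainv = np.linalg.inv(A)
V_wls = sigma2 * Ainv                                   # V(a*) = sigma^2 A^{-1}
V_ml = sigma2 / (2 * sigma2 + 1) * (Ainv + 2 * sigma2 * (a @ a.T))

# |V(a_ML)| = sigma^(2n) |A^{-1}| / (2 sigma^2 + 1)^(n-1)
rhs = sigma2 ** n * np.linalg.det(Ainv) / (2 * sigma2 + 1) ** (n - 1)
assert np.isclose(np.linalg.det(V_ml), rhs)
assert np.linalg.det(V_ml) < np.linalg.det(V_wls)       # ML generalised variance is smaller
```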

Proposition 2

Let A be a square matrix of order n and let U and V be two matrices of order $n\times m$. Consider the matrix

$$C = A + UV'\,.$$

Then:


(i) $|C| = |A + QP'|\cdot|R|$, where

$$R = \begin{bmatrix} I + V'A^{+}U & -V'P \\ Q'U & 0 \end{bmatrix}$$

and $A^{+}$ is the Penrose generalized inverse of A, Q is an $n\times(n-r)$ matrix whose columns are orthonormal vectors such that $(I - AA^{+}) = QQ'$, and P is an $n\times(n-r)$ matrix whose columns are orthonormal vectors such that $(I - A^{+}A) = PP'$, r being the rank of A.

(ii) The rank of R is given by

$$r(R) = r(Q'U) + r(V'P) + r\{[I - (V'P)(V'P)^{+}]\,[I + V'A^{+}U]\,[I - (Q'U)^{+}(Q'U)]\}\,.$$

(iii) C is non-singular iff R is of full rank $m+(n-r)$, that is, iff the three following conditions hold simultaneously:

(a) $r(Q'U) = n-r$

(b) $r(V'P) = n-r$

(c) $r\{[I - (V'P)(V'P)^{+}]\,[I + V'A^{+}U]\,[I - (Q'U)^{+}(Q'U)]\} = m-(n-r)\,.$

Proof. Since $(I - A^{+}A)$ and $(I - AA^{+})$ are idempotent symmetric matrices, we can find matrices P and Q having the properties stated in (i) above. Note also that $AP = 0$, $Q'A = 0$, $P'A^{+} = 0$ and $A^{+}Q = 0$. Consider the matrix $A + QP'$ and the matrix $A^{+} + PQ'$. It can easily be checked that their product is the identity matrix. Hence they are non-singular, one being the inverse of the other.

We can now write

$$C = A + QP' + UV' - QP' = (A + QP') + [U,\; -Q]\begin{bmatrix} V' \\ P' \end{bmatrix}$$

and therefore, using the properties of partitioned matrices:

$$|C| = |A + QP'|\cdot\left|\,I_{m+(n-r)} + \begin{bmatrix} V' \\ P' \end{bmatrix}(A + QP')^{-1}\,[U,\; -Q]\,\right|\,.$$


Since $A + QP'$ is non-singular, again using the properties of partitioned matrices, we obtain:

$$|C| = |A + QP'|\cdot\left|\,I_{m+(n-r)} + \begin{bmatrix} V' \\ P' \end{bmatrix}(A^{+} + PQ')\,[U,\; -Q]\,\right| = |A + QP'|\cdot\begin{vmatrix} I + V'A^{+}U + V'PQ'U & -V'P \\ Q'U & 0 \end{vmatrix}\,.$$

Adding the second column multiplied by $Q'U$ to the first column in the last matrix above, the determinant does not change and we get R. This completes the proof of part (i).

For part (ii), we first apply Theorem 5, Eq. (2.36) of MARSAGLIA-STYAN [6, p. 274] to obtain:

$$r(R) = r\begin{bmatrix} I + V'A^{+}U & -V'P \\ Q'U & 0 \end{bmatrix} = r(Q'U) + r\left\{\left[\,(I + V'A^{+}U)\,(I - (Q'U)^{+}(Q'U)),\; -V'P\,\right]\right\}\,.$$

Then we apply again Theorem 5, Eq. (2.35), to the second matrix above, to get:

$$r\left\{\left[\,(I + V'A^{+}U)\,(I - (Q'U)^{+}(Q'U)),\; -V'P\,\right]\right\} = r(V'P) + r\{[I - (V'P)(V'P)^{+}]\,[I + V'A^{+}U]\,[I - (Q'U)^{+}(Q'U)]\}$$

from which the final result obtains.

Observe now that if $r(Q'U) < n-r$ and/or $r(V'P) < n-r$, the matrix R is singular. Hence for R to be non-singular we must have $r(Q'U) = n-r$ and $r(V'P) = n-r$. Inserting these conditions in the expression for $r(R)$ we find

$$r(R) = 2(n-r) + r\{[I - (V'P)(V'P)^{+}]\,[I + V'A^{+}U]\,[I - (Q'U)^{+}(Q'U)]\}$$

and for R to be of full rank $m+(n-r)$, the rank of the second matrix on the r.h.s. above must be equal to $m-(n-r)$. This completes the proof of Proposition 2.
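Proposition 2(i) lends itself to a direct numerical check. In the sketch below (NumPy assumed; the sizes and the seed are illustrative), a rank-deficient A is built from random factors and the orthonormal matrices Q and P are read off its singular value decomposition:

```python
# Sketch: check |A + UV'| = |A + QP'| |R| for a singular A (Proposition 2(i)).
import numpy as np

rng = np.random.default_rng(2)
n, r, m = 6, 4, 3
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # square, rank r
U = rng.normal(size=(n, m))
V = rng.normal(size=(n, m))

Apinv = np.linalg.pinv(A)
Us, s, Vst = np.linalg.svd(A)
Q = Us[:, r:]            # orthonormal columns, QQ' = I - AA+
P = Vst.T[:, r:]         # orthonormal columns, PP' = I - A+A

R = np.block([
    [np.eye(m) + V.T @ Apinv @ U, -V.T @ P],
    [Q.T @ U, np.zeros((n - r, n - r))],
])
lhs = np.linalg.det(A + U @ V.T)                         # |C|
rhs = np.linalg.det(A + Q @ P.T) * np.linalg.det(R)
assert np.isclose(lhs, rhs)
```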


Corollary 2.1

(i) If $m < n-r$, then $|C| = 0$.

(ii) If $r(I + V'A^{+}U) < m-(n-r)$, then $|C| = 0$. (The converse is not true.)

(iii) If A is non-singular ($r = n$), then $|C| = |A|\,|I + V'A^{-1}U|$ and therefore C is non-singular iff $|I + V'A^{-1}U| \ne 0$.

(iv) If $m = n-r$, $m \ne 0$, then

$$|C| = |A + QP'|\,|V'P|\,|Q'U|$$

and C is non-singular iff both $V'P$ and $Q'U$ are non-singular, that is, iff

$$r(V'P) = r[V'(I - A^{+}A)] = m \qquad\text{and}\qquad r(Q'U) = r[(I - AA^{+})U] = m\,.$$

(v) If $r(A) = r < n$ and C is non-singular, then $R^{-1}$ exists and is given by

$$R^{-1} = \begin{bmatrix} R_1 & R_2 \\ R_3 & R_4 \end{bmatrix}$$

where

$$R_1 = Y\,[X'(I + V'A^{+}U)Y]^{-1}X'$$
$$R_2 = -Y\,[X'(I + V'A^{+}U)Y]^{-1}X'(I + V'A^{+}U)\,U'Q\,(Q'UU'Q)^{-1} + U'Q\,(Q'UU'Q)^{-1}$$
$$R_3 = (P'VV'P)^{-1}P'V\,(I + V'A^{+}U)\,Y\,[X'(I + V'A^{+}U)Y]^{-1}X' - (P'VV'P)^{-1}P'V$$
$$R_4 = (P'VV'P)^{-1}P'V\,(I + V'A^{+}U)\,U'Q\,(Q'UU'Q)^{-1} - (P'VV'P)^{-1}P'V\,(I + V'A^{+}U)\,Y\,[X'(I + V'A^{+}U)Y]^{-1}X'(I + V'A^{+}U)\,U'Q\,(Q'UU'Q)^{-1}$$

X being an $m\times(m-n+r)$ matrix whose columns are orthonormal vectors such that $I - V'P(P'VV'P)^{-1}P'V = XX'$, and Y being an $m\times(m-n+r)$ matrix whose columns are orthonormal vectors such that $I - U'Q(Q'UU'Q)^{-1}Q'U = YY'$.

Furthermore, the determinant of C is given by

$$|C| = |A + QP'|\cdot\left|(I + V'A^{+}U)\,Y\,[X'(I + V'A^{+}U)Y]^{-1}X'(I + V'A^{+}U) + V'PQ'U\right|\,.$$


Part (i) follows from the fact that $r(UV') \le m$ and hence $r(A + UV') \le r + m < n$.

For part (ii) we observe that when $r(V'P) = r(Q'U) = n-r$, the ranks of $[I - (V'P)(V'P)^{+}]$ and $[I - (Q'U)^{+}(Q'U)]$ are equal to $m-(n-r)$. Hence, for the rank of $[I - (V'P)(V'P)^{+}][I + V'A^{+}U][I - (Q'U)^{+}(Q'U)]$ to be equal to $m-(n-r)$, the rank of $I + V'A^{+}U$ must at least be equal to $m-(n-r)$.

For part (iii), we simply point out that $A^{+} = A^{-1}$ and that both Q and P are empty. Hence the result follows.

For part (iv), we note that $V'P$ and $Q'U$ are both square matrices. Therefore:

$$\begin{vmatrix} I + V'A^{+}U & -V'P \\ Q'U & 0 \end{vmatrix} = (-1)^{m}\begin{vmatrix} -V'P & I + V'A^{+}U \\ 0 & Q'U \end{vmatrix} = |V'P|\,|Q'U|\,.$$

Note also that if $r(V'P) = m$, then $r(V'PP'V) = r\{V'(I - A^{+}A)V\} = m$, and the rank of $V'(I - A^{+}A)$ must equal m. Conversely, if $r(V'PP'V) = m$, then $r(V'P) = m$. The same reasoning applies to $Q'U$.

As for part (v), by simple multiplication it is easy to check that $RR^{-1} = I$, provided the inverse matrices used in the construction of the $R_i$ exist. Now, for R to be non-singular, $r(V'P)$ must be equal to $n-r$. Hence $P'VV'P$ is square and non-singular, and

$$I - (V'P)(V'P)^{+} = I - V'P\,(P'VV'P)^{-1}P'V$$

which is symmetric and idempotent of rank $m-(n-r)$. We can then find an $m\times(m-n+r)$ matrix X (with orthonormal columns) such that $I - V'P(P'VV'P)^{-1}P'V = XX'$. Similarly, since $r(Q'U) = n-r$, we can find an $m\times(m-n+r)$ matrix Y (with orthonormal columns) such that $I - U'Q(Q'UU'Q)^{-1}Q'U = YY'$. The remaining condition for R to be non-singular is

$$r\{XX'(I + V'A^{+}U)YY'\} = m-(n-r)$$

which obviously implies that $X'(I + V'A^{+}U)Y$ is of rank $m-(n-r)$ and thus non-singular. The matrix $R^{-1}$ given above is therefore the inverse of R.

Finally, to compute the determinant of R, we add to the second block column of R the first block column multiplied by $R_2$, which leaves the determinant unchanged:

$$|R| = \begin{vmatrix} I + V'A^{+}U & (I + V'A^{+}U)R_2 - V'P \\ Q'U & Q'UR_2 \end{vmatrix}\,.$$


Now, since $Q'UY = 0$, $Q'UR_2 = I$. Therefore, using the properties of partitioned matrices, we find successively:

$$|R| = \left|\,I + V'A^{+}U - [(I + V'A^{+}U)R_2 - V'P]\,Q'U\,\right| = \left|\,I + V'A^{+}U - (I + V'A^{+}U)R_2Q'U + V'PQ'U\,\right|$$

$$= \left|\,I + V'A^{+}U + (I + V'A^{+}U)\,Y\,[X'(I + V'A^{+}U)Y]^{-1}X'(I + V'A^{+}U)(I - YY') - (I + V'A^{+}U)(I - YY') + V'PQ'U\,\right|$$

$$= \left|\,I + V'A^{+}U + (I + V'A^{+}U)\,Y\,[X'(I + V'A^{+}U)Y]^{-1}X'(I + V'A^{+}U) - (I + V'A^{+}U)YY' - (I + V'A^{+}U)(I - YY') + V'PQ'U\,\right|$$

$$= \left|\,(I + V'A^{+}U)\,Y\,[X'(I + V'A^{+}U)Y]^{-1}X'(I + V'A^{+}U) + V'PQ'U\,\right|\,.$$

The proof of the corollary is therefore complete.

Corollary 2.2. Consider the matrix C of Proposition 2. Then:

(i) If A is symmetric, we have:

$$|C| = |R|\,\prod_{i=1}^{r}\lambda_i$$

where the $\lambda_i$, $i = 1, \ldots, r$, are the r non-zero eigenvalues of A and R is the matrix defined in Proposition 2, with P = Q.

(ii) If A is symmetric and $m = n-r$, we have:

$$|C| = |V'(I - A^{+}A)\,U|\,\prod_{i=1}^{r}\lambda_i\,.$$

(iii) If A is symmetric and idempotent and $m = n-r$, we have:

$$|C| = |V'(I - A)\,U|\,.$$

Proof. Since A is symmetric, there exists an orthogonal matrix $\bar P$, partitioned as $\bar P = [P_1, P]$, which diagonalizes it, i.e.:

$$\bar P'A\bar P = \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}$$

where $\Lambda$ is a diagonal matrix of order r containing the non-zero eigenvalues of A. It then follows that $A = P_1\Lambda P_1'$, $A^{+} = P_1\Lambda^{-1}P_1'$, $AA^{+} = A^{+}A = P_1P_1'$ and $I - A^{+}A = PP'$. Now, it is easy to obtain successively:

$$|A + PP'| = |\bar P'(A + PP')\bar P| = \begin{vmatrix} \Lambda & 0 \\ 0 & I \end{vmatrix} = \prod_{i=1}^{r}\lambda_i$$

which proves part (i). As for part (ii), referring to part (iv) of Corollary 2.1, we have $|V'P|\,|P'U| = |V'PP'U| = |V'(I - A^{+}A)U|$. Finally, for part (iii), it suffices to note that when A is symmetric and idempotent, the non-zero eigenvalues are all equal to one, and $A^{+} = A$.

Example 3. The Bordered Hessian Matrix.

The following bordered Hessian matrix:

$$H = \begin{bmatrix} U & P \\ P' & 0 \end{bmatrix}$$

is encountered in many economic problems, such as the maximisation of a function of n variables subject to m constraints, $m < n$. The matrix U is the $n\times n$ Hessian matrix of second-order partial derivatives and P is the $n\times m$ matrix of partial derivatives of the m constraints.

It is well known that if U is of full rank n:

$$|H| = |U|\,|{-P'U^{-1}P}|$$

and therefore H is non-singular iff $|P'U^{-1}P| \ne 0$. Furthermore, if this last condition is met, the inverse of H can be expressed as:

$$H^{-1} = \begin{bmatrix} H_1^{*} & H_2^{*} \\ H_2^{*\prime} & H_4^{*} \end{bmatrix} = \begin{bmatrix} U^{-1} - U^{-1}P(P'U^{-1}P)^{-1}P'U^{-1} & U^{-1}P(P'U^{-1}P)^{-1} \\ (P'U^{-1}P)^{-1}P'U^{-1} & -(P'U^{-1}P)^{-1} \end{bmatrix}\,.$$

All these results can be easily checked using the properties of partitioned matrices. Now it is possible to extend these results to the case of a singular matrix U, simply by applying the preceding Propositions and Corollaries (noting the similarity of the structure of H and the structure of R). Let r be the rank of U, $r \le n$. We can state the following:

(i) $r(H) = 2\,r(P) + r\{[I - PP^{+}]\,U\,[I - (P')^{+}P']\}$ (by Proposition 2, (ii)).

(ii) H is non-singular iff the two following conditions hold simultaneously:

(a) $r(P) = m$

(b) $r\{[I - PP^{+}]\,U\,[I - (P')^{+}P']\} = n-m$ (by Proposition 2, (iii)).


(iii) If the two above conditions are met, the inverse of H can be expressed as:

$$H^{-1} = \begin{bmatrix} H_1 & H_2 \\ H_2' & H_4 \end{bmatrix}$$

where

$$H_1 = F(F'UF)^{-1}F'$$
$$H_2 = -F(F'UF)^{-1}F'UP(P'P)^{-1} + P(P'P)^{-1}$$
$$H_4 = -(P'P)^{-1}P'UP(P'P)^{-1} + (P'P)^{-1}P'UF(F'UF)^{-1}F'UP(P'P)^{-1}$$

and F is an $n\times(n-m)$ matrix of orthonormal column vectors such that $FF' = I - P(P'P)^{-1}P'$ (by Corollary 2.1, (v)).

(iv) If H is non-singular, its determinant is given by

$$|H| = |UF(F'UF)^{-1}F'U - PP'|$$

(by Corollary 2.1, (v)).

The reader should realize at this point that, whenever U is of full rank, the two above expressions for the inverse of H are identical. This can be checked by showing that $H_1 = H_1^{*}$. To this end, we note, first, that $H_1^{*}P = 0$, so that:

$$H_1^{*}FF' = H_1^{*}\,[I - P(P'P)^{-1}P'] = H_1^{*}\,.$$

Next, we observe that:

$$H_1^{*}UF = F - U^{-1}P(P'U^{-1}P)^{-1}P'F = F\,.$$

We then obtain successively:

$$H_1^{*}FF'UF = H_1^{*}UF = F$$
$$H_1^{*}F = F(F'UF)^{-1}$$
$$H_1^{*} = H_1^{*}FF' = F(F'UF)^{-1}F' = H_1\,.$$

We are sure that the inverse of $F'UF$ exists from the fact that $H_1^{*}UF = F$: since the rank of F is equal to $n-m$, the rank of $F'UF$ must (at least) be equal to $n-m$.
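The singular-U formulae of this example can also be verified numerically. The following sketch (NumPy assumed; the symmetric U of rank n-1 and the full-column-rank P are random, illustrative choices) builds F from the projector $I - P(P'P)^{-1}P'$ and checks that the stated blocks do invert H:

```python
# Sketch: bordered Hessian inverse of Example 3 with a singular U.
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2
G = rng.normal(size=(n, n - 1))
U = G @ G.T                                # symmetric, rank n-1 (singular)
P = rng.normal(size=(n, m))                # r(P) = m almost surely

# F: orthonormal columns with FF' = I - P(P'P)^{-1}P'
proj = np.eye(n) - P @ np.linalg.inv(P.T @ P) @ P.T
F = np.linalg.svd(proj)[0][:, : n - m]

PtPinv = np.linalg.inv(P.T @ P)
FUFinv = np.linalg.inv(F.T @ U @ F)
H1 = F @ FUFinv @ F.T
H2 = -F @ FUFinv @ F.T @ U @ P @ PtPinv + P @ PtPinv
H4 = (-PtPinv @ P.T @ U @ P @ PtPinv
      + PtPinv @ P.T @ U @ F @ FUFinv @ F.T @ U @ P @ PtPinv)

H = np.block([[U, P], [P.T, np.zeros((m, m))]])
Hinv = np.block([[H1, H2], [H2.T, H4]])
assert np.allclose(H @ Hinv, np.eye(n + m))

# Part (iv): |H| = |UF(F'UF)^{-1}F'U - PP'|
assert np.isclose(np.linalg.det(H), np.linalg.det(U @ H1 @ U - P @ P.T))
```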


Example 4. The Slutsky effect in consumer demand theory.

The Fundamental Matrix Equation of consumer demand theory, obtained by Barten [4], is:

$$\begin{bmatrix} U & P \\ P' & 0 \end{bmatrix}\begin{bmatrix} \partial Q/\partial y & \partial Q/\partial P' \\ -\partial\lambda/\partial y & -\partial\lambda/\partial P' \end{bmatrix} = \begin{bmatrix} 0 & \lambda I \\ 1 & -Q' \end{bmatrix}$$

where U is the Hessian of the utility function $u = u(Q)$, Q is the vector of the quantities of the n goods purchased by the consumer, P is the price vector (of order n), y is the consumer's income and $\lambda$ is the marginal utility of money.

To solve this equation, we need to find the inverse of the bordered Hessian matrix. Typically, the economist proceeds on the assumption that U is non-singular. This is too restrictive, since, for a constrained maximum of the consumer problem, we only need a strictly quasi-concave utility function. Now it is well known that an important class of strictly quasi-concave functions (those which are homogeneous of degree one) exhibits a singular Hessian (of rank n-1).¹⁾

A more general solution of the Fundamental Equation is needed, valid for all strictly quasi-concave utility functions. Referring to the inverse of the bordered Hessian given in Example 3 (with m = 1), we find:

$$\partial Q/\partial y = H_2 = -F(F'UF)^{-1}F'UP(P'P)^{-1} + P(P'P)^{-1}$$
$$\partial Q/\partial P' = \lambda H_1 - H_2Q' = \lambda F(F'UF)^{-1}F' - H_2Q'$$
$$\partial\lambda/\partial y = -H_4 = (P'P)^{-1}P'UP(P'P)^{-1} - (P'P)^{-1}P'UF(F'UF)^{-1}F'UP(P'P)^{-1}$$
$$\partial\lambda/\partial P' = -\lambda H_2' + H_4Q'\,.$$

1) The problem of a singular Hessian arises probably more often in production theory, where the hypothesis of constant returns to scale is frequently used. To avoid this particular difficulty, Theil [12] assumes a production function of the form $\log q = f(\log x)$. A more direct approach, in our opinion, could be based on the inverse of the bordered Hessian matrix given in Example 3.


Combining the first two equations, we get:

$$\partial Q/\partial P' = \lambda H_1 - (\partial Q/\partial y)\,Q'$$

which is the Slutsky equation for a price change. The first term on the right-hand side, $\lambda H_1$, represents the total substitution effect of a price change, while the second term, $-(\partial Q/\partial y)\,Q'$, represents the income effect of a price change.

Proposition 3

Consider the matrix C of Proposition 2, with A non-singular and $I + V'A^{-1}U$ also non-singular. Then $C^{-1}$ exists and is given by:¹⁾

$$C^{-1} = A^{-1} - A^{-1}U\,(I + V'A^{-1}U)^{-1}V'A^{-1}\,.$$

Proof. Develop the identity $C^{-1}(A + UV') = I$ in the following way:

$$C^{-1}A + C^{-1}UV' = I$$
$$C^{-1} + C^{-1}UV'A^{-1} = A^{-1} \qquad (1)$$
$$C^{-1}U + C^{-1}UV'A^{-1}U = A^{-1}U$$
$$C^{-1}U\,(I + V'A^{-1}U) = A^{-1}U \qquad (2)$$

Since the matrix in parentheses in (2) is non-singular, we can solve for $C^{-1}U$ and insert the result in (1). After rearranging terms, the final result is reached.
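A minimal numerical check of Proposition 3 (NumPy assumed; the sizes, the seed and the diagonal shift that keeps A non-singular are illustrative):

```python
# Sketch: C^{-1} = A^{-1} - A^{-1} U (I + V'A^{-1}U)^{-1} V'A^{-1}.
import numpy as np

rng = np.random.default_rng(4)
n, m = 6, 2
A = rng.normal(size=(n, n)) + n * np.eye(n)    # generically non-singular
U = rng.normal(size=(n, m))
V = rng.normal(size=(n, m))

Ainv = np.linalg.inv(A)
middle = np.linalg.inv(np.eye(m) + V.T @ Ainv @ U)
Cinv = Ainv - Ainv @ U @ middle @ V.T @ Ainv
assert np.allclose(Cinv, np.linalg.inv(A + U @ V.T))
```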

Corollary 3.1

For the matrix B of Proposition 1, with A non-singular and $1 + H'A^{-1}K \ne 0$, we have:

$$(A + KH')^{-1} = A^{-1} - \frac{1}{1 + H'A^{-1}K}\,A^{-1}KH'A^{-1}\,.$$

Proof. Trivial application of Proposition 3.

1) The same result is found in RAO [9, p. 33] and MADANSKY [5, p. 9], without proof.


Example 5. For the matrix of Example 1, we obtain immediately:

$$B^{-1} = \frac{1}{1-a}\left(I - \frac{a}{1-a+na}\,LL'\right) = \frac{1}{1-a}\,I - \frac{a}{(1-a)(1-a+na)}\,LL'\,.$$
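A two-line check of this closed form (NumPy assumed; n and a are arbitrary admissible values):

```python
# Sketch: inverse of the intra-class matrix (Example 5).
import numpy as np

n, a = 6, 0.4
L = np.ones((n, 1))
B = (1 - a) * np.eye(n) + a * (L @ L.T)
Binv = (np.eye(n) - a / (1 - a + n * a) * (L @ L.T)) / (1 - a)
assert np.allclose(B @ Binv, np.eye(n))
```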

Example 6. Up-Dating Least-Squares Estimates.

Consider the following regression problem:

$$y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \qquad E(\varepsilon\varepsilon') = \sigma^2 I$$

with k explanatory variables and n observations. The least-squares estimates are given by:

$$\hat\beta_{(n)} = (X'X)^{-1}X'y$$
$$V_{(n)} = \sigma^2(X'X)^{-1}$$
$$SS_{(n)} = y'y - \hat\beta_{(n)}'X'y$$

where the subscript n on the least-squares estimator, on its variance-covariance matrix and on the sum of squared residuals indicates that they are based on a sample of size n.

Let us assume now that we have one additional observation drawn from the same population, i.e.

$$y_{n+1} = x_{n+1}\beta + \varepsilon_{n+1}$$

where $x_{n+1}$ is the row vector of the k explanatory variables for the new observation.

Given the new expanded sample, we can get by the usual formulae the new least-squares estimates. The problem is: can we up-date the old least-squares estimates without computing the new ones from scratch? The answer to this question is yes.

For the expanded sample we get:

$$\begin{bmatrix} y \\ y_{n+1} \end{bmatrix} = \begin{bmatrix} X \\ x_{n+1} \end{bmatrix}\beta + \begin{bmatrix} \varepsilon \\ \varepsilon_{n+1} \end{bmatrix}$$


and the least-squares estimator is:

$$\hat\beta_{(n+1)} = (X'X + x_{n+1}'x_{n+1})^{-1}(X'y + x_{n+1}'y_{n+1})\,.$$

Using Corollary 3.1 for the inverse in the above expression, one gets upon simplification:

$$\hat\beta_{(n+1)} = \hat\beta_{(n)} + \frac{1}{1+\alpha}\,(X'X)^{-1}x_{n+1}'\,(y_{n+1} - x_{n+1}\hat\beta_{(n)})$$

where $\alpha = x_{n+1}(X'X)^{-1}x_{n+1}'$. The matrix $(X'X)^{-1}$ is known from the preceding round of estimation. Note that the last parenthesis above is the computed error associated with the new observation, based on the preceding least-squares estimator.

The formula above designates the recursive (sequential) least-squares estimator, which may also be derived as an application of the Kalman filter (see S. Schleicher [10]).¹⁾

For the up-dated formulae for the variances and the sum of squared residuals, using the same technique, we get:

$$V_{(n+1)} = V_{(n)} - \sigma^2\,\frac{1}{1+\alpha}\,(X'X)^{-1}x_{n+1}'x_{n+1}(X'X)^{-1}$$

$$SS_{(n+1)} = SS_{(n)} + \frac{1}{1+\alpha}\,(y_{n+1} - x_{n+1}\hat\beta_{(n)})^2\,.$$
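The three up-dating formulae translate directly into code. The sketch below (NumPy assumed; the simulated data and the true coefficient vector are illustrative) applies the up-date for one extra observation and compares the result with a from-scratch fit:

```python
# Sketch: recursive least-squares up-dating (Example 6).
import numpy as np

rng = np.random.default_rng(5)
n, k = 30, 3
beta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, k))
y = X @ beta_true + 0.1 * rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                       # estimates on n observations
SS = y @ y - beta @ X.T @ y                    # sum of squared residuals

x_new = rng.normal(size=k)                     # one additional observation
y_new = x_new @ beta_true + 0.1 * rng.normal()

alpha = x_new @ XtX_inv @ x_new
resid = y_new - x_new @ beta                   # error computed from the old fit
beta_upd = beta + XtX_inv @ x_new * resid / (1 + alpha)
XtX_inv_upd = XtX_inv - np.outer(XtX_inv @ x_new, x_new @ XtX_inv) / (1 + alpha)
SS_upd = SS + resid**2 / (1 + alpha)

# compare with a from-scratch fit on the expanded sample
X1, y1 = np.vstack([X, x_new]), np.append(y, y_new)
beta_full = np.linalg.solve(X1.T @ X1, X1.T @ y1)
assert np.allclose(beta_upd, beta_full)
assert np.allclose(XtX_inv_upd, np.linalg.inv(X1.T @ X1))
assert np.isclose(SS_upd, y1 @ y1 - beta_full @ X1.T @ y1)
```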

Corollary 3.2

Let A be a square matrix of order n and let K, H, S and T be four (column) vectors of order n. Consider the matrix

$$F = A + KH' + ST'$$

and assume that A is non-singular and that

$$a_1a_2 - a_3a_4 \ne 0$$

1) An exhaustive discussion of recursive least-squares formulae, with an extension to two-stage least-squares estimators, can be found in Garry D.A. PHILLIPS [8].


where

$$a_1 = 1 + H'A^{-1}K\,, \qquad a_2 = 1 + T'A^{-1}S\,, \qquad a_3 = T'A^{-1}K\,, \qquad a_4 = H'A^{-1}S\,.$$

Then $F^{-1}$ exists and is given by:

$$F^{-1} = A^{-1} - \frac{1}{a_1a_2 - a_3a_4}\,A^{-1}\,(a_2\,KH' + a_1\,ST' - a_3\,SH' - a_4\,KT')\,A^{-1}\,.$$

The proof is simply an application of Proposition 3 with U = [K, S] and V = [H, T].
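Again, a short numerical check (NumPy assumed; the vectors are random and the seed illustrative):

```python
# Sketch: two rank-one corrections at once (Corollary 3.2).
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)    # generically non-singular
K, H, S, T = (rng.normal(size=(n, 1)) for _ in range(4))

Ainv = np.linalg.inv(A)
a1 = 1 + (H.T @ Ainv @ K).item()
a2 = 1 + (T.T @ Ainv @ S).item()
a3 = (T.T @ Ainv @ K).item()
a4 = (H.T @ Ainv @ S).item()

Finv = Ainv - Ainv @ (a2 * K @ H.T + a1 * S @ T.T
                      - a3 * S @ H.T - a4 * K @ T.T) @ Ainv / (a1 * a2 - a3 * a4)
assert np.allclose(Finv, np.linalg.inv(A + K @ H.T + S @ T.T))
```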

Example 7. The inverse of the first-order moving average covariance matrix.

The variance-covariance matrix of a first-order moving average stochastic process is proportional to the following matrix:

$$B = \begin{bmatrix} 1+c^2 & -c & 0 & \cdots & 0 \\ -c & 1+c^2 & -c & \cdots & 0 \\ 0 & -c & 1+c^2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1+c^2 \end{bmatrix}$$

which is the familiar tri-diagonal matrix. Analytical expressions for the exact inverse of the tri-diagonal matrix are hard to find in the literature. Balestra [3, p. 60] gives a recursive formula, in terms of successive determinants, for a slightly different parametrisation of the tri-diagonal matrix (the matrix B divided by $1+c^2$). Translated into the notation of the present note, his results may be summarized as follows:

$$|B_{(n)}| = (1+c^2)\,|B_{(n-1)}| - c^2\,|B_{(n-2)}|\,, \qquad n \ge 2$$
$$|B_{(n)}| = |B_{(s)}|\,|B_{(n-s)}| - c^2\,|B_{(s-1)}|\,|B_{(n-s-1)}|\,, \qquad n \ge s+1$$
$$B_{ij} = c^{j-i}\,|B_{(i-1)}|\,|B_{(n-j)}|\,, \qquad i \le j$$

where $B_{(r)}$ denotes a matrix of the same form as B but of order r, and where $B_{ij}$ denotes the cofactor of $b_{ij}$ for a matrix B of order n. By convention it is assumed that $|B_{(0)}| = 1$ and $|B_{(1)}| = 1+c^2$.

The parametrisation used in this note affords a considerable simplification of the above formulae. Using the formula for $|B_{(n)}|$ recursively, one obtains:

$$|B_{(n)}| = 1 + c^2 + c^4 + \cdots + c^{2n} = \frac{1 - c^{2n+2}}{1 - c^2}\,, \qquad c \ne 1$$

so that the cofactor $B_{ij}$ for a matrix of order n can be expressed as:¹⁾

$$B_{ij} = c^{j-i}\,\frac{(1-c^{2i})(1-c^{2(n-j)+2})}{(1-c^2)^2}\,, \qquad i \le j\,.$$

Calling $b^{ij}$ the typical element of the inverse of B, we find:²⁾

$$b^{ij} = c^{j-i}\,\frac{(1-c^{2i})(1-c^{2(n-j)+2})}{(1-c^2)(1-c^{2n+2})}\,, \qquad i \le j\,.$$

Here we derive an analytic expression for $B^{-1}$ using Corollary 3.2. Without loss of generality, we shall assume that the parameter c is less than unity in absolute value. (If it were bigger than one, it would suffice to divide each element of B by $c^2$.)

We write B in the following form:

$$B = A + KK' + SS'$$

where

$$A = \begin{bmatrix} 1 & -c & 0 & \cdots & 0 \\ -c & 1+c^2 & -c & \cdots & 0 \\ 0 & -c & 1+c^2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}, \qquad K = \begin{bmatrix} c \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad S = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ c \end{bmatrix}.$$

1) For the special case where c = 1, the following formulae apply:

$$|B_{(n)}| = n+1\,, \qquad B_{ij} = i\,(n-j+1)\,, \quad i \le j\,.$$

2) A similar formula is found in a forthcoming book by Nerlove, Grether and Carvalho [7, Appendix E]. These authors also note that Theil [11, pp. 211-223] gives an asymptotic expression for the inverse of B of infinite order, but with five diagonals.


The matrix A is known to any student of econometrics as the inverse of the autocorrelation matrix, i.e.

$$A^{-1} = \frac{1}{1-c^2}\begin{bmatrix} 1 & c & c^2 & \cdots & c^{n-1} \\ c & 1 & c & \cdots & c^{n-2} \\ c^2 & c & 1 & \cdots & c^{n-3} \\ \vdots & & & \ddots & \vdots \\ c^{n-1} & c^{n-2} & c^{n-3} & \cdots & 1 \end{bmatrix}$$

n being the order of A. The expression for B given above is suitable for application of Corollary 3.2, with K = H and S = T. We find immediately:

$$a_1 = a_2 = \frac{1}{1-c^2}\,, \qquad a_3 = a_4 = \frac{c^{n+1}}{1-c^2}$$

and therefore

$$B^{-1} = A^{-1} - \frac{1-c^2}{1-c^{2n+2}}\,A^{-1}\,(KK' + SS' - c^{n+1}SK' - c^{n+1}KS')\,A^{-1}\,.$$

This expression can be simplified further. Call

$$Z_1 = \begin{bmatrix} 1 \\ c \\ c^2 \\ \vdots \\ c^{n-1} \end{bmatrix} \qquad\text{and}\qquad Z_2 = \begin{bmatrix} c^{n-1} \\ c^{n-2} \\ \vdots \\ c \\ 1 \end{bmatrix}.$$

It is immediately verified that $A^{-1}K = \dfrac{c}{1-c^2}\,Z_1$ and $A^{-1}S = \dfrac{c}{1-c^2}\,Z_2$. We therefore can state the result in the following simpler form:

$$B^{-1} = A^{-1} - \frac{c^2}{(1-c^2)(1-c^{2n+2})}\left[\,Z_1Z_1' + Z_2Z_2' - c^{n+1}\,(Z_1Z_2' + Z_2Z_1')\,\right]\,.$$
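The closed form for $b^{ij}$ and the determinant formula can be checked against a direct inversion, as in the sketch below (NumPy assumed; n and c are illustrative, with $|c| < 1$ as assumed above):

```python
# Sketch: closed-form inverse of the MA(1) tri-diagonal matrix (Example 7).
import numpy as np

n, c = 7, 0.6
B = ((1 + c**2) * np.eye(n)
     - c * np.eye(n, k=1) - c * np.eye(n, k=-1))

# b^{ij} = c^{j-i} (1-c^{2i})(1-c^{2(n-j)+2}) / ((1-c^2)(1-c^{2n+2})), i <= j
Binv = np.empty((n, n))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        p, q = min(i, j), max(i, j)            # the inverse is symmetric
        Binv[i - 1, j - 1] = (c ** (q - p) * (1 - c ** (2 * p))
                              * (1 - c ** (2 * (n - q) + 2))
                              / ((1 - c**2) * (1 - c ** (2 * n + 2))))

assert np.allclose(Binv, np.linalg.inv(B))
# |B(n)| = (1 - c^{2n+2}) / (1 - c^2)
assert np.isclose(np.linalg.det(B), (1 - c ** (2 * n + 2)) / (1 - c**2))
```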

References
