Computing and estimating information matrices of weak ARMA models



HAL Id: hal-00555305

https://hal.archives-ouvertes.fr/hal-00555305

Preprint submitted on 12 Jan 2011


Computing and estimating information matrices of weak ARMA models

Yacouba Boubacar Mainassara, Michel Carbon, Christian Francq

To cite this version:

Yacouba Boubacar Mainassara, Michel Carbon, Christian Francq. Computing and estimating information matrices of weak ARMA models. 2011. hal-00555305


Computing and estimating information matrices of weak ARMA models

Y. Boubacar Mainassara^a, M. Carbon^b, C. Francq^a

^a Université Lille III, EQUIPPE-GREMARS, BP 60149, 59653 Villeneuve d'Ascq cedex, France.
^b Université Rennes 2 et Ensai

Abstract

Numerous time series admit weak autoregressive-moving average (ARMA) representations, in which the errors are uncorrelated but not necessarily independent nor martingale differences. The statistical inference of this general class of models requires the estimation of generalized Fisher information matrices. We give analytic expressions and propose consistent estimators of these matrices, at any point of the parameter space. Our results are illustrated by means of Monte Carlo experiments and by analyzing the dynamics of daily returns and squared daily returns of financial series.

Key words: Asymptotic relative efficiency (ARE), Bahadur's slope, Information matrices, Lagrange Multiplier test, Nonlinear processes, Wald test, Weak ARMA models.

1. Introduction

The class of the standard ARMA models with independent errors is often judged too restrictive by practitioners, because such models are inadequate for time series exhibiting a nonlinear behavior. Even when the independence assumption is relaxed and the errors are only supposed to be martingale differences, the ARMA models remain often unrealistic because such models postulate that the best predictor is a linear function of the past values.

Email addresses: yacouba.boubacarmainassara@univ-lille3.fr (Y. Boubacar Mainassara), carbon.michel@yahoo.ca (M. Carbon), christian.francq@univ-lille3.fr (C. Francq)


The class of ARMA models with uncorrelated but not necessarily independent errors is much more general and accommodates many nonlinear data-generating processes (see Francq, Roy and Zakoïan, 2005, and the references therein).

For standard ARMA models, it is well known that the asymptotic variance of the least squares estimator (LSE) is of the form $\sigma^2 J_{\theta_0}^{-1}$, where $\sigma^2$ is the variance of the errors and $J_{\theta_0}$ is an information matrix depending on the ARMA parameter $\theta_0$ (see e.g. Brockwell and Davis, 1991). For weak ARMA models, the asymptotic variance of the LSE takes the sandwich form $J_{\theta_0}^{-1} I_{\theta_0} J_{\theta_0}^{-1}$, where $I_{\theta_0}$ is a second information matrix depending on $\theta_0$ and on fourth-order moments of the errors. The estimation of the asymptotic information matrices $J_{\theta_0}$ and $I_{\theta_0}$ is thus necessary to evaluate the asymptotic accuracy of the LSE of weak ARMA models.
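Once estimates of $J_{\theta_0}$, $I_{\theta_0}$ and $\sigma^2$ are available (Section 3 below explains how to obtain them), the two asymptotic variances can be assembled directly. A minimal R sketch, assuming hypothetical objects Jhat, Ihat and sigma2hat have already been computed:

# Asymptotic variance of sqrt(n)(thetahat - theta0):
# strong ARMA case: sigma^2 J^{-1}; weak ARMA case: J^{-1} I J^{-1}.
Omega.strong <- function(Jhat, sigma2hat) sigma2hat * solve(Jhat)
Omega.weak   <- function(Jhat, Ihat) solve(Jhat) %*% Ihat %*% solve(Jhat)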

In the framework of (Gaussian) linear processes, the problem of computing the Fisher information matrices and of their inverses has been widely studied. Various expressions of these matrices have been given by Whittle (1953), Siddiqui (1958), Durbin (1959) and Box and Jenkins (1976). McLeod (1984), Klein and Mélard (1990, 2004) and Godolphin and Bane (2006) have given algorithms for their computation. For a few particular cases of weak ARMA models, the matrices $I_{\theta_0}$ and $J_{\theta_0}$ have been computed by Francq and Zakoïan (2000, 2007) and Francq, Roy and Zakoïan (2005). In all the above-mentioned references, the information matrices are always computed at the true parameter value $\theta_0$. For some applications, in particular to determine Bahadur's slopes under alternatives, it is necessary to compute the information matrices at $\theta \neq \theta_0$.

The aim of the present paper is to compute and estimate the information matrices $J_\theta$ and $I_\theta$ at a point $\theta$ which is not necessarily equal to $\theta_0$.

The rest of the paper is organized as follows. In Section 2, we present the weak, strong and semistrong ARMA representations and recall results concerning the estimation of the weak ARMA models. Section 3 displays the main results. We describe how to obtain numerical evaluations of $I_\theta$ and $J_\theta$, up to some tolerance, and we propose consistent estimators for these information matrices. Section 4 studies the finite sample behavior of the estimators and compares the Bahadur slopes of two versions of the Lagrange multiplier test for testing linear restrictions on $\theta_0$. For the latter application, it is necessary to compute $J_\theta$ at $\theta \neq \theta_0$. Concluding remarks are proposed in Section 5. The proofs of the main results are collected in the appendix.


We first introduce the notions of weak and strong ARMA representations, which differ by the assumptions on the error terms. We then recall results concerning the estimation of the weak ARMA models, and introduce extended information matrices.

2.1. Strong, semistrong and weak ARMA representations

For a linear model to be quite general, the error terms must be the linear innovations, which are uncorrelated by construction but are not independent, nor martingale differences, in general. Indeed, the Wold decomposition (see Brockwell and Davis (1991), Section 5.7) stipulates that any purely non deterministic stationary process can be expressed as

$X_t = \sum_{\ell=0}^{\infty} \varphi_\ell \epsilon_{t-\ell}, \qquad (\epsilon_t) \sim \mathrm{WN}(0, \sigma^2),$   (1)

where $\varphi_0 = 1$, $\sum_{\ell=0}^{\infty} \varphi_\ell^2 < \infty$, and the notation $(\epsilon_t) \sim \mathrm{WN}(0, \sigma^2)$ signifies that the linear innovation process $(\epsilon_t)$ is a weak white noise, that is a stationary sequence of centered and uncorrelated random variables with common variance $\sigma^2$. In practice the sequence $(\varphi_\ell)$ is often parameterized by assuming that $X_t$ admits an ARMA$(p,q)$ representation, i.e. that there exist integers $p$ and $q$ and constants $a_{01}, \ldots, a_{0p}, b_{01}, \ldots, b_{0q}$, such that

$\forall t \in \mathbb{Z}, \qquad X_t - \sum_{i=1}^{p} a_{0i} X_{t-i} = \epsilon_t + \sum_{j=1}^{q} b_{0j} \epsilon_{t-j}.$   (2)

This representation is said to be a weak ARMA$(p,q)$ representation under the assumption $(\epsilon_t) \sim \mathrm{WN}(0, \sigma^2)$. For the statistical inference of ARMA models, the weak white noise assumption is not sufficient and is often replaced by the strong white noise assumption $(\epsilon_t) \sim \mathrm{IID}(0, \sigma^2)$, i.e. the assumption that $(\epsilon_t)$ is an independent and identically distributed (iid) sequence of random variables with mean 0 and common variance $\sigma^2$. Sometimes an intermediate assumption is considered for the noise. The sequence $(\epsilon_t)$ is said to be a semistrong white noise or a martingale-difference white noise, and is denoted by $(\epsilon_t) \sim \mathrm{MD}(0, \sigma^2)$, if $(\epsilon_t)$ is a stationary sequence satisfying $E(\epsilon_t \mid \epsilon_u, u < t) = 0$ and $\mathrm{Var}(\epsilon_t) = \sigma^2$. An ARMA representation (2) will be called strong under the assumption $(\epsilon_t) \sim \mathrm{IID}(0, \sigma^2)$ and semistrong under the assumption $(\epsilon_t) \sim \mathrm{MD}(0, \sigma^2)$.


The strong white noise assumption is more restrictive than that of semistrong white noise, and the latter is more restrictive than the weak white noise assumption, because independence entails unpredictability and unpredictability entails uncorrelatedness, but the reverses are not true. Consequently the weak ARMA representations are more general than the semistrong and strong ones, which we schematize by

$\{\text{Weak ARMA}\} \supset \{\text{Semistrong ARMA}\} \supset \{\text{Strong ARMA}\}.$   (3)

Any process satisfying (1) is the limit, in $L^2$ as $n \to \infty$, of a sequence of processes satisfying weak ARMA$(p_n, q_n)$ representations (see e.g. Francq and Zakoïan, 2005, page 244). In this sense, the subclass of the processes admitting weak ARMA$(p_n, q_n)$ representations is dense in the set of the purely non deterministic stationary processes. Simple illustrations that the last inclusion of (3) is strict are given by the vast class of volatility models. Indeed GARCH-type models are generally martingale differences (because financial returns are generally assumed to be unpredictable) but they are not strong noises (in particular, because of the volatility clustering, the squared returns are predictable). Many nonlinear models, such as bilinear or Markov-switching models, illustrate the first inclusion in (3), since they admit weak ARMA representations (see Francq, Roy and Zakoïan, 2005, Section 2.3) which are not semistrong, because the best predictor is generally not linear when the data generating process (DGP) is nonlinear. To fix ideas, we give below a simple illustrative example, which was not given by the above-mentioned references.

Example 2.1 (Integer-valued AR(1) and MA(1)). McKenzie (2003) reviews the literature on models for integer-valued time series. Let $\circ$ be the thinning operator defined by

$a \circ X = \sum_{j=1}^{X} Y_j,$

where $(Y_j)$ is an iid counting series, independent of the integer-valued random variable $X$, with Bernoulli distribution of parameter $a \in [0,1)$. The integer-valued autoregressive (INAR) model of order 1 is given by

$\forall t \in \mathbb{Z}, \qquad X_t = a \circ X_{t-1} + Z_t,$   (4)

where $(Z_t)$ is an integer-valued iid sequence, independent of the counting series, with mean $\mu$ and variance $\sigma^2$. Clearly the best predictor of $X_t$ is linear since $E(X_t \mid X_u, u < t) = a X_{t-1} + \mu$. Moreover we have $\mathrm{Var}(X_t \mid X_u, u < t) = (1-a)a X_{t-1} + \sigma^2$. We thus have the semistrong AR(1) representation

$X_t = a X_{t-1} + \mu + \epsilon_t, \qquad (\epsilon_t) \sim \mathrm{MD}\left(0,\ a\mu + \sigma^2\right).$

Similarly to (4) the integer-valued moving-average INMA(1) is defined by

$\forall t \in \mathbb{Z}, \qquad X_t = Z_t + a \circ Z_{t-1}.$

Straightforward computations show that $E X_t = \mu(1+a)$, $\mathrm{Var}(X_t) = \sigma^2 + a(1-a)\mu + a^2\sigma^2$ and $\mathrm{Cov}(X_t, X_{t-1}) = a\sigma^2$, from which we deduce the weak MA(1) representation

$X_t = \mu(1+a) + \epsilon_t + b\,\epsilon_{t-1}, \qquad (\epsilon_t) \sim \mathrm{WN}(0, \sigma_\epsilon^2),$

where $b \in [0,1)$ and $\sigma_\epsilon^2 > 0$ are solutions of $b/(1+b^2) = \rho_X(1)$ and $(1+b^2)\sigma_\epsilon^2 = \mathrm{Var}(X_t)$. This MA(1) representation is not semistrong because $E(X_t \mid X_{t-1} = 0) = E(X_t \mid Z_{t-1} = 0) = \mu$ does not coincide with the linear prediction given by the MA(1) model when $a \neq 0$.

Finally we have shown that an INMA(1) is a weak MA(1) and that an INAR(1) is a semistrong AR(1).
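The weak MA(1) structure of Example 2.1 is easy to check by simulation. A minimal R sketch, with illustrative choices (a = 0.5 and Poisson(1) innovations $Z_t$, so that $\sigma^2 = \mu$) that are not taken from the example:

set.seed(123)
n <- 20000; a <- 0.5; mu <- 1          # illustrative values (not taken from the text)
sigma2 <- mu                           # variance of Z_t in the Poisson case
Z <- rpois(n + 1, lambda = mu)         # integer-valued iid sequence Z_t
X <- Z[-1] + rbinom(n, size = Z[-(n + 1)], prob = a)   # INMA(1): X_t = Z_t + a o Z_{t-1}
c(mean(X), mu * (1 + a))                               # E X_t = mu (1 + a)
c(var(X), sigma2 + a * (1 - a) * mu + a^2 * sigma2)    # Var(X_t)
round(acf(X, lag.max = 4, plot = FALSE)$acf[-1], 2)    # only lag 1 is clearly nonzero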

2.2. Estimating weak ARMA representations

We now present the asymptotic behavior of the LSE in the case of weak ARMA models. The LSE is the standard estimation procedure for ARMA models and it coincides with the maximum-likelihood estimator in the Gaussian case. It will be convenient to write (2) as $\phi_0(B) X_t = \psi_0(B) \epsilon_t$, where $B$ is the backshift operator, $\phi_0(z) = 1 - \sum_{i=1}^{p} a_{0i} z^i$ is the AR polynomial and $\psi_0(z) = 1 + \sum_{j=1}^{q} b_{0j} z^j$ is the MA polynomial. The unknown parameter $\theta_0 = (a_{01}, \ldots, a_{0p}, b_{01}, \ldots, b_{0q})'$ is supposed to belong to the interior of a compact subspace of the parameter space

$\Theta := \left\{\theta = (\theta_1, \ldots, \theta_{p+q}) = (a_1, \ldots, a_p, b_1, \ldots, b_q) \in \mathbb{R}^{p+q} :\ \phi(z) = 1 - \sum_{i=1}^{p} a_i z^i \ \text{and}\ \psi(z) = 1 + \sum_{j=1}^{q} b_j z^j \ \text{have all their zeros outside the unit disk}\right\}.$


Since $\theta_0 \in \Theta$, the polynomials $\phi_0(z)$ and $\psi_0(z)$ have all their zeros outside the unit disk. We also assume that $\phi_0(z)$ and $\psi_0(z)$ have no zero in common, that $p + q > 0$ and $a_{0p}^2 + b_{0q}^2 \neq 0$ (by convention $a_{00} = b_{00} = 1$). These assumptions are standard and are also made for the usual strong ARMA models.

For all $\theta \in \Theta$, let

$\epsilon_t(\theta) = \psi^{-1}(B)\phi(B) X_t = X_t + \sum_{i=1}^{\infty} c_i(\theta) X_{t-i}.$

Given a realization of length $n$, $X_1, X_2, \ldots, X_n$, $\epsilon_t(\theta)$ can be approximated, for $0 < t \leq n$, by $e_t(\theta)$ defined recursively by

$e_t(\theta) = X_t - \sum_{i=1}^{p} \theta_i X_{t-i} - \sum_{i=1}^{q} \theta_{p+i}\, e_{t-i}(\theta),$

where the unknown starting values are set to zero: $e_0(\theta) = e_{-1}(\theta) = \cdots = e_{-q+1}(\theta) = X_0 = X_{-1} = \cdots = X_{-p+1} = 0$. The random variable $\hat\theta_n$ is called LSE if it satisfies, almost surely,

$Q_n(\hat\theta_n) = \min_{\theta \in \Theta} Q_n(\theta), \qquad Q_n(\theta) = \frac{1}{2n} \sum_{t=1}^{n} e_t^2(\theta).$
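The recursion for $e_t(\theta)$ and the criterion $Q_n(\theta)$ translate directly into R. The following is a minimal sketch of the least squares procedure described above; it is not the authors' code, and optim() with default settings and zero starting values is an arbitrary choice:

# Residuals e_t(theta) for theta = (a_1,...,a_p, b_1,...,b_q); starting values set to zero
arma.residuals <- function(theta, X, p, q) {
  n <- length(X); e <- numeric(n)
  for (t in 1:n) {
    e[t] <- X[t]
    for (i in seq_len(min(p, t - 1))) e[t] <- e[t] - theta[i] * X[t - i]
    for (j in seq_len(min(q, t - 1))) e[t] <- e[t] - theta[p + j] * e[t - j]
  }
  e
}
# Least squares criterion Q_n(theta); an ARMA(1,1) fit is shown as a usage example
Qn <- function(theta, X, p, q) sum(arma.residuals(theta, X, p, q)^2) / (2 * length(X))
# thetahat <- optim(rep(0, 2), Qn, X = X, p = 1, q = 1)$par   # X: some observed series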

The asymptotic behavior of the LSE is well known in the strong ARMA case, i.e. under the assumption $(\epsilon_t) \sim \mathrm{IID}(0, \sigma^2)$. This assumption being very restrictive, Francq and Zakoïan (1998) considered weak ARMA representations of stationary processes satisfying the following assumption.

A1: $E|X_t|^{4+2\nu} < \infty$ and $\sum_{k=0}^{\infty} \{\alpha_X(k)\}^{\frac{\nu}{2+\nu}} < \infty$ for some $\nu > 0$,

where $\alpha_X(k)$, $k = 0, 1, \ldots$, denote the strong mixing coefficients of the process $(X_t)$ (see e.g. Bradley, 2005, for a review on strong mixing conditions). As noted by Francq and Zakoïan (2005), Assumption A1 can be replaced by

A1': $E|\epsilon_t|^{4+2\nu} < \infty$ and $\sum_{k=0}^{\infty} \{\alpha_\epsilon(k)\}^{\frac{\nu}{2+\nu}} < \infty$ for some $\nu > 0$.

A straightforward extension of Francq and Zakoïan (1998) thus gives the following result.


Lemma 2.1 (Francq and Zakoïan, 1998). Let $(X_t)$ be a strictly stationary and ergodic process satisfying the weak ARMA model (2) with $(\epsilon_t) \sim \mathrm{WN}(0, \sigma^2)$. Under the previous assumptions and Assumption A1 or A1',

$\sqrt{n}\left(\hat\theta_n - \theta_0\right) \stackrel{d}{\longrightarrow} \mathcal{N}\left(0,\ \Omega = J^{-1} I J^{-1}\right) \quad \text{as } n \to \infty,$   (5)

where $I = I_{\theta_0}$, $J = J_{\theta_0} = J^*_{\theta_0}$, with

$I_\theta = \sum_{h=-\infty}^{+\infty} \mathrm{Cov}\left(\epsilon_t(\theta)\frac{\partial \epsilon_t(\theta)}{\partial \theta},\ \epsilon_{t-h}(\theta)\frac{\partial \epsilon_{t-h}(\theta)}{\partial \theta}\right),$

$J_\theta = E\,\frac{\partial \epsilon_t(\theta)}{\partial \theta}\frac{\partial \epsilon_t(\theta)}{\partial \theta'}, \qquad J^*_\theta = E\,\epsilon_t(\theta)\frac{\partial^2 \epsilon_t(\theta)}{\partial \theta\,\partial \theta'} + E\,\frac{\partial \epsilon_t(\theta)}{\partial \theta}\frac{\partial \epsilon_t(\theta)}{\partial \theta'}.$

In the strong ARMA case, we have $I = I_s := \sigma^2 J$ and $\Omega = \Omega_s := \sigma^2 J^{-1}$. In the semistrong ARMA case, i.e. under the assumption $(\epsilon_t) \sim \mathrm{MD}(0, \sigma^2)$, we have

$I = I_{ss} := E\,\epsilon_t^2\,\frac{\partial \epsilon_t(\theta_0)}{\partial \theta}\frac{\partial \epsilon_t(\theta_0)}{\partial \theta'}.$

Note that we introduce the two versions $J_\theta$ and $J^*_\theta$ because the following two estimators of $J$ can be considered:

$\hat{J}_n = \frac{1}{n}\sum_{t=1}^{n} \frac{\partial e_t(\hat\theta_n)}{\partial \theta}\frac{\partial e_t(\hat\theta_n)}{\partial \theta'}, \qquad \hat{J}^*_n = \frac{1}{n}\sum_{t=1}^{n} e_t(\hat\theta_n)\frac{\partial^2 e_t(\hat\theta_n)}{\partial \theta\,\partial \theta'} + \hat{J}_n.$   (6)

The matrices $J_\theta$, $J^*_\theta$ and $I_\theta$ can be called information matrices. As we will see in Section 4.2 they determine the asymptotic behavior of test procedures on $\theta_0$. They are also involved in other inference steps, such as in portmanteau adequacy tests (see Francq, Roy and Zakoïan, 2005).
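For a quick numerical check, the first estimator in (6) and the semistrong estimator of $I$ can be approximated by numerical differentiation of the residuals. A minimal sketch reusing arma.residuals() from the sketch in Section 2.2 (the step delta and the central-difference scheme are arbitrary choices; the second estimator in (6) would additionally require second derivatives of $e_t$ and is omitted):

info.hats <- function(thetahat, X, p, q, delta = 1e-5) {
  n <- length(X); d <- p + q
  e  <- arma.residuals(thetahat, X, p, q)
  De <- matrix(0, nrow = n, ncol = d)        # De[t, l] = d e_t(thetahat) / d theta_l
  for (l in 1:d) {
    ep <- em <- thetahat
    ep[l] <- ep[l] + delta; em[l] <- em[l] - delta
    De[, l] <- (arma.residuals(ep, X, p, q) - arma.residuals(em, X, p, q)) / (2 * delta)
  }
  list(Jhat    = crossprod(De) / n,          # (1/n) sum_t (de_t/dtheta)(de_t/dtheta)'
       Iss.hat = crossprod(De * e) / n)      # (1/n) sum_t e_t^2 (de_t/dtheta)(de_t/dtheta)'
}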

3. Main results

McLeod (1978) gave a nice expression for $J$, as the variance of a VAR model involving only the ARMA parameter $\theta_0$ (see (8.8.3) in Brockwell and Davis, 1991). Francq, Roy and Zakoïan (2005) obtained an expression of $I$ involving the ARMA parameter $\theta_0$ and the fourth-order moments of the weak noise $(\epsilon_t)$ (with their notations, $J = \Lambda'\Lambda$ and $I = \Lambda'\Gamma_{\infty,\infty}\Lambda$, where $\Lambda$ depends on $\theta_0$ and $\Gamma_{\infty,\infty}$ depends on moments of $(\epsilon_t)$). For certain statistical applications, it is interesting to obtain similar expressions for $I_\theta$, $J_\theta$ and $J^*_\theta$ when $\theta \neq \theta_0$. This is the subject of the next subsection.


3.1.1. Matrix $J_\theta$

Differentiating the two sides of the equation $\phi(B) X_t = \psi(B)\epsilon_t(\theta)$, for $i,k = 1, \ldots, p$ and $j,\ell = 1, \ldots, q$, we obtain

$-X_{t-i} = \psi(B)\frac{\partial}{\partial a_i}\epsilon_t(\theta), \qquad 0 = \epsilon_{t-j}(\theta) + \psi(B)\frac{\partial}{\partial b_j}\epsilon_t(\theta),$

$0 = \psi(B)\frac{\partial^2}{\partial a_i\,\partial a_k}\epsilon_t(\theta), \qquad 0 = \frac{\partial}{\partial a_i}\epsilon_{t-j}(\theta) + \psi(B)\frac{\partial^2}{\partial b_j\,\partial a_i}\epsilon_t(\theta),$

$0 = \frac{\partial}{\partial b_\ell}\epsilon_{t-j}(\theta) + \frac{\partial}{\partial b_j}\epsilon_{t-\ell}(\theta) + \psi(B)\frac{\partial^2}{\partial b_j\,\partial b_\ell}\epsilon_t(\theta).$

We thus have

$\frac{\partial}{\partial a_i}\epsilon_t(\theta) = -\psi^{-1}(B) X_{t-i} = -\psi^{-1}\phi_0^{-1}\psi_0(B)\,\epsilon_{t-i} := -\sum_{h=0}^{\infty} c^a_h\,\epsilon_{t-i-h},$

$\frac{\partial}{\partial b_j}\epsilon_t(\theta) = -\psi^{-2}\phi_0^{-1}\phi\psi_0(B)\,\epsilon_{t-j} := -\sum_{h=0}^{\infty} c^b_h\,\epsilon_{t-j-h},$

$\frac{\partial^2}{\partial b_j\,\partial a_i}\epsilon_t(\theta) = \psi^{-2}\phi_0^{-1}\psi_0(B)\,\epsilon_{t-i-j} := \sum_{h=0}^{\infty} c^{ab}_h\,\epsilon_{t-i-j-h},$

$\frac{\partial^2}{\partial b_j\,\partial b_\ell}\epsilon_t(\theta) = 2\psi^{-3}\phi_0^{-1}\phi\psi_0(B)\,\epsilon_{t-j-\ell} := \sum_{h=0}^{\infty} c^{bb}_h\,\epsilon_{t-j-\ell-h},$

and $\partial^2\epsilon_t(\theta)/\partial a_i\,\partial a_k = 0$. Moreover

$\epsilon_t(\theta) = \psi^{-1}\phi_0^{-1}\phi\psi_0(B)\,\epsilon_t := \sum_{h=0}^{\infty} c_h\,\epsilon_{t-h}.$

The following result immediately follows.


Proposition 3.1. The elements of the matrices $J_\theta$ and $J^*_\theta$ are given by

$J_\theta(i,k) = J^*_\theta(i,k) = \sigma^2 \sum_{s=0}^{\infty} c^a_{s+k-i}\, c^a_s, \qquad J_\theta(p+j, p+\ell) = \sigma^2 \sum_{s=0}^{\infty} c^b_{s+\ell-j}\, c^b_s,$

$J^*_\theta(p+j, p+\ell) = \sigma^2 \sum_{s=0}^{\infty} c_{s+j+\ell}\, c^{bb}_s + J_\theta(p+j, p+\ell), \qquad J_\theta(i, p+\ell) = \sigma^2 \sum_{s=\max\{0,\, i-\ell\}}^{\infty} c^a_{s+\ell-i}\, c^b_s,$

$J^*_\theta(i, p+\ell) = \sigma^2 \sum_{s=0}^{\infty} c_{s+i+\ell}\, c^{ab}_s + J_\theta(i, p+\ell),$

for $1 \leq i \leq k \leq p$ and $1 \leq j \leq \ell \leq q$.
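As a quick sanity check of Proposition 3.1, consider a pure AR(1) evaluated at $\theta = \theta_0 = a_0$. Then $\psi(z) = \psi_0(z) = 1$ and $\phi_0(z) = 1 - a_0 z$, so that $c^a_h = a_0^h$ and $J_{\theta_0}(1,1) = \sigma^2 \sum_{s=0}^{\infty} a_0^{2s} = \sigma^2/(1-a_0^2)$, which is indeed $E X_{t-1}^2$, the second moment of the regressor in the AR(1) recursion (here $\partial \epsilon_t(\theta)/\partial a = -X_{t-1}$).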

On the web page of the authors, programs written in R are available for computing the information matrices defined in this paper, as well as their estimates. For example, the following function infoJ() computes $J_\theta$ when, in R language, theta0 <- c(ar0, ma0) and theta <- c(ar1, ma1). The truncation parameter M is discussed in Section 3.2 below. This function uses the function prod.poly(), which computes the product of 2 polynomials, and the function ARMAtoMA() of the package stats.

# Product of 2 polynomials
prod.poly <- function(a, b) {
  p <- length(a); q <- length(b)
  if (p <= 0 | q <= 0) stop("a or b is invalid")
  c <- rep(0, (p + q))
  for (h in 2:(p + q)) {
    imin <- max(1, h - q); imax <- min(p, h - 1)
    for (i in (imin:imax)) c[h] <- c[h] + a[i] * b[h - i]
  }
  c[2:(p + q)]
}
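For instance, prod.poly(c(1, 0.5), c(1, 0.3)) returns c(1, 0.8, 0.15), i.e. the coefficients of $(1 + 0.5z)(1 + 0.3z) = 1 + 0.8z + 0.15z^2$ in increasing powers of $z$, the constant term included.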

# Computation of the information matrix J at \theta=(ar1,ma1)
infoJ <- function(ar0, ma0, ar1, ma1, M = 200) {
  p <- length(ar1); q <- length(ma1); p0 <- length(ar0); q0 <- length(ma0)
  matJ.theta <- matrix(0, nrow = (p + q), ncol = (p + q))
  if (p > 0) { # c_h^a + top-left corner of J
    p1 <- p0 + q
    if (p1 == 0) ar2 <- c()
    if (p1 > 0) ar2 <- -prod.poly(c(1, -1 * ar0), c(1, ma1))[2:(p1 + 1)]
    h.a <- c(1, ARMAtoMA(ar = ar2, ma = ma0, lag.max = M))   # coefficients c_h^a
    for (i in (1:p)) {
      for (k in (1:p)) {
        matJ.theta[i, k] <- sum(h.a[(abs(k - i) + 1):(M + 1)] * h.a[1:(M - abs(k - i) + 1)])
      }
    }
  }
  if (q > 0) { # c_h^b + bottom-right corner of J
    p1 <- p0 + 2 * q
    if (p1 == 0) ar2 <- c()
    if (p1 > 0) ar2 <- -prod.poly(prod.poly(c(1, ma1), c(1, ma1)), c(1, -1 * ar0))[2:(p1 + 1)]
    q1 <- p + q0
    if (q1 == 0) ma2 <- c()
    if (q1 > 0) ma2 <- prod.poly(c(1, -1 * ar1), c(1, ma0))[2:(q1 + 1)]
    h.b <- c(1, ARMAtoMA(ar = ar2, ma = ma2, lag.max = M))   # coefficients c_h^b
    for (j in (1:q)) {
      for (l in (1:q)) {
        matJ.theta[p + j, p + l] <- sum(h.b[(abs(l - j) + 1):(M + 1)] * h.b[1:(M - abs(l - j) + 1)])
      }
    }
  }
  if (p > 0 & q > 0) { # cross blocks
    for (i in (1:p)) {
      for (l in (1:q)) {
        indmin1 <- max(0, i - l) + l - i + 1
        indmin2 <- max(0, i - l) + 1
        indmax1 <- M - max(0, i - l)
        indmax2 <- indmax1 - l + i
        matJ.theta[i, p + l] <- sum(h.a[indmin1:indmax1] * h.b[indmin2:indmax2])
        matJ.theta[p + l, i] <- matJ.theta[i, p + l]
      }
    }
  }
  matJ.theta
}
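Continuing the AR(1) check above (illustrative value $a_0 = 0.5$, evaluated at $\theta = \theta_0$): the sum $\sum_{s \geq 0} a_0^{2s} = 1/(1 - 0.25) \approx 1.333$ of Proposition 3.1 is recovered numerically (note that the factor $\sigma^2$ does not appear in the code above):

infoJ(ar0 = 0.5, ma0 = numeric(0), ar1 = 0.5, ma1 = numeric(0))   # 1 x 1 matrix, about 1.3333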

3.1.2. Matrix $I_\theta$

We now search similar tractable expressions for $I_\theta$. Let

$\Gamma(m, m') = \sum_{h=-\infty}^{+\infty} \mathrm{Cov}\left(\epsilon_t\epsilon_{t-m},\ \epsilon_{t-h}\epsilon_{t-h-m'}\right).$   (7)

In the strong case, we have

$\Gamma(0,0) = \mu_4 - \sigma^4, \qquad \Gamma(m, m) = \Gamma(m, -m) = \sigma^4, \qquad \Gamma(m', m'') = 0,$   (8)

with $\mu_4 = E\epsilon_1^4$, $m \neq 0$ and $|m'| \neq |m''|$. Simplifications may also hold in semistrong cases. Indeed, consider the case $(\epsilon_t) \sim \mathrm{WN}(0, \sigma_\epsilon^2)$ under the following symmetry assumption:

$E\,\epsilon_{t_1}\epsilon_{t_2}\epsilon_{t_3}\epsilon_{t_4} = 0 \quad \text{when } t_1 \neq t_2,\ t_1 \neq t_3 \text{ and } t_1 \neq t_4.$   (9)


It can be shown that, in particular, GARCH models with fourth-order moments and symmetric innovations satisfy (9). Many other martingale differences satisfy this assumption. In this semistrong case, we have

$\Gamma(0,0) = \sum_{h=-\infty}^{\infty} \mathrm{Cov}(\epsilon_t^2, \epsilon_{t-h}^2), \qquad \Gamma(m, m) = E\,\epsilon_t^2\epsilon_{t-m}^2, \qquad \Gamma(m', m'') = 0$   (10)

when $m \neq 0$ and $|m'| \neq |m''|$.

Example 3.1. For a GARCH(1,1) model of the form

$\epsilon_t = \sigma_t\eta_t, \qquad \sigma_t^2 = \omega + \alpha\epsilon_{t-1}^2 + \beta\sigma_{t-1}^2, \qquad (\eta_t) \sim \mathrm{IID}(0,1), \qquad t = 1, 2, \ldots$

with $\omega > 0$, $\alpha \geq 0$, $\beta \geq 0$ and $\alpha^2 E\eta_1^4 + \beta^2 + 2\alpha\beta < 1$,¹ we obtain

$\Gamma(0,0) = E\nu_t^2\,\frac{(1-\beta)^2}{(1-\alpha-\beta)^2}, \qquad \Gamma(1,1) = E\nu_t^2\left(\alpha + \frac{\alpha^2(\alpha+\beta)}{1-(\alpha+\beta)^2}\right) + \left(E\sigma_t^2\right)^2,$

$\Gamma(m, m) = (\alpha+\beta)\,\Gamma(m-1, m-1) + \omega\, E\sigma_t^2, \qquad m > 1,$

where $\nu_t = \epsilon_t^2 - \sigma_t^2$, with

$E\nu_t^2 = \left(E\eta_1^4 - 1\right)E\sigma_t^4, \qquad E\sigma_t^2 = \frac{\omega}{1-\alpha-\beta}, \qquad E\sigma_t^4 = \frac{\omega^2(1+\alpha+\beta)}{(1-\alpha^2 E\eta_1^4-\beta^2-2\alpha\beta)(1-\alpha-\beta)}.$
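The moments of Example 3.1 are straightforward to code. A minimal R sketch (the parameter values $\omega = 0.1$, $\alpha = 0.1$, $\beta = 0.8$ and the Gaussian choice $E\eta_1^4 = 3$ are illustrative, not taken from the text), computing $\Gamma(m, m)$ for $m = 0, \ldots, M$ from the displayed expressions:

# Gamma(m, m) for a GARCH(1,1), following the formulas of Example 3.1
gamma.garch11 <- function(omega, alpha, beta, M, Eeta4 = 3) {  # Eeta4 = 3: Gaussian eta_t
  Es2  <- omega / (1 - alpha - beta)                           # E sigma_t^2
  Es4  <- omega^2 * (1 + alpha + beta) /
          ((1 - alpha^2 * Eeta4 - beta^2 - 2 * alpha * beta) * (1 - alpha - beta))  # E sigma_t^4
  Enu2 <- (Eeta4 - 1) * Es4                                    # E nu_t^2
  G <- numeric(M + 1)                                          # G[m + 1] = Gamma(m, m)
  G[1] <- Enu2 * (1 - beta)^2 / (1 - alpha - beta)^2
  G[2] <- Enu2 * (alpha + alpha^2 * (alpha + beta) / (1 - (alpha + beta)^2)) + Es2^2
  if (M > 1) for (m in 2:M) G[m + 1] <- (alpha + beta) * G[m] + omega * Es2
  G
}
gamma.garch11(omega = 0.1, alpha = 0.1, beta = 0.8, M = 5)     # illustrative parameters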

Proposition 3.2. The elements of the matrix $I_\theta$ are given by

$I_\theta(i, k) = \sum_{h_1,h_2,h_3,h_4=0}^{+\infty} c_{h_1} c^a_{h_2} c_{h_3} c^a_{h_4}\, \Gamma(h_2 + i - h_1,\ h_4 + k - h_3),$

$I_\theta(p+j, p+\ell) = \sum_{h_1,h_2,h_3,h_4=0}^{+\infty} c_{h_1} c^b_{h_2} c_{h_3} c^b_{h_4}\, \Gamma(h_2 + j - h_1,\ h_4 + \ell - h_3),$

$I_\theta(i, p+\ell) = \sum_{h_1,h_2,h_3,h_4=0}^{+\infty} c_{h_1} c^a_{h_2} c_{h_3} c^b_{h_4}\, \Gamma(h_2 + i - h_1,\ h_4 + \ell - h_3),$

for $1 \leq i \leq k \leq p$ and $1 \leq j \leq \ell \leq q$.

¹ The latter conditions are necessary and sufficient for the existence of a nonanticipative stationary solution with fourth-order moments (see e.g. Example 2.3 in Francq and Zakoïan, 2010).
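Proposition 3.2 also lends itself to a crude numerical check: truncate the four sums at a common order M and plug in a function returning $\Gamma(m_1, m_2)$. A purely illustrative, $O(M^4)$ R sketch for the AR block (the names Gam, c0 and ca are hypothetical placeholders for the quantities defined in Section 3.1.1; only small values of M are practical):

# I_theta(i, k), the AR-AR block of Proposition 3.2, with the four sums truncated at M.
# c0 = (c_0, ..., c_M), ca = (c_0^a, ..., c_M^a); Gam(m1, m2) returns Gamma(m1, m2).
Itheta.aa <- function(i, k, c0, ca, Gam, M) {
  s <- 0
  for (h1 in 0:M) for (h2 in 0:M) for (h3 in 0:M) for (h4 in 0:M) {
    s <- s + c0[h1 + 1] * ca[h2 + 1] * c0[h3 + 1] * ca[h4 + 1] *
             Gam(h2 + i - h1, h4 + k - h3)
  }
  s
}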
