
(1)

A new statistical procedure for testing the presence of a significative correlation in the residuals of stable autoregressive processes

Frédéric Proïa

Colloque Jeunes Probabilistes et Statisticiens, CIRM, Marseille, 16-20 April 2012

INRIA Bordeaux Sud-Ouest / Institut de Mathématiques de Bordeaux

(2)

Outline

1 Introduction
   Framework of the study
   Hypothesis
   Stability conditions

2 On the autoregressive parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

3 On the serial correlation parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

4 Testing the residual autocorrelation of the AR(p) process
   The Durbin-Watson statistic
   Test statistic
   Empirical power and comparisons

5 Conclusion

(3)

Introduction

Framework of the study

Significance of the serial correlation in the residuals.

Autoregressive process of order p.

For all n ≥ 1,

    X_n = θ_1 X_{n−1} + … + θ_p X_{n−p} + ε_n,
    ε_n = ρ ε_{n−1} + V_n,

where X_0 and ε_0 are square-integrable and X_{−1} = X_{−2} = … = X_{−p} = 0.

Preliminary study.

The empirical process is asymptotically stationary.

(X_n) has an autoregressive behavior.

Objectives.

Sharp analysis of the least squares estimators of θ and ρ.

Statistical procedure for testing H_0 : “ρ = 0” vs H_1 : “ρ ≠ 0”.

Comparison of the empirical power of the test with commonly used procedures.

Use of the Durbin-Watson statistic.

arXiv: 1203.1871.
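To fix ideas, here is a minimal simulation sketch of this model in Python with NumPy (the function name and the choice X_0 = ε_0 = 0 are illustrative conveniences, not taken from the talk); it draws (V_n) i.i.d. N(0, σ²) and builds (X_n) through the two recursions above.

import numpy as np

def simulate_ar_with_serial_correlation(theta, rho, n, sigma=1.0, rng=None):
    """Simulate X_0, ..., X_n from X_k = theta_1 X_{k-1} + ... + theta_p X_{k-p} + eps_k,
    eps_k = rho * eps_{k-1} + V_k, with X_0 = eps_0 = 0 and X_{-1} = ... = X_{-p} = 0."""
    rng = np.random.default_rng(rng)
    p = len(theta)
    X = np.zeros(n + 1)                              # X[0] stands for X_0
    eps = 0.0                                        # eps_0
    for k in range(1, n + 1):
        past = [X[k - j] if k - j >= 0 else 0.0 for j in range(1, p + 1)]
        eps = rho * eps + rng.normal(0.0, sigma)     # eps_k = rho * eps_{k-1} + V_k
        X[k] = float(np.dot(theta, past)) + eps      # AR(p) recursion
    return X

# Example with the design used later in the talk (p = 3, theta = (0.1, -0.2, 0.6)').
X = simulate_ar_with_serial_correlation([0.1, -0.2, 0.6], rho=0.3, n=500, rng=42)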

(4)

Introduction

Hypothesis

Causality of the model.

Compact expression: for all 1 ≤ t ≤ N,

    A(B) X_t = V_t,

where B is the lag operator, the polynomial A(z) = 1 − β_1 z − … − β_p z^p + θ_pρ z^{p+1} and

    β = (θ_1 + ρ, θ_2 − θ_1ρ, …, θ_p − θ_{p−1}ρ)′

(a one-step check is sketched below).

A is causal: A(z) ≠ 0 for all z ∈ ℂ such that |z| ≤ 1.

‖θ‖_1 < 1 and |ρ| < 1 imply causality.

On ℕ, causality often coincides with asymptotic stationarity.
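For completeness, the compact form can be checked in one step (a routine computation not spelled out on the slide): substituting ε_{n−1} = X_{n−1} − θ_1 X_{n−2} − … − θ_p X_{n−p−1} into X_n = θ_1 X_{n−1} + … + θ_p X_{n−p} + ρ ε_{n−1} + V_n gives

    X_n = (θ_1 + ρ) X_{n−1} + (θ_2 − θ_1ρ) X_{n−2} + … + (θ_p − θ_{p−1}ρ) X_{n−p} − θ_pρ X_{n−p−1} + V_n,

that is, A(B) X_n = V_n with the coefficients β above.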

Autoregressive structure of order at least p.

θ_p ≠ 0.

Moments on the noise.

(V_n) i.i.d. such that E[V_1] = 0, E[V_1²] = σ² and E[V_1⁴] = τ⁴.

Possible to do without the i.i.d. assumption.

(5)

Introduction

Stability conditions

Serial correlation stability.

|ρ| < 1 transfers the moment properties from (V_n) to (ε_n).

Autoregression stability.

‖θ‖_1 < 1 transfers the moment properties from (ε_n) to (X_n).

Lemma (Stability, M. Duflo)

Assume that (V_n) is a sequence of independent and identically distributed random variables such that, for some a ≥ 1, E[|V_1|^a] is finite. Then,

    ∑_{k=0}^{n} |X_k|^a = O(n)  a.s.   and   sup_{0 ≤ k ≤ n} |X_k| = o(n^{1/a})  a.s.

More generally.

For all j ≥ 0,

    ∑_{k=1}^{n} X_{k−j} X_k = O(n)  a.s.

(6)

Outline

1 Introduction
   Framework of the study
   Hypothesis
   Stability conditions

2 On the autoregressive parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

3 On the serial correlation parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

4 Testing the residual autocorrelation of the AR(p) process
   The Durbin-Watson statistic
   Test statistic
   Empirical power and comparisons

5 Conclusion

(7)

On the autoregressive parameter

Estimation and convergence

Least squares estimator.

For all n ≥ 0, denote by

    Φ^p_n = (X_n, X_{n−1}, …, X_{n−p+1})′

and

    S_n = ∑_{k=0}^{n} Φ^p_k (Φ^p_k)′ + S.

In the AR(p) structure X_n = θ′Φ^p_{n−1} + ε_n, for all n ≥ 1,

    θ̂_n = S_{n−1}^{−1} ∑_{k=1}^{n} Φ^p_{k−1} X_k.

S is a symmetric positive definite matrix added to ensure the invertibility of S_n.
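A minimal NumPy sketch of this estimator (the helper names are illustrative and S is taken as the identity matrix; any symmetric positive definite choice works), for an array X holding X_0, ..., X_n:

import numpy as np

def phi(X, k, p):
    """Phi^p_k = (X_k, X_{k-1}, ..., X_{k-p+1})', with the convention X_j = 0 for j < 0."""
    return np.array([X[k - j] if k - j >= 0 else 0.0 for j in range(p)])

def theta_hat(X, p):
    """Regularized least squares estimate of theta from X_0, ..., X_n."""
    n = len(X) - 1
    S = np.eye(p)                                                # regularization matrix S
    for k in range(n):                                           # S_{n-1} = sum_{k=0}^{n-1} Phi_k Phi_k' + S
        S += np.outer(phi(X, k, p), phi(X, k, p))
    b = sum(phi(X, k - 1, p) * X[k] for k in range(1, n + 1))    # sum_{k=1}^{n} Phi_{k-1} X_k
    return np.linalg.solve(S, b)                                 # theta_hat_n = S_{n-1}^{-1} b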

(8)

On the autoregressive parameter

Estimation and convergence

Convergence.

Theorem

We have the almost sure convergence

    lim_{n→∞} θ̂_n = θ*  a.s.

The limiting value is given by

    θ* = α (I_p − θ_pρ J_p) β,

where I_p is the identity matrix of order p, J_p is the exchange matrix of order p and α = (1 − θ_pρ)^{−1}(1 + θ_pρ)^{−1}.

Strong consistency when ρ = 0 under the stability conditions (Lai and Wei, 1983).

(9)

On the autoregressive parameter

Central limit theorem

Central limit theorem.

Theorem

Assume that (V_n) has a finite moment of order 4. Then, we have the asymptotic normality

    √n (θ̂_n − θ*) → N(0, Σ_θ) in distribution as n → ∞.

The asymptotic covariance matrix is given by

    Σ_θ = α² (I_p − θ_pρ J_p) Δ_p^{−1} (I_p − θ_pρ J_p),

where Δ_p is the positive definite almost sure limiting matrix of σ^{−2} S_n/n.

Δ_p is the covariance matrix of the standard stationary process built from (X_n).

When ρ = 0, Σ_θ = Δ_p^{−1} (see e.g. Brockwell and Davis, 1991).

(10)

On the autoregressive parameter

Rates of convergence

Quadratic strong law.

Theorem

Assume that (V_n) has a finite moment of order 4. Then, we have the quadratic strong law

    lim_{n→∞} (1/log n) ∑_{k=1}^{n} (θ̂_k − θ*)(θ̂_k − θ*)′ = Σ_θ  a.s.

Law of iterated logarithm.

Theorem

Assume that (V_n) has a finite moment of order 4. Then, we have the law of iterated logarithm

    lim sup_{n→∞} (n / (2 log log n)) (θ̂_n − θ*)(θ̂_n − θ*)′ = Σ_θ  a.s.

(11)

Outline

1 Introduction
   Framework of the study
   Hypothesis
   Stability conditions

2 On the autoregressive parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

3 On the serial correlation parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

4 Testing the residual autocorrelation of the AR(p) process
   The Durbin-Watson statistic
   Test statistic
   Empirical power and comparisons

5 Conclusion

(12)

On the serial correlation parameter

Estimation and convergence

Estimated residual set.

For all 1 ≤ k ≤ n, let

    ε̂_k = X_k − θ̂_n′ Φ^p_{k−1}.

Least squares estimator.

In the AR(1) structure ε_n = ρ ε_{n−1} + V_n, for all n ≥ 1,

    ρ̂_n = (∑_{k=1}^{n} ε̂_k ε̂_{k−1}) / (∑_{k=1}^{n} ε̂²_{k−1}).

Convergence.

Theorem

We have the almost sure convergence

    lim_{n→∞} ρ̂_n = ρ*  a.s.

The limiting value is given by

    ρ* = θ_pρ θ*_p,

where θ*_p is the p-th element of θ*.

When ρ = 0, ρ̂_n converges a.s. to 0.
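A short sketch of this estimator in the same NumPy setting, reusing the illustrative phi and theta_hat helpers from the earlier sketch (ε̂_0 is set to X_0, which is what the definition gives when extended to k = 0 since Φ^p_{−1} = 0):

import numpy as np

def rho_hat(X, p):
    """Least squares estimate of rho computed on the estimated residuals."""
    th = theta_hat(X, p)
    n = len(X) - 1
    eps = np.zeros(n + 1)
    eps[0] = X[0]                                  # Phi_{-1} = 0, hence eps_hat_0 = X_0
    for k in range(1, n + 1):
        eps[k] = X[k] - th @ phi(X, k - 1, p)      # eps_hat_k = X_k - theta_hat' Phi_{k-1}
    num = np.sum(eps[1:] * eps[:-1])               # sum_{k=1}^{n} eps_hat_k eps_hat_{k-1}
    den = np.sum(eps[:-1] ** 2)                    # sum_{k=1}^{n} eps_hat_{k-1}^2
    return num / den, eps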

(13)

On the serial correlation parameter

Central limit theorem

Central limit theorem.

Theorem

Assume that (V_n) has a finite moment of order 4. Then, we have the joint asymptotic normality

    √n ( θ̂_n − θ* , ρ̂_n − ρ* )′ → N(0, Γ) in distribution as n → ∞.

In particular,

    √n (ρ̂_n − ρ*) → N(0, σ²_ρ) in distribution as n → ∞,

where σ²_ρ = Γ_{p+1,p+1} is the last diagonal element of Γ.

The asymptotic covariance matrix is given by

    Γ = ( Σ_θ               θ_pρ J_p Σ_θ e )
        ( θ_pρ e′ Σ_θ J_p   σ²_ρ           )

where

    σ²_ρ = (PL)′ Λ_p (PL) − 2 α^{−1}θ_p (Λ¹_p)′ J_p PL + (α^{−1}θ_p)² λ_0,

the quantities e, P, L, Λ_p, Λ¹_p and λ_0 being made explicit in arXiv 1203.1871.

When ρ = 0, σ²_ρ = θ²_p.

(14)

On the serial correlation parameter

Rates of convergence

Quadratic strong law.

Theorem

Assume that (V_n) has a finite moment of order 4. Then, we have the quadratic strong law

    lim_{n→∞} (1/log n) ∑_{k=1}^{n} (ρ̂_k − ρ*)² = σ²_ρ  a.s.

Law of iterated logarithm.

Theorem

Assume that (V_n) has a finite moment of order 4. Then, we have the law of iterated logarithm

    lim sup_{n→∞} (n / (2 log log n)) (ρ̂_n − ρ*)² = σ²_ρ  a.s.

(15)

Outline

1 Introduction
   Framework of the study
   Hypothesis
   Stability conditions

2 On the autoregressive parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

3 On the serial correlation parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

4 Testing the residual autocorrelation of the AR(p) process
   The Durbin-Watson statistic
   Test statistic
   Empirical power and comparisons

5 Conclusion

(16)

Testing the residual autocorrelation of the AR(p) process

The Durbin-Watson statistic

Introduced by Durbin and Watson.

Middle of last century.

Testing residual autocorrelation in standard regression analysis.

Biased in the dependent framework.

For all n ≥ 1,

    D̂_n = ∑_{k=1}^{n} (ε̂_k − ε̂_{k−1})² / ∑_{k=0}^{n} ε̂²_k.

Properties.

Asymptotic equivalence:

    D̂_n = 2(1 − ρ̂_n) + o(1)  a.s. as n → +∞.

Convergence:

    lim_{n→∞} D̂_n = 2(1 − ρ*) = D*  a.s.

Central limit theorem:

    √n (D̂_n − D*) → N(0, 4σ²_ρ) in distribution as n → ∞.

Rate of convergence:

    (D̂_n − D*)² = O(log log n / n)  a.s.
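The statistic is immediate to compute from the estimated residuals; a minimal sketch continuing the earlier NumPy helpers:

import numpy as np

def durbin_watson(eps):
    """D_hat_n = sum_{k=1}^{n} (eps_k - eps_{k-1})^2 / sum_{k=0}^{n} eps_k^2."""
    return np.sum(np.diff(eps) ** 2) / np.sum(eps ** 2)

# For large n, durbin_watson(eps) is close to 2 * (1 - rho_hat_n):
# rho, eps = rho_hat(X, p); D = durbin_watson(eps)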

(17)

Testing the residual autocorrelation of the AR(p) process

Test statistic

Test statistic.

For all n ≥ 1,

    Ẑ_n = n (D̂_n − 2)² / (4 θ̂²_{p,n}),

where θ̂_{p,n} is the p-th element of θ̂_n.

Theorem

Assume that (V_n) has a finite moment of order 4, θ_p ≠ 0 and θ*_p ≠ 0. Then, under the null hypothesis H_0 : “ρ = 0”,

    Ẑ_n → χ² in distribution as n → ∞,

where χ² has a Chi-square distribution with one degree of freedom. In addition, under the alternative hypothesis H_1 : “ρ ≠ 0”,

    lim_{n→∞} Ẑ_n = +∞  a.s.

The assumption {θ*_p ≠ 0} is not restrictive: {θ_p ≠ 0} ∩ {θ*_p = 0} ⇒ {ρ ≠ 0}.

H_0 is not rejected if Ẑ_n ≤ z_a, for a risk 0 < a < 1, where z_a is the quantile of order 1 − a of the Chi-square distribution with one degree of freedom.
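Putting the pieces together, here is a sketch of the resulting decision rule (SciPy is used only for the Chi-square quantile; theta_hat, rho_hat and durbin_watson are the illustrative helpers from the earlier sketches):

import numpy as np
from scipy.stats import chi2

def z_test(X, p, a=0.05):
    """Return Z_hat_n and the rejection decision of H0: rho = 0 at risk a."""
    n = len(X) - 1
    th = theta_hat(X, p)
    rho, eps = rho_hat(X, p)
    D = durbin_watson(eps)
    Z = n * (D - 2.0) ** 2 / (4.0 * th[p - 1] ** 2)    # Z_hat_n, with th[p - 1] = theta_hat_{p,n}
    return Z, Z > chi2.ppf(1.0 - a, df=1)              # reject H0 when Z_hat_n > z_a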

(18)

Testing the residual autocorrelation of the AR(p) process

Empirical power and comparisons

Empirical power on simulated data.

(V_n) i.i.d. N(0, 1), p = 3, θ = (0.1, −0.2, 0.6)′, n = 500.

On large samples.

More powerful than the portmanteau tests of Box-Pierce and Ljung-Box.

Wrong variance in the asymptotic normality: the portmanteau tests overestimate H_0.

Equivalent to the h-test of Durbin and to the Breusch-Godfrey test.
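The kind of empirical power reported here can be reproduced in spirit by Monte Carlo; a sketch with the same design, reusing the earlier illustrative helpers (the number of replications is an arbitrary choice):

import numpy as np

def empirical_power(rho, n=500, n_rep=200, theta=(0.1, -0.2, 0.6), a=0.05, seed=0):
    """Frequency of rejection of H0 over n_rep simulated samples of size n."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_rep):
        X = simulate_ar_with_serial_correlation(list(theta), rho, n, rng=rng)
        _, reject = z_test(X, p=len(theta), a=a)
        rejections += int(reject)
    return rejections / n_rep

# Under H0 the rejection frequency should stay close to a, and it should
# increase towards 1 as |rho| moves away from 0, e.g.:
# for r in (0.0, 0.2, 0.4, 0.8):
#     print(r, empirical_power(r))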

(19)

Testing the residual autocorrelation of the AR(p) process

Empirical power and comparisons

Empirical power on simulated data.

(V_n) i.i.d. N(0, 1), p = 3, θ = (0.1, −0.2, 0.6)′, n = 30.

On small-sized samples.

More powerful than all other tests, except under H_0.

Always more sensitive to the presence of correlation in the residuals.

Performs pretty well.

(20)

Outline

1 Introduction
   Framework of the study
   Hypothesis
   Stability conditions

2 On the autoregressive parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

3 On the serial correlation parameter
   Estimation and convergence
   Central limit theorem
   Rates of convergence

4 Testing the residual autocorrelation of the AR(p) process
   The Durbin-Watson statistic
   Test statistic
   Empirical power and comparisons

5 Conclusion

(21)

Conclusion

Thank you for your attention!

References.

Bercu, B., and Proïa, F. A sharp analysis on the asymptotic behavior of the Durbin-Watson statistic for the first-order autoregressive process. ESAIM Probab. Stat. 16 (2012).

Bitseki Penda, V., Djellout, H., and Proïa, F. Moderate deviations for the Durbin-Watson statistic related to the first-order autoregressive process. arXiv:1201.3579. Submitted (2012).

Durbin, J., and Watson, G. S. Testing for serial correlation in least squares regression. I-II-III. Biometrika 37-38-57 (1950-51-71).

Malinvaud, E. Estimation et prévision dans les modèles économiques autorégressifs. Review of the International Institute of Statistics 29 (1961).

Nerlove, M., and Wallis, K. F. Use of the Durbin-Watson statistic in inappropriate situations. Econometrica 34 (1966).

Park, S. B. On the small-sample power of Durbin's h test. Journal of the American Statistical Association 70 (1975).
