
ON THREE CLASSES OF TIME SERIES INVOLVING EXPONENTIAL DISTRIBUTION


GEORGIANA POPOVICI

The paper discusses some stochastic properties and parameter estimation for three types of stationary AR(1) processes involving the Exponential distribution in three different ways: Exponential innovations, Exponential stationary distribution, Exponential conditional distribution. These three approaches lead to time series which are not equivalent, unlike the case of AR(1) processes involving the Gaussian distribution. Parameter estimation is performed by means of the Conditional Least Squares method and/or the Conditional Maximum Likelihood method. The asymptotic behaviour of the estimators is discussed.

AMS 2000 Subject Classification: 62M10.

Key words: autoregressive processes, exponential distribution, parameter estimation.

1. INTRODUCTION

Time series data occur in a variety of disciplines including engineering, finance, sociology and economics among others. One of the most studied time series is the stationary, Gaussian AR(1) process which is a Markov process.

Among its remarkable properties we underline the fact that it can be defined/generated in three equivalent ways: starting from Gaussian innovations, from a Gaussian stationary distribution, or from a Gaussian conditional transition distribution. This property was the starting point of our study, as we focused on three possible approaches to an AR(1) process involving Exponential distributions.

The ARMA time series with Exponential distributions have been discussed by several authors, such as Li and McLeod [11], Gaver and Lewis [5] and Grunwald et al. [6], and they have been systematically analyzed in Popovici (PhD Thesis) [12]. The paper discusses the properties and the parameter estimation for three types of stationary AR(1) processes involving Exponential distributions: EIAR processes (with Exponential innovations), EAR processes (with Exponential stationary distribution) and ECLAR processes (with Exponential transition distribution). Unlike the Gaussian case, the corresponding AR(1) time series $\{X_t,\ t\in\mathbb{Z}\}$ are no longer equivalent.

MATH. REPORTS 12(62), 1 (2010), 45–57


The Conditional Least Squares (CLSE) and the Conditional Maximum Likelihood (CMLE) methods are used in order to estimate the parameters of these processes. The asymptotic behaviour of the estimators is obtained using the general results which characterize the two methods (CLSE, CMLE).

2. DEFINITIONS AND PROPERTIES

An AR(1) time series $\{X_t,\ t\in\mathbb{Z}\}$ is defined through the linear equation $X_t = \phi_1 X_{t-1} + \varepsilon_t$, where $\{\varepsilon_t,\ t\in\mathbb{Z}\}$ is a sequence of independent, identically distributed random variables called innovations.

Definition 1. A stationary AR(1) process
$$X_t = \phi_1 X_{t-1} + \varepsilon_t$$
with $\phi_1\in(0,1)$ and $\{\varepsilon_t\}_t$ a sequence of independent, identically distributed random variables is called EIAR(1) if $\varepsilon_t$ has an Exponential distribution $\mathrm{Expo}(\mu)$, where $\mu\in(0,\infty)$ ($E(\varepsilon_t)=\mu$ and $V(\varepsilon_t)=\mu^2$).

Proposition 1. For the EIAR(1) process $\{X_t,\ t\in\mathbb{Z}\}$, the mean and the variance of the stationary distribution are
$$E(X_t) = \frac{\mu}{1-\phi_1}, \qquad V(X_t) = \frac{\mu^2}{1-\phi_1^2},$$
and the characteristic function satisfies the relation
$$\varphi_X(t) = \varphi_X(\phi_1 t)\cdot(1-it\mu)^{-1}.$$

Proof of Proposition 1. From the linear AR equation we have
$$E(X_t) = \phi_1 E(X_{t-1}) + E(\varepsilon_t).$$
Since the process $\{X_t,\ t\in\mathbb{Z}\}$ is stationary, we obtain $E(X_t) = \dfrac{\mu}{1-\phi_1}$. Similarly,
$$V(X_t) = \phi_1^2 V(X_{t-1}) + \mu^2.$$
This implies that
$$V(X_t)\,(1-\phi_1^2) = \mu^2,$$
hence
$$V(X_t) = \frac{\mu^2}{1-\phi_1^2}.$$
Next,
$$\varphi_X(t) = E\big(e^{itX_t}\big) = E\big(e^{it(\phi_1 X_{t-1}+\varepsilon_t)}\big) = E\big(e^{it\phi_1 X_{t-1}}\big)\cdot E\big(e^{it\varepsilon_t}\big),$$
so
$$\varphi_X(t) = \varphi_X(\phi_1 t)\cdot\varphi_{\varepsilon_t}(t) = \varphi_X(\phi_1 t)\cdot(1-it\mu)^{-1}.$$

Definition 2. A stationary AR(1) process
$$X_t = \phi_1 X_{t-1} + \varepsilon_t$$
with $\phi_1\in(0,1)$ and $\{\varepsilon_t\}_t$ a sequence of independent, identically distributed random variables is called EAR(1) if the stationary distribution of the process is $\mathrm{Expo}(\mu)$ ($E(X_t)=\mu$ and $V(X_t)=\mu^2$).

Proposition 2. The characteristic function of the innovations corresponding to an EAR(1) process is
$$\varphi_{\varepsilon_t}(t) = \phi_1 + (1-\phi_1)\cdot\frac{1}{1+\mu t}.$$

Proof of Proposition 2. We have
$$\varphi_X(t) = \frac{1}{1+\mu t}.$$
Next,
$$\varphi_X(\phi_1 t) = \frac{1}{1+\mu\phi_1 t}, \qquad \varphi_{\varepsilon_t}(t) = \frac{\varphi_X(t)}{\varphi_X(\phi_1 t)}.$$
This implies that
$$\varphi_{\varepsilon_t}(t) = \frac{1+\mu\phi_1 t}{1+\mu t} = \phi_1 + (1-\phi_1)\cdot\frac{1}{1+\mu t}.$$

Proposition 3. Let $\{X_t,\ t\in\mathbb{Z}\}$, $\{Y_t,\ t\in\mathbb{Z}\}$ be two independent EAR(1) processes, with the same parameters $\phi_1\in(0,1)$, $\mu\in(0,\infty)$. Then their sum
$$Z_t = X_t + Y_t, \quad t\in\mathbb{Z},$$
is a Gamma AR process $\mathrm{GAR}(1)(2,\phi_1,\mu)$.

Proof of Proposition 3. A stochastic process $\{X_t,\ t\in\mathbb{Z}\}$ is called a Gamma Autoregressive process of order 1 with parameters $(p,\phi_1,\mu)$, denoted $\mathrm{GAR}(1)(p,\phi_1,\mu)$, if it satisfies the equation
$$X_t = \phi_1 X_{t-1} + \xi_t$$
with $\phi_1\in(0,1)$ and
$$\xi_t = \begin{cases} G_{0,t} & \text{with probability } \phi_1^p,\\[2pt] G_{1,t} & \text{with probability } \binom{p}{1}\phi_1^{p-1}(1-\phi_1),\\[2pt] \ \ \vdots & \\[2pt] G_{k,t} & \text{with probability } \binom{p}{k}\phi_1^{p-k}(1-\phi_1)^{k},\\[2pt] \ \ \vdots & \\[2pt] G_{p,t} & \text{with probability } (1-\phi_1)^{p}, \end{cases}$$
where $G_0$ is a random variable with the Dirac distribution $\delta_0$, and $\{G_{i,t},\ t\in\mathbb{Z}\}$, $i=1,\ldots,p$, are independent sequences of independent, identically distributed random variables such that $G_{i,t}$ has a Gamma distribution $\mathrm{Gamma}(i,\mu)$ for every $t$.

Let $\{A_t,\ t\in\mathbb{Z}\}$ be a $\mathrm{GAR}(1)(p,\phi_1,\mu)$ process and $\{B_t,\ t\in\mathbb{Z}\}$ be a $\mathrm{GAR}(1)(q,\phi_1,\mu)$ process with the same parameters $\phi_1\in(0,1)$, $\mu\in(0,\infty)$. The processes are independent. Then their sum $\{C_t,\ t\in\mathbb{Z}\}$,
$$C_t = A_t + B_t,$$
is a $\mathrm{GAR}(1)(p+q,\phi_1,\mu)$ process. Let
$$A_t = \phi_1 A_{t-1} + D_1, \qquad \varphi_{D_1}(t) = \big(\phi_1 + (1-\phi_1)\varphi_{E_1}(t)\big)^{p},$$
where $E_1$ has an Exponential distribution $\mathrm{Expo}(\mu)$. Let
$$B_t = \phi_1 B_{t-1} + D_2, \qquad \varphi_{D_2}(t) = \big(\phi_1 + (1-\phi_1)\varphi_{E_2}(t)\big)^{q},$$
where $E_2$ has an Exponential distribution $\mathrm{Expo}(\mu)$, and $E_1$, $E_2$, $D_1$, $D_2$ are independent. Then
$$C_t = \phi_1 C_{t-1} + D_1 + D_2, \qquad \varphi_{D_1+D_2}(t) = \varphi_{D_1}(t)\,\varphi_{D_2}(t) = \big(\phi_1 + (1-\phi_1)\varphi_{E}(t)\big)^{p+q},$$
where $E$ has an Exponential distribution $\mathrm{Expo}(\mu)$. It implies that $\{C_t,\ t\in\mathbb{Z}\}$ is a $\mathrm{GAR}(1)(p+q,\phi_1,\mu)$ process. If $p=1$, we deal with an EAR(1) process.

Notice that the EAR(1) process with the parameters $\phi_1$ and $\mu$ can be written as
$$X_t = \phi_1 X_{t-1} + (1-I_1)E_1, \qquad Y_t = \phi_1 Y_{t-1} + (1-I_2)E_2,$$
where $I_1$ and $I_2$ are random variables with Bernoulli distribution $B(1,\phi_1)$, $E_1$ and $E_2$ are random variables with Exponential distribution $\mathrm{Expo}(\mu)$, and $I_1, I_2, E_1, E_2$ are independent. Then
$$Z_t = X_t + Y_t = \phi_1(X_{t-1}+Y_{t-1}) + (1-I_1)E_1 + (1-I_2)E_2.$$
We denote
$$B = (1-I_1)E_1 + (1-I_2)E_2.$$
Then
$$Z_t = \phi_1 Z_{t-1} + B.$$

Let
$$A = \begin{cases} 0 & \text{with probability } \phi_1^2,\\ \widetilde{E}_t & \text{with probability } 2\phi_1(1-\phi_1),\\ \widetilde{G}_t & \text{with probability } (1-\phi_1)^2, \end{cases}$$
where $\widetilde{E}_t$ has an Exponential distribution $\mathrm{Expo}(\mu)$, $\widetilde{G}_t$ has a Gamma distribution $\mathrm{Gamma}(2,\mu)$, and $\widetilde{E}_t$, $\widetilde{G}_t$ are independent. Then
$$\varphi_A(t) = E\big(e^{itA}\big) = \phi_1^2 e^{it\cdot 0} + 2\phi_1(1-\phi_1)E\big(e^{it\widetilde{E}}\big) + (1-\phi_1)^2 E\big(e^{it\widetilde{G}}\big) = \phi_1^2 + 2\phi_1(1-\phi_1)\varphi_{\widetilde{E}}(t) + (1-\phi_1)^2\varphi_{\widetilde{E}}^{2}(t) = \big(\phi_1 + (1-\phi_1)\varphi_{\widetilde{E}}(t)\big)^2.$$
On the other hand, we have

$$\varphi_B(t) = \varphi_{(1-I_1)E_1+(1-I_2)E_2}(t) = E\Big[e^{it[(1-I_1)E_1+(1-I_2)E_2]}\Big] = \int_{\mathbb{R}^4} e^{it[(1-i_1)e_1+(1-i_2)e_2]}\,dP\circ(I_1,I_2,E_1,E_2)^{-1}$$
$$= \int_{\mathbb{R}^4} e^{it[(1-i_1)e_1+(1-i_2)e_2]}\,dP\circ I_1^{-1}\,dP\circ I_2^{-1}\,dP\circ E_1^{-1}\,dP\circ E_2^{-1}$$
$$= \int_{\mathbb{R}^2} e^{ite_1(1-i_1)}\,dP\circ I_1^{-1}\,dP\circ E_1^{-1}\ \int_{\mathbb{R}^2} e^{ite_2(1-i_2)}\,dP\circ I_2^{-1}\,dP\circ E_2^{-1}$$
$$= \Big(\int_{\mathbb{R}^2} e^{ite_1(1-i_1)}\,dP\circ I_1^{-1}\,dP\circ E_1^{-1}\Big)^2 = \Big(\int_{\mathbb{R}}\Big(\int_{\mathbb{R}} e^{ite_1(1-i_1)}\,dP\circ I_1^{-1}\Big)\,dP\circ E_1^{-1}\Big)^2$$
$$= \Big(\int_{\mathbb{R}}\big(\phi_1 e^{it\cdot 0} + (1-\phi_1)e^{ite_1}\big)\,dP\circ E_1^{-1}\Big)^2 = \Big(\phi_1 + (1-\phi_1)\int e^{itE}\,dP\Big)^2 = \big(\phi_1 + (1-\phi_1)\varphi_E(t)\big)^2.$$
So $\varphi_A(t) = \varphi_B(t)$.
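To make the representation $X_t = \phi_1 X_{t-1} + (1-I_1)E_1$ concrete, here is a small sketch (an illustration added in this transcription, with arbitrarily chosen parameters and names) that simulates an EAR(1) process through its mixture innovation and checks that the stationary marginal has the $\mathrm{Expo}(\mu)$ moments:

```python
import numpy as np

# Illustrative simulation of the EAR(1) representation used above:
# X_t = phi1 * X_{t-1} + (1 - I_t) * E_t, with I_t ~ Bernoulli(phi1) and E_t ~ Expo(mu),
# so the innovation is 0 with probability phi1 and Expo(mu) otherwise.
rng = np.random.default_rng(1)
phi1, mu, n = 0.5, 3.0, 200_000          # arbitrary illustrative values

I = rng.random(n) < phi1                 # innovation suppressed with probability phi1
eps = np.where(I, 0.0, rng.exponential(scale=mu, size=n))

x = np.empty(n)
x[0] = rng.exponential(scale=mu)         # start from the stationary marginal Expo(mu)
for t in range(1, n):
    x[t] = phi1 * x[t - 1] + eps[t]

print("mean:", x.mean(), "theory:", mu)      # E(X_t) = mu
print("var :", x.var(), "theory:", mu**2)    # V(X_t) = mu^2
```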

Definition 3. A stationary AR(1) process
$$X_t = \phi_1 X_{t-1} + \varepsilon_t$$
with $\phi_1\in(0,1)$ and $\{\varepsilon_t\}_t$ a sequence of independent, identically distributed random variables is called ECLAR(1) if the transition distribution is Exponential,
$$F(\cdot \mid X_{t-1}=x) = \mathrm{Expo}(\beta + \phi_1 x).$$

Proposition 4. For the ECLAR(1) process $\{X_t,\ t\in\mathbb{Z}\}$, the mean and the variance of the stationary distribution are
$$E(X_t) = \frac{\beta}{1-\phi_1} \stackrel{\text{denoted}}{=} \mu, \qquad V(X_t) = \frac{\mu^2}{1-2\phi_1^2}.$$
Proof of Proposition 4. From the linear AR equation we have
$$E(X_t) = \phi_1 E(X_{t-1}) + \beta,$$
hence
$$E(X_t) = \frac{\beta}{1-\phi_1}.$$
Similarly, since given $X_{t-1}$ the variable $X_t$ has an Exponential distribution with mean $\beta+\phi_1 X_{t-1}$,
$$E(X_t^2) = E\big[(\beta+\phi_1 X_{t-1})^2 + (\beta+\phi_1 X_{t-1})^2\big] = 2\,E(\beta+\phi_1 X_{t-1})^2.$$
It implies that
$$(1-2\phi_1^2)\,E(X_t^2) = 2\Big(\beta^2 + 2\beta\phi_1\frac{\beta}{1-\phi_1}\Big),$$
$$E(X_t^2) = \frac{2\beta^2\,\frac{1+\phi_1}{1-\phi_1}}{1-2\phi_1^2}, \qquad V(X_t) = \frac{2\beta^2\,\frac{1+\phi_1}{1-\phi_1}}{1-2\phi_1^2} - \frac{\beta^2}{(1-\phi_1)^2}.$$
After calculation we have
$$V(X_t) = \frac{\beta^2}{(1-\phi_1)^2\,(1-2\phi_1^2)}.$$
We denote $\dfrac{\beta}{1-\phi_1} = \mu$. Then
$$V(X_t) = \frac{\mu^2}{1-2\phi_1^2}.$$

Proposition 5. For the ECLAR(1) process, the mean and the variance of the innovations are
$$E(\varepsilon_t) = \beta, \qquad V(\varepsilon_t) = \frac{\mu^2\,(1-\phi_1^2)}{1-2\phi_1^2}.$$
Proof of Proposition 5. We have
$$E(\varepsilon_t) = E(X_t) - \phi_1 E(X_{t-1}).$$
It implies that
$$E(\varepsilon_t) = (1-\phi_1)\,\frac{\beta}{1-\phi_1} = \beta.$$
Similarly, from $V(X_t) = \phi_1^2\,V(X_{t-1}) + V(\varepsilon_t)$ we obtain
$$V(\varepsilon_t) = (1-\phi_1^2)\,\frac{\mu^2}{1-2\phi_1^2}.$$
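The conditional construction of Definition 3 is straightforward to simulate: given $X_{t-1}=x$, draw $X_t$ from $\mathrm{Expo}(\beta+\phi_1 x)$. The sketch below (an added illustration with arbitrary parameter values, chosen so that $2\phi_1^2<1$ and the stated variance is finite) checks the stationary moments of Proposition 4:

```python
import numpy as np

# Illustrative check of Proposition 4: simulate ECLAR(1) directly from its
# conditional distribution  X_t | X_{t-1} = x  ~  Expo(beta + phi1 * x).
rng = np.random.default_rng(2)
phi1, beta, n = 0.4, 5.0, 200_000        # arbitrary values with 2*phi1**2 < 1
mu = beta / (1.0 - phi1)

x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = rng.exponential(scale=beta + phi1 * x[t - 1])
x = x[1_000:]                            # drop a burn-in period

print("mean:", x.mean(), "theory:", mu)
print("var :", x.var(), "theory:", mu**2 / (1 - 2 * phi1**2))
```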

A simulation study was performed in order to establish the behaviour of these three processes near the border of the stationarity domain (Popovici, 2008) [13].

3. PARAMETER ESTIMATION

Parameter estimation has been performed by means of the CLS method and/or the CML method. The general properties of these two methods are presented in Klimko and Nelson [8] and in Basawa and Prakasa-Rao [1].

EIAR(1) process

The parameters of an EIAR(1) process can be estimated by the CLS method, using the linear form of the conditional expectation. Also, one can take advantage of the simplicity of the innovations and apply the ML method, as suggested by Li and McLeod.

• CLSE for the process EIAR(1)

a) Construction of the estimator. We use the Conditional Least Squares method for estimating $\psi=(\phi_1,\mu)$, starting from the mean
$$E(X_t \mid X_{t-1}=x_{t-1}) = \phi_1 x_{t-1} + \mu.$$

The associated sum of squares is
$$Q_n(\psi) = \sum_{t=2}^{n}\big[X_t - (\phi_1 X_{t-1} + \mu)\big]^2,$$
$$\partial Q_n(\psi)/\partial\phi_1 = \sum_{t=2}^{n} 2\,\big[X_t - \phi_1 X_{t-1} - \mu\big]\,[-X_{t-1}] = 0.$$
Then,
$$(3.1) \qquad -\sum_{t=2}^{n} X_t X_{t-1} + \phi_1\sum_{t=2}^{n} X_{t-1}^2 + \mu\sum_{t=2}^{n} X_{t-1} = 0.$$


On the other hand, we have
$$\partial Q_n(\psi)/\partial\mu = \sum_{t=2}^{n} 2\,\big[X_t - \phi_1 X_{t-1} - \mu\big]\,(-1) = 0,$$
$$\sum_{t=2}^{n} X_t - \phi_1\sum_{t=2}^{n} X_{t-1} - \mu(n-1) = 0,$$
$$(3.2) \qquad \widehat{\mu} = \frac{\sum\limits_{t=2}^{n} X_t - \widehat{\phi}_1\sum\limits_{t=2}^{n} X_{t-1}}{n-1}.$$

We substitute (3.2) in (3.1) and we obtain
$$\widehat{\phi}_1 = \frac{(n-1)\sum\limits_{t=2}^{n} X_t X_{t-1} - \sum\limits_{t=2}^{n} X_{t-1}\sum\limits_{t=2}^{n} X_t}{(n-1)\sum\limits_{t=2}^{n} X_{t-1}^2 - \Big(\sum\limits_{t=2}^{n} X_{t-1}\Big)^2}.$$
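Formulas (3.1)–(3.2) translate directly into code. The following sketch (added here for illustration; the function name, simulated data and true values are arbitrary) computes $\widehat{\phi}_1$ and $\widehat{\mu}$ from a simulated EIAR(1) trajectory:

```python
import numpy as np

# Illustrative implementation of the CLS estimates (3.1)-(3.2) for EIAR(1).
def cls_eiar1(x):
    """Conditional least squares estimates of (phi1, mu) from a trajectory x."""
    xp, xc = x[:-1], x[1:]               # pairs (X_{t-1}, X_t), t = 2, ..., n
    m = len(xc)                          # m = n - 1
    num = m * np.sum(xc * xp) - np.sum(xp) * np.sum(xc)
    den = m * np.sum(xp ** 2) - np.sum(xp) ** 2
    phi1_hat = num / den
    mu_hat = (np.sum(xc) - phi1_hat * np.sum(xp)) / m
    return phi1_hat, mu_hat

# quick check on a simulated trajectory (arbitrary true values)
rng = np.random.default_rng(3)
phi1, mu, n = 0.6, 2.0, 5_000
x = np.empty(n)
x[0] = mu / (1 - phi1)
for t in range(1, n):
    x[t] = phi1 * x[t - 1] + rng.exponential(scale=mu)
print(cls_eiar1(x))                      # should be close to (0.6, 2.0)
```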

Proposition 6. For the process EIAR(1), $\widehat{\phi}_1$ and $\widehat{\mu}$ are asymptotically independent and normally distributed:
$$\sqrt{n}\,(\widehat{\phi}_1-\phi_1) \sim N\Big(0,\ \frac{(1-\phi_1)^2(1+\phi_1)}{2}\Big)$$
and
$$\sqrt{n}\,(\widehat{\mu}-\mu) \sim N\big(0,\ \mu^2\big).$$

Proof of Proposition 6. The model fulfils the regularity conditions from Klimko and Nelson [8]. Thus, the asymptotic distribution of
$$\big(\sqrt{n}\,(\widehat{\phi}_1-\phi_1),\ \sqrt{n}\,(\widehat{\mu}-\mu)\big)$$
is Gaussian, with mean 0. We calculate the covariance matrix. Let
$$g(\psi) = \phi_1 X_{t-1} + \mu, \qquad \frac{\partial g}{\partial\phi_1} = X_{t-1}, \qquad \frac{\partial g}{\partial\mu} = 1, \qquad \frac{\partial^2 g}{\partial\phi_1\,\partial\mu} = 0,$$
$$V_{11} = E(X_{t-1}^2) = \frac{\mu^2}{1-\phi_1}\Big[\frac{1}{1+\phi_1} + \frac{1}{1-\phi_1}\Big], \qquad V_{22} = 1, \qquad V_{12} = V_{21} = 0, \qquad V(\varepsilon_t) = \mu^2.$$
Then
$$V = \begin{pmatrix} \dfrac{\mu^2}{1-\phi_1}\Big[\dfrac{1}{1+\phi_1}+\dfrac{1}{1-\phi_1}\Big] & 0\\[6pt] 0 & 1 \end{pmatrix}, \qquad V^{-1} = \begin{pmatrix} \Big(\dfrac{\mu^2}{1-\phi_1}\Big[\dfrac{1}{1+\phi_1}+\dfrac{1}{1-\phi_1}\Big]\Big)^{-1} & 0\\[6pt] 0 & 1 \end{pmatrix}.$$
It implies that
$$C = \begin{pmatrix} \dfrac{(1-\phi_1)^2(1+\phi_1)}{2} & 0\\[6pt] 0 & \mu^2 \end{pmatrix}.$$

b) Li and McLeod's method. Let $(x_1,\ldots,x_n)$ be an observed trajectory. The values of the innovations are $\{x_{t+1}-\phi_1 x_t,\ t=1,\ldots,n-1\}$. Since the innovations are independent, identically distributed random variables with an Exponential distribution, the likelihood function is
$$L = \prod_{t=1}^{n-1}\frac{1}{\mu}\,e^{-\frac{x_{t+1}-\phi_1 x_t}{\mu}}.$$

The MLE is obtained in the traditional way:
$$\ln L = -(n-1)\ln\mu - \frac{1}{\mu}\sum_{t=1}^{n-1}(x_{t+1}-\phi_1 x_t).$$
We differentiate with respect to $\mu$ and we get
$$\frac{\partial\ln L}{\partial\mu} = -\frac{n-1}{\mu} + \frac{1}{\mu^2}\sum_{t=1}^{n-1}(x_{t+1}-\phi_1 x_t).$$
We denote
$$X^{(1)} = \frac{1}{n-1}\sum_{t=1}^{n-1} X_t, \qquad X^{(2)} = \frac{1}{n-1}\sum_{t=1}^{n-1} X_{t+1}.$$
Then
$$\widehat{\mu} = X^{(2)} - \phi_1 X^{(1)}.$$
We differentiate with respect to $\phi_1$ and we get
$$g(\phi_1) = \frac{\partial\ln L}{\partial\phi_1} = \frac{1}{\mu}\sum_{t=1}^{n-1} X_t = \frac{(n-1)\,X^{(1)}}{X^{(2)}-\phi_1 X^{(1)}}.$$
We have
$$g(0) = \frac{(n-1)\,X^{(1)}}{X^{(2)}}, \qquad g(1) = \frac{(n-1)\,X^{(1)}}{X^{(2)}-X^{(1)}}.$$
From these formulae above one can get $\widehat{\phi}_1$.
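The final step is not spelled out above. One way to finish, used here purely as an illustrative assumption added in this transcription, is to observe that the profile log-likelihood $\ln L\big(\phi_1,\widehat{\mu}(\phi_1)\big) = -(n-1)\big(\ln(X^{(2)}-\phi_1 X^{(1)})+1\big)$ is increasing in $\phi_1$, so it can be maximized numerically over the admissible range $0\le\phi_1\le\min_t x_{t+1}/x_t$, on which every fitted innovation stays non-negative:

```python
import numpy as np

# Hypothetical completion of the ML step (an assumption of this note, not taken from the
# text above): plug mu_hat(phi1) = X^{(2)} - phi1 * X^{(1)} back into ln L and maximize the
# resulting profile log-likelihood over the admissible range of phi1.
def ml_eiar1(x, grid_size=10_001):
    x1 = x[:-1].mean()                   # X^{(1)}
    x2 = x[1:].mean()                    # X^{(2)}
    phi_max = np.min(x[1:] / x[:-1])     # upper edge of the admissible set
    grid = np.linspace(0.0, phi_max, grid_size)
    mu_grid = x2 - grid * x1             # profiled-out mu_hat(phi1)
    loglik = -(len(x) - 1) * (np.log(mu_grid) + 1.0)
    k = int(np.argmax(loglik))
    return grid[k], mu_grid[k]
```

Applied to a trajectory simulated as in the CLS sketch above, `ml_eiar1(x)` should return values close to the true $(\phi_1,\mu)$.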

EAR(1) process

We use the CLS method to estimate the parameters of an EAR(1) process.

• CLSE for the process EAR(1)


a) Construction of the estimator. The conditional sum of squares is
$$Q_n(\psi) = \sum_{t=2}^{n}\Big[X_t - \big(\phi_1 X_{t-1} + \mu(1-\phi_1)\big)\Big]^2.$$

The CLSE estimators for $\phi_1$ and $\mu$ are obtained by minimizing $Q_n(\psi)$ with respect to $\psi=(\phi_1,\mu)$:
$$\widehat{\phi}_1 = \frac{\sum\limits_{t=2}^{n} X_t X_{t-1} - (n-1)^{-1}\sum\limits_{t=2}^{n} X_t\sum\limits_{t=2}^{n} X_{t-1}}{\sum\limits_{t=2}^{n} X_{t-1}^2 - (n-1)^{-1}\Big(\sum\limits_{t=2}^{n} X_{t-1}\Big)^2}$$
and
$$\widehat{\mu} = \frac{\sum\limits_{t=2}^{n} X_t - \widehat{\phi}_1\sum\limits_{t=2}^{n} X_{t-1}}{(n-1)\,(1-\widehat{\phi}_1)}.$$

b) Proposition 7. For the process EAR(1), $\widehat{\phi}_1$ and $\widehat{\mu}$ are asymptotically independent and normally distributed:
$$\sqrt{n}\,(\widehat{\phi}_1-\phi_1) \sim N\big(0,\ 1-\phi_1^2\big)$$
and
$$\sqrt{n}\,(\widehat{\mu}-\mu) \sim N\Big(0,\ \frac{\mu^2(1+\phi_1)}{1-\phi_1}\Big).$$

This result follows from the general properties of the CLS method.
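For completeness, a short sketch (added here; the function name and parameter values are arbitrary) applying the closed-form EAR(1) CLS estimators above to a trajectory simulated through the mixture innovation representation of Section 2:

```python
import numpy as np

# Illustrative use of the closed-form EAR(1) CLS estimators.
def cls_ear1(x):
    xp, xc = x[:-1], x[1:]               # pairs (X_{t-1}, X_t)
    m = len(xc)                          # m = n - 1
    num = np.sum(xc * xp) - np.sum(xc) * np.sum(xp) / m
    den = np.sum(xp ** 2) - np.sum(xp) ** 2 / m
    phi1_hat = num / den
    mu_hat = (np.sum(xc) - phi1_hat * np.sum(xp)) / (m * (1.0 - phi1_hat))
    return phi1_hat, mu_hat

# simulate EAR(1) via its mixture innovation and estimate (arbitrary true values)
rng = np.random.default_rng(4)
phi1, mu, n = 0.5, 3.0, 5_000
x = np.empty(n)
x[0] = rng.exponential(scale=mu)
for t in range(1, n):
    eps = 0.0 if rng.random() < phi1 else rng.exponential(scale=mu)
    x[t] = phi1 * x[t - 1] + eps
print(cls_ear1(x))                       # should be close to (0.5, 3.0)
```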

ECLAR(1) process

We use the CML method to estimate the parameters of an ECLAR(1) process.

• CMLE

a) Construction of the estimator. The likelihood function corresponding to an observed trajectory $x=(x_1,\ldots,x_n)$ is
$$L(\phi_1,\beta) = \prod_{t=2}^{n} f(x_t \mid x_{t-1};\phi_1,\beta).$$
The log-likelihood is
$$-\sum_{t=2}^{n}\ln(\phi_1 X_{t-1}+\beta) - \sum_{t=2}^{n}\frac{X_t}{\phi_1 X_{t-1}+\beta}.$$
We differentiate with respect to $\beta$ and $\phi_1$.


We get

$$f_1(\beta,\phi_1) = \sum_{t=2}^{n}\frac{X_t}{(\phi_1 X_{t-1}+\beta)^2} - \sum_{t=2}^{n}\frac{1}{\phi_1 X_{t-1}+\beta} = 0$$
and
$$f_2(\beta,\phi_1) = \sum_{t=2}^{n}\frac{X_t X_{t-1}}{(\phi_1 X_{t-1}+\beta)^2} - \sum_{t=2}^{n}\frac{X_{t-1}}{\phi_1 X_{t-1}+\beta} = 0.$$

We differentiate again to get the Jacobian $J$:
$$J_{11}(\beta,\phi_1) = \sum_{t=2}^{n}\frac{1}{(\phi_1 X_{t-1}+\beta)^2} - 2\sum_{t=2}^{n}\frac{X_t}{(\phi_1 X_{t-1}+\beta)^3},$$
$$J_{12}(\beta,\phi_1) = J_{21}(\beta,\phi_1) = \sum_{t=2}^{n}\frac{X_{t-1}}{(\phi_1 X_{t-1}+\beta)^2} - 2\sum_{t=2}^{n}\frac{X_t X_{t-1}}{(\phi_1 X_{t-1}+\beta)^3},$$
$$J_{22}(\beta,\phi_1) = \sum_{t=2}^{n}\frac{X_{t-1}^2}{(\phi_1 X_{t-1}+\beta)^2} - 2\sum_{t=2}^{n}\frac{X_t X_{t-1}^2}{(\phi_1 X_{t-1}+\beta)^3}.$$

The values of the estimators are obtained by solving, at each iteration, the linear system with matrix $J$ and constant term $f=(f_1,f_2)$.
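A compact sketch of this Newton–Raphson iteration (added here; the function name is illustrative, while the starting values and the $10^{-3}$ tolerance mirror the simulation study described below) might look as follows:

```python
import numpy as np

# Sketch of the Newton-Raphson iteration for the ECLAR(1) CMLE: at each step solve
# J * delta = -f for the score f = (f1, f2) and Jacobian J given above, in (beta, phi1).
def cmle_eclar1(x, phi1_0=0.01, beta_0=None, tol=1e-3, max_iter=100):
    beta = x.mean() if beta_0 is None else beta_0
    phi1 = phi1_0
    xp, xc = x[:-1], x[1:]               # pairs (X_{t-1}, X_t), t = 2, ..., n
    for _ in range(max_iter):
        d = phi1 * xp + beta             # beta + phi1 * X_{t-1}
        f = np.array([np.sum(xc / d**2) - np.sum(1.0 / d),
                      np.sum(xc * xp / d**2) - np.sum(xp / d)])
        J = np.array([[np.sum(1.0 / d**2) - 2.0 * np.sum(xc / d**3),
                       np.sum(xp / d**2) - 2.0 * np.sum(xc * xp / d**3)],
                      [np.sum(xp / d**2) - 2.0 * np.sum(xc * xp / d**3),
                       np.sum(xp**2 / d**2) - 2.0 * np.sum(xc * xp**2 / d**3)]])
        delta = np.linalg.solve(J, -f)   # Newton-Raphson update in (beta, phi1)
        beta, phi1 = beta + delta[0], phi1 + delta[1]
        if np.max(np.abs(delta)) < tol:
            break
    return phi1, beta
```

Applied to a trajectory simulated as in the sketch after Proposition 5, `cmle_eclar1(x)` should return values close to the true $(\phi_1,\beta)$.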

b) The simulation study for the estimator. A simulation study has been performed in order to characterize the precision of the estimators. We generate 1000 trajectories of length $n=1000$ for the process ECLAR(1). For each generated trajectory, the estimate $(\widehat{\phi}_1,\widehat{\beta})$ has been obtained with the Newton–Raphson method, with initial value $(\phi_1^{(0)},\beta^{(0)})$, where $\phi_1^{(0)}$ is close to zero and $\beta^{(0)} = \frac{1}{n}\sum_{t=1}^{n} X_t$, and a tolerance equal to $10^{-3}$.

We considered the following situations:
$$(\phi_1=0.2,\ \beta=5), \qquad (\phi_1=0.5,\ \beta=5), \qquad (\phi_1=0.7,\ \beta=5).$$
The results for $(\widehat{\phi}_1,\widehat{\beta})$ are presented in Tables 1–3. The programs were written in C++.

Table 1
The statistical properties for $\phi_1=0.2$, $\beta=5$

                      Min      1st Qu   Median   Mean     3rd Qu   Max      std.
$\widehat{\phi}_1$    0.1794   0.1936   0.1986   0.1997   0.2055   0.2217   0.008772516
$\widehat{\beta}$     4.979    4.993    5.000    5.000    5.006    5.027    0.00999688

From Table 1, both estimators $\widehat{\phi}_1$ and $\widehat{\beta}$ are stable: $\widehat{cv}(\widehat{\phi}_1)=4.4\%$ and $\widehat{cv}(\widehat{\beta})=0.2\%$.


Table 2
The statistical properties for $\phi_1=0.5$, $\beta=5$

                      Min      1st Qu   Median   Mean     3rd Qu   Max      std.
$\widehat{\phi}_1$    0.4175   0.4840   0.4862   0.4847   0.4923   0.5437   0.002014333
$\widehat{\beta}$     4.811    4.971    5.009    5.010    5.062    5.183    0.08310194

From Table 2 we can see that $\widehat{\phi}_1$ is more stable than $\widehat{\beta}$: $\widehat{cv}(\widehat{\phi}_1)=0.42\%$ and $\widehat{cv}(\widehat{\beta})=1.66\%$.

Table 3
The statistical properties for $\phi_1=0.7$, $\beta=5$

                      Min      1st Qu   Median   Mean     3rd Qu   Max      std.
$\widehat{\phi}_1$    0.6987   0.7125   0.7230   0.7377   0.7430   0.8111   0.04412418
$\widehat{\beta}$     4.636    4.894    4.994    4.990    5.077    5.344    0.1319506

From Table 3 we can see that $\widehat{\phi}_1$ and $\widehat{\beta}$ are not stable compared with the other two cases: $\widehat{cv}(\widehat{\phi}_1)=6\%$ and $\widehat{cv}(\widehat{\beta})=2.6\%$.

REFERENCES

[1] B. Basawa and B. Prakasa-Rao, Statistical Inference for Stochastic Processes. Academic Press, London, 1980.

[2] L. Billard and F.I. Mohamed, Estimation of the parameters of an EAR(p) process. Journal of Time Series Analysis 12 (1991), 179–192.

[3] P. Brockwell and R. Davis, Time Series: Theory and Methods. Springer Series in Statistics, 1987.

[4] G.E.P. Box and G.M. Jenkins, Time Series Analysis: Forecasting and Control. Holden Day, San Francisco, 1970.

[5] D.P. Gaver and P.A.W. Lewis, First order autoregressive gamma sequences and point processes. Advances in Applied Probability 12 (1980), 727–745.

[6] G.K. Grunwald, R.J. Hyndman, L. Tedesco and R.L. Tweedie, Non-Gaussian conditional linear AR(1) models. Australian New Zealand Journal of Statistics 42(4) (2000), 479–495.

[7] W.K. Kim and I.K. Kim, Estimation for the Exponential ARMA model. Korean Statistical Journal 9 (1994), 239–248.

[8] L.A. Klimko and P.I. Nelson, On conditional least squares estimation for stochastic processes. Annals of Statistics 6 (1978), 629–642.

[9] A.J. Lawrance and P.A.W. Lewis, The exponential autoregressive moving average EARMA(p, q) process. Journal of the Royal Statistical Society Ser. B 42 (1980), 150–161.

[10] A.J. Lawrance, The innovation distribution of a gamma distributed autoregressive process. Scandinavian Journal of Statistics 9 (1982), 234–236.

[11] W. Li and A. McLeod, ARMA Modelling with Non-Gaussian Innovations. Journal of Time Series Analysis 9 (1988), 2, 155–168.


[12] G. Popovici, Inference statistical topics for time series. PhD Thesis, University of Bucharest, 2008.

[13] G. Popovici, On the Behaviour of the AR processes with Exponential distribution near the stationarity border: A simulation study. Scientific Bulletin, University of Pitești, Nr. 14 (2008), 1–12.

[14] C.H. Sim, First-order autoregressive models for gamma and exponential process. Journal of Applied Probability 27 (1990), 325–332.

Received 20 February 2009

University of Bucharest
Faculty of Mathematics and Computer Science
Str. Academiei 14
010014 Bucharest, Romania
gpopovici@fmi.unibuc.ro
