Inferences in Volatility Models

by

©Vickneswary Tagore

A thesis submitted to the School of Graduate Studies in partial fulfillment of the requirements for the Degree of

Doctor of Philosophy in Statistics

Department of Mathematics and Statistics, Memorial University of Newfoundland

August, 2010

Newfoundland and Labrador, Canada


Abstract

In some real-life time series, especially financial time series, the variance of the responses over time appears to be non-stationary. The changes in the variances of such data are usually modeled through a dynamic relationship among these variances, and subsequently the responses are modeled in terms of the non-stationary variances. This type of time series model is referred to as the stochastic volatility model. However, obtaining consistent and efficient estimators for the parameters of such a model has proven to be difficult. Among the existing estimation approaches, the so-called generalized method of moments (GMM) and the quasi-maximum likelihood (QML) estimation techniques are widely used. In this thesis, we introduce a simpler method of moments (SMM) which, unlike the existing GMM approach, does not require an arbitrarily large number of unbiased moment functions to construct moment estimating equations for the parameters involved. We also demonstrate numerically that the proposed SMM approach is asymptotically more efficient than the existing QML approach. We also provide another simpler 'working' generalized quasi-likelihood (WGQL) approach which is similar to, but different from, the SMM approach. Furthermore, the small and large sample behavior of the SMM and WGQL approaches is examined through a simulation study. The effect of the SMM estimation approach is also examined for kurtosis estimation.

In the volatility models mentioned above, the responses are assumed to be uncorrelated. However, in some situations the responses may be influenced by certain time-dependent covariates and, as opposed to the standard stochastic volatility models, the responses become correlated. In the later part of the thesis, we introduce an observation-driven dynamic (ODD) regression model with non-stationary error variances, these variances being modeled as in the standard stochastic volatility models. We refer to such a model as the observation-driven dynamic-dynamic (dynamic²) (ODDD) volatility model. The parameters of this wider model are estimated by using a hybrid estimation technique combining the GQL and SMM approaches.


Dedication

Dedicated to my late father Saravanamuthu Pathmanathan, my mother Nageswary Pathmanathan and my husband Mariathas Judes Tagore.


Acknowledgements

First of all, I am very grateful to my supervisor, Professor B. C. Sutradhar, for his inspirational guidance, encouragement, enduring patience and constant support throughout my Ph.D. program. I am also grateful to him for providing me with sufficient time for research discussion. My thesis work benefited greatly from his vast research experience in different research areas.

I would like to thank all my supervisory committee members: Drs. G. Fan and G.

I sincerely acknowledge the financial support provided by the School of Graduate Studies, the Department of Mathematics & Statistics and my supervisor in the form of graduate fellowships, teaching assistantships and research grants. I would like to say a special thank you to all staff of the Department of Mathematics and Statistics: Ros, Wanda, Leonce, Jennifer, Jackie, Tom, Debbie, Kerri and Zachery for their kindness and great help.


I sincerely thank the external examiner, Professor Bruce Smith, Dalhousie University, and the internal examiners, Drs. Alwell Oyet and J. C. Loredo-Osti, for their helpful remarks and suggestions on the original version of the thesis.

Most importantly, I wish to thank my parents, especially my late father, for his encouragement from my early childhood.

I would like to say special thanks to my husband Judes for his endless care during the program. Without his help and encouragement, this program would not have been completed.

I would like to thank my entire extended family for their eternal love, emotional support and encouragement during this program.

I wish to thank my friends and well-wishers who directly or indirectly encouraged and helped me throughout my program.


Contents

Acknowledgements

List of Figures

1 Introduction
1.1 Existing Volatility Models
1.1.1 Stochastic Volatility (SV) Model
1.1.2 ARCH/GARCH Models
1.2 Two Existing Estimation Methods for SV Model
1.2.1 Generalized Method of Moments (GMM) approach
1.2.2 Existing Quasi Maximum Likelihood (QML) approach
1.3 Objective of The Thesis


2 Proposed Estimation Technique in Stochastic Volatility Models
2.1 A Simple Method of Moments (SMM)
2.1.1 Unbiased Moment Estimating Equations for $\gamma_1$ and $\sigma_\eta^2$ in Finite Time Series
2.1.2 Moment Estimation in Large Time Series
2.2 A Generalized Quasi Likelihood (GQL) Method in Finite Time Series
2.2.1 GQL Estimating Equation for $\sigma_\eta^2$
2.2.2 GQL Estimating Equation for $\gamma_1$
2.3 A Working Generalized Quasi Likelihood (WGQL) Method in Finite Time Series
2.3.1 WGQL Estimating Equation for $\sigma_\eta^2$
2.3.2 WGQL Estimating Equation for $\gamma_1$
2.4 Asymptotic Variances Comparison of the Estimators
2.4.1 Asymptotic Variances of the SMM Estimators
2.4.2 Asymptotic Variances of the WGQL Estimators
2.4.3 QML Estimators and Their Asymptotic Variances
2.5 Asymptotic Variance Comparison: An Empirical Study

3 Small and Large Sample Estimation Performance of the Proposed SMM and WGQL Approaches: A Simulation Study
3.1 Small Sample Case


3.2 Large Sample Case
3.3 Interpretation of the Small and Large Sample Simulation Results
3.4 True Versus Estimated Kurtosis under the SV Model

4 Extended Stochastic Volatility Models
4.1 Model and the Properties
4.2 Remarks on Stationarity
4.3 Estimation of the Parameters in the ODDD Model
4.3.1 GQL approach
4.3.2 GQL estimating equation for $\beta$
4.3.3 GQL estimating equation for $\theta$
4.3.4 SMM estimating equation for $\alpha = (\gamma_1, \sigma_\eta^2)'$

5 Concluding Remarks

Appendices

Bibliography


List of Tables

Asymptotic variance comparison of SMM, WGQL and QML estimators with selected parameter values.

Simulated mean (SM), simulated standard error (SSE) and simulated mean square error (SMSE) of the SMM estimates based on small time series with T = 100, 200, 300 & 500 for selected parameter values by using 1000 simulations.

Simulated mean (SM), simulated standard error (SSE) and simulated mean square error (SMSE) of the WGQL estimates based on small time series with T = 100, 200, 300 & 500 for selected parameter values by using 1000 simulations.

Simulated mean (SM), simulated standard error (SSE) and simulated mean square error (SMSE) of the SMM estimates based on large time series for selected parameter values by using 1000 simulations.

Simulated mean (SM), simulated standard error (SSE) and simulated mean square error (SMSE) of the WGQL estimates based on large time series for selected parameter values by using 1000 simulations.


List of Figures

True and estimated kurtosis with volatility parameters $\gamma_1 = 0.5$, $\sigma_\eta^2 = \ldots$

True and estimated kurtosis with volatility parameters $\gamma_1 = 0.5$, $\sigma_\eta^2 = \ldots$


Chapter 1

Introduction

The analysis of Gaussian time series data with non-stationary means and a suitable correlation structure has a long history both in the statistics and econometrics literature. See, for example, Box and Jenkins (1994) and Harvey (1989). In financial time series analysis, it was, however, observed by Black and Scholes (1972, p. 416) that the variances in stock returns and/or exchange rate data may change over various time intervals. In their concluding remarks, these authors, therefore, emphasized research in this direction. Note that the modelling of this type of non-stationary-variances-based time series data is done either by using a suitable dynamic relationship for the variance at a given interval with the variances from the past time intervals, or by using a suitable dynamic relationship for the variance at a given time interval with the past squared observations. The former model is generally referred to as the stochastic volatility (SV) model and the latter model is known as the autoregressive conditional heteroscedastic (ARCH) model. When the variance at a given time point satisfies a relationship with past variances as well as past squared observations, the model is referred to as the GARCH (generalized ARCH) model. Since the pioneering work of the Nobel laureate Engle (1982) [see also Engle and Kraft (1983), Engle et al. (1985), Engle and Bollerslev (1986), Bollerslev et al. (1992), Engle and Kroner (1995), Engle (2004)], the aforementioned models have been widely applied in the econometric literature. For example, we refer to Taylor (1982), Anderson and Sorensen (1996), Harvey et al. (1994), Ruiz (1994), Taylor (1994), Durbin and Koopman (2000), Broto and Ruiz (2004), Bollerslev (1986) and Engle and Gonzalez-Rivera (1991) for applications of the SV, ARCH and GARCH models.

As far as inferences in the SV, ARCH and GARCH models are concerned, the aforementioned papers, including the papers by Bodurtha and Mark (1991) and Simon (1989), use the so-called generalized method of moments (GMM) and/or quasi-maximum likelihood (QML) approaches. These approaches are, however, numerically cumbersome, and they may also not be efficient as compared to certain simpler approaches. In Chapter 2, we provide some simpler and efficient approaches for the estimation of the parameters of the SV model. However, before developing the new approaches, we, for convenience, explain the existing inference techniques, namely the GMM and QML techniques, in Sections 1.2.1 and 1.2.2, respectively. In Section 1.1, we present the existing SV model along with its basic properties. The ARCH model is also discussed briefly in the same section.

1.1 Existing Volatility Models

1.1.1 Stochastic Volatility (SV) Model

Let $y_t$ be the response at time $t$ ($t = 1, \ldots, T$). Suppose that $\sigma_t^2$ denotes a random and unobserved variance of $y_t$. If $\sigma_t^2$ were, however, known, then it is referred to as the conditional variance of $y_t$, i.e., $\mathrm{Var}(Y_t \mid \sigma_t) = \sigma_t^2$. Note that the conditional Gaussian time series data $\{y_t\}$ with mean zero and variance $\{\sigma_t^2\}$ may be modelled as

$$y_t = \sigma_t \epsilon_t, \quad t = 1, \ldots, T, \qquad (1.1)$$

[Taylor (1982), for example] where the error variables $\epsilon_t$ are independently and identically (iid) normally distributed with mean zero and variance 1, that is, $\epsilon_t \stackrel{iid}{\sim} N(0, 1)$. Also, $\epsilon_t$ and $\sigma_t$ are assumed to be independent. Since $\sigma_t^2$ is unknown and it is reasonable to assume that the correlations between the $\sigma_t^2$'s decay as the time lag increases, Taylor (1982) and Anderson and Sorensen (1996), among others, have used a Gaussian AR(1) type process to model the variances. That is,

$$\ln(\sigma_t^2) = h_t = \gamma_0 + \gamma_1 h_{t-1} + \eta_t; \quad t = 2, \ldots, T, \qquad (1.2)$$

where $\gamma_0$ is the intercept parameter, $\gamma_1$ is the volatility persistence parameter and $\eta_t \sim N(0, \sigma_\eta^2)$, with $\sigma_\eta^2$ as the measure of uncertainty about future volatility. Note that if $|\gamma_1| < 1$, then the $h_t$'s follow a stationary AR(1) process. In (1.2), similar to Lee and Koopman (2004, eqn (1.1c)), it is reasonable to assume that

$$h_1 = \ln(\sigma_1^2) \sim N\!\left(\frac{\gamma_0}{1 - \gamma_1},\, \frac{\sigma_\eta^2}{1 - \gamma_1^2}\right). \qquad (1.3)$$

Without any loss of generality we will use this stationarity assumption all through.
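To fix ideas, the following minimal Python sketch simulates data from the SV model (1.1)-(1.2) under the stationarity assumption, with $\gamma_0 = 0$ (the choice also adopted in Chapter 2); the function name and the parameter values are illustrative only, not taken from the thesis.

```python
# A minimal simulation sketch of the SV model (1.1)-(1.2), assuming gamma_0 = 0
# and a stationary start for h_1; parameter values are illustrative only.
import numpy as np

def simulate_sv(T, gamma1, sigma_eta2, rng=None):
    """Generate y_1,...,y_T with y_t = sigma_t * eps_t and
    h_t = ln(sigma_t^2) = gamma1 * h_{t-1} + eta_t, eta_t ~ N(0, sigma_eta2)."""
    rng = np.random.default_rng(rng)
    h = np.empty(T)
    # stationary initial state: h_1 ~ N(0, sigma_eta2 / (1 - gamma1^2)), cf. (1.3)
    h[0] = rng.normal(0.0, np.sqrt(sigma_eta2 / (1.0 - gamma1**2)))
    eta = rng.normal(0.0, np.sqrt(sigma_eta2), size=T)
    for t in range(1, T):
        h[t] = gamma1 * h[t - 1] + eta[t]          # equation (1.2) with gamma_0 = 0
    eps = rng.normal(0.0, 1.0, size=T)
    y = np.exp(h / 2.0) * eps                      # equation (1.1): y_t = sigma_t * eps_t
    return y, h

y, h = simulate_sv(T=500, gamma1=0.5, sigma_eta2=0.25, rng=1)
print(y[:5])
```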

1.1.1.1 Basic Properties of Stochastic Volatility Model:

(a) Asymptotic Mean, Variance and Covariance in $\{Y_t\}$

Since $h_t$ follows the AR(1) process (1.2), it may be shown that the unconditional mean of $Y_t$ is zero by (1.1). That is,

$$E[Y_t] = E_{\sigma_t} E[Y_t \mid \sigma_t] = E[\sigma_t E[\epsilon_t]] = 0, \qquad (1.4)$$

as $\sigma_t$ and $\epsilon_t$ are independent. Next, we compute the unconditional variance of $y_t$, i.e., $\mathrm{Var}(Y_t)$, as

$$\mathrm{Var}(Y_t) = E_{\sigma_t^2}[\mathrm{Var}(Y_t \mid \sigma_t^2)] + \mathrm{Var}_{\sigma_t^2}[E(Y_t \mid \sigma_t^2)] = E_{\sigma_t^2}[\sigma_t^2 \mathrm{Var}(\epsilon_t)] = E[\sigma_t^2]. \qquad (1.5)$$

Since $h_t$ has the AR(1) relationship as in (1.2), it is clear that for $|\gamma_1| < 1$, $h_t = \log \sigma_t^2$ has the asymptotic mean and variance given by

$$\lim_{t \to \infty} E[h_t] = \frac{\gamma_0}{1 - \gamma_1} = \mu_h, \qquad \lim_{t \to \infty} \mathrm{Var}[h_t] = \frac{\sigma_\eta^2}{1 - \gamma_1^2} = \sigma_h^2, \qquad (1.6)$$

respectively [see also Harvey, 1994, p. 249, Jacquier, 1994, p. 386 and Anderson and Sorenson, 1996, p. 331]. Now, as $\sigma_t^2 = \exp(h_t)$, where $h_t$ follows the normal distribution by (1.2), by using the moment generating function of $h_t$ we obtain the asymptotic variance of $y_t$ as

$$\lim_{t \to \infty} \mathrm{Var}(Y_t) = \lim_{t \to \infty} E[\sigma_t^2] = \exp\!\left(\mu_h + \frac{\sigma_h^2}{2}\right) = \sigma^2 \ \text{(say)} \qquad (1.7)$$

[see also Tsay, 2005, p. 134, Mills, 1999, p. 127, Jacquier, 1994, p. 386, and Anderson and Sorenson, 1996, p. 331]. Next, by using the definition of the covariance, the lag $k$ ($k = 1, \ldots, t-1$) unconditional covariance between $Y_{t-k}$ and $Y_t$ may be computed as

$$\mathrm{Cov}(Y_{t-k}, Y_t) = E[(Y_{t-k} - E[Y_{t-k}])(Y_t - E[Y_t])] = E[Y_{t-k} Y_t] - E[Y_{t-k}] E[Y_t] = 0, \qquad (1.8)$$

as $E[Y_t] = 0$; $\sigma_t$ and $\epsilon_t$ are independent; and also $\epsilon_t \stackrel{iid}{\sim} N(0, 1)$.
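The limiting variance formula (1.7) can be checked by simulation. The sketch below assumes $\gamma_0 = 0$, so that $\mu_h = 0$ and $\sigma_h^2 = \sigma_\eta^2/(1-\gamma_1^2)$, and reuses the simulate_sv() sketch given earlier; it is a rough Monte Carlo illustration, not part of the thesis.

```python
# A quick Monte Carlo check of (1.7), assuming gamma_0 = 0 so that mu_h = 0 and
# sigma_h^2 = sigma_eta2 / (1 - gamma1^2); simulate_sv() is the sketch given above.
import numpy as np

gamma1, sigma_eta2 = 0.5, 0.25
sigma_h2 = sigma_eta2 / (1.0 - gamma1**2)
theoretical_var = np.exp(0.0 + sigma_h2 / 2.0)     # exp(mu_h + sigma_h^2 / 2)

y, _ = simulate_sv(T=200_000, gamma1=gamma1, sigma_eta2=sigma_eta2, rng=2)
print(theoretical_var, y.var())                    # the two values should be close
```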

(b) Asymptotic Mean, Variance and Covariance in $\{y_t^2\}$

Note that the mean, variance and covariance of $\{y_t\}$ are given by (1.4), (1.5) and (1.8), respectively. However, as one is interested in fitting the SV model (1.1)-(1.2) to the data, it is important to estimate the parameters $\gamma_0$, $\gamma_1$ and $\sigma_\eta^2$ involved in (1.2). Consequently, it is natural that these parameters be estimated by using $\{Y_t^2\}$ rather than $\{Y_t\}$. The unconditional mean of $y_t^2$ is

$$\lim_{t \to \infty} E[Y_t^2] = \lim_{t \to \infty} E[\sigma_t^2] = \sigma^2, \qquad (1.9)$$

which is given by (1.7). Next, because $E[\epsilon_t^4] = 3$, in a fashion similar to that of (1.7), we compute the limiting variance of $Y_t^2$ by using

$$\lim_{t \to \infty} \mathrm{Var}[Y_t^2] = \lim_{t \to \infty}\left[E[Y_t^4] - (E[Y_t^2])^2\right] = \lim_{t \to \infty} E_{\sigma_t}\!\left[\sigma_t^4 E[\epsilon_t^4 \mid \sigma_t]\right] - \sigma^4 = 3\lim_{t \to \infty} E[\sigma_t^4] - \sigma^4 = 3\exp[2\mu_h + 2\sigma_h^2] - \sigma^4 \qquad (1.10)$$

[see also Tsay, 2005, p. 134, Mills, 1999, p. 128, Jacquier, 1994, p. 386 and Anderson and Sorenson, 1996, p. 331]. Next, by definition, the limiting lag $k$ ($k = 1, \ldots, t-1$) covariance between the squared responses $Y_{t-k}^2$ and $Y_t^2$ is given by

$$\lim_{t \to \infty} \mathrm{Cov}[Y_{t-k}^2, Y_t^2] = \lim_{t \to \infty}\left[E[Y_{t-k}^2 Y_t^2] - E[Y_{t-k}^2] E[Y_t^2]\right] = \lim_{t \to \infty} E[\sigma_{t-k}^2 \sigma_t^2] - \sigma^4. \qquad (1.11)$$

Next, by using the dynamic relationship (1.2) and the moment generating function, we obtain

$$\lim_{t \to \infty} E[\sigma_{t-k}^2 \sigma_t^2] = \exp\!\left[2\mu_h + (1 + \gamma_1^k)\sigma_h^2\right] \qquad (1.12)$$

[see also Mills, 1999, p. 128, Anderson and Sorenson 1996, p. 331 and Jacquier et al., 1994, p. 387]. It then follows from (1.11) and (1.12) that

$$\lim_{t \to \infty} \mathrm{Cov}[Y_{t-k}^2, Y_t^2] = \exp(2\mu_h + \sigma_h^2)\left(\exp(\gamma_1^k \sigma_h^2) - 1\right), \qquad (1.13)$$

see also Mills, 1999, p. 128, Anderson and Sorenson 1996, p. 331 and Jacquier et al., 1994, p. 387. Consequently, by applying (1.10) and (1.13), one obtains the asymptotic lag $k$ ($k = 1, \ldots, t-1$) correlation between the squared responses $Y_{t-k}^2$ and $Y_t^2$ as

$$\lim_{t \to \infty} \mathrm{Corr}[Y_{t-k}^2, Y_t^2] = \lim_{t \to \infty}\frac{\mathrm{Cov}[Y_{t-k}^2, Y_t^2]}{\sqrt{\mathrm{Var}[Y_{t-k}^2]\,\mathrm{Var}[Y_t^2]}} = \frac{\exp(2\mu_h + \sigma_h^2)\left(\exp(\gamma_1^k \sigma_h^2) - 1\right)}{\exp(2\mu_h + \sigma_h^2)\left[3\exp(\sigma_h^2) - 1\right]} = \frac{\exp(\gamma_1^k \sigma_h^2) - 1}{3\exp(\sigma_h^2) - 1}. \qquad (1.14)$$

Note that the numerator in (1.14) lies between 0 and $\infty$ and the denominator lies between 2 and $\infty$. Hence, the asymptotic lag $k$ correlation between $Y_{t-k}^2$ and $Y_t^2$ is bounded between zero and 1. That is,

$$0 < \lim_{t \to \infty} \mathrm{Corr}[Y_{t-k}^2, Y_t^2] < 1.$$

Asymptotic Kurtosis:

Note that it is standard to use the kurtosis to explain the volatility in the data. By definition, the limiting ($t \to \infty$) kurtosis under the SV model has the formula

$$\lim_{t \to \infty} K(Y_t) = \lim_{t \to \infty}\frac{E[Y_t^4]}{(E[Y_t^2])^2} = \lim_{t \to \infty}\frac{3 E[\sigma_t^4]}{(E[\sigma_t^2])^2} = 3\exp(\sigma_h^2) > 3, \qquad (1.15)$$

[see also Harvey 1994, p. 249, Mills, 1999, p. 128]. Hence, the volatility model (1.2) produces a larger kurtosis when compared to the Gaussian kurtosis. Also, the peak appears to depend on the volatility parameter values $\gamma_1$ and $\sigma_\eta^2$ (see also (1.15)). Thus, it is essential to estimate the model (1.2) parameters, namely $\gamma_1$ and $\sigma_\eta^2$, consistently and efficiently.
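As a quick illustration of (1.15), the sample kurtosis of a long simulated series can be compared with the limiting value $3\exp(\sigma_h^2)$; the sketch below again assumes $\gamma_0 = 0$ and reuses simulate_sv() from the earlier sketch.

```python
# Sample kurtosis versus the limiting kurtosis 3*exp(sigma_h^2) of (1.15);
# a rough check under the same illustrative parameter values as above.
import numpy as np

gamma1, sigma_eta2 = 0.5, 0.25
sigma_h2 = sigma_eta2 / (1.0 - gamma1**2)

y, _ = simulate_sv(T=500_000, gamma1=gamma1, sigma_eta2=sigma_eta2, rng=3)
sample_kurtosis = np.mean(y**4) / np.mean(y**2) ** 2
print(3.0 * np.exp(sigma_h2), sample_kurtosis)     # both exceed the Gaussian value 3
```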

1.1.2 ARCH/GARCH Models

As opposed to the above model, in 1982 Engle suggested an observation-driven model, that is, the ARCH model, to study the time-varying observed variances. In the ARCH model, the conditional variance of the time series $\{y_t\}$ is a deterministic function of lagged values of the squared observations [Engle, 1982]. That is,

$$\sigma_t^2 = \alpha_0 + \alpha_1 y_{t-1}^2 + \cdots + \alpha_p y_{t-p}^2, \quad t = 1, 2, \ldots, T. \qquad (1.16)$$

Further, Bollerslev (1986) generalized the ARCH model by expressing the conditional variance as a function of lagged squared observations and lagged variances. That is,

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{p}\alpha_i y_{t-i}^2 + \sum_{j=1}^{q}\beta_j \sigma_{t-j}^2, \quad t = 1, 2, \ldots, T, \qquad (1.17)$$

which is referred to as the generalized ARCH (GARCH) model.
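For comparison with the SV specification, the following sketch implements the ARCH/GARCH conditional-variance recursions (1.16)-(1.17); the coefficient values and the initialization of the variance at its unconditional level are illustrative assumptions, not prescriptions from the thesis.

```python
# A minimal sketch of the GARCH(p, q) conditional-variance recursion (1.16)-(1.17);
# coefficients are illustrative, and sigma2 is started at the unconditional
# variance as a simple (assumed) initialization.
import numpy as np

def simulate_garch(T, alpha0, alpha, beta, rng=None):
    """alpha = (alpha_1,...,alpha_p) on lagged y^2, beta = (beta_1,...,beta_q) on lagged sigma^2."""
    rng = np.random.default_rng(rng)
    p, q = len(alpha), len(beta)
    y = np.zeros(T)
    sigma2 = np.full(T, alpha0 / (1.0 - sum(alpha) - sum(beta)))
    for t in range(max(p, q), T):
        sigma2[t] = (alpha0
                     + sum(alpha[i] * y[t - 1 - i] ** 2 for i in range(p))    # ARCH part, cf. (1.16)
                     + sum(beta[j] * sigma2[t - 1 - j] for j in range(q)))    # GARCH part, cf. (1.17)
        y[t] = np.sqrt(sigma2[t]) * rng.normal()
    return y, sigma2

y_g, s2 = simulate_garch(T=1000, alpha0=0.1, alpha=[0.15], beta=[0.8], rng=4)
```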

However, in the thesis, we concentrate only on the SV model (1.1)-(1.2) and a generalization to be discussed in Chapter 4. Thus, there will be no further discussion of the ARCH/GARCH models.

We now turn back to the SV model and briefly discuss two widely used techniques for the estimation of the parameters ($\gamma_1$ and $\sigma_\eta^2$) of this SV model.

1.2 Two Existing Estimation Methods for SV Model

There exist many approaches for the estimation of the volatility parameters, namely $\gamma_0$, $\gamma_1$ and $\sigma_\eta^2$, involved in the SV model (1.2). For example, we refer to the (1) generalized method of moments (GMM) [Melino and Turnbull, 1990, Anderson and Sorenson 1996], (2) quasi maximum likelihood (QML) [Harvey et al. 1994, Ruiz, 1994], (3) simulation-based maximum likelihood (SML) [Danielsson 1994 and Danielsson and Richard 1993], and (4) Bayesian Markov chain Monte Carlo (MCMC) analysis approaches [Jacquier et al. 1994]. Since the GMM and QML approaches are computationally less cumbersome as compared to the SML and MCMC approaches, they have been widely used over the last three decades, especially in a large time series set up. For recent discussions on these two methods, we, for example, refer to Anderson and Sorenson (1997) and Ruiz (1997). For convenience, these two approaches are presented in brief in the following two sections.

1.2.1 Generalized Method of Moments (GMM) approach

The GMM approach (Hansen (1982)) utilizes a large number of unbiased moment functions of the absolute and/or squared responses. More specifically, Anderson and Sorensen (1996, p. 350-351) have used 34 unbiased moment functions (see also Anderson and Sorensen (1997)) to construct the GMM estimating equations mainly for the $\gamma_1$ and $\sigma_\eta^2$ parameters. These 34 moment functions are given as

$$g_{t1} = |y_t| - E|y_t|, \quad g_{t2} = y_t^2 - E[y_t^2], \quad g_{t3} = |y_t|^3 - E|y_t|^3, \quad g_{t4} = y_t^4 - E[y_t^4],$$
$$g_{t,4+l} = |y_t y_{t-l}| - E|y_t y_{t-l}|, \quad g_{t,14+l} = y_t^2 y_{t-l}^2 - E[y_t^2 y_{t-l}^2], \quad g_{t,24+l} = |y_t| y_{t-l}^2 - E[|y_t| y_{t-l}^2], \quad l = 1, \ldots, 10, \qquad (1.18)$$

and they are used to construct the GMM estimating equation for $\alpha = (\gamma_0, \gamma_1, \sigma_\eta^2)'$, namely, $\alpha$ is obtained by minimizing the quadratic form

$$g(\alpha)'\,\Lambda^{-1}\,g(\alpha), \qquad (1.19)$$

where

$$g(\alpha) = \frac{1}{T}\sum_{t=1}^{T} g_t(\alpha) \quad \text{with} \quad g_t(\alpha) = [g_{t1}(\alpha), \ldots, g_{t,34}(\alpha)]', \qquad (1.20)$$

with $\Lambda = \mathrm{Cov}(g(\alpha))$ as an optimal choice. Later on, Anderson and Sorensen (1997, Section 3, p. 399-400) have used 24 unbiased moment functions out of the 34 functions shown in (1.18). Their 24 functions are:

$$g_{t1} = |y_t| - E|y_t|, \quad g_{t2} = y_t^2 - E[y_t^2], \quad g_{t3} = |y_t|^3 - E|y_t|^3, \quad g_{t4} = y_t^4 - E[y_t^4],$$
$$g_{t,4+l} = |y_t y_{t-l}| - E|y_t y_{t-l}|, \quad g_{t,14+l} = y_t^2 y_{t-l}^2 - E[y_t^2 y_{t-l}^2], \quad l = 1, \ldots, 10. \qquad (1.21)$$

It should be clear from (1.18) and (1.21) that there are no guidelines available on how the moment functions, such as the 34 functions in (1.18) and the 24 functions in (1.21), were chosen, when in fact one can think of an infinite number of such functions (Melino and Turnbull (1990, p. 250)). This raises a concern about such an estimation procedure, where an arbitrarily large number of functions are needed to estimate a small number of parameters.

Furthermore, since the construction of the moment functions $g_{tj}$ ($j = 1, \ldots, 24$ or $34$) requires the computation of expectations of different functions, which may not be easy to simplify, and because the computation of the weight matrix $\Lambda$ can be complicated, the GMM approach on the whole becomes very cumbersome. We, therefore, do not include this approach in the comparison with our proposed approaches that we discuss in Chapter 2.

1.2.2 Existing Quasi Maximum Likelihood (QML) approach

As an alternative to the GMM approach, there also exists a QML (quasi maximum likelihood) approach for the estimation of the parameters of the SV model (1.1)-(1.2). For example, we refer to Nelson (1988), Ruiz (1994), Harvey et al. (1994), and Mills (1999, p. 130-131). This QML method is developed first by formulating a quasi (pseudo) likelihood (QL) based on a normal approximation to the log chi-square distribution of $u_t = \log \epsilon_t^2 - E[\log \epsilon_t^2]$, where $\epsilon_t \sim N(0, 1)$, and then maximizing this quasi-likelihood with respect to the desired parameters. Note that this QL abbreviation may be confused with the QL used in the generalized linear model (GLM) set up, where the QL is constructed by using the first two moments of the data. Due to this approximation, the QML approach is supposed to lose efficiency [Broto and Ruiz (2004)] in estimating the parameters. This approach is, however, not as cumbersome as the GMM approach.

We now present the QML approach in brief. For this purpose, we re-express the model (1.1) as

$$z_t = \log y_t^2 = \log \sigma_t^2 + \log \epsilon_t^2 = E[\log \epsilon_t^2] + \log \sigma_t^2 + u_t = \kappa_1 + \log \sigma_t^2 + u_t, \quad t = 1, \ldots, T, \qquad (1.22)$$

where $\epsilon_t \sim N(0, 1)$ and $\kappa_1 = -1.27$. Further, $u_t$ follows the log chi-square distribution with mean zero and variance $\kappa_2 = \pi^2/2$ [Abramowitz and Stegun (1970, p. 943)]. It then follows that the exact likelihood function for $\gamma_1$ and $\sigma_\eta^2$ is given by

$$L(\gamma_1, \sigma_\eta^2 \mid z_1, z_2, \ldots, z_T) = \int \cdots \int \prod_{t=1}^{T} g(z_t - \kappa_1 - \ln \sigma_t^2)\, f(\sigma_1^2)\prod_{t=2}^{T} f(\sigma_t^2 \mid \sigma_{t-1}^2)\, d\sigma_1^2 \cdots d\sigma_T^2, \qquad (1.23)$$

where $g(u_t)$ represents the log $\chi^2(0, \kappa_2)$ distribution.

It is, however, clear from (1.23) that the integration over the random variances $\sigma_1^2, \ldots, \sigma_T^2$ is complex, mainly because they follow the dynamic relationship (1.2).

Some authors have used an alternative 'working' ML approach, namely a quasi-maximum likelihood (QML) approach. See, for example, Nelson (1988), Harvey et al. (1994), Ruiz (1994), Koopman et al. (1995, Chapter 7.5) and Mills (1999, p. 130-131). Specifically, to approximate the exact likelihood function in (1.23), Ruiz (1994) and Harvey et al. (1994), for example, have approximated the distribution by pretending that $z = (z_1, z_2, \ldots, z_T)'$ follows a quasi-multivariate normal distribution. This leads to a quasi-likelihood, which is maximized to obtain the QML estimates for $\gamma_1$ and $\sigma_\eta^2$. This approach is computationally feasible for the estimation of the required parameters, specially as compared to the GMM approach. For this reason, we will include the QML approach in the asymptotic variance comparison with our proposed estimates. This comparison will be done in Chapter 3.
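The following sketch illustrates the quasi-likelihood idea described above: $z_t = \log y_t^2$ is treated as if it followed a Gaussian linear state-space model with measurement variance $\kappa_2 = \pi^2/2$, the Kalman filter is run for the scalar state $h_t$, and the resulting Gaussian quasi-log-likelihood is maximized numerically. It assumes $\gamma_0 = 0$ and is only an illustration of the general approach, not the exact implementation compared in the thesis.

```python
# Minimal QML sketch: Gaussian quasi-likelihood of z_t = log(y_t^2) via the Kalman
# filter for h_t = gamma1*h_{t-1} + eta_t, assuming gamma_0 = 0.
import numpy as np
from scipy.optimize import minimize

def qml_negloglik(params, z):
    gamma1, sigma_eta2 = np.tanh(params[0]), np.exp(params[1])  # keep |gamma1|<1, sigma_eta2>0
    kappa1, kappa2 = -1.27, np.pi**2 / 2.0
    a, P = 0.0, sigma_eta2 / (1.0 - gamma1**2)       # stationary prior for h_1
    nll = 0.0
    for zt in z:
        F = P + kappa2                                # prediction variance of z_t
        v = zt - kappa1 - a                           # one-step prediction error
        nll += 0.5 * (np.log(2.0 * np.pi * F) + v**2 / F)
        K = P / F                                     # Kalman gain
        a, P = a + K * v, P * (1.0 - K)               # filtering update
        a, P = gamma1 * a, gamma1**2 * P + sigma_eta2 # state transition to h_{t+1}
    return nll

# y: an observed (or simulated, e.g. via simulate_sv() above) return series
z = np.log(y**2 + 1e-12)
fit = minimize(qml_negloglik, x0=[np.arctanh(0.3), np.log(0.1)], args=(z,), method="Nelder-Mead")
gamma1_qml, sigma_eta2_qml = np.tanh(fit.x[0]), np.exp(fit.x[1])
```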

1.3 Objective of The Thesis

Even though volatility models are very important for studying the dynamic change in the variances of a time series, and these models are widely used, there are no user-friendly (simple) estimation techniques available for inferences in the stochastic volatility model. This is because, as explained in Sections 1.2.1 and 1.2.2, the existing GMM approach is arbitrary and cumbersome, whereas the QML approach may not be efficient (as compared to other simpler approaches) even if it is known to be computationally feasible.

One of the main objectives of the thesis is to develop a simpler and efficient estimation approach, specially as compared to the QML approach, given that the GMM approach is very cumbersome and hence not practical. Furthermore, there are many situations where it may be appropriate to consider correlated observations conditional on the variances, whereas in the SV model (1.1)-(1.2) the responses $\{y_t\}$ are uncorrelated conditional on $\{\sigma_t\}$. In the non-volatility set up, this type of correlation model for the observations (dynamic model) has been discussed by some authors. For example, we refer to Bun and Carree (2005), and Rao, Sutradhar and Pandit (2010) in the longitudinal set up. In the thesis, we consider this type of dynamic model for time series observations, as opposed to longitudinal observations, conditional on the heteroscedastic errors of the series.

In Chapter 2, we propose two new estimation approaches that are simpler than the existing approaches. These new approaches are developed by using only a few appropriately selected unbiased moment functions, and they will be referred to as the simple method of moments (SMM) and 'working' generalized quasi likelihood (WGQL) approaches. It is argued that, as opposed to the existing GMM approach using arbitrarily selected 24 or 34 unbiased moment functions, for example, it is enough to consider only 2 or 3 unbiased moments to construct the proposed SMM and WGQL estimating equations. However, the important task is to find the best way to solve the estimating equations constructed by using these few moment functions. The construction of the SMM approach both for the finite and asymptotic cases is discussed in detail. For the WGQL approach, however, we provide the construction in detail for the finite case only. The construction for the asymptotic case can be done easily. Numerical algorithms are also provided to make these approaches user friendly.

To examine the asymptotic behavior of the proposed SMM and WGQL approaches, in this chapter we provide an asymptotic efficiency comparison between these two approaches. Furthermore, since the existing QML approach is computationally manageable, we have included this approach in our asymptotic efficiency comparison.

Based on the numerical algorithm developed in Chapter 2, in Chapter 3 we conduct a simulation study, first to examine the finite sample performances of the proposed SMM and WGQL approaches. Next, we continue the simulation study to examine their large sample performances. For this purpose, we provide both the simulated means and standard errors of the proposed estimators. Note that these large sample based simulated standard errors are comparable with the standard errors reported in Chapter 2. In the same chapter, the effects of the estimation of the volatility parameters on the kurtosis are examined for small as well as moderately large time series.
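The kind of simulation summary reported in Chapter 3 (simulated mean, simulated standard error and simulated mean squared error over 1000 replications) can be organized as in the following skeleton; the estimator argument is a placeholder for any of the SMM, WGQL or QML routines, and simulate_sv() is the data-generating sketch shown earlier.

```python
# Skeleton of a simulation study reporting SM, SSE and SMSE over n_sim replications.
import numpy as np

def simulation_summary(estimator, true_value, T, n_sim=1000, **sim_kwargs):
    estimates = np.empty(n_sim)
    for s in range(n_sim):
        y_s, _ = simulate_sv(T=T, rng=s, **sim_kwargs)   # data-generating sketch from above
        estimates[s] = estimator(y_s)
    sm = estimates.mean()                                # simulated mean (SM)
    sse = estimates.std(ddof=1)                          # simulated standard error (SSE)
    smse = np.mean((estimates - true_value) ** 2)        # simulated mean squared error (SMSE)
    return sm, sse, smse
```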

In Chapter 4, we extend the SV model in (1.1)-(1.2) to an observation-driven dynamic model set up. This extended model, unlike the SV model, can accommodate correlated responses conditional on the non-stationary variances of the series. For simplicity, we will, however, consider the lag 1 conditional dependence among the observations conditional on the variances. This generalized model will be referred to as the observation-driven dynamic dynamic (ODDD) volatility model. The proposed SMM approach will be used to estimate the parameters of this ODDD volatility model, whereas the regression effects and the dynamic dependence parameter will be estimated through a generalized quasi-likelihood (GQL) approach.

The thesis is concluded in Chapter 5, with some remarks on possible future work.

Chapter 2

Proposed Estimation Technique in Stochastic Volatility Models

Note that because of the importance of the volatility model (1.1)-(1.2), there has been an enormous effort in the past to obtain consistent and efficient estimates of the parameters of this model. As mentioned in the last chapter, we refer to the GMM, QML, SML and MCMC methods for the estimation of the parameters involved in the SV model. Note, however, that among all these approaches, the GMM and QML approaches are still widely followed in practice even though these approaches are either complex or arbitrary. Also, there is no guarantee that one method will be more efficient than the other (see, for example, Anderson and Sorensen (1997, Sections 4-5), Ruiz (1997)). The relative popularity of the GMM and QML approaches is mainly because other approaches are either computationally more involved or less efficient than these approaches.

Since the GMM and QML approaches are still considered to be complicated, in this chapter we investigate possible simpler estimation approaches. More specifically, in Section 2.1 we develop a moment technique which, unlike the GMM approach, uses only two unbiased moment functions to construct the estimating equations for the two important volatility parameters of the SV model. As mentioned earlier, we refer to this method as the SMM (simple method of moments) approach. In Section 2.2, we provide a similar but different approach, namely a 'working' generalized quasi likelihood (WGQL) approach. In Section 2.4, we compute the asymptotic variances of these SMM and WGQL estimators, which are subsequently used in Section 2.5 for a numerical comparison. Also, the variances of the estimators are compared with those of the modified QML approach. Note that the GMM approach will not be considered for the comparison, as it was indicated in (1.18)-(1.19) that it uses an arbitrarily large number of moment functions, which is not user friendly.

2.1 A Simple Method of Moments (SMM)

For simplicity, similar to Ruiz (1994), Anderson and Sorenson (1997) and Broto and Ruiz (2004), we choose $\gamma_0 = 0$ under the volatility model (1.2). Similarly, even though $\ln \sigma_1^2$ under the SV model (1.2) is supposed to be a random $N\!\left(0, \frac{\sigma_\eta^2}{1 - \gamma_1^2}\right)$ variable, for convenience one may choose a small value for $\sigma_1^2$ such that $\ln \sigma_1^2 \to 0$.

Now, for the construction of the moment estimating equations for the main parameters, namely $\gamma_1$ and $\sigma_\eta^2$, we choose only two unbiased moment functions, as shown in Section 2.1.1. A justification for selecting two such moment functions is also outlined.

2.1.1 Unbiased Moment Estimating Equations for $\gamma_1$ and $\sigma_\eta^2$ in Finite Time Series

2.1.1.1 Selection of Moment Function for Estimating $\sigma_\eta^2$

Note that it follows from the model (1.1)-(1.2) that if $h_t = \log \sigma_t^2$ were following a white noise series with mean 0 and variance $\sigma_\eta^2$, that is $E[\sigma_t^2] = \mathrm{Var}(Y_t) = h^*(\sigma_\eta^2)$, a suitable constant function of $\sigma_\eta^2$, then one would have estimated $h^*(\sigma_\eta^2)$ consistently by using $S_1 = \frac{1}{T}\sum_{t=1}^{T}[y_t - E(Y_t)]^2$. This is because

$$E[S_1] = \frac{1}{T}\sum_{t=1}^{T}\mathrm{Var}(Y_t) = h^*(\sigma_\eta^2). \qquad (2.1)$$

Note that as $E[Y_t] = 0$ [see also (1.4)], $S_1$ has the simple form $S_1 = \frac{1}{T}\sum_{t=1}^{T} y_t^2$. However, under the present model, the $\sigma_t^2$'s are unobservable and their log values satisfy a non-stationary Gaussian AR(1) type relationship given by (1.2), with errors $\eta_t \sim N(0, \sigma_\eta^2)$. This leads to the expectation of $S_1$ being a function of both $\gamma_1$ and $\sigma_\eta^2$, instead of $h^*(\sigma_\eta^2)$.

Suppose that

$$E[S_1] = g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2), \qquad (2.2)$$

for a suitable known function $g_1$. We evaluate this $g_1(\cdot)$ function in Theorem 2.1.1.

Note that between $\gamma_1$ and $\sigma_\eta^2$ involved in $g_1(\cdot)$, $\gamma_1$ is known to be a bounded parameter, that is, $|\gamma_1| < 1$. This assumption makes the AR(1) process for $\ln \sigma_t^2 = h_t$ stationary. But, unlike $\gamma_1$, $\sigma_\eta^2 > 0$ can take any value on the positive real line $[0, \infty)$.

However, since $E[S_1] = h^*(\sigma_\eta^2)$ in the white noise case, we suggest exploiting $S_1$ for the construction of the estimating equation for $\sigma_\eta^2$, even if the series is not white noise. This is because the desired estimating equation should also be valid for the white noise case. Thus $S_1 - g_1(\cdot)$ would be considered as the best possible unbiased moment function for the estimation of $\sigma_\eta^2$. That is, we solve the SMM estimating equation

$$S_1 - g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2) = 0 \qquad (2.3)$$

for $\sigma_\eta^2$, by using a suitable value for $\gamma_1$ such that $|\gamma_1| < 1$. We now turn to the derivation of $g_1(\cdot)$ in the following theorem, before we provide the selection of the unbiased moment function for the other parameter $\gamma_1$.

Theorem 2.1.1. The unconditional expectation of $S_1$ is given by

$$E[S_1] = g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2) = \frac{1}{T}\left[\sigma_{10}^2 + \sum_{t=2}^{T}\exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left\{\frac{1 - \gamma_1^{2(t-1)}}{1 - \gamma_1^2}\right\}\right)\right]. \qquad (2.4)$$

Proof. Since $S_1 = \frac{1}{T}\sum_{t=1}^{T} y_t^2$, we write its unconditional expectation as

$$E[S_1] = E_{\sigma_{[T]}} E\!\left[\frac{1}{T}\sum_{t=1}^{T} Y_t^2 \,\Big|\, \sigma_t^2\right] = \frac{1}{T}\left[E_{\sigma_{[T]}} E[\epsilon_1^2\sigma_1^2 \mid \sigma_1^2] + \sum_{t=2}^{T} E_{\sigma_{[T]}} E[\epsilon_t^2\sigma_t^2 \mid \sigma_t^2]\right] = \frac{1}{T}\left[E_M[\sigma_1^2] + \sum_{t=2}^{T} E_M[\sigma_t^2]\right], \qquad (2.5)$$

with $\sigma_{[T]} = (\sigma_1^2, \ldots, \sigma_T^2)$, where $E_M$ denotes the model based expectation, the assumed model for $\sigma_t^2$ being given by (1.2). Now, for a known value of $\sigma_1^2 = \sigma_{10}^2$ such that $\ln\sigma_{10}^2 \to 0$, we rewrite the expectation in (2.5) as

$$E[S_1] = \frac{1}{T}\left[\sigma_{10}^2 + \sum_{t=2}^{T} E_M[\sigma_t^2]\right]. \qquad (2.6)$$

We now derive $E_M[\sigma_t^2]$ for (2.6). First, by using the recurrence relationship from equation (1.2), we write the general form for $\sigma_t^2$ as

$$\sigma_t^2 = \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \gamma_1^{t-2}\eta_2 + \gamma_1^{t-3}\eta_3 + \gamma_1^{t-4}\eta_4 + \cdots + \gamma_1\eta_{t-1} + \eta_t\right) = \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \sum_{r=0}^{t-2}\gamma_1^r\eta_{t-r}\right), \quad t = 2, \ldots, T. \qquad (2.7)$$

Next, since $\eta_t \sim N(0, \sigma_\eta^2)$, by using the normal moment generating function $E(e^{a\eta_t}) = \exp(a^2\sigma_\eta^2/2)$, it follows from (2.7) that $E_M[\sigma_t^2]$ is given by

$$E_M[\sigma_t^2] = \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\sum_{r=0}^{t-2}\gamma_1^{2r}\right) = \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left\{\frac{1 - \gamma_1^{2(t-1)}}{1 - \gamma_1^2}\right\}\right), \quad t = 2, \ldots, T. \qquad (2.8)$$

Now by using $E_M[\sigma_t^2]$ from (2.8) in (2.5), we obtain $E[S_1] = g_1(\cdot)$ as in the theorem.

Note that even if $\gamma_1$ is known, the solution of (2.3) requires a good initial value for $\sigma_\eta^2$, which we suggest obtaining by solving an asymptotic unbiased estimating equation, whereas the estimating equation in (2.3) is valid for any $t \ge 2$. Note that for $t \to \infty$, $|\gamma_1|^{t-1} \to 0$ as $|\gamma_1| < 1$. It then follows that

$$\lim_{t\to\infty} E[Y_t^2] = \lim_{t\to\infty} E_M[\sigma_t^2] = \exp\!\left[\frac{\sigma_\eta^2}{2(1 - \gamma_1^2)}\right]. \qquad (2.9)$$

We now want to construct an unbiased moment function as a reflection of the limiting property shown in (2.9). For this, one can find a $T_0$ such that for any $t > T_0$, $\gamma_1^{t-1} \to 0$ for $|\gamma_1| < 1$, and write a basic statistic as

$$S_{10} = \frac{1}{T - T_0}\sum_{t=T_0+1}^{T} y_t^2. \qquad (2.10)$$

It then follows that

$$\lim_{T_0\to\infty} E[S_{10}] = \frac{1}{T - T_0}\sum_{t=T_0+1}^{T}\lim_{t\to\infty} E[Y_t^2] = \exp\!\left[\frac{\sigma_\eta^2}{2(1 - \gamma_1^2)}\right] = g_{10}(\gamma_1, \sigma_\eta^2). \qquad (2.11)$$

This asymptotic expectation is quite simple for the derivation of an initial value for $\sigma_\eta^2$, for an initial value of $\gamma_1 = \gamma_1(0)$. For $\gamma_1 = \gamma_1(0)$, let $\sigma_\eta^2(0)$ be the solution of

$$S_{10} - g_{10}(\gamma_1(0), \sigma_\eta^2) = 0, \qquad (2.12)$$

that is,

$$\sigma_\eta^2(0) = 2\{1 - \gamma_1^2(0)\}\ln(S_{10}). \qquad (2.13)$$
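A small sketch of this initial-value step, computing $S_{10}$ from the tail of the series and applying (2.13) for a trial value $\gamma_1(0)$, is given below; the default choice of $T_0$ is an assumption made only for illustration.

```python
# Sketch of the initial-value step (2.12)-(2.13): compute S_10 from the tail of the
# series and invert the asymptotic mean exp{sigma_eta^2 / (2(1-gamma1^2))}.
import numpy as np

def sigma_eta2_initial(y, gamma1_0, T0=None):
    T = len(y)
    T0 = T // 2 if T0 is None else T0                 # assumed choice: discard the first half
    S10 = np.mean(y[T0:] ** 2)                        # S_10 in (2.10)
    # (2.13); returns a nonpositive value if S10 <= 1, in which case a small
    # positive starting value can be used instead
    return 2.0 * (1.0 - gamma1_0**2) * np.log(S10)
```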

2.1.1.2 Selection of Moment Function for Estimating $\gamma_1$

Next, to construct an unbiased estimating equation for $\gamma_1$, we first observe that $\gamma_1$ is the lag 1 dependence parameter in the Gaussian AR(1) model (1.2). We therefore choose a lag 1 based function given by $S_2 = \frac{1}{T-1}\sum_{t=2}^{T} y_{t-1}^2 y_t^2$ to construct the moment equation for $\gamma_1$. Suppose that the expectation of $S_2$, as a function of both $\gamma_1$ and $\sigma_\eta^2$, is denoted by $g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)$. One may then solve the SMM estimating equation for $\gamma_1$ given by

$$S_2 - g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2) = 0, \qquad (2.14)$$

for a known value of $\sigma_\eta^2 = \sigma_\eta^2(0)$. We now derive the formula for $g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)$, which is given in Theorem 2.1.2 below.


Theorem 2.1.2. The unconditional expected value of $S_2$ is

$$g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2) = \frac{1}{T-1}\left[\sigma_{10}^2\exp\!\left(\gamma_1\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\right) + \sum_{t=3}^{T}\exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \gamma_1^{t-2}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left\{(1+\gamma_1)^2\sum_{r=0}^{t-3}\gamma_1^{2r} + 1\right\}\right)\right]. \qquad (2.15)$$

Proof. Since $S_2 = \frac{1}{T-1}\sum_{t=2}^{T} y_{t-1}^2 y_t^2$, we write its unconditional expectation as

$$E[S_2] = \frac{1}{T-1}\left[E_{\sigma_{[T]}} E[\epsilon_1^2\epsilon_2^2\sigma_1^2\sigma_2^2 \mid \sigma_1^2\sigma_2^2] + \sum_{t=3}^{T} E_{\sigma_{[T]}} E[\epsilon_{t-1}^2\epsilon_t^2\sigma_{t-1}^2\sigma_t^2 \mid \sigma_{t-1}^2\sigma_t^2]\right] = \frac{1}{T-1}\left[E[\sigma_1^2\sigma_2^2] + \sum_{t=3}^{T} E[\sigma_{t-1}^2\sigma_t^2]\right]$$
$$= \frac{1}{T-1}\left[\sigma_{10}^2 E_M[\sigma_2^2] + \sum_{t=3}^{T} E_M[\sigma_{t-1}^2\sigma_t^2]\right] = \frac{1}{T-1}\left[\sigma_{10}^2\exp\!\left(\gamma_1\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\right) + g_2^{*}(\gamma_1, \sigma_\eta^2)\right] \ \text{(say)}, \qquad (2.16)$$

with $\sigma_{[T]} = (\sigma_1^2, \ldots, \sigma_T^2)$.

Note that the derivation of $E_M[\sigma_{t-1}^2\sigma_t^2]$, the expectation of the pairwise products of $\sigma_{t-1}^2$ and $\sigma_t^2$, is lengthy but straightforward. For convenience, we provide the derivation of $E_M[\sigma_{t-1}^2\sigma_t^2]$ in Appendix A, and re-write the formula here. The formula is

$$E_M[\sigma_{t-1}^2\sigma_t^2] = \exp\!\left(\gamma_1^{t-2}\ln\sigma_{10}^2 + \gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left\{(1+\gamma_1)^2\sum_{r=0}^{t-3}\gamma_1^{2r} + 1\right\}\right), \quad t = 3, \ldots, T. \qquad (2.17)$$

Note that, in the above discussion, we have given a justification for the selection of the two unbiased moment functions in (2.3) and (2.14) for the estimation of $\sigma_\eta^2$ and $\gamma_1$, respectively. We have also indicated that a good initial value of $\sigma_\eta^2 = \sigma_\eta^2(0)$ can be computed from the asymptotic moment equation for $\sigma_\eta^2$ given in (2.13). To make this estimation approach user friendly, we now give a numerical algorithm for solving (2.3) and (2.14) for $\sigma_\eta^2$ and $\gamma_1$, respectively, by using the initial value $\sigma_\eta^2(0)$ obtained from (2.13).

Algorithm

Step 1: For a small initial value $\gamma_1 = \gamma_1(0)$ and $\sigma_1^2 = \sigma_{10}^2$, we choose $\sigma_\eta^2(0)$, an initial value of $\sigma_\eta^2$, by (2.13).

Step 2: Once the initial values are chosen/computed as in Step 1, we solve $S_2 - g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2) = 0$ in (2.14) iteratively to obtain an improved value for $\gamma_1$. The iterative equation has the form

$$\hat{\gamma}_1(r+1) = \hat{\gamma}_1(r) + \left[\frac{\partial g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)}{\partial\gamma_1}\right]^{-1}\left[S_2 - g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)\right]\Big|_{\gamma_1 = \hat{\gamma}_1(r)}, \qquad (2.18)$$

where $\hat{\gamma}_1(r)$ is the value of $\gamma_1$ at the $r$th iteration, and $[\cdot]_{\hat{\gamma}_1(r)}$ is the value of the expression in the square bracket evaluated at $\gamma_1 = \hat{\gamma}_1(r)$. By following the formula for $g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)$ in (2.15), the derivative $\frac{\partial g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)}{\partial\gamma_1}$ in (2.18) has the formula

$$\frac{\partial g_2(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)}{\partial\gamma_1} = \frac{1}{T-1}\Bigg[\sigma_{10}^2\ln\sigma_{10}^2\,\exp\!\left(\gamma_1\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\right)$$
$$+ \sum_{t=3}^{T}\exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \gamma_1^{t-2}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left\{(1+\gamma_1)^2\sum_{r=0}^{t-3}\gamma_1^{2r} + 1\right\}\right)$$
$$\times\left((t-1)\gamma_1^{t-2}\ln\sigma_{10}^2 + (t-2)\gamma_1^{t-3}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left\{2(1+\gamma_1)\sum_{r=0}^{t-3}\gamma_1^{2r} + (1+\gamma_1)^2\sum_{r=1}^{t-3}2r\gamma_1^{2r-1}\right\}\right)\Bigg]. \qquad (2.19)$$

Step 3: The estimate of $\gamma_1$ obtained from Step 2 is then used to solve $S_1 - g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2) = 0$ in (2.3) iteratively to obtain an improvement over $\sigma_\eta^2(0)$. The iterative equation is

$$\hat{\sigma}_\eta^2(r+1) = \hat{\sigma}_\eta^2(r) + \left[\frac{\partial g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)}{\partial\sigma_\eta^2}\right]^{-1}\left[S_1 - g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)\right]\Big|_{\sigma_\eta^2 = \hat{\sigma}_\eta^2(r)}, \qquad (2.20)$$

where $\hat{\sigma}_\eta^2(r)$ is the value of $\sigma_\eta^2$ at the $r$th iteration, and $[\cdot]_{\hat{\sigma}_\eta^2(r)}$ is the value of the expression in the square bracket evaluated at $\sigma_\eta^2 = \hat{\sigma}_\eta^2(r)$. By following the formula for $g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)$ in (2.4), the derivative $\frac{\partial g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)}{\partial\sigma_\eta^2}$ in (2.20) has the formula

$$\frac{\partial g_1(\gamma_1, \sigma_\eta^2, \sigma_{10}^2)}{\partial\sigma_\eta^2} = \frac{1}{T}\sum_{t=2}^{T}\exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\sum_{r=0}^{t-2}\gamma_1^{2r}\right)\left[\frac{1}{2}\sum_{r=0}^{t-2}\gamma_1^{2r}\right]. \qquad (2.21)$$

This three-step cycle of iteration continues until convergence. Let the final estimates obtained from (2.18) and (2.20) be denoted by $\hat{\gamma}_{1,SMM}$ and $\hat{\sigma}_{\eta,SMM}^2$, respectively.
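The three-step cycle above can be put into code directly. The sketch below evaluates $g_1$ and $g_2$ from the expressions in (2.4) and (2.15) and replaces the closed-form derivatives (2.19) and (2.21) by simple numerical difference quotients, which keeps the example short; $\sigma_{10}^2$ is set to 1 so that $\ln\sigma_{10}^2 = 0$, as assumed in the text. It is a sketch of the procedure, not the thesis's exact implementation.

```python
# Sketch of the three-step SMM cycle of Section 2.1.1, with numerical derivatives
# in place of the closed-form expressions (2.19) and (2.21).
import numpy as np

def g1(gamma1, sigma_eta2, sigma2_10, T):
    t = np.arange(2, T + 1)
    cum = (1.0 - gamma1 ** (2 * (t - 1))) / (1.0 - gamma1**2)      # sum_{r=0}^{t-2} gamma1^{2r}
    return (sigma2_10 + np.sum(np.exp(gamma1 ** (t - 1) * np.log(sigma2_10)
                                      + 0.5 * sigma_eta2 * cum))) / T

def g2(gamma1, sigma_eta2, sigma2_10, T):
    first = sigma2_10 * np.exp(gamma1 * np.log(sigma2_10) + 0.5 * sigma_eta2)
    t = np.arange(3, T + 1)
    cum = (1.0 - gamma1 ** (2 * (t - 2))) / (1.0 - gamma1**2)      # sum_{r=0}^{t-3} gamma1^{2r}
    rest = np.exp((gamma1 ** (t - 1) + gamma1 ** (t - 2)) * np.log(sigma2_10)
                  + 0.5 * sigma_eta2 * ((1.0 + gamma1) ** 2 * cum + 1.0))
    return (first + np.sum(rest)) / (T - 1)

def smm_estimate(y, gamma1=0.1, sigma2_10=1.0, n_cycles=50, eps=1e-5):
    T = len(y)
    S1, S2 = np.mean(y**2), np.mean(y[:-1] ** 2 * y[1:] ** 2)
    # crude start for sigma_eta^2 in the spirit of (2.13)
    sigma_eta2 = max(2.0 * (1.0 - gamma1**2) * np.log(max(S1, 1.0 + 1e-6)), 1e-4)
    for _ in range(n_cycles):
        # Step 2: update gamma1 from S2 - g2 = 0, cf. (2.18)
        d = (g2(gamma1 + eps, sigma_eta2, sigma2_10, T) - g2(gamma1, sigma_eta2, sigma2_10, T)) / eps
        gamma1 = np.clip(gamma1 + (S2 - g2(gamma1, sigma_eta2, sigma2_10, T)) / d, -0.99, 0.99)
        # Step 3: update sigma_eta^2 from S1 - g1 = 0, cf. (2.20)
        d = (g1(gamma1, sigma_eta2 + eps, sigma2_10, T) - g1(gamma1, sigma_eta2, sigma2_10, T)) / eps
        sigma_eta2 = max(sigma_eta2 + (S1 - g1(gamma1, sigma_eta2, sigma2_10, T)) / d, 1e-6)
    return gamma1, sigma_eta2
```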

2.1.2 Moment Estimation in Large Time Series

In this case we provide the estimating formulas for $\gamma_1$ and $\sigma_\eta^2$ by using $\{y_t\}$ for $t > T_0$, where $T_0$ is sufficiently large and $|\gamma_1| < 1$, leading to $\gamma_1^{t-1} \to 0$. For this purpose, $\sigma_\eta^2 = \sigma_\eta^2(0)$ is still evaluated from (2.13), by solving $S_{10} - g_{10}(\cdot) = 0$ in (2.12), where $S_{10} = \frac{1}{T-T_0}\sum_{t=T_0+1}^{T} y_t^2$.

By the same token, we now consider $S_{20} = \frac{1}{T-T_0-1}\sum_{t=T_0+2}^{T} y_{t-1}^2 y_t^2$ and solve the estimating equation

$$S_{20} - g_{20}(\gamma_1, \sigma_\eta^2) = 0 \qquad (2.22)$$

for $\gamma_1$, where

$$g_{20}(\gamma_1, \sigma_\eta^2) = \lim_{T_0\to\infty} E[S_{20}] = \frac{1}{T-T_0-1}\sum_{t=T_0+2}^{T}\lim_{t\to\infty} E[Y_{t-1}^2 Y_t^2] = \frac{1}{T-T_0-1}\sum_{t=T_0+2}^{T}\lim_{t\to\infty} E_M[\sigma_{t-1}^2\sigma_t^2] = \exp\!\left[\frac{\sigma_\eta^2}{1-\gamma_1}\right], \qquad (2.23)$$

where the formula for $E_M[\sigma_{t-1}^2\sigma_t^2]$ is given in (2.17).


Algorithm for Large $T_0$

As far as the algorithm for this large $T_0$ case is concerned, we summarize it as follows.

Step 1: For a small initial value $\gamma_1 = \gamma_1(0)$ and $\sigma_1^2 = \sigma_{10}^2$, we choose $\sigma_\eta^2(0)$ as an initial value of $\sigma_\eta^2$ by (2.13).

Step 2: Once the initial values are chosen/computed as in Step 1, we solve (2.22) for $\gamma_1$ iteratively to obtain an improved value for $\gamma_1$. The iterative equation has the form

$$\hat{\gamma}_1(r+1) = \hat{\gamma}_1(r) + \left[\frac{\partial g_{20}}{\partial\gamma_1}\right]^{-1}\left[S_{20} - g_{20}\right] = \hat{\gamma}_1(r) + \left[\exp\!\left(\frac{\sigma_\eta^2}{1-\gamma_1}\right)\frac{\sigma_\eta^2}{(1-\gamma_1)^2}\right]^{-1}\left[S_{20} - g_{20}\right]\Big|_{\gamma_1 = \hat{\gamma}_1(r)}, \qquad (2.24)$$

where $\hat{\gamma}_1(r)$ is the value of $\gamma_1$ at the $r$th iteration, and $[\cdot]_{\hat{\gamma}_1(r)}$ is the value of the expression in the square bracket evaluated at $\gamma_1 = \hat{\gamma}_1(r)$.

Step 3: We use the improved $\gamma_1$ from (2.24) in Step 2 and solve (2.13) to obtain an improved asymptotic estimate for $\sigma_\eta^2$.

This cycle of iteration continues until convergence.
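A compact sketch of this large-$T_0$ cycle is given below: $\gamma_1$ is updated by the Newton-type step (2.24) built on $g_{20} = \exp\{\sigma_\eta^2/(1-\gamma_1)\}$ from (2.23), and $\sigma_\eta^2$ is refreshed from (2.13); the burn-in point $T_0$ is left to the user and the starting values are illustrative.

```python
# Sketch of the large-T0 SMM algorithm of Section 2.1.2.
import numpy as np

def smm_estimate_largeT(y, T0, gamma1=0.1, n_cycles=50):
    tail = y[T0:]
    S10 = np.mean(tail**2)                                  # (2.10)
    S20 = np.mean(tail[:-1] ** 2 * tail[1:] ** 2)           # S_20
    sigma_eta2 = 2.0 * (1.0 - gamma1**2) * np.log(max(S10, 1.0 + 1e-6))    # (2.13)
    for _ in range(n_cycles):
        g20 = np.exp(sigma_eta2 / (1.0 - gamma1))           # (2.23)
        dg20 = g20 * sigma_eta2 / (1.0 - gamma1) ** 2       # derivative used in (2.24)
        gamma1 = np.clip(gamma1 + (S20 - g20) / dg20, -0.99, 0.99)          # (2.24)
        sigma_eta2 = 2.0 * (1.0 - gamma1**2) * np.log(max(S10, 1.0 + 1e-6)) # Step 3 via (2.13)
    return gamma1, sigma_eta2
```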


2.2 A Generalized Quasi Likelihood (GQL) Method in Finite Time Series

In Section 2.1, we proposed a user friendly simple method of moments (SMM) approach to estimate the volatility parameters for both the finite and large time series cases. However, there exists a relatively new approach, namely the generalized quasi likelihood (GQL) approach [Sutradhar (2004), Mallick and Sutradhar (2008)], that yields efficient estimates. In Sections 2.2.1 and 2.2.2 we show how to construct this GQL approach for $\sigma_\eta^2$ and $\gamma_1$, respectively. Note that even though we provide the theoretical formulas for the covariance matrices involved in the GQL estimating equations, it can be very time consuming to compute the inverses of such covariance matrices, which are needed for solving the GQL estimating equations. To avoid this numerical complexity, we provide some approximations to the construction of the covariance matrices involved in the equations. This naturally yields approximate GQL estimates for the parameters. For convenience, we refer to this approximate GQL approach as a 'working' GQL (WGQL) approach, and provide the corresponding estimating equations for $\sigma_\eta^2$ and $\gamma_1$ in Sections 2.3.1 and 2.3.2, respectively.


2.2.1 GQL Estimating Equation for $\sigma_\eta^2$

Note that in Section 2.1, $\sum_{t=1}^{T} y_t^2$ was equated to its expectation to construct an unbiased moment estimating equation in the SMM approach. In the GQL approach, the same squared responses are used but in a different way. To be specific, in this approach, a quadratic form in the distances between the squared responses and their expectations is minimized with respect to the desired parameters. Let

$$u = [y_1^2, \ldots, y_t^2, \ldots, y_T^2]' \qquad (2.25)$$

with its unconditional expectation

$$\lambda = E[u] = [\lambda_1, \ldots, \lambda_t, \ldots, \lambda_T]'. \qquad (2.26)$$

Further, let $\Sigma$ be the covariance matrix of $u$. In this GQL approach, the quadratic distance function, namely

$$Q = (u - \lambda)'\Sigma^{-1}(u - \lambda), \qquad (2.27)$$

is minimized with respect to $\sigma_\eta^2$ to obtain the estimating equation for this parameter. To be specific, the GQL estimating equation for $\sigma_\eta^2$ is given by

$$\frac{\partial\lambda'}{\partial\sigma_\eta^2}\Sigma^{-1}(u - \lambda) = 0 \qquad (2.28)$$

[Sutradhar (2004), Mallick and Sutradhar (2008)], where $\frac{\partial\lambda}{\partial\sigma_\eta^2}$ is the derivative of $\lambda$ with respect to (w.r.t.) $\sigma_\eta^2$.

For (2.28), we now provide the formulas for $\lambda$ and $\Sigma$ in the following two theorems.


Theorem 2.2.1. For known $\sigma_1^2 = \sigma_{10}^2$, the elements of the unconditional expectation $\lambda$ are given by

$$\lambda_t = \begin{cases} \sigma_{10}^2, & \text{for } t = 1,\\[4pt] \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \dfrac{\sigma_\eta^2}{2}\displaystyle\sum_{r=0}^{t-2}\gamma_1^{2r}\right), & \text{for } t = 2, \ldots, T. \end{cases} \qquad (2.29)$$

Proof. Note that

$$\lambda_1 = E[Y_1^2] = E[\sigma_1^2\epsilon_1^2] = \sigma_{10}^2, \qquad (2.30)$$

by the assumption that $\sigma_1^2$ is known. Now, for $t = 2, \ldots, T$, we write $\lambda_t = E[Y_t^2] = E_{\sigma_t^2} E[\epsilon_t^2\sigma_t^2 \mid \sigma_t^2]$. Since $\sigma_t^2$ and $\epsilon_t^2$ are independent and $\epsilon_t \sim N(0, 1)$, one obtains

$$\lambda_t = E_M[\sigma_t^2]. \qquad (2.31)$$

The formula for $E_M[\sigma_t^2]$ for $t = 2, \ldots, T$ is given by (2.8). Hence the theorem.

Theorem 2.2.2. Let the diagonal elements of $\Sigma$ be $\sigma_{tt} = \mathrm{Var}(Y_t^2)$ and the lag $k$ off-diagonal elements be $\sigma_{t-k,t} = \mathrm{Cov}(Y_{t-k}^2, Y_t^2)$. The formulas for the variances are given by

$$\sigma_{tt} = \mathrm{Var}(Y_t^2) = \begin{cases} 3\sigma_{10}^4 - \lambda_1^2, & \text{for } t = 1,\\[4pt] 3\exp\!\left(2\gamma_1^{t-1}\ln\sigma_{10}^2 + 2\sigma_\eta^2\displaystyle\sum_{r=0}^{t-2}\gamma_1^{2r}\right) - \lambda_t^2, & \text{for } t = 2, \ldots, T, \end{cases} \qquad (2.32)$$

and, for $k = 1, \ldots, t-1$ and $t = 2, \ldots, T$, the lag $k$ covariances have the formulas

$$\sigma_{t-k,t} = \mathrm{Cov}(Y_{t-k}^2, Y_t^2) = \begin{cases} \sigma_{10}^2\lambda_t - \lambda_1\lambda_t, & \text{for } t-k = 1,\\[4pt] \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \gamma_1^{t-k-1}\ln\sigma_{10}^2 + \dfrac{\sigma_\eta^2}{2}\left\{(1+\gamma_1^k)^2\displaystyle\sum_{r=0}^{t-k-2}\gamma_1^{2r} + \displaystyle\sum_{r=0}^{k-1}\gamma_1^{2r}\right\}\right) - \lambda_{t-k}\lambda_t, & \text{for } t-k \ge 2, \end{cases} \qquad (2.33)$$

where $\lambda_t$ is given by (2.29).

Proof. To obtain $\sigma_{tt}$, we write

$$\sigma_{tt} = \mathrm{Var}[Y_t^2] = E[Y_t^4] - \lambda_t^2 = E_{\sigma_t^2}\!\left[\sigma_t^4 E[\epsilon_t^4 \mid \sigma_t^2]\right] - \lambda_t^2 = 3E_M[\sigma_t^4] - \lambda_t^2, \quad \text{for } t = 1, \ldots, T. \qquad (2.34)$$

This is because $E[\epsilon_t^4] = 3$. Note that for $t = 1$, $\sigma_{11} = \mathrm{Var}[Y_1^2] = 3\sigma_{10}^4 - \lambda_1^2$. For $t = 2, \ldots, T$, recall from (2.7) that $\sigma_t^2 = \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \sum_{r=0}^{t-2}\gamma_1^r\eta_{t-r}\right)$. By some algebra, we can write

$$\sigma_t^4 = \exp\!\left(2\gamma_1^{t-1}\ln\sigma_{10}^2 + 2\sum_{r=0}^{t-2}\gamma_1^r\eta_{t-r}\right), \qquad (2.35)$$

with $\sigma_1^2 = \sigma_{10}^2$. Next, since $\eta_t \stackrel{iid}{\sim} N(0, \sigma_\eta^2)$, by using the normal moment generating function $E(e^{a\eta_t}) = \exp(a^2\sigma_\eta^2/2)$, it follows from (2.35) that $E_M[\sigma_t^4]$ is given by

$$E_M[\sigma_t^4] = \exp\!\left(2\gamma_1^{t-1}\ln\sigma_{10}^2 + 2\sigma_\eta^2\sum_{r=0}^{t-2}\gamma_1^{2r}\right), \quad t = 2, \ldots, T. \qquad (2.36)$$

Now by using $E_M[\sigma_t^4]$ from (2.36) in (2.34), we obtain $\mathrm{Var}(Y_t^2) = \sigma_{tt}$ as in the theorem.

Next, we derive the lag $k = 1, \ldots, t-1$ and $t = 2, \ldots, T$, unconditional covariance between $Y_{t-k}^2$ and $Y_t^2$. We write

$$\sigma_{t-k,t} = \mathrm{Cov}(Y_{t-k}^2, Y_t^2) = E[Y_{t-k}^2 Y_t^2] - \lambda_{t-k}\lambda_t = E_M[\sigma_{t-k}^2\sigma_t^2] - \lambda_{t-k}\lambda_t, \qquad (2.37)$$

with $\epsilon_t \sim N(0, 1)$, and $\epsilon_t^2$ and $\sigma_t^2$ independent. Note that the derivation of $E_M[\sigma_{t-k}^2\sigma_t^2]$ is lengthy; it is given in Appendix A. For convenience, here we re-write the formula for $E_M[\sigma_{t-k}^2\sigma_t^2]$ from the appendix. The formula is

$$E_M[\sigma_{t-k}^2\sigma_t^2] = \exp\!\left(\gamma_1^{(t-k)-1}\ln\sigma_{10}^2 + \gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left[(1+\gamma_1^k)^2\sum_{r=0}^{t-k-2}\gamma_1^{2r} + \sum_{r=0}^{k-1}\gamma_1^{2r}\right]\right), \qquad (2.38)$$

with $E[\sigma_1^2\sigma_2^2] = \sigma_{10}^2\lambda_2$, where $\lambda_2$ is computed from (2.29). Now, by using this formula from (2.38) in (2.37), we obtain the lag $k$ covariances between $Y_{t-k}^2$ and $Y_t^2$ as in the theorem.

Computational Formula for the $\sigma_\eta^2$ Estimate:

Note that $\lambda$ and $\Sigma$ in (2.28) are functions of $\gamma_1$ and $\sigma_\eta^2$. Now, it follows from (2.28) that, for known $\gamma_1$, the iterative equation for $\sigma_\eta^2$ may be expressed as

$$\hat{\sigma}_\eta^2(r+1) = \hat{\sigma}_\eta^2(r) + \left[\left(\frac{\partial\lambda'}{\partial\sigma_\eta^2}\Sigma^{-1}\frac{\partial\lambda}{\partial\sigma_\eta^2}\right)^{-1}\frac{\partial\lambda'}{\partial\sigma_\eta^2}\Sigma^{-1}(u - \lambda)\right]\Big|_{\sigma_\eta^2 = \hat{\sigma}_\eta^2(r)}, \qquad (2.39)$$

where $\frac{\partial\lambda}{\partial\sigma_\eta^2}$ is the derivative of $\lambda$ w.r.t. $\sigma_\eta^2$. By (2.29), the derivative has the formula

$$\frac{\partial\lambda_t}{\partial\sigma_\eta^2} = \exp\!\left(\gamma_1^{t-1}\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\sum_{r=0}^{t-2}\gamma_1^{2r}\right)\left[\frac{1}{2}\sum_{r=0}^{t-2}\gamma_1^{2r}\right], \qquad (2.40)$$

for $t = 2, \ldots, T$. For the $t = 1$ case, $\frac{\partial\lambda_1}{\partial\sigma_\eta^2} = 0$. In (2.39), $\hat{\sigma}_\eta^2(r)$ is the value of $\sigma_\eta^2$ at the $r$th iteration, and $[\cdot]_{\hat{\sigma}_\eta^2(r)}$ is the value of the expression in the square bracket evaluated at $\sigma_\eta^2 = \hat{\sigma}_\eta^2(r)$. Let the final estimate from (2.39) be denoted by $\hat{\sigma}_{\eta,GQL}^2$.
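A minimal sketch of the update (2.39) is shown below. To keep the example short, it uses a working diagonal weight matrix whose entries are the variances from (2.34) and (2.36) (in the spirit of the WGQL simplification described at the beginning of this section), rather than the full covariance matrix $\Sigma$ of Theorem 2.2.2; $\sigma_{10}^2$ is set to 1 as an assumption, so that $\ln\sigma_{10}^2 = 0$.

```python
# Sketch of the GQL update (2.39) for sigma_eta^2 with gamma1 held fixed, using
# lambda_t from (2.29), the derivative (2.40), and a working diagonal weight
# matrix built from the variances (2.34)/(2.36).
import numpy as np

def gql_sigma_eta2(y, gamma1, sigma_eta2=0.2, sigma2_10=1.0, n_iter=25):
    T = len(y)
    u = y**2
    t = np.arange(1, T + 1)
    for _ in range(n_iter):
        cum = (1.0 - gamma1 ** (2 * (t - 1))) / (1.0 - gamma1**2)   # sum_{r=0}^{t-2} gamma1^{2r}
        lam = np.exp(gamma1 ** (t - 1) * np.log(sigma2_10) + 0.5 * sigma_eta2 * cum)
        lam[0] = sigma2_10                                          # lambda_1 = sigma_10^2, (2.29)
        dlam = lam * 0.5 * cum                                      # (2.40)
        dlam[0] = 0.0
        var_u = 3.0 * np.exp(2.0 * gamma1 ** (t - 1) * np.log(sigma2_10)
                             + 2.0 * sigma_eta2 * cum) - lam**2     # (2.34) with (2.36)
        var_u[0] = 3.0 * sigma2_10**2 - lam[0] ** 2
        # one step of (2.39) with the working (diagonal) weight matrix
        step = np.sum(dlam * (u - lam) / var_u) / np.sum(dlam**2 / var_u)
        sigma_eta2 = max(sigma_eta2 + step, 1e-6)
    return sigma_eta2
```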

2.2.2 GQL Estimating Equation for $\gamma_1$

Note that in Section 2.1, $\sum_{t=2}^{T} y_{t-1}^2 y_t^2$ was equated to its expectation to construct an unbiased moment estimating equation for $\gamma_1$. In the GQL approach, we use the same lag 1 pairwise squared responses but in a different way. To be specific, in this approach, a quadratic form in the distances between the lag 1 pairwise squared responses and their expectations is minimized with respect to $\gamma_1$. Let

$$v = [y_1^2 y_2^2, \ldots, y_{t-1}^2 y_t^2, \ldots, y_{T-1}^2 y_T^2]' \qquad (2.41)$$

and its unconditional expectation be given by

$$\psi = E[v] = [\psi_{12}, \ldots, \psi_{t-1,t}, \ldots, \psi_{T-1,T}]', \qquad (2.42)$$

where $\psi_{t-1,t} = E[y_{t-1}^2 y_t^2]$. Further, let $\Omega$ be the covariance matrix of $v$. In the GQL approach, the quadratic distance function, namely

$$Q^* = (v - \psi)'\Omega^{-1}(v - \psi), \qquad (2.43)$$

is minimized with respect to $\gamma_1$ to obtain the GQL estimating equation for this parameter. To be specific, the GQL estimating equation for $\gamma_1$ has the form

$$\frac{\partial\psi'}{\partial\gamma_1}\Omega^{-1}(v - \psi) = 0 \qquad (2.44)$$

[Sutradhar (2004), Mallick and Sutradhar (2008)], where $\frac{\partial\psi'}{\partial\gamma_1}$ is the derivative of $\psi'$ w.r.t. $\gamma_1$.

We now provide the formulas for $\psi$ and $\Omega$ in (2.44) in the following theorems.

Theorem 2.2.3. For known $\sigma_1^2 = \sigma_{10}^2$, the elements of the unconditional expectation $\psi$ are given by

$$\psi_{t-1,t} = \begin{cases} \sigma_{10}^2 E_M[\sigma_2^2], & \text{for } t = 2,\\[4pt] E_M[\sigma_{t-1}^2\sigma_t^2], & \text{for } t = 3, \ldots, T. \end{cases} \qquad (2.45)$$

Proof. Note that

$$\psi_{12} = E[Y_1^2 Y_2^2] = E[\sigma_1^2\epsilon_1^2\sigma_2^2\epsilon_2^2] = \sigma_{10}^2 E_M[\sigma_2^2], \qquad (2.46)$$

whereas for $t = 3, \ldots, T$, the expectations of the products of the lag 1 squared observations are given by

$$\psi_{t-1,t} = E[Y_{t-1}^2 Y_t^2] = E_M[\sigma_{t-1}^2\sigma_t^2], \qquad (2.47)$$

as $\epsilon_t \sim N(0, 1)$, and $\epsilon_t$ and $\sigma_t^2$ are independent. Further note that the formula for $E_M[\sigma_{t-1}^2\sigma_t^2]$ was already given in (2.17).

This completes the proof of Theorem 2.2.3.

Theorem 2.2.4. Let the diagonal elements of $\Omega$ be $\omega_{tt} = \mathrm{Var}(Y_{t-1}^2 Y_t^2)$ and the lag 1 off-diagonal elements be $\omega_{t-1,t} = \mathrm{Cov}(Y_{t-1}^2 Y_t^2,\, Y_t^2 Y_{t+1}^2)$. The formulas for the variances are given by

$$\omega_{tt} = \mathrm{Var}[Y_{t-1}^2 Y_t^2] = \begin{cases} 9\sigma_{10}^4\exp\!\left[2\gamma_1\ln\sigma_{10}^2 + 2\sigma_\eta^2\right] - \psi_{12}^2, & \text{for } t = 2,\\[4pt] 9\exp\!\left[2\left(\gamma_1^{t-2}\ln\sigma_{10}^2 + \gamma_1^{t-1}\ln\sigma_{10}^2\right) + 2\sigma_\eta^2\left((1+\gamma_1)^2\displaystyle\sum_{r=0}^{t-3}\gamma_1^{2r} + 1\right)\right] - \psi_{t-1,t}^2, & \text{for } t = 3, \ldots, T, \end{cases}$$

and the lag 1 covariances have the formulas, for $t = 2$,

$$\omega_{2,3} = \mathrm{Cov}(Y_1^2 Y_2^2,\, Y_2^2 Y_3^2) = 3\sigma_{10}^2\exp\!\left(\left[2\gamma_1 + \gamma_1^2\right]\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left((2+\gamma_1)^2 + 1\right)\right) - \psi_{12}\,\psi_{23},$$

and, for $t = 3, \ldots, T-1$,

$$\omega_{t-1,t} = \mathrm{Cov}(Y_{t-1}^2 Y_t^2,\, Y_t^2 Y_{t+1}^2) = 3\exp\!\left(\left[\gamma_1^{t-2} + 2\gamma_1^{t-1} + \gamma_1^t\right]\ln\sigma_{10}^2 + \frac{\sigma_\eta^2}{2}\left((1+\gamma_1)^4\sum_{r=0}^{t-3}\gamma_1^{2r} + (2+\gamma_1)^2 + 1\right)\right) - \psi_{t-1,t}\,\psi_{t,t+1}.$$

Proof. We first derive the formula for the variances,
