HAL Id: halshs-00188535
https://halshs.archives-ouvertes.fr/halshs-00188535
Submitted on 17 Nov 2007
Extreme Distribution of a Generalized Stochastic Volatility Model

Aliou Diop (a,b)†, Dominique Guegan (c)

To cite this version:
Aliou Diop, Dominique Guegan. Extreme Distribution of a Generalized Stochastic Volatility Model. South African Journal of Statistics, 2003, 37, pp. 127-148. halshs-00188535

(a) UFR de Mathematiques Appliquees et d'Informatique, B.P. 234, Universite Gaston Berger, Saint-Louis, Senegal, e-mail: adiop@ugb.sn
(b) Universite de Reims, Laboratoire de Mathematiques, UPRESA 6056, B.P. 1039, 51687 Reims Cedex 2, France, e-mail: aliou.diop@univ-reims.fr
(c) E.N.S. Cachan, GRID UMR CNRS C8534, 61 Avenue du President Wilson, 94235 Cachan Cedex, France, e-mail: dguegan@wanadoo.fr
Abstract

We give the asymptotic behavior of the extreme values of a stochastic volatility model (Y_t)_t when the noise follows a generalized error distribution (GED). This class of distributions, a presentation of which can be found in Box and Tiao (1973) for instance, includes in particular the Gaussian law. In this paper we show that, in this general context, the normalized extremes of a log-transformation of (Y_t)_t converge in distribution to the double exponential distribution. We investigate the importance of the different assumptions using Monte Carlo simulations, and we also deal with the finite-distance behavior of the normalized maxima. The influence of the parameters of the models is discussed.

Keywords: extreme value theory, tail behavior, stochastic volatility model, GED distribution.

Running title: Extremes of stochastic volatility.

† Corresponding author.
1 Introduction

The class of stochastic volatility models has its roots and applications in finance and financial econometrics. Indeed, volatility plays a central role in the analysis of many phenomena in these domains, and many versions of stochastic volatility models exist in the literature. Here we are interested in a discrete-time version (t ∈ Z) of the volatility model first introduced by Taylor (1986). This model appears as a particular case of the SARV model introduced by Andersen in 1994. The univariate volatility model (Y_t)_t that we consider here can be defined by the following equations:
$$Y_t = \sigma\,\exp\!\left(\frac{\alpha_t}{2}\right)\varepsilon_t, \qquad (1)$$
with
$$\alpha_t = \sum_{j=0}^{\infty} \psi_j\, Z_{t-j}, \qquad (2)$$
where σ is a positive constant, (ε_t)_t a sequence of independent and identically distributed (iid) random variables (r.v.) and (Z_t)_t a sequence of iid Gaussian r.v. with mean zero and variance σ_Z², independent of the sequence (ε_t)_t. The parameters (ψ_j) are such that Σ_{j≥0} ψ_j² < ∞. In the literature it is generally assumed that ε_t and Z_t are uncorrelated with each other for all t. For simplicity, here we assume independence between the two sequences. The model can pick up the kind of asymmetric behavior which is often found in stock prices, and a negative correlation between ε_t and Z_t induces a leverage effect. This explains why practitioners often use this model. The behavior of the autocorrelation function and of the power-moments of the process defined by (1)-(2) are well known when ε_t follows a Gaussian or a Student law; see for instance Harvey (1993), Taylor (1986), Ghysels et al. (1996) or Shephard (1996).
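As an illustration of the model defined by (1)-(2) (not part of the original paper), the process can be simulated when (α_t)_t is the AR(1) special case of (2) and the noise is Gaussian (the GED case γ = 2). All parameter names and values below are ours:

```python
import numpy as np

def simulate_sv(n, sigma=1.0, phi=0.95, var_alpha=0.6933, seed=0):
    """Sketch of model (1)-(2): Y_t = sigma * exp(alpha_t / 2) * eps_t,
    with alpha_t an AR(1) process (a special case of the linear form (2))
    and eps_t iid N(0, 1), i.e. the GED case gamma = 2."""
    rng = np.random.default_rng(seed)
    # innovation variance (1 - phi**2) * var_alpha keeps Var(alpha_t) = var_alpha
    z = rng.normal(0.0, np.sqrt((1.0 - phi**2) * var_alpha), size=n)
    alpha = np.empty(n)
    alpha[0] = rng.normal(0.0, np.sqrt(var_alpha))   # start in the stationary law
    for t in range(1, n):
        alpha[t] = phi * alpha[t - 1] + z[t]
    eps = rng.normal(0.0, 1.0, size=n)               # iid noise, independent of z
    return sigma * np.exp(alpha / 2.0) * eps
```

Since the noise is symmetric and independent of the volatility, the simulated series has mean zero and is uncorrelated, while its squares are strongly autocorrelated.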
Recently, banks and insurance companies have been faced with questions concerning extremal rare events. In insurance, these extremal events may clearly correspond to individual claims which by far exceed the capacity of a single insurance company. In finance, these extremal events can present themselves spectacularly whenever a major stock market crash occurs, like the one in 1987. The first preoccupation of these institutions is to define a well-functioning risk management and control system to guard against these problems. Thus they need a stochastic methodology for the construction of the various components of such tools.
In that context, it is important to know, for instance, the extremal distribution of the different processes used to characterize the volatility. Here we consider the process (Y_t)_t defined by (1)-(2) when the noise (ε_t)_t follows a generalized error distribution (GED). This class of distributions, a presentation of which can be found in Box and Tiao (1973) for instance, includes in particular the Gaussian law. This last case has been studied by Breidt and Davis (1998). Here we show, in a general context specified in the next section, that the normalized extremes of a log-transformation of (Y_t)_t converge in distribution to the double exponential distribution. Since we are interested in a feasible application of these results to real data, we investigate the importance of the different assumptions using Monte Carlo simulations. We are thus able to show that some assumptions are not so important in practice. Nevertheless, our results, obtained in an asymptotic context, can be very poor at finite distance. This last situation is classical in extreme value theory: some results obtained in an asymptotic context are not always valid with finite samples. Obviously, this implies some difficulties in using these results directly to construct a risk management theory.
Our paper is organized in the following way. Section 2 contains statistical properties of the process (1)-(2) when (ε_t)_t follows a GED distribution. The extremal behavior of a log-transformation of (Y_t)_t is presented in Section 3. In Section 4 we focus on the finite-sample behavior of the normalized maximum of the log-transformation of (Y_t)_t. Section 5 is devoted to the conclusion. The proofs are postponed to an Appendix.
2 Stationarity of the process (Y_t)_t

In this section we make precise some properties of the autocorrelation function and the power-moments of the process (Y_t)_t defined in (1)-(2) when (ε_t)_t follows a GED distribution. The GED density is defined by
$$f_\varepsilon(x) = c_0 \exp(-k|x|^{\gamma}), \qquad (3)$$
with c_0 > 0, γ > 0 and k > 0. The expression (3) can also be written
$$f_\varepsilon(x) = \frac{\gamma\,\exp\!\left(-\frac{1}{2}\left|\frac{x}{\lambda}\right|^{\gamma}\right)}{\lambda\, 2^{1+1/\gamma}\,\Gamma(1/\gamma)}, \qquad \gamma > 0, \qquad (4)$$
with
$$\lambda = \left[\frac{2^{-2/\gamma}\,\Gamma(1/\gamma)}{\Gamma(3/\gamma)}\right]^{1/2}.$$
This class of densities contains the normal density (γ = 2) and the Laplace density (γ = 1), and has the uniform density as a limit (γ → +∞). It was first introduced by Subbotin (1923). The tail behavior of such a density depends on the tail-thickness parameter γ. For instance, if γ = 2, then ε_t ∼ N(0,1), while for γ < 2 the distribution has thicker tails than the Gaussian distribution. Now we make precise some results concerning the moments of the process (1)-(2), assuming that the distribution of (ε_t)_t is given by (3).
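The density (4) is easy to evaluate numerically; a minimal sketch (function name ours), which reduces to the N(0,1) density for γ = 2 and to a unit-variance Laplace density for γ = 1:

```python
import math

def ged_pdf(x, gamma):
    """Density (4): the unit-variance generalized error distribution with
    tail-thickness parameter gamma (gamma = 2 Gaussian, gamma = 1 Laplace)."""
    lam = math.sqrt(2.0 ** (-2.0 / gamma)
                    * math.gamma(1.0 / gamma) / math.gamma(3.0 / gamma))
    return (gamma * math.exp(-0.5 * abs(x / lam) ** gamma)
            / (lam * 2.0 ** (1.0 + 1.0 / gamma) * math.gamma(1.0 / gamma)))
```

For γ = 2 one gets λ = 1 and the standard normal density exactly; for γ → ∞ the density flattens toward the uniform limit mentioned above.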
Proposition 1 (Covariance and strict stationarity)
If the process (Y_t)_t follows the model (1)-(2) driven by a GED noise (ε_t)_t with index γ then, writing σ_α² = Var(α_t),

i) the power-moments of the process (Y_t)_t are given by
$$E(Y_t^r)=\begin{cases}\sigma^r\,\dfrac{\Gamma(1/\gamma)^{r/2-1}\,\Gamma\!\left(\frac{r+1}{\gamma}\right)}{\Gamma(3/\gamma)^{r/2}}\,\exp\!\left(\dfrac{r^2\sigma_\alpha^2}{8}\right)&\text{if } r \text{ is even}\\[1ex] 0&\text{if } r \text{ is odd}\end{cases}\qquad(5)$$
and
$$\operatorname{var}(Y_t^r)=\begin{cases}\sigma^{2r}\,\dfrac{\Gamma(1/\gamma)^{r-2}\,\Gamma\!\left(\frac{r+1}{\gamma}\right)^2}{\Gamma(3/\gamma)^{r}}\,\exp\!\left(\dfrac{r^2\sigma_\alpha^2}{4}\right)\left[K_r\exp\!\left(\dfrac{r^2\sigma_\alpha^2}{4}\right)-1\right]&\text{if } r \text{ is even}\\[1ex]\sigma^{2r}\,\dfrac{\Gamma(1/\gamma)^{r-1}\,\Gamma\!\left(\frac{2r+1}{\gamma}\right)}{\Gamma(3/\gamma)^{r}}\,\exp\!\left(\dfrac{r^2\sigma_\alpha^2}{2}\right)&\text{if } r \text{ is odd}\end{cases}\qquad(6)$$
where
$$K_r=\frac{\Gamma(1/\gamma)\,\Gamma\!\left(\frac{2r+1}{\gamma}\right)}{\Gamma\!\left(\frac{r+1}{\gamma}\right)^2}.\qquad(7)$$

ii) The excess kurtosis of Y_t is given by
$$\kappa=3\left(\frac{K_2}{3}\exp(\sigma_\alpha^2)-1\right),\qquad(8)$$
where K_2 is given by (7) with r = 2.

iii) The r-th autocorrelation function is equal to
$$\rho_h^{(r)}=\begin{cases}\dfrac{\exp\!\left(\frac{r^2}{4}\gamma_\alpha(h)\right)-1}{K_r\exp\!\left(\frac{r^2}{4}\gamma_\alpha(0)\right)-1}&\text{if } r \text{ is even}\\[1ex]\exp\!\left(\dfrac{r^2}{4}\big(\gamma_\alpha(h)-\gamma_\alpha(0)\big)\right)&\text{if } r \text{ is odd}\end{cases}\qquad(9)$$
where γ_α(·) is the autocovariance function of the stationary process (α_t)_t, so that γ_α(0) = σ_α². Hence the process (Y_t^r)_t is stationary.
This proposition ensures that the process (Y_t)_t defined by (1)-(2) and the powers of this process are stationary. We are going to use this result in the next section.
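The even-moment expression entering (5) can be checked numerically against the density (4); the helper names below are ours, and the brute-force quadrature is only a sanity check, not a method of the paper:

```python
import math

def ged_even_moment(r, gamma):
    """E(eps_t**r) for even r under the unit-variance GED, the factor
    entering (5); for odd r the moment vanishes by symmetry."""
    g = gamma
    return (math.gamma(1.0 / g) ** (r / 2.0 - 1.0) * math.gamma((r + 1.0) / g)
            / math.gamma(3.0 / g) ** (r / 2.0))

def ged_moment_by_quadrature(r, gamma, hi=30.0, n=600000):
    """Riemann-sum check of the same moment directly against density (4)."""
    lam = math.sqrt(2.0 ** (-2.0 / gamma)
                    * math.gamma(1.0 / gamma) / math.gamma(3.0 / gamma))
    c = gamma / (lam * 2.0 ** (1.0 + 1.0 / gamma) * math.gamma(1.0 / gamma))
    h = 2.0 * hi / n
    return sum((-hi + i * h) ** r * c * math.exp(-0.5 * abs((-hi + i * h) / lam) ** gamma)
               for i in range(n + 1)) * h
```

In the Gaussian case γ = 2, the formula gives E(ε_t²) = 1 and E(ε_t⁴) = 3, as it must for a unit-variance normal law.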
3 Asymptotic behavior of the maxima of the process (X_t)_t

In this section we study the asymptotic behavior of the distribution of a class of transformations of the process (Y_t)_t. First of all, we define, for all t,
$$X_t=\ln\!\left(\frac{Y_t}{\sigma}\right)^2=\alpha_t+\ln\varepsilon_t^2.\qquad(10)$$
Now we put
$$\eta_t=\ln\varepsilon_t^2\quad\text{for all } t\in\mathbb{Z};\qquad(11)$$
thus the process (X_t)_t becomes a sum of two independent processes:
$$X_t=\alpha_t+\eta_t.\qquad(12)$$
The model (12) can be considered as a regression model with log-GED noise (η_t). The asymptotic behavior of the maxima for a regression model with particular noises has been studied by Diop and Guegan (2000). We refer also to Horowitz (1980), Ballerini and McCormick (1989) and Niu (1997) for regression models involving non-stationarities. When (ε_t)_t is characterized by a GED distribution, we denote by F the distribution function of (X_t)_t. First of all, we study the asymptotic behavior of the tail of F.
Proposition 2 Let (X_t)_t be the process defined by (12) and assume that the distribution of (ε_t)_t is (3). Then the asymptotic behavior of F̄ = 1 − F is:
$$\bar F(x)=P\{X_t>x\}\sim\sqrt{\gamma}\,\exp\Big\{-\frac{x^2}{2\sigma_\alpha^2}+\frac{a\,x\ln g(x)}{\sigma_\alpha^2}-\frac{2x(1+R)}{\gamma\sigma_\alpha^2}+A\ln g(x)-\frac{b\,x\ln g(x)}{\sigma_\alpha^2\,g(x)}-\frac{2\ln^2 g(x)}{\gamma^2\sigma_\alpha^2}-\frac{c\,x}{\sigma_\alpha^2\,g(x)}+\frac{1}{8\sigma_\alpha^2}\Big(\frac{1}{2}-\frac{1}{\gamma}\Big)^2+\frac{1}{2}\Big(\frac{1}{2}-\frac{1}{\gamma}\Big)+C+O\Big(\frac{\ln^2 g(x)}{g(x)}\Big)\Big\}\qquad(13)$$
where
$$a=\frac{2}{\gamma},\quad b=\Big(\frac{2}{\gamma}\Big)^3,\quad c=\Big(\frac{2}{\gamma}\Big)^3 R-\frac{2\sigma_\alpha^2}{\gamma^2}+\frac{2}{\gamma},\quad R=\ln(k\sigma_\alpha^2),\quad A=\frac{4}{\gamma^2\sigma_\alpha^2}(R+1)+\frac{1}{\gamma}-\frac{3}{2},$$
$$C=\frac{2R^2}{\gamma^2\sigma_\alpha^2}+\Big(\frac{1}{2}-\frac{1}{\gamma}-\frac{4}{\gamma^2\sigma_\alpha^2}\Big)R-\frac{1}{\gamma}+\frac{1}{2}+\ln\frac{4c_0\sigma_\alpha}{\gamma^2\sqrt{k}},\quad\text{and}\quad g(x)=\frac{2x+1}{\gamma}-\frac{1}{2}.$$

Proof: See the Appendix.
Remark: When the noise (ε_t)_t follows a Gaussian distribution, then c_0 = 1/√(2π), γ = 2, k = 1/2, and we find the results of Breidt and Davis (1998, p. 667).

We now define M_n = max(X_1, X_2, ..., X_n), n ≥ 2, the maximum of the process (X_t)_t, and we investigate the asymptotic distribution of M_n.

Theorem 1 Let (X_t)_t be the process defined in (12). Assume that the density of (ε_t)_t is given by (3). Assume also that ρ_α(h) ln h → 0 as h → +∞, where ρ_α(h) denotes the autocorrelation function of the process (α_t)_t defined in (2), and that γ ≥ 1/2. Then there exist normalizing constants (a_n > 0) and (b_n) such that
$$P[a_n(M_n-b_n)\le x]\to\exp(-e^{-x}),\qquad(14)$$
where
$$a_n=\frac{(2\ln n)^{1/2}}{\sigma_\alpha},\qquad d_n=(\ln n)^{1/2}$$
and
$$b_n=c_1 d_n+c_2\ln d_n+c_3+c_4\frac{\ln d_n}{d_n}+\frac{c_5}{d_n},$$
with
$$c_1=(2\sigma_\alpha^2)^{1/2},\quad c_2=a,\quad c_3=a\Big[\frac{3}{2}\ln 2+\ln\sigma_\alpha^2-\ln\frac{\gamma^2}{2}-1\Big],\quad c_4=\Big(\frac{1}{\gamma}-\frac{3}{2}\Big)\sqrt{2}\,\sigma_\alpha,$$
$$c_5=\frac{1}{2(2\sigma_\alpha^2)^{1/2}}\Big\{-a^2+\Big(1-R+\frac{1}{2}\ln 2\sigma_\alpha^2\Big)^2(1-a)+\Big(\frac{\sigma_\alpha^2}{4}\Big(\frac{1}{2}-\frac{1}{\gamma}\Big)+2+2\sigma_\alpha^2\Big)\Big(\frac{1}{\gamma}-\frac{1}{2}\Big)+(3-a)\sigma_\alpha^2\ln\frac{a}{2}-2\sigma_\alpha^2\ln\Big(\frac{a\,c_0\sqrt{\pi}}{2\sqrt{k}}\Big)+\sigma_\alpha^2\ln 2\Big\}.$$
Proof: See the Appendix.

Remark: We note that the coefficient a_n in (3.12) of Breidt and Davis (1998), obtained with a Gaussian law for (ε_t)_t, seems to contain a computational error, which is also highlighted by the simulation results in the next section.

4 Behavior of the maxima of the process (X_t)_t using finite samples
In this section we study the behavior of the distribution of the maxima of (X_t)_t given by (11)-(12) for different versions of the process (α_t)_t, which we now make precise. First we consider the iid stochastic volatility model with a GED noise (ε_t)_t: the process (α_t)_t is defined by
$$\alpha_t=Z_t,\qquad\{Z_t\}\ \text{iid } N(0,\sigma_\alpha^2).\qquad(15)$$
Secondly, we study the first-order autoregressive stochastic volatility model (ARSV) with a GED noise (ε_t)_t: the process (α_t)_t is defined by
$$\alpha_t=\phi\,\alpha_{t-1}+Z_t,\qquad\{Z_t\}\ \text{iid } N(0,(1-\phi^2)\sigma_\alpha^2),\qquad(16)$$
with |φ| < 1. Finally, we consider the long memory stochastic volatility model (LMSV) (see Breidt et al. (1998)) with a GED noise (ε_t)_t, where the process (α_t)_t is defined by
$$(1-B)^d\alpha_t=Z_t,\qquad\{Z_t\}\ \text{iid } N\!\Big(0,\frac{\sigma_\alpha^2\,\Gamma^2(1-d)}{\Gamma(1-2d)}\Big),\qquad(17)$$
with |d| < 1/2. In the following we highlight the influence of the parameters on the limiting distribution for these different models, and we also study the importance of the different assumptions made in Theorem 1.
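For illustration, the three specifications (15)-(17) can be drawn as follows; the LMSV case uses a truncated MA(∞) representation with ψ_j = Γ(j+d)/(Γ(d)Γ(j+1)), an approximation we introduce, not the simulation scheme of the paper:

```python
import numpy as np
from math import gamma as gamma_fn

def simulate_alpha(kind, n, var_alpha=0.6933, phi=0.95, d=0.4, trunc=2000, seed=0):
    """Draw the volatility process alpha_t under (15), (16) or (17);
    innovation variances are scaled so Var(alpha_t) = var_alpha throughout."""
    rng = np.random.default_rng(seed)
    if kind == "iid":                                    # model (15)
        return rng.normal(0.0, np.sqrt(var_alpha), n)
    if kind == "ar1":                                    # model (16)
        z = rng.normal(0.0, np.sqrt((1.0 - phi**2) * var_alpha), n)
        a = np.zeros(n)
        for t in range(1, n):
            a[t] = phi * a[t - 1] + z[t]
        return a
    if kind == "lmsv":                                   # model (17), truncated MA(inf)
        sd_z = np.sqrt(var_alpha * gamma_fn(1 - d)**2 / gamma_fn(1 - 2 * d))
        j = np.arange(1, trunc)
        psi = np.concatenate(([1.0], np.cumprod((j + d - 1.0) / j)))  # psi_j recursion
        z = rng.normal(0.0, sd_z, n + trunc - 1)
        return np.convolve(z, psi, mode="valid")
    raise ValueError(kind)
```

The recursion ψ_j = ψ_{j-1}(j + d − 1)/j reproduces the FARIMA(0,d,0) MA coefficients; the truncation makes the long-memory variance slightly smaller than its theoretical value.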
4.1 Assumptions

The result stated in Theorem 1 requires the condition ρ_α(h) ln h → 0 as h → ∞. We illustrate this convergence for finite samples. The model (15) corresponds to an iid process, thus ρ_α(h) = 0 for h ≠ 0. The model (16) is an AR(1) process, thus ρ_α(h) decreases at the rate φ^{|h|}. The model (17) is a FARIMA(0,d,0) process: it is known that ρ_α(h) ∼ C(d) h^{2d−1} as h → ∞, where C(d) is a constant which depends on d, so the speed of convergence of ρ_α(h) ln h towards 0 is very slow.
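The two decay rates can be made explicit with the closed-form autocorrelations φ^{|h|} for the AR(1) model and Γ(1−d)Γ(h+d)/(Γ(d)Γ(h+1−d)) for the FARIMA(0,d,0) model (a standard formula, not restated in the text above):

```python
import math

def rho_ar1(h, phi):
    """Autocorrelation of the AR(1) volatility (16): phi**|h|."""
    return phi ** abs(h)

def rho_farima(h, d):
    """Autocorrelation of the FARIMA(0,d,0) volatility (17):
    Gamma(1-d)Gamma(h+d) / (Gamma(d)Gamma(h+1-d)), which decays like h**(2d-1)."""
    lg = math.lgamma
    return math.exp(lg(1 - d) + lg(h + d) - lg(d) - lg(h + 1 - d))
```

Multiplying by ln h shows the contrast numerically: ρ(h) ln h is negligible for the AR(1) case well before h = 1000, while for d = 0.4 it is still of order one at h = 1000.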
We now investigate empirically the behavior of ρ_α(h) ln h as h → ∞ for the models (16) and (17). In Figure 1, the graphs (a), (b), (c), (d) correspond to the model (16) for different values of the parameter φ (φ = 0.2, 0.5, 0.95, 0.99). The graphs (e), (f), (g), (h) correspond to the model (17) for different values of the parameter d (d = 0.1, 0.2, 0.3, 0.4). For the model (16), when we are far from the non-stationary case (φ = 0.2, 0.5), the decrease of ρ_α(h) ln h to zero is very fast. When d is small (d = 0.1, 0.2), the decrease of ρ_α(h) ln h is not as fast as the one obtained with the AR(1) model, and it becomes even slower as soon as d = 0.3. For d = 0.4, which corresponds to a strong long memory behavior, the convergence fails for finite samples (see graph (h) of Figure 1).
4.2 Results

In order to compare our results with those of Breidt and Davis (1998), we use in our simulations the same values as theirs for σ_α² (σ_α² = 2.3976, 0.6933, 0.0953), for φ (φ = 0.95) and for d (d = 0.4).
In Figures 2-6, we give in solid lines the empirical distribution functions of 1000 normalized maxima [a_n(M_n − b_n)] obtained from 1000 replications of samples of size 1000, and in dotted lines the limiting double exponential distribution. The quality of the convergence depends on σ_α² and a_n (the coefficient a_n is given by a_n = (2 ln n)^{1/2}/σ_α). The influence of the tail-thickness parameter γ of the GED distribution is also studied in Figures 2-4. We present the limiting distribution when γ = 2 (Gaussian case), γ = 1 (Laplace distribution, which has a thicker tail than the Gaussian distribution) and γ = 3 (thinner tail than the Gaussian distribution). The asymptotic behavior of the normalized maxima is closely related to the value of γ, as shown in Figures 2-4. We also study, in Figures 5-6, the sensitivity of the convergence obtained in (14) with respect to the parameters φ and d respectively, when (α_t)_t follows the models (16)-(17). We now detail these results.
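The comparison between the empirical distribution of normalized maxima and the Gumbel limit can be reproduced in miniature in the simplest possible setting — maxima of iid N(0,1) samples with the classical Gaussian norming constants, which are NOT the a_n, b_n of Theorem 1:

```python
import math
import random

def gumbel_cdf(x):
    """Limiting double exponential (Gumbel) distribution function."""
    return math.exp(-math.exp(-x))

def normalized_gaussian_maxima(n=1000, reps=1000, seed=1):
    """1000 maxima of samples of size 1000, as in the Monte Carlo design
    above, but for iid N(0,1) data with the classical constants
    a_n = sqrt(2 ln n), b_n = a_n - (ln ln n + ln 4*pi) / (2 a_n)."""
    rng = random.Random(seed)
    a_n = math.sqrt(2.0 * math.log(n))
    b_n = a_n - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a_n)
    return sorted(a_n * (max(rng.gauss(0.0, 1.0) for _ in range(n)) - b_n)
                  for _ in range(reps))

def sup_gap_to_gumbel(sample):
    """Sup distance between the empirical cdf of the sample and the Gumbel cdf."""
    m = len(sample)
    return max(abs((i + 1) / m - gumbel_cdf(x)) for i, x in enumerate(sample))
```

Even in this idealized case the sup distance does not vanish at n = 1000, which illustrates the slow, (ln n)-driven convergence discussed throughout this section.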
4.2.1 Gaussian noise (γ = 2)

The behavior of the asymptotic distribution of a_n(M_n − b_n) when we use a Gaussian driving noise (ε_t)_t is given in Figure 2. In each row σ_α² is constant (σ_α² = 2.3976, 0.6933, 0.0953). The first column corresponds to the iid stochastic volatility model with (α_t)_t defined in (15), the second column to the ARSV model with (α_t)_t defined in (16), and the third column to the LMSV model with (α_t)_t defined in (17). When σ_α² = 2.3976, the approximation of the empirical distribution function of a_n(M_n − b_n), n = 1000, by the double exponential distribution is very poor whatever the dependence structure taken for (α_t)_t. For finite samples, the approximation gives better results when σ_α² = 0.6933 and σ_α² = 0.0953, as shown in the lower panels of Figure 2.
4.2.2 GED noise with γ = 3

We turn now to the case of the stochastic volatility model (12) driven by a noise (ε_t)_t characterized by the tail-thickness parameter γ = 3; see Figure 3: the solid lines represent the empirical distribution of the normalized maxima a_n(M_n − b_n), with n = 1000, and the dotted lines represent the double exponential distribution. Looking at the nine graphs of Figure 3, we see that for finite samples the approximation of the empirical distribution by the double exponential distribution is best when (α_t)_t follows the model (17) with d = 0.4 and σ_α² = 0.6933. This reveals that the approximation can remain good although, in that last case, the condition ρ_α(h) ln h → 0 as h → ∞ fails.
4.2.3 Laplace noise (γ = 1)

We now leave the thinner-tailed case and consider a stochastic volatility model driven by a Laplace noise (γ = 1). The result is given in Figure 4. Reading across the rows of this figure, the panels again show the influence of the value of the parameter σ_α²: the smaller σ_α² is, the better the rate of convergence of the normalized maxima of (X_t)_t defined in (12) to the double exponential distribution. The dependence structure in (α_t)_t seems not to have a great influence on this convergence. A comparison between Figure 2 and Figure 4 shows clearly that the convergence of the normalized maxima is better when the driving noise (ε_t)_t is Gaussian. Despite the change of scale on the x-axis of Figure 4, the comparison between Figure 3 and Figure 4, corresponding respectively to γ = 3 and γ = 1, reveals a better approximation when γ = 3.
4.2.4 Influence of the autoregressive parameter φ

Here we assume for simplicity that the noise (ε_t)_t follows a Gaussian law, and we investigate the influence of the autoregressive parameter φ on the asymptotic behavior of a_n(M_n − b_n) for the process (X_t)_t defined in (12) when the process (α_t)_t follows the model (16). To determine the sensitivity of the results with respect to this parameter, different alternative values of φ have been used: φ = 0.2, φ = 0.95 and φ = 0.99. The results are given in Figure 5. Looking across the rows of Figure 5, we note that the smaller φ is, the better the rate of convergence. When the autoregressive parameter φ is equal to 0.99, the approximation is dramatically poor whatever the value of σ_α², as shown in the last column of Figure 5. A possible explanation is the fact that the underlying process tends to become non-stationary when the parameter φ is close to 1. It seems that this situation needs further investigation. This case has already been partially studied; see for instance Horowitz (1980), Ballerini and McCormick (1989) and Niu (1997).
4.2.5 Importance of the long memory parameter d

Figure 6 shows the influence of the long memory parameter d on the convergence of the normalized maxima of the process (X_t)_t defined in (12). For the sake of conciseness, we assume that the noise (ε_t)_t follows a Gaussian distribution. We compare the empirical distribution function of a_n(M_n − b_n), when the process (X_t)_t follows the LMSV model, with the double exponential distribution for d = 0.1, d = 0.2 and d = 0.4. It seems that the values of the parameter d have a small influence on the convergence (14). With finite samples, when d = 0.4 the condition ρ_α(h) ln h → 0 fails, but the approximation remains comparable to the other cases when σ_α² is equal to 0.0953. We note that when σ_α² is small (and hence a_n is large), the asymptotic behavior of the normalized maximum is the same whatever the value of d.
4.2.6 Tail comparison

We give here a brief comparison between the tail behaviors of the two components of the process (X_t)_t defined in (12). The density of the r.v. η_t defined in (11), when the r.v. ε_t follows a GED distribution with shape parameter γ, is given by
$$h(x)=c_0\exp\!\left(\frac{x}{2}-k\,e^{\gamma x/2}\right).$$
Figure 7 shows the tails of the distribution of the Gaussian linear process (α_t)_t defined in (2) and of the noise (η_t)_t defined in (11). The three panels correspond respectively to σ_α² = 2.3976, σ_α² = 0.6933 and σ_α² = 0.0953. When σ_α² = 2.3976, the Gaussian upper tail dominates the tail of η_t for γ = 0.5, 1, 2, 3. When σ_α² = 0.0953, the Gaussian upper tail is dominated by the upper tail of η_t. Finally, when σ_α² = 0.6933, all the tails tend to have the same behavior as the Gaussian upper tail. It is evident, looking across the panels of Figure 7, that the lower tail of η_t dominates the Gaussian tail whatever the value of σ_α².
5 Conclusion

In this paper we derive the asymptotic distribution of the maxima of a log-transformation (X_t)_t of a process driven by a stochastic volatility model, and we study the empirical behavior of this asymptotic distribution. We point out the influence of the tail-thickness parameter γ of the driving noise (ε_t)_t. Our findings are that the choice of γ and σ_α², and the dependence structure of (α_t)_t, do affect the goodness-of-fit of the limiting distribution.

In practice, the results stated in Theorem 1 allow us to calculate quantile risk measures using the block maximum method; see for instance Embrechts et al. (1997). However, in finite samples care must be taken because of the poorness of the approximation. This is one of the limitations of using this part of extreme value theory to evaluate, for instance, the Value at Risk for real data. It would be interesting to investigate the asymptotic behavior for more complicated nonstationary models; this will be done in a companion paper.
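A rough sketch of the block maximum method mentioned above, assuming a Gumbel fit of the block maxima by the method of moments (an illustrative choice of fit, not the estimator of the paper):

```python
import math
import random

def gumbel_quantile_from_block_maxima(x, block=250, p=0.99):
    """Block-maxima quantile sketch: split the series into blocks, fit a
    Gumbel law to the block maxima by the method of moments, and invert
    its cdf at level p.  All names and choices here are illustrative."""
    maxima = [max(x[i:i + block]) for i in range(0, len(x) - block + 1, block)]
    m = sum(maxima) / len(maxima)
    s = math.sqrt(sum((v - m)**2 for v in maxima) / (len(maxima) - 1))
    beta = s * math.sqrt(6) / math.pi           # Gumbel scale, moment estimator
    mu = m - 0.5772156649 * beta                # Gumbel location (Euler-Mascheroni)
    return mu - beta * math.log(-math.log(p))   # Gumbel quantile at level p
```

As the simulations of Section 4 show, the quality of such an estimate hinges on how close the block maxima really are to their Gumbel limit at the chosen block size.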
Appendix

Proof of Proposition 1: Using Devroye (1986, p. 175), we can establish that the GED noise (ε_t)_t defined in (3) satisfies
$$\varepsilon_t\stackrel{d}{=}k^{-1/\gamma}\,V\,G^{1/\gamma},\qquad(18)$$
where =_d denotes equality in distribution, V is a uniform r.v. on [−1, 1] and G is a r.v. following a Gamma(1 + 1/γ, 1) distribution, independent of V. The identity (18) is used in Section 4 to draw samples from a GED distribution. From (18) it can be shown that
$$E(\varepsilon_t^r)=\begin{cases}\dfrac{k^{-r/\gamma}}{r+1}\,\dfrac{\Gamma\!\left(\frac{r+1}{\gamma}+1\right)}{\Gamma\!\left(\frac{1}{\gamma}+1\right)}&\text{if } r \text{ is even}\\[1ex]0&\text{if } r \text{ is odd.}\end{cases}\qquad(19)$$

i) Since (ε_t)_t
has a symmetric distribution and (α_t)_t defined in (2) is a Gaussian linear process independent of (ε_t)_t, (5) and (6) are obtained directly from the properties of the lognormal distribution.

ii) The excess kurtosis of (Y_t)_t, defined by E[Y_t^4]/E[Y_t^2]² − 3, follows directly from (5).

iii) The r-th autocorrelation function is defined by
$$\rho_h^{(r)}=\frac{E[Y_t^r Y_{t+h}^r]-E[Y_t^r]^2}{E[Y_t^{2r}]-E[Y_t^r]^2}.\qquad(20)$$
From (1), and using again the properties of the lognormal distribution, we have for all r:
$$E(Y_t^r Y_{t+h}^r)=\sigma^{2r}\exp\!\left(\frac{r^2}{4}\big(\sigma_\alpha^2+\gamma_\alpha(h)\big)\right)E(\varepsilon_t^{2r}).\qquad(21)$$
The remainder of the proof follows directly from (5) and (21).
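The representation (18) gives a direct sampler for the GED density c_0 exp(−k|x|^γ); a minimal sketch (names ours):

```python
import math
import random

def ged_sample(n, gamma, k, seed=0):
    """Draw n variates from the GED density c0*exp(-k*|x|**gamma) via (18):
    eps = k**(-1/gamma) * V * G**(1/gamma), with V uniform on [-1, 1] and
    G ~ Gamma(1 + 1/gamma, 1), independent of V."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.uniform(-1.0, 1.0)
        g = rng.gammavariate(1.0 + 1.0 / gamma, 1.0)
        out.append(k ** (-1.0 / gamma) * v * g ** (1.0 / gamma))
    return out
```

For γ = 2 and k = 1/2 the target density is the standard normal, so the sample mean and variance should be close to 0 and 1.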
Proof of Proposition 2: We avoid going through the details of the computations, which require considerable effort, and focus only on the important steps. Breidt and Davis (1998) use a Tauberian argument in the Gaussian case to express the asymptotic approximation of the tail distribution of (X_t)_t; we use it again here. In Step 1, we give expansions of the two derivatives m(λ) and S(λ) of the log-moment generating function of (X_t)_t defined in (12). In Step 2, we define an inverse function m^{−1}(·) of m(λ) and we justify the inverse notation; some useful expansions of functions in terms of m^{−1}(x) are also provided. In Step 3, we derive the asymptotic behavior of the tail of F.
Step 1:
We give here the expansions of the first two derivatives of the log-moment generating function of the process (X_t)_t defined in (12). The log of the moment generating function of (η_t)_t is given by
$$\ln E\exp(\lambda\eta_t)=\ln\frac{2c_0}{\gamma}-\frac{2\lambda+1}{\gamma}\ln k+\ln\Gamma\!\left(\frac{2\lambda+1}{\gamma}\right).$$
Thus the log of the moment generating function of (X_t)_t is equal to
$$\ln C(\lambda)=\frac{\lambda^2\sigma_\alpha^2}{2}+\ln\frac{2c_0}{\gamma}-\frac{2\lambda+1}{\gamma}\ln k+\ln\Gamma\!\left(\frac{2\lambda+1}{\gamma}\right)\qquad(22)$$
$$=\frac{\lambda^2\sigma_\alpha^2}{2}-g(\lambda)\ln k-g(\lambda)+g(\lambda)\ln g(\lambda)+\ln\frac{2\sqrt{2\pi}\,c_0}{\gamma\sqrt{k}}+O\!\left(\frac{1}{\lambda}\right),$$
where
$$g(\lambda)=\frac{2\lambda+1}{\gamma}-\frac{1}{2}.$$
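The log moment generating function of η_t above involves only the Gamma function, so it can be evaluated directly with math.lgamma; in the Gaussian case (γ = 2, k = 1/2, c_0 = 1/√(2π)) it recovers E(ε_t²) = 1 and E(ε_t⁴) = 3 (function name ours):

```python
import math

def log_mgf_eta(lmbda, gamma, k, c0):
    """ln E exp(lambda * eta_t) for eta_t = ln(eps_t**2) under the GED
    density c0*exp(-k|x|**gamma), as in the display above:
    ln(2 c0 / gamma) - ((2 lambda + 1)/gamma) ln k + ln Gamma((2 lambda + 1)/gamma)."""
    return (math.log(2.0 * c0 / gamma)
            - (2.0 * lmbda + 1.0) / gamma * math.log(k)
            + math.lgamma((2.0 * lmbda + 1.0) / gamma))
```

Evaluating at integer λ gives the even moments E(ε_t^{2λ}), which makes the formula easy to sanity-check against known Gaussian moments.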
The first two derivatives of ln C(λ) are
$$m(\lambda)=\frac{d}{d\lambda}\ln C(\lambda)=\lambda\sigma_\alpha^2-\frac{2}{\gamma}\ln k+\frac{2}{\gamma}\,\psi\!\left(\frac{2\lambda+1}{\gamma}\right)=\lambda\sigma_\alpha^2-\frac{2}{\gamma}\ln k+\frac{2}{\gamma}\ln g(\lambda)+O\!\left(\frac{1}{\lambda^2}\right)\qquad(23)$$
and
$$S^2(\lambda)=\frac{d^2}{d\lambda^2}\ln C(\lambda)=\sigma_\alpha^2+\left(\frac{2}{\gamma}\right)^2\psi'\!\left(\frac{2\lambda+1}{\gamma}\right)=\sigma_\alpha^2+O\!\left(\frac{1}{\lambda}\right),\qquad(24)$$
where ψ(·) and ψ'(·) are the digamma and trigamma functions respectively.
Step 2: We set
$$m^{-1}(x)=\frac{\gamma\,g(x)}{2\sigma_\alpha^2}-\frac{a\ln g(x)}{\sigma_\alpha^2}+\frac{b\ln g(x)}{\sigma_\alpha^2\,g(x)}+\frac{c}{\sigma_\alpha^2\,g(x)}-\frac{d}{\sigma_\alpha^2},\qquad(25)$$
where d = −(2/γ) ln(kσ_α²). It follows that
$$\ln g(m^{-1}(x))=\ln\frac{g(x)}{\sigma_\alpha^2}-\frac{2a\ln g(x)}{\gamma\,g(x)}+\frac{D_1}{g(x)}+O\!\left(\frac{\ln^2 g(x)}{g(x)^2}\right),\qquad(26)$$
with
$$D_1=-\frac{2d}{\gamma}+\sigma_\alpha^2\left(\frac{1}{\gamma}-\frac{1}{2}\right).$$
After some computations, using (23), (25) and (26), we get
$$m(m^{-1}(x))=x+O\!\left(\frac{\ln^2 g(x)}{g(x)^2}\right),\qquad(27)$$
which justifies the inverse notation.
Step 3:
We establish the asymptotic normality of the Esscher transform of the distribution F (see Feigin and Yashchin (1983)). We write
$$\bar F(m(\lambda))\sim\frac{\exp(-\lambda m(\lambda))\,C(\lambda)}{(2\pi)^{1/2}\,\lambda\,S(\lambda)}.\qquad(28)$$
Using the inverse of m defined in (25), we have
$$\bar F(x)\sim\frac{\exp\!\left(-x\,m^{-1}(x)\right)C\!\left(m^{-1}(x)\right)}{(2\pi)^{1/2}\,m^{-1}(x)\,S\!\left(m^{-1}(x)\right)}.\qquad(29)$$
The remainder of the proof needs the following expansions:
$$\frac{\sigma_\alpha^2}{2}\big(m^{-1}(x)\big)^2=\frac{\gamma^2 g(x)^2}{8\sigma_\alpha^2}+\frac{a^2\ln^2 g(x)}{2\sigma_\alpha^2}-\frac{a\gamma\,g(x)\ln g(x)}{2\sigma_\alpha^2}+\frac{(\gamma b+2ad)\ln g(x)}{2\sigma_\alpha^2}-\frac{\gamma d\,g(x)}{2\sigma_\alpha^2}+\frac{d^2}{2\sigma_\alpha^2}+\frac{\gamma c}{2\sigma_\alpha^2}+O\!\left(\frac{\ln^2 g(x)}{g(x)^2}\right)$$
and
$$g(m^{-1}(x))\ln g(m^{-1}(x))=-\frac{\ln\sigma_\alpha^2}{\sigma_\alpha^2}\,g(x)+D_2\ln g(x)+\frac{g(x)\ln g(x)}{\sigma_\alpha^2}-\frac{2a\ln^2 g(x)}{\gamma\sigma_\alpha^2}+\frac{D_1}{\sigma_\alpha^2}\big(1-\ln\sigma_\alpha^2\big)+O\!\left(\frac{\ln^2 g(x)}{g(x)}\right),$$
where
$$D_2=\frac{-ad+a^2\ln\sigma_\alpha^2-2a^2}{\sigma_\alpha^2}.$$
We conclude the proof by using (24), (25), (27), (29) and these expansions.
Proof of Theorem 1: Using Proposition 2, we get
$$F^n(u_n)\to\exp(-e^{-x}),\qquad x\in\mathbb{R},\qquad(30)$$
where u_n = a_n^{-1}x + b_n. Now we need to show that
$$|P(M_n\le u_n)-F^n(u_n)|\to 0\quad\text{as } n\to+\infty.\qquad(31)$$
The proof of (31) uses a standard argument (see Leadbetter et al. (1983, page 81)) and follows the broad lines of Breidt and Davis (1998). First, we establish
$$|P(M_n\le u_n)-F^n(u_n)|\le nK\sum_{i=1}^{n}|\rho_\alpha(i)|\left(E\exp\!\left(-\frac{(u_n-\eta_1)^2}{2\sigma_\alpha^2(1+|\rho_\alpha(i)|)}\right)\right)^2\qquad(32)$$
where K is a positive constant, ρ_α(·) the autocorrelation function of the process (α_t) and η_t = ln ε_t². Using the asymptotic relationship for the tail probability of the GED distribution for γ ≥ 1/2, given by
$$\bar F_\varepsilon(x)\sim\frac{c_0}{\gamma k}\,x^{1-\gamma}\,e^{-kx^\gamma}\quad\text{as } x\to+\infty,$$
it can then be shown that
$$E\exp\!\left(-\frac{(u_n-\eta_1)^2}{2\sigma_\alpha^2(1+|\rho_\alpha(i)|)}\right)\le K'(\ln n)^{\left[\frac{1}{2(1+|\rho_\alpha(i)|)}\right]}\,E\!\left(\bar F_\varepsilon^{\frac{1}{1+|\rho_\alpha(i)|}}\!\big(\sigma_\alpha^{-1}(u_n-\eta_1)\big)\right)+o(n^{-1}),\qquad(33)$$
where [·] stands for the integer part and K' is a positive constant. By Jensen's inequality and the relation (30), we have
$$\left(E\exp\!\left(-\frac{(u_n-\eta_1)^2}{2\sigma_\alpha^2(1+|\rho_\alpha(i)|)}\right)\right)^2\le K''(\ln n)^{\frac{1}{1+|\rho_\alpha(i)|}}\,n^{-\frac{2}{1+|\rho_\alpha(i)|}},\qquad(34)$$
where K'' is a positive constant. The remainder of the proof is similar to the one of Breidt and Davis (1998, page 671); see also Leadbetter et al. (1983, page 86).
References

Andersen, T. G., Stochastic autoregressive volatility: a framework for volatility modelling. Mathematical Finance, 4(2), (1994), 75-102.

Ballerini, R. and McCormick, W. P., Extreme value theory for processes with periodic variances. Comm. Statist. Stochastic Models, 5, (1989), 45-51.

Box, G. E. P. and Tiao, G. C., Bayesian Inference in Statistical Analysis. Addison-Wesley, Reading, MA, (1973).

Breidt, F. J., Crato, N. and De Lima, P., The detection and estimation of long memory in stochastic volatility. J. Econometrics, 83, (1998), 325-348.

Breidt, F. J. and Davis, R. A., Extremes of stochastic volatility models. The Annals of Applied Probability, 8(3), (1998), 664-675.

Devroye, L., Non-Uniform Random Variate Generation. Springer, New York, (1986).

Diop, A. and Guegan, D., Asymptotic behavior for the extreme values of a regression model. Preprint 00-11, University of Reims, France, (2000).

Embrechts, P., Kluppelberg, C. and Mikosch, T., Modelling Extremal Events for Insurance and Finance. Springer, Applications of Mathematics, 33, (1997).

Feigin, P. D. and Yashchin, E., On a strong Tauberian result. Z. Wahrsch. Verw. Gebiete, 65, (1983), 35-48.

Ghysels, E., Harvey, A. and Renault, E., Stochastic volatility. In Handbook of Statistics, Vol. 14, "Statistical Methods in Finance", J. Wiley, (1996).

Harvey, A. C., Long memory in stochastic volatility. Discussion paper, L.S.E., London, (1993).

Horowitz, J., Extreme values for a nonstationary stochastic process: an application to air quality analysis. Technometrics, 22, (1980), 469-478.

Leadbetter, M. R., Lindgren, G. and Rootzen, H., Extremes and Related Properties of Random Sequences and Processes. Springer, New York, (1983).

Niu, X.-F., Extreme value theory for a class of nonstationary time series with applications. Ann. Appl. Probab., 7, (1997), 508-522.

Shephard, N., Statistical aspects of ARCH and stochastic volatility. In Time Series Models in Econometrics, Finance and Other Fields, eds. Cox, Hinkley, Barndorff-Nielsen, Chapman and Hall, Monographs on Statistics and Applied Probability, 65, (1996), 2-67.

Subbotin, M. T., On the law of frequency of errors. Matematicheskii Sbornik, 31, (1923), 296-300.
Figure 1: Rate of convergence of ρ_α(h) ln h to 0 as h → ∞ for different values of the autoregressive parameter φ (φ = 0.2, 0.5, 0.95, 0.99) (upper plots: a, b, c, d) and different values of the long memory parameter d (d = 0.1, 0.2, 0.3, 0.4) (lower plots: e, f, g, h). The graph (h) shows the failure of the convergence for finite samples in the long memory case with d = 0.4.
Figure 2: Comparison between the empirical distribution function (solid lines) for 1000 normalized maxima and the double exponential distribution (dotted lines) for the process (X_t)_t defined in (12) when the driving noise (ε_t)_t is Gaussian. In the first line, for the three graphs, σ_α² = 2.3976; in the second line σ_α² = 0.6933; in the third line σ_α² = 0.0953. a, b, c: α_t iid; d, e, f: α_t AR(1), φ = 0.95; g, h, i: α_t FARIMA(0,d,0), d = 0.4.
Figure 3: Comparison between the empirical distribution function and the double exponential distribution for the process (X_t)_t defined in (12) with a GED driving noise (ε_t)_t with γ = 3. In the first line, for the three graphs, σ_α² = 2.3976; in the second line σ_α² = 0.6933; in the third line σ_α² = 0.0953. a, b, c: α_t iid; d, e, f: α_t AR(1), φ = 0.95; g, h, i: α_t FARIMA(0,d,0), d = 0.4.
Figure 4: Comparison between the empirical distribution function (solid lines) for 1000 normalized maxima of (X_t)_t defined in (12) and the double exponential distribution (dotted lines) with a GED driving noise (ε_t)_t with γ = 1. In the first line, for the three graphs, σ_α² = 2.3976; in the second line σ_α² = 0.6933; in the third line σ_α² = 0.0953. a, b, c: α_t iid; d, e, f: α_t AR(1), φ = 0.95; g, h, i: α_t FARIMA(0,d,0), d = 0.4.
Figure 5: Comparison between the empirical distribution function (solid lines) for 1000 normalized maxima of (X_t)_t defined in (12) and the double exponential distribution (dotted lines) for the ARSV model when the driving noise (ε_t)_t is Gaussian. For each line σ_α² is constant; the values of σ_α² are the same as in Figure 2. The columns correspond to φ = 0.2, φ = 0.95 and φ = 0.99.
Figure 6: Comparison between the empirical distribution function (solid lines) for 1000 normalized maxima of (X_t)_t defined in (12) and the double exponential distribution (dotted lines) for the LMSV model when the driving noise (ε_t)_t is Gaussian. For each line σ_α² is constant; the values of σ_α² are the same as in Figure 2. The columns correspond to d = 0.1, d = 0.2 and d = 0.4.
Figure 7: Comparison between the densities of α_t defined in (2) (solid line) and η_t defined in (11) for different values of γ (γ = 0.5, 1, 2, 3). The three graphs correspond respectively to σ_α² = 2.3976, σ_α² = 0.6933 and σ_α² = 0.0953.