HAL Id: hal-00507025
https://hal.archives-ouvertes.fr/hal-00507025
Submitted on 30 Jul 2010
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Nonparametric estimation of the derivatives of the stationary density for stationary processes

Emeline Schmisser

To cite this version:
Emeline Schmisser. Nonparametric estimation of the derivatives of the stationary density for stationary processes. ESAIM: Probability and Statistics, EDP Sciences, 2013, 17, pp. 33-69. 10.1080/02331888.2011.591931. hal-00507025
Nonparametric estimation of the derivatives of the stationary density for stationary processes

Emeline Schmisser
Université Paris Descartes
Laboratoire MAP5, Emeline.Schmisser@math-info.univ-paris5.fr

Abstract

In this article, our aim is to estimate the successive derivatives of the stationary density $f$ of a strictly stationary and $\beta$-mixing process $(X_t)_{t\geq 0}$. This process is observed at discrete times $t = 0, \Delta, \ldots, n\Delta$. The sampling interval $\Delta$ can be fixed or small. We use a penalized least-square approach to compute adaptive estimators. If the derivative $f^{(j)}$ belongs to the Besov space $B^{\alpha}_{2,\infty}$, then our estimator converges at rate $(n\Delta)^{-\alpha/(2\alpha+2j+1)}$. Then we consider a diffusion with known diffusion coefficient. We use the particular form of the stationary density to compute an adaptive estimator of its first derivative $f'$. When the sampling interval $\Delta$ tends to 0, and when the diffusion coefficient is known, the convergence rate of our estimator is $(n\Delta)^{-\alpha/(2\alpha+1)}$. When the diffusion coefficient is known, we also construct a quotient estimator of the drift for low-frequency data.

Keywords: derivatives of the stationary density, diffusion processes, mixing processes, nonparametric estimation, stationary processes

AMS Classification: 62G05, 60G10
1 Introduction

In this article, we consider a strictly stationary, ergodic and $\beta$-mixing process $(X_t, t \geq 0)$ observed at discrete times with sampling interval $\Delta$. The $j$-th order derivatives $f^{(j)}$ ($j \geq 0$) of the stationary density $f$ are estimated by model selection. Adaptive estimators of $f^{(j)}$ are constructed thanks to a penalized least-square method and the $L^2$ risk of these estimators is computed.

Numerous articles deal with nonparametric estimation of the stationary density (or the derivatives of the stationary density) for a strictly stationary and mixing process observed in continuous time. For instance, Bosq [4] uses a kernel estimator, Comte and Merlevède [5] realize a projection estimation and Leblanc [16] utilizes wavelets. Under Castellana and Leadbetter's conditions, when $f$ belongs to a Besov space $B^{\alpha}_{2,\infty}$, the estimator of $f$ converges at the parametric rate $T^{-1/2}$ (where $T$ is the time of observation). The nonparametric estimation of the stationary density of a stationary and mixing process observed at discrete times $t = 0, \Delta, \ldots, n\Delta$ has also been studied, especially when the sampling interval $\Delta$ is fixed. For example, Masry [19] constructs wavelet estimators, Comte and Merlevède [7] and Lerasle [17] use a penalized least-square contrast method. The $L^2$ rate of convergence of the estimator is in that case $n^{-\alpha/(2\alpha+1)}$. Comte and Merlevède [5] demonstrate that, if the sampling interval $\Delta \to 0$, the penalized estimator of $f$ converges with rate $(n\Delta)^{-\alpha/(2\alpha+1)}$ and, under the conditions of Castellana and Leadbetter, the parametric rate of convergence is reached.

There are fewer papers about the estimation of the derivatives of the stationary density, and the main results are for independent and identically distributed random variables. For instance, Rao [22] estimates the successive derivatives $f^{(j)}$ of a multi-dimensional process by a wavelet method. He bounds the $L^2$ risk of his estimator and computes the rate of convergence on Sobolev spaces. This estimator converges with rate $n^{-\alpha/(2\alpha+2j+1)}$. Hosseinioun et al. [13] estimate the partial derivatives of the stationary density of a mixing process by a wavelet method, and their estimators converge with rate $(n\Delta)^{-\alpha/(2\alpha+1+2j)}$.

Classical examples of $\beta$-mixing processes are diffusions: if $(X_t)$ is solution of the stochastic differential equation
$$dX_t = b(X_t)dt + \sigma(X_t)dW_t, \quad X_0 = \eta,$$
then, with some classical additional conditions on $b$ and $\sigma$, $(X_t)$ is exponentially $\beta$-mixing. Dalalyan and Kutoyants [9] estimate the first derivative of the stationary density for a diffusion process observed at continuous time. They prove that the minimax rate of convergence is $T^{-2\alpha/(2\alpha+1)}$, where $T$ is the time of observation. This is the same rate of convergence as for the nonparametric estimator of $f$.

A possible application is, for diffusion processes, the estimation of the drift function $b$ by quotient. Indeed, when $\sigma = 1$, we have that $f' = 2bf$. The drift estimation is well known when the diffusion is observed at continuous time or for high-frequency data (see Comte et al. [6] for instance), but it is far more difficult when $\Delta$ is fixed. Gobet et al. [12] build nonparametric estimators of $b$ and $\sigma$ when $\Delta$ is fixed and prove that their estimators reach the minimax $L^2$ risk. However, their estimators are built with eigenvalues of the infinitesimal generator and are difficult to implement.

In this paper, in a first step, we consider a strictly stationary and $\beta$-mixing process $(X_t)_{t\geq 0}$ observed at discrete times $t = 0, \Delta, \ldots, n\Delta$. The successive derivatives $f^{(j)}$ ($0 \leq j \leq k$) of the stationary density $f$ are estimated either on a compact set, or on $\mathbb{R}$, thanks to a penalized least-square method. We introduce a sequence of increasing linear subspaces $(S_m)$ and, for each $m$, we construct an estimator of $f^{(j)}$ by minimising a contrast function over $S_m$. Then, a penalty function $\mathrm{pen}(m)$ is introduced to select an estimator of $f^{(j)}$ in the collection. When $f^{(j)} \in B^{\alpha}_{2,\infty}$, the $L^2$ risk of this estimator converges with rate $(n\Delta)^{-2\alpha/(2\alpha+2j+1)}$ and the procedure does not require the knowledge of $\alpha$. When $j = 0$, this is the rate of convergence obtained by Comte and Merlevède [7, 5]. Moreover, when $\alpha$ is known, Rao [22] obtained a rate of convergence $n^{-2\alpha/(2\alpha+2j+1)}$ for independent variables.

In a second step, we assume that the process $(X_t)$ is solution of a stochastic differential equation of known diffusion coefficient $\sigma$. Then $f'$ can be estimated by estimating $2bf$ and $f$. An estimator of $2bf$ is built either on a compact set, or on $\mathbb{R}$, by a penalized least-square contrast method. It only converges when the sampling interval $\Delta \to 0$, but in this case, its rate of convergence is better than for the previous estimator: it is $(n\Delta)^{-2\alpha/(2\alpha+1)}$ when $f' \in B^{\alpha}_{2,\infty}$ (and not $(n\Delta)^{-2\alpha/(2\alpha+3)}$). This is the minimax rate obtained by Dalalyan and Kutoyants [9] with continuous observations. Then, an estimator by quotient of the drift function $b$ is constructed. When $\Delta$ is fixed, it reaches the minimax rate obtained by Gobet et al. [12].

In Section 2, an adaptive estimator of the successive derivatives $f^{(j)}$ of the stationary density $f$ of a stationary and $\beta$-mixing process is computed by a penalized least square method. In Section 3, only diffusions with known diffusion coefficients are considered. An adaptive estimator of $f'$ (in fact, an estimator of $2bf$) is built. In Section 4, a quotient estimator of $b$ is constructed. In Section 5, the theoretical results are illustrated via various simulations using several models. Processes $(X_t)$ are simulated by the exact retrospective algorithm of Beskos et al. [3]. The proofs are given in Section 6. In the Appendix, the spaces of functions are introduced.

2 Estimation of the successive derivatives of the stationary density
2.1 Model and assumptions

In this section, a stationary process $(X_t)_{t\geq 0}$ is observed at discrete times $t = 0, \Delta, \ldots, n\Delta$ and the successive derivatives $f^{(j)}$ of the stationary density $f = f^{(0)}$ are estimated for $0 \leq j \leq k$. The sampling interval $\Delta$ is fixed or tends to 0. The estimation set $A$ is either a compact $[a_0, a_1]$, or $\mathbb{R}$. Let us consider the norms
$$\|\cdot\|_\infty = \sup_A |\cdot|, \quad \|\cdot\|_{L^2} = \|\cdot\|_{L^2(A)} \quad\text{and}\quad \langle\cdot,\cdot\rangle = \langle\cdot,\cdot\rangle_{L^2(A)}. \tag{2.1}$$

Assumption M1. The process $(X_t)$ is ergodic, strictly stationary and arithmetically or exponentially $\beta$-mixing. A process is arithmetically $\beta$-mixing if its $\beta$-mixing coefficient satisfies
$$\beta_X(t) \leq \beta_0 (1+t)^{-(1+\theta)} \tag{2.2}$$
where $\theta$ and $\beta_0$ are some positive constants. A process is exponentially (or geometrically) $\beta$-mixing if there exist two positive constants $\beta_0$ and $\theta$ such that
$$\beta_X(t) \leq \beta_0 \exp(-\theta t). \tag{2.3}$$

Assumption M2. The stationary density $f$ is $k$ times differentiable and, for each $j \leq k$, its derivatives $f^{(j)}$ belong to $L^2(A) \cap L^1(A)$. Moreover, $f^{(j)}$ satisfies $\int_A x^2 f^{(j)}(x)^2 dx < +\infty$.

Remark 2.1. If $A = [a_0, a_1]$, Assumption M2 is only: $\forall j \leq k$, $f^{(j)} \in L^2(A)$.
Our aim is to estimate $f^{(j)}$ by model selection. Therefore an increasing sequence of finite dimensional linear subspaces $(S_m)$ is needed. On each of these subspaces, an estimator of $f^{(j)}$ is computed, and thanks to a penalty function depending on $m$, the best possible estimator is chosen. Let us denote by $C^l$ the space of functions $l$ times differentiable on $A$ with a continuous $l$-th derivative, and by $C^l_m$ the set of piecewise $C^l$ functions. To estimate $f^{(j)}$, $0 \leq j \leq k$, on a compact set, we need a sequence of linear subspaces that satisfies the assumption:

Assumption S1: Estimation on a compact set.
1. The subspaces $S_m$ are increasing, of finite dimension $D_m$ and included in $L^2(A)$.
2. For any $m$, any function $t \in S_m$ is $k$ times differentiable (belongs to $C^{k-1} \cap C^k_m$) and satisfies: $\forall j \leq k$, $t^{(j)}(a_0) = t^{(j)}(a_1) = 0$.
3. There exists a norm connection: for any $j \leq k$, there exists a constant $\psi_j$ such that
$$\forall m, \forall t \in S_m, \quad \|t^{(j)}\|^2_\infty \leq \psi_j D_m^{2j+1} \|t\|^2_{L^2}.$$
Let us consider $(\varphi_{\lambda,m}, \lambda \in \Lambda_m)$ an orthonormal basis of $S_m$ with $|\Lambda_m| = D_m$. We have that $\|\Psi^2_{j,m}\|_\infty \leq \psi_j D_m^{2j+1}$ where $\Psi^2_{j,m}(x) = \sum_{\lambda \in \Lambda_m} \bigl(\varphi^{(j)}_{\lambda,m}(x)\bigr)^2$.
4. There exists a constant $c$ such that, for any $m \in \mathbb{N}$, any function $t \in S_m$: $\|t^{(j)}\|^2_{L^2} \leq c D_m^{2j} \|t\|^2_{L^2}$.
5. For any function $t$ belonging to the unit ball of a Besov space $B^{\alpha}_{2,\infty}$: $\|t - t_m\|^2_{L^2} \leq D_m^{-2} \vee D_m^{-2\alpha}$, where $t_m$ is the orthogonal ($L^2$) projection of $t$ over $S_m$.

Remark 2.2. Because of Point 2, the projection $t_m$ converges very slowly to $t$ on the boundaries of the compact $A = [a_0, a_1]$ and the inequality $\|t - t_m\|^2_{L^2} \leq D_m^{-2\alpha}$ cannot be satisfied for any $t \in B^{\alpha}_{2,\infty}$. In the Appendix, several sequences of linear subspaces satisfying this assumption are given.

To estimate $f^{(j)}$ on $\mathbb{R}$, slightly different assumptions are needed: let us consider an increasing sequence of linear subspaces $S_m$ generated by an orthonormal basis $\{\varphi_{\lambda,m}, \lambda \in \mathbb{Z}\}$. We have that $\dim(S_m) = \infty$, so to build estimators, we use the restricted spaces $S_{m,N} = \mathrm{Vect}(\varphi_{\lambda,m}, \lambda \in \Lambda_{m,N})$ with $|\Lambda_{m,N}| < +\infty$. The following assumption involves the sequences of linear subspaces $(S_m)$ and $(S_{m,N})$.

Assumption S2: Estimation on $\mathbb{R}$.
1. The sequence of linear subspaces $(S_m)$ is increasing.
2. We have that $|\Lambda_{m,N}| := \dim(S_{m,N}) = 2^{m+1}N + 1$.
3. $\forall m, N \in \mathbb{N}$, $\forall t \in S_{m,N}$: $t \in C^{k-1} \cap C^k_m$ and $\forall j < k$, $\lim_{|x|\to\infty} t^{(j)}(x) = 0$.
4. $\exists \psi_j \in \mathbb{R}^+$, $\forall m \in \mathbb{N}$, $\forall t \in S_m$, $\forall j \leq k$: $\|t^{(j)}\|^2_\infty \leq \psi_j 2^{(2j+1)m} \|t\|^2_{L^2}$. Particularly,
$$\Bigl\|\sum_{\lambda \in \mathbb{Z}} \bigl(\varphi^{(j)}_{\lambda,m}\bigr)^2\Bigr\|_\infty \leq \psi_j 2^{(2j+1)m}.$$
5. $\exists c$, $\forall m \in \mathbb{N}$, $\forall t \in S_m$, $\forall j \leq k$: $\|t^{(j)}\|^2_{L^2} \leq c\, 2^{2jm} \|t\|^2_{L^2}$.
6. For any function $t \in L^2 \cap L^1(\mathbb{R})$ such that $\int x^2 t^2(x)dx < +\infty$: $\|t_m - t_{m,N}\|^2_{L^2} \leq c\, 2^m/N$, where $t_m$ is the orthogonal ($L^2$) projection of $t$ over $S_m$ and $t_{m,N}$ its projection over $S_{m,N}$.
7. There exists $r \geq 1$ such that, for any function $t$ belonging to the unit ball of a Besov space $B^{\alpha}_{2,\infty}$ (with $\alpha < r$): $\|t - t_m\|^2_{L^2} \leq 2^{-2m\alpha}$.

Proposition 2.1. If the function $\varphi$ generates an $r$-regular multiresolution analysis of $L^2$, with $r \geq k$, then the subspaces $S_m = \mathrm{Vect}\{\varphi_{\lambda,m}, \lambda \in \mathbb{Z}\}$ and $S_{m,N} = \mathrm{Vect}\{\varphi_{\lambda,m}, \lambda \in \Lambda_{m,N}\}$ (where $\varphi_{\lambda,m}(x) = 2^{m/2}\varphi(2^m x - \lambda)$ and $\Lambda_{m,N} = \{\lambda \in \mathbb{Z}, |\lambda| \leq 2^m N\}$) satisfy S2.

For the definition of the multi-resolution analysis, see Meyer [20], chapter 2.
2.2 Risk of the estimator for fixed $m$

An estimator $\hat g_{j,m}$ of $g_j := f^{(j)}$ is computed by minimising the contrast function
$$\gamma_{j,n}(t) = \|t\|^2_{L^2} - \frac{2(-1)^j}{n}\sum_{k=1}^n t^{(j)}(X_{k\Delta}).$$
Under Assumptions S1 or S2:
$$\mathbb{E}(\gamma_{j,n}(t)) = \|t\|^2_{L^2} - 2(-1)^j \bigl\langle t^{(j)}, f\bigr\rangle = \|t\|^2_{L^2} - 2\bigl\langle t, f^{(j)}\bigr\rangle = \bigl\|t - f^{(j)}\bigr\|^2_{L^2} - C$$
where $C = \|f^{(j)}\|^2_{L^2}$. If Assumption S1 is satisfied, let us denote
$$\hat g_{j,m} = \arg\inf_{t \in S_m} \gamma_{j,n}(t),$$
and, under Assumption S2,
$$\hat g_{j,m,N} = \arg\inf_{t \in S_{m,N}} \gamma_{j,n}(t).$$
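In any orthonormal basis $(\varphi_\lambda)$ of the approximation space, the minimisation above is explicit: writing $t = \sum_\lambda a_\lambda \varphi_\lambda$, the contrast is quadratic in the coefficients and is minimised at $\hat a_\lambda = \frac{(-1)^j}{n}\sum_k \varphi^{(j)}_\lambda(X_{k\Delta})$. The following is a minimal sketch for $j = 1$, using an illustrative orthonormal sine basis on $[a_0, a_1]$ (an assumption of this sketch; the paper's experiments use trigonometric polynomials):

```python
import numpy as np

def estimate_g1(X, a0, a1, D):
    """Least-square contrast estimator of g_1 = f' on [a0, a1]:
    gamma_{1,n}(t) = ||t||^2 + (2/n) sum_k t'(X_k) is minimised over
    span(phi_1, ..., phi_D) at coefficients a_l = -(1/n) sum_k phi_l'(X_k)."""
    L = a1 - a0
    ls = np.arange(1, D + 1)                       # basis indices
    u = (np.asarray(X)[:, None] - a0) / L
    dphi = np.sqrt(2.0 / L) * (ls * np.pi / L) * np.cos(ls * np.pi * u)
    coefs = -dphi.mean(axis=0)                     # hat a_l = ((-1)^1/n) sum_k phi_l'(X_k)

    def g1_hat(x):                                 # hat g_{1,m} = sum_l a_l phi_l
        v = (np.asarray(x)[:, None] - a0) / L
        return (np.sqrt(2.0 / L) * np.sin(ls * np.pi * v)) @ coefs

    return coefs, g1_hat

def gamma_1n(coefs, X, a0, a1):
    """Empirical contrast gamma_{1,n}(t) for t = sum_l coefs[l] phi_l."""
    L = a1 - a0
    ls = np.arange(1, len(coefs) + 1)
    u = (np.asarray(X)[:, None] - a0) / L
    tprime = (np.sqrt(2.0 / L) * (ls * np.pi / L) * np.cos(ls * np.pi * u)) @ coefs
    return float(coefs @ coefs + 2.0 * tprime.mean())
```

Since the contrast is an exact quadratic in the coefficients, the returned coefficients attain its minimum over the subspace, which can be checked by perturbing them.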
We have the two following theorems:
Theorem 2.1: Estimation on a compact set. Under Assumptions M1-M2 and S1, the estimator risk satisfies, for any $j \leq k$ and $m \in \mathbb{N}$:
$$\mathbb{E}\|\hat g_{j,m} - g_j\|^2_{L^2} \leq \|g_{j,m} - g_j\|^2_{L^2} + 8c\beta_0\psi_j \frac{D_m^{2j+1}}{n}\left(1 \vee \frac{1}{\theta\Delta}\right)$$
where $g_{j,m}$ is the orthogonal ($L^2$) projection of $g_j$ over $S_m$. The constants $\beta_0$ and $\theta$ are defined in (2.2) or (2.3), $\psi_j$ is defined in Assumption S1 and $c$ is a universal constant.

Theorem 2.2: Estimation on $\mathbb{R}$. Under Assumptions M1-M2 and S2, for any $j \leq k$ and $m \in \mathbb{N}$:
$$\mathbb{E}\|\hat g_{j,m,N} - g_j\|^2_{L^2} \leq \|g_{j,m} - g_j\|^2_{L^2} + C\frac{2^m}{N} + 8c\beta_0\psi_j \frac{2^{(2j+1)m}}{n}\left(1 \vee \frac{1}{\theta\Delta}\right)$$
where $C$ depends on $\int_{-\infty}^{\infty} x^2 g^2(x)dx$ and on the chosen sequence of linear subspaces $(S_{m,N})$. According to Assumption S2.6, if $N \geq (n \wedge n\theta\Delta)$,
$$\mathbb{E}\|\hat g_{j,m,N} - g_j\|^2_{L^2} \leq \|g_{j,m} - g_j\|^2_{L^2} + c\beta_0 \frac{2^{(2j+1)m}}{n}\left(1 \vee \frac{1}{\theta\Delta}\right).$$

If the random variables $(X_0, \ldots, X_n)$ are independent, the derivatives of the density can be estimated in the same way and the two previous theorems (as well as the theorems for the adaptive risk) can be applied if we set $\theta = +\infty$. When $\Delta = 1$, the risk bound is the same as in Hosseinioun et al. [13].

2.3 Optimisation of the choice of $m$
Under Assumption S1 and if $g_j$ belongs to the unit ball of a Besov space $B^{\alpha}_{2,\infty}$ with $\alpha \geq 1$, then $\|g_{j,m} - g_j\|^2_{L^2} \leq c D_m^{-2}$ and the best bias-variance compromise is obtained for $D_m \sim (n(1 \vee \theta\Delta))^{1/(2j+3)}$. In that case,
$$\mathbb{E}\|\hat g_{j,m} - g_j\|^2_{L^2} \leq (n \vee n\theta\Delta)^{-2/(2j+3)}.$$
If Assumption S2 is satisfied and if $g_j$ belongs to $B^{\alpha}_{2,\infty}$ with $r \geq \alpha$, then $\|g_{j,m} - g_j\|^2_{L^2} \leq c\,2^{-2m\alpha}$. If $N \geq n(1 \wedge \theta\Delta)$, the best bias-variance compromise is obtained for
$$m \sim \frac{1}{2j+1+2\alpha}\log_2(n(1 \vee \theta\Delta))$$
and then
$$\mathbb{E}\|\hat g_{j,m,N} - g_j\|^2_{L^2} \leq (n \vee n\Delta)^{-2\alpha/(2\alpha+2j+1)}.$$
Rao [22] builds estimators of the successive derivatives $f^{(j)}$ for independent variables. These estimators converge with rate $n^{-2\alpha/(2\alpha+2j+1)}$.
2.4 Risk of the adaptive estimator on a compact set

An additional assumption for the process $(X_t)$ is needed:

Assumption M3. If the process $(X_t)_{t\geq 0}$ is arithmetically $\beta$-mixing, then the constant $\theta$ defined in (2.2) is such that $\theta > 3$.

Let us set $\mathcal{M}_{j,n} = \{m, D_m \leq D_{j,n}\}$ where $D_{j,n} \leq (n\Delta \wedge n)^{1/(2j+2)}$ is the maximal dimension. For any $m \in \mathcal{M}_{j,n}$, an estimator $\hat g_{j,m} \in S_m$ of $g_j = f^{(j)}$ is computed. Let us introduce a penalty function $\mathrm{pen}_j(m)$ depending on $D_m$ and $n$:
$$\mathrm{pen}_j(m) \geq \kappa\beta_0\psi_j \frac{D_m^{2j+1}}{n}\left(1 \vee \frac{1}{\theta\Delta}\right).$$
Then we construct an adaptive estimator $\tilde g_j := \hat g_{j,\hat m_j}$, where $\hat m_j$ is chosen such that
$$\hat m_j = \arg\min_{m \in \mathcal{M}_{j,n}} \left[\gamma_{j,n}(\hat g_{j,m}) + \mathrm{pen}_j(m)\right].$$
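The selection rule above is straightforward to implement once the contrast values of the collection are available. A minimal sketch follows; the numerical values of $\kappa$, $\beta_0$, $\psi_j$ and $\theta$ below are illustrative placeholders only (in practice these constants are unknown and the penalty constant is calibrated, e.g. by the slope heuristic):

```python
import numpy as np

def select_model(contrast_values, dims, j, n, Delta,
                 kappa=2.0, beta0=1.0, psi_j=1.0, theta=1.0):
    """Penalized model selection: pen_j(m) = kappa*beta0*psi_j * D_m^{2j+1}/n * (1 v 1/(theta*Delta)).
    Returns the index of the model minimising contrast + penalty, and the penalties."""
    dims = np.asarray(dims, dtype=float)
    pen = kappa * beta0 * psi_j * dims ** (2 * j + 1) / n * max(1.0, 1.0 / (theta * Delta))
    crit = np.asarray(contrast_values, dtype=float) + pen
    return int(np.argmin(crit)), pen
```

The contrast values decrease with the dimension while the penalty increases, so the criterion realises the bias-variance trade-off automatically.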
Theorem 2.3: Adaptive estimation on a compact set. There exists a universal constant $\kappa$ such that, if Assumptions M1-M3 and S1 are satisfied:
$$\mathbb{E}\|\tilde g_j - g_j\|^2_{L^2} \leq C \inf_{m \in \mathcal{M}_{j,n}} \left[\|g_{j,m} - g_j\|^2_{L^2} + \mathrm{pen}_j(m)\right] + \frac{c}{n}\left(1 \vee \frac{1}{\Delta}\right).$$
Comte and Merlevède [5] obtain similar results when $j = 0$ and the sampling interval $\Delta$ is fixed, and their remainder term is smaller: it is $1/n$ and not $\ln^2(n)/n$.

The penalty function depends on $\beta_0$ and $\theta$. Unfortunately, these two constants are difficult to estimate. However, the slope heuristic defined in Arlot and Massart [1] enables us to choose automatically a constant $\lambda$ such that the penalty $\lambda D_m^{2j+1}/(n\Delta)$ is good. It is also possible to use the resampling penalties of Lerasle [18].

2.5 Risk of the adaptive estimator on $\mathbb{R}$
Let us denote $\mathcal{M}_{j,n} = \{m, 2^m \leq D_{j,n}\}$ with $D_{j,n}^{2j+2} \leq n\Delta \wedge n$, and fix $N = N_n = (n \wedge n\Delta)$. For any $m \in \mathcal{M}_{j,n}$, an estimator $\hat g_{j,m,N_n} \in S_{m,N_n}$ of $g_j$ is computed. The best dimension $\hat m_j$ is chosen such that
$$\hat m_j = \arg\min_{m \in \mathcal{M}_{j,n}} \left[\gamma_{j,n}(\hat g_{j,m,N_n}) + \mathrm{pen}_j(m)\right] \quad\text{where}\quad \mathrm{pen}_j(m) = c\psi_j\left(\frac{2^{(2j+1)m}}{n} \vee \frac{2^{(2j+1)m}}{n\theta\Delta}\right)$$
and the resulting estimator is denoted by $\tilde g_j := \hat g_{j,\hat m_j,N_n}$.

Theorem 2.4: Adaptive estimation on $\mathbb{R}$. Under Assumptions M1-M3 and S2,
$$\mathbb{E}\|\tilde g_j - g_j\|^2_{L^2} \leq C \inf_{m \in \mathcal{M}_{j,n}} \left[\|g_{j,m} - g_j\|^2_{L^2} + \mathrm{pen}_j(m)\right] + \frac{c}{n}\left(1 \vee \frac{1}{\Delta}\right)$$
where $c$ depends on $\psi_j$, $\beta_0$ and $\theta$.

3 Case of stationary diffusion processes
Let us consider the stochastic differential equation (SDE):
$$dX_t = b(X_t)dt + \sigma(X_t)dW_t, \quad X_0 = \eta, \tag{3.1}$$
where $\eta$ is a random variable and $(W_t)_{t\geq 0}$ a Brownian motion independent of $\eta$. The drift function $b: \mathbb{R} \to \mathbb{R}$ is unknown and the diffusion coefficient $\sigma: \mathbb{R} \to \mathbb{R}^{+*}$ is known. The process $(X_t)_{t\geq 0}$ is assumed to be strictly stationary, ergodic and $\beta$-mixing. Obviously, we can construct estimators of the successive derivatives of the stationary density using the previous section. But in this section, we use the properties of a diffusion process to compute a new estimator of the first derivative of the stationary density. If the sampling interval $\Delta$ is small, this new estimator converges faster than the previous one.

3.1 Model and Assumptions

The process $(X_t)_{t\geq 0}$ is observed at discrete times $t = 0, \Delta, \ldots, n\Delta$.

Assumption M4. The functions $b$ and $\sigma$ are globally Lipschitz and $\sigma \in C^1$. Assumption M4 ensures the existence and uniqueness of a solution of the SDE (3.1).

Assumption M5. The diffusion coefficient $\sigma$ belongs to $C^1$, is bounded and positive: there exist constants $\sigma_0$ and $\sigma_1$ such that $\forall x \in \mathbb{R}$, $0 < \sigma_1 \leq \sigma(x) \leq \sigma_0$.
Assumption M6. There exist constants $r > 0$ and $1 \leq \alpha \leq 2$ such that $xb(x) \leq -r|x|^{\alpha}$ for $|x|$ large enough.

Under Assumptions M4-M6, there exists a stationary density
$f$ for the SDE (3.1), and
$$f(x) \propto \sigma^{-2}(x)\exp\left(2\int_0^x b(s)\sigma^{-2}(s)\,ds\right). \tag{3.2}$$
Then $f$ has moments of any order and:
$$\int |f'(x)|^2 dx < \infty, \quad \forall m \in \mathbb{N},\ \int |x|^m |f'(x)|\,dx < \infty, \tag{3.3}$$
$$\forall m \in \mathbb{N},\ \|x^m f(x)\|_\infty < \infty, \quad \|b^4(x)f(x)\|_\infty < \infty \quad\text{and}\quad \int \exp(|b(x)|)f(x)\,dx < \infty. \tag{3.4}$$

Assumption M7. The process is stationary: $\eta \sim f$.

According to Pardoux and Veretennikov [21], Proposition 1 p. 1063, under Assumptions M5-M6, the process $(X_t)$ is exponentially $\beta$-mixing: there exist constants $\beta_0$ and $\theta$ such that $\beta_X(t) \leq \beta_0 e^{-\theta t}$. Moreover, Gloter [11] proves the following property:

Proposition 3.1. Let us set $\mathcal{F}_t = \sigma(\eta, W_s, s \leq t)$. Under Assumptions M4 and M7, for any $k \geq 1$, there exists a constant $c(k)$ depending on $b$ and $\sigma$ such that:
$$\forall h,\ 0 < h \leq 1,\ \forall t \geq 0: \quad \mathbb{E}\left(\sup_{s \in [t,t+h]} |b(X_s) - b(X_t)|^k \,\Big|\, \mathcal{F}_t\right) \leq c(k)\,h^{k/2}\left(1 + |X_t|^k\right).$$

Remark 3.1. To estimate $f'$, it is enough to have an estimator of $2bf$ and an estimator of $f$. Indeed, according to equation (3.2), the first derivative $f'$ satisfies:
$$\frac{f'(x)}{f(x)} = \frac{2b(x)}{\sigma^2(x)} - 2\frac{\sigma'(x)}{\sigma(x)}.$$
By assumption, the diffusion coefficient $\sigma$ is known. Besides, according to Assumptions M4 and M5, $\sigma'$ and $\sigma^{-1}$ are bounded. As we have already constructed an estimator of $f = g_0$ in Section 2, it remains to estimate $2bf$. In this section, we construct an estimator $\tilde h$ of $h := 2bf$, either on a compact set $[a_0, a_1]$, or on $\mathbb{R}$.
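As a numerical sanity check of formula (3.2), the unnormalised stationary density can be evaluated on a grid and compared with a known case. This is a minimal sketch under illustrative assumptions (grid choice, trapezoidal integration, and the Ornstein-Uhlenbeck check are not part of the paper):

```python
import numpy as np

def stationary_density(b, sigma, xs):
    """Stationary density f(x) proportional to sigma(x)^{-2} exp(2 int_0^x b/sigma^2), eq. (3.2),
    evaluated on an increasing grid xs (assumed to contain 0), then normalised to integrate to 1."""
    integrand = b(xs) / sigma(xs) ** 2
    # trapezoidal antiderivative along the grid, anchored so it vanishes at x = 0
    F = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(xs))))
    F = F - np.interp(0.0, xs, F)
    dens = np.exp(2.0 * F) / sigma(xs) ** 2
    dens = dens / np.sum((dens[1:] + dens[:-1]) / 2.0 * np.diff(xs))  # normalise
    return dens
```

For the Ornstein-Uhlenbeck drift $b(x) = -x$ with $\sigma \equiv 1$, this recovers the $\mathcal{N}(0, 1/2)$ density $\sqrt{1/\pi}\,e^{-x^2}$.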
3.2 Sequence of linear subspaces
Like in the previous section, estimators $\hat h_m$ of $h$ are computed on some linear subspaces $S_m$ or $S_{m,N}$, then a penalty function $\mathrm{pen}(m)$ is introduced to choose the best possible estimator $\tilde h$. If $h$ is estimated on a compact set $A = [a_0, a_1]$, the following assumption is needed:

Assumption S3: Estimation on a compact set.
1. The sequence of linear subspaces $S_m$ is increasing, $D_m = \dim(S_m) < \infty$ and $\forall m$, $S_m \subseteq L^2(A)$.
2. There exists a norm connection: for any $m \in \mathbb{N}$, any function $t \in S_m$ satisfies $\|t\|^2_\infty \leq \phi_0 D_m \|t\|^2_{L^2}$. Particularly, if we note $\Phi_m(x) = \sum_{\lambda \in \Lambda_m} (\varphi_{\lambda,m}(x))^2$ where $(\varphi_{\lambda,m}, \lambda \in \Lambda_m)$ is an orthonormal basis of $S_m$, then $\|\Phi^2_m\|_\infty \leq \phi_0 D_m$.
3. There exists $r \geq 1$ such that, for any function $t$ belonging to $B^{\alpha}_{2,\infty}$ with $\alpha \leq r$: $\|t - t_m\|^2_{L^2} \leq D_m^{-2\alpha}$, where $t_m$ is the orthogonal projection of $t$ over $S_m$.

In the Appendix, several sequences of linear subspaces satisfying this assumption are given. To estimate $h$ on $\mathbb{R}$, an increasing sequence of linear subspaces $S_m = \mathrm{Vect}(\varphi_{\lambda,m}, \lambda \in \mathbb{Z})$ (where $\{\varphi_{\lambda,m}\}_{\lambda \in \mathbb{Z}}$ is an orthonormal basis of $S_m$) is considered. As the dimension of those subspaces is infinite, the truncated subspaces $S_{m,N} = \mathrm{Vect}(\varphi_{\lambda,m}, \lambda \in \Lambda_{m,N})$ are used.

Assumption S4: Estimation on $\mathbb{R}$.
1. The sequence of linear subspaces $(S_m)$ is increasing.
2. The dimension of the subspace $S_{m,N}$ is $2^{m+1}N + 1$.
3. $\exists \phi_0$, $\forall m$, $\forall t \in S_m$: $\|t\|^2_\infty \leq \phi_0 2^m \|t\|^2_{L^2}$. Let us set $\Phi_m(x) = \sum_{\lambda \in \mathbb{Z}} (\varphi_{\lambda,m}(x))^2$, then $\|\Phi^2_m\|_\infty \leq \phi_0 2^m$, where $\phi_0$ is a constant independent of $N$.
4. For any function $t \in L^2 \cap L^1(\mathbb{R})$ such that $\int x^2 t^2(x)dx < +\infty$: $\|t_m - t_{m,N}\|^2_{L^2} \leq c\, 2^m/N$, where $t_m$ is the orthogonal ($L^2$) projection of $t$ over $S_m$ and $t_{m,N}$ its projection over $S_{m,N}$.
5. There exists $r \geq 1$ such that, for any function $t$ belonging to the unit ball of a Besov space $B^{\alpha}_{2,\infty}$ with $\alpha \leq r$: $\|t - t_m\|^2_{L^2} \leq c\,2^{-2m\alpha}$.

Proposition 3.2. Let us consider a function $\varphi$ generating an $r$-regular multi-resolution analysis of $L^2$ with $r \geq 0$. Let us set $S_m = \mathrm{Vect}\{\varphi_{\lambda,m}, \lambda \in \mathbb{Z}\}$ and $S_{m,N} = \mathrm{Vect}\{\varphi_{\lambda,m}, \lambda \in \Lambda_m\}$ where $\varphi_{\lambda,m}(x) = 2^{m/2}\varphi(2^m x - \lambda)$ and $\Lambda_m = \{\lambda \in \mathbb{Z}, |\lambda| \leq 2^m N\}$. Then the subspaces $S_{m,N}$ satisfy Assumption S4.

The function $\varphi(x) = \sin(x)/x$ also generates a multi-resolution analysis of $L^2(\mathbb{R})$, but it is not even 0-regular. However, it satisfies Assumption S4 if Sobolev spaces take the place of Besov spaces in Point 5. The definition of Sobolev spaces of regularity $\alpha$ is recalled here:
$$W^{\alpha} = \left\{g,\ \int_{-\infty}^{\infty} |g^*(x)|^2 \left(x^2 + 1\right)^{\alpha} dx < \infty\right\}$$
where $g^*$ is the Fourier transform of $g$.

3.3 Risk of the estimator with $m$ fixed

For any
$m \in \mathcal{M}_n$, where $\mathcal{M}_n = \{m, D_m \leq D_n\}$, an estimator $\hat h_m$ of $h = 2bf$ is computed. The maximal dimension $D_n$ is specified later. The following contrast function is considered:
$$\Gamma_n(t) = \|t\|^2_{L^2} - \frac{4}{n\Delta}\sum_{k=1}^n \left(X_{(k+1)\Delta} - X_{k\Delta}\right) t(X_{k\Delta}).$$
As $\Delta^{-1}\left(X_{(k+1)\Delta} - X_{k\Delta}\right) = I_{k\Delta} + Z_{k\Delta} + b(X_{k\Delta})$ with
$$I_{k\Delta} = \frac{1}{\Delta}\int_{k\Delta}^{(k+1)\Delta} \left(b(X_s) - b(X_{k\Delta})\right) ds \quad\text{and}\quad Z_{k\Delta} = \frac{1}{\Delta}\int_{k\Delta}^{(k+1)\Delta} \sigma(X_s)\,dW_s, \tag{3.5}$$
we have that
$$\mathbb{E}(\Gamma_n(t)) = \|t\|^2_{L^2} - 4\langle bf, t\rangle - 4\mathbb{E}\left(I_\Delta t(X_\Delta)\right).$$
According to Lemma 6.4, $|\mathbb{E}(I_{k\Delta} t(X_{k\Delta}))| \leq c\Delta^{1/2}$. Moreover, $h = 2bf$, so
$$\mathbb{E}(\Gamma_n(t)) = \|t\|^2_{L^2} - 2\langle h, t\rangle + O\left(\Delta^{1/2}\right).$$
This inequality justifies the choice of the contrast function if the sampling interval $\Delta$ is small. If Assumption S3 is satisfied, we consider the estimator
$$\hat h_m = \arg\min_{t \in S_m} \Gamma_n(t), \quad\text{and, under Assumption S4,}\quad \hat h_{m,N} = \arg\min_{t \in S_{m,N}} \Gamma_n(t).$$
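As in Section 2.2, the minimisation of $\Gamma_n$ is explicit in an orthonormal basis: for $t = \sum_\lambda a_\lambda \varphi_\lambda$, the minimum is attained at $\hat a_\lambda = \frac{2}{n\Delta}\sum_k (X_{(k+1)\Delta} - X_{k\Delta})\varphi_\lambda(X_{k\Delta})$. A minimal sketch with an illustrative orthonormal sine basis on $[a_0, a_1]$ (an assumption of this sketch, not the paper's trigonometric basis):

```python
import numpy as np

def estimate_h(X, Delta, a0, a1, D):
    """Least-square contrast estimator of h = 2bf on [a0, a1]:
    Gamma_n(t) = ||t||^2 - 4/(n*Delta) sum_k (X_{k+1} - X_k) t(X_k)
    is minimised at a_l = 2/(n*Delta) sum_k (X_{k+1} - X_k) phi_l(X_k)."""
    X = np.asarray(X)
    dX, X0 = np.diff(X), X[:-1]
    L = a1 - a0
    ls = np.arange(1, D + 1)
    phi = np.sqrt(2.0 / L) * np.sin(ls * np.pi * (X0[:, None] - a0) / L)
    coefs = 2.0 / (len(dX) * Delta) * (dX @ phi)

    def h_hat(x):
        v = np.sqrt(2.0 / L) * np.sin(ls * np.pi * (np.asarray(x)[:, None] - a0) / L)
        return v @ coefs

    return coefs, h_hat

def Gamma_n(coefs, X, Delta, a0, a1):
    """Empirical contrast Gamma_n(t) for t = sum_l coefs[l] phi_l."""
    X = np.asarray(X)
    dX, X0 = np.diff(X), X[:-1]
    L = a1 - a0
    ls = np.arange(1, len(coefs) + 1)
    tX = (np.sqrt(2.0 / L) * np.sin(ls * np.pi * (X0[:, None] - a0) / L)) @ coefs
    return float(coefs @ coefs - 4.0 / (len(dX) * Delta) * np.sum(dX * tX))
```

Because the contrast is an exact quadratic in the coefficients, the returned coefficients minimise it over the subspace, whatever the observed path.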
Theorem 3.1: Estimation on a compact set. Under Assumptions M4-M7 and S3,
$$\mathbb{E}\|\hat h_m - h\|^2_{L^2} \leq \|h_m - h\|^2_{L^2} + c\Delta + \left(\sigma_0^2\|f\|_\infty + \frac{2\beta_0\phi_0}{\theta}\right)\frac{D_m}{n\Delta}$$
where $h_m$ is the orthogonal projection of $h$ over $S_m$ and $c$ a constant depending on $b$ and $\sigma$. We remind that the $\beta$-mixing coefficient of the process $(X_t)$ is such that $\beta_X(t) \leq \beta_0 e^{-\theta t}$.

Theorem 3.2: Estimation on $\mathbb{R}$. Under Assumptions M4-M7 and S4,
$$\mathbb{E}\|\hat h_{m,N} - h\|^2_{L^2} \leq \|h_{m,N} - h\|^2_{L^2} + c\frac{2^m}{N} + c\Delta + \left(\|f\|_\infty + \frac{2\beta_0\phi_0}{\theta}\right)\frac{2^m}{n\Delta},$$
where $h_{m,N}$ is the orthogonal projection of $h$ on the space $S_{m,N}$. If $N = N_n = n\Delta$, then
$$\mathbb{E}\|\hat h_{m,N_n} - h\|^2_{L^2} \leq \|h_m - h\|^2_{L^2} + c\Delta + \left(\|f\|_\infty + \frac{2\beta_0\phi_0}{\theta}\right)\frac{2^m}{n\Delta}$$
where $h_m$ is the orthogonal projection of $h$ over $S_m$.
3.4 Optimisation of the choice of $m$

Under Assumption S3, if $h\mathbf{1}_A$ belongs to the unit ball of a Besov space $B^{\alpha}_{2,\infty}$, then $\|h - h_m\|^2_{L^2} \leq D_m^{-2\alpha}$. To minimise the bias-variance compromise, one has to choose $D_m \sim (n\Delta)^{1/(1+2\alpha)}$ and in that case the estimator risk satisfies:
$$\mathbb{E}\|\hat h_m - h\|^2_{L^2} \leq C(n\Delta)^{-2\alpha/(1+2\alpha)} + c\Delta.$$
Under Assumption S4, if $h$ belongs to $B^{\alpha}_{2,\infty}$, then $\|h - h_m\|^2_{L^2} \leq 2^{-2m\alpha}$ and
$$\mathbb{E}\|\hat h_{m,n\Delta} - h\|^2_{L^2} \leq C(n\Delta)^{-2\alpha/(1+2\alpha)} + c\Delta.$$

Remark 3.2. Dalalyan and Kutoyants [9] estimate the first derivative of the stationary density observed at continuous time (they observe $X_t$ for $t \in [0,T]$). In that framework, the diffusion coefficient $\sigma^2$ is known. The minimax rate of convergence of the estimator is $T^{-2\alpha/(1+2\alpha)}$. It is the rate that we obtain when $\Delta$ tends to 0. Let us set $\Delta \sim n^{-\beta}$. We obtain the following convergence table:

  $\beta$ | principal term of the bound | rate of convergence of the estimator
  $0 < \beta \leq 2\alpha/(4\alpha+1)$ | $\Delta$ | $n^{-\beta}$
  $2\alpha/(4\alpha+1) \leq \beta < 1$ | $(n\Delta)^{-2\alpha/(1+2\alpha)}$ | $n^{-2\alpha(1-\beta)/(1+2\alpha)}$

Those rates of convergence are the same as for the estimator of the drift. If $\beta \geq 1/2$, the dominating term in the risk bound is always $(n\Delta)^{-2\alpha/(1+2\alpha)}$. The rate of convergence is always smaller than $n^{-1/2}$. For fixed $(n, \Delta)$, if $\Delta \leq n^{-2\alpha/(4\alpha+3)}$, then the second estimator $\hat h_m$ converges faster than the first one $\hat g_{1,m}$. However, if the sampling interval $\Delta$ is larger than $n^{-2\alpha/(4\alpha+3)}$, it is better to use the first estimator.

3.5 Risk of the adaptive estimator on a compact set
For any $m \in \mathcal{M}_{n,A} = \{m, D_m \leq D_n\}$, where the maximal dimension $D_n$ is specified later, an estimator $\hat h_m \in S_m$ of $h$ is computed. Let us set
$$\mathrm{pen}(m) \geq \kappa\frac{D_m}{n\Delta}\left(1 + \frac{8\beta_0}{\theta}\right) \quad\text{and}\quad \hat m = \arg\min_{m \in \mathcal{M}_{n,A}} \left\{\Gamma_n(\hat h_m) + \mathrm{pen}(m)\right\}.$$
The resulting estimator is denoted by $\tilde h := \hat h_{\hat m}$. Let us consider the asymptotic framework:

Assumption S5. $n\Delta/\ln^2(n) \to \infty$ and $D_n^2 \leq n\Delta/\ln^2(n)$.

Theorem 3.3: Adaptive estimation on a compact set. There exists a constant $\kappa$ depending only on the chosen sequence of linear subspaces $(S_m)$ such that, under Assumptions M4-M7, S3 and S5:
$$\mathbb{E}\|\tilde h - h\|^2_{L^2} \leq C\inf_{m \in \mathcal{M}_{n,A}} \left\{\|h_m - h\|^2_{L^2} + \mathrm{pen}(m)\right\} + c\Delta + \frac{c'}{n\Delta}$$
where $C$ is a numerical constant, $c'$ depends on $\phi_0$ and $\|f\|_\infty$, and $c$ depends on $b$.

Remark 3.3. The estimator is only consistent if $\Delta \to 0$. Moreover, the adaptive estimator $\tilde h$ automatically realises the bias-variance compromise.

3.6 Risk of the adaptive estimator on $\mathbb{R}$
An estimator $\hat h_{m,n\Delta} \in S_{m,n\Delta}$ is computed for any $m \in \mathcal{M}_{n,\mathbb{R}} = \{m, 2^m \leq D_n\}$. The following penalty function is introduced:
$$\mathrm{pen}(m) \geq \kappa\frac{2^m}{n\Delta}\left(1 + \frac{2\beta_0}{\theta}\right)$$
and we set
$$\hat m = \arg\min_{m \in \mathcal{M}_n} \left\{\Gamma_n(\hat h_{m,n\Delta}) + \mathrm{pen}(m)\right\}.$$
Let us denote by $\tilde h_{n\Delta}$ the resulting estimator.

Theorem 3.4: Adaptive estimation on $\mathbb{R}$. There exists a constant $\kappa$ depending only on the sequence of linear subspaces $(S_m)$ such that, if Assumptions M4-M7, S4 and S5 are satisfied:
$$\mathbb{E}\|\tilde h_{n\Delta} - h\|^2_{L^2} \leq C\inf_{m \in \mathcal{M}_{n,\mathbb{R}}} \left\{\|h_m - h\|^2_{L^2} + \mathrm{pen}(m)\right\} + c\Delta + \frac{c'}{n\Delta}.$$
4 Drift estimation by quotient
If the process $(X_t)_{t\geq 0}$ is the solution of the stochastic differential equation (SDE)
$$dX_t = b(X_t)dt + dW_t$$
and satisfies Assumptions M4-M7, then $b = f'/(2f)$. An estimator of the drift by quotient can therefore be constructed. For high-frequency data, Comte et al. [6] build an adaptive drift estimator thanks to a penalized least-square method. Their estimator converges with the minimax rate $(n\Delta)^{-2\alpha/(2\alpha+1)}$ if $b$ belongs to the Besov space $B^{\alpha}_{2,\infty}$. On the contrary, there exist few results on the drift estimation when the sampling interval $\Delta$ is fixed. Gobet et al. [12] build a drift estimator for low-frequency data; however, their estimator is not easy to implement. In this section, a drift estimator by quotient is constructed and its risk is computed.

We estimate $f$ and $f'$ on $\mathbb{R}$ in order to avoid convergence problems on the boundaries of the compact. Let us consider two sequences of linear subspaces $(S_{0,m}, m \in \mathcal{M}_{0,n})$ and $(S_{1,m}, m \in \mathcal{M}_{1,n})$ satisfying Assumption S2 for $k = 1$ and such that
$$\mathcal{M}_{0,n} = \left\{m_0,\ \log(n) \leq 2^{m_0} \leq \eta\sqrt{n\Delta}/\log(n\Delta)\right\} \quad\text{and}\quad \mathcal{M}_{1,n} = \left\{m_1,\ 2^{m_1} \leq (n\Delta)^{1/5}\right\}$$
where the constant $\eta$ depends neither on $b$ nor on $\sigma$.

As in Section 2, adaptive estimators $\tilde f := \tilde g_{0,n\Delta}$ and $\tilde g := \tilde g_{1,n\Delta}$ of $f = g_0$ and $f' = g_1$ are computed. As $b$ belongs to $B^{\alpha}_{2,\infty}$, $f$ and $f'$ also belong to $B^{\alpha}_{2,\infty}$, and the best bias-variance compromise for $\hat g_{0,m}$ is obtained for $2^{m_0} \sim (n\Delta)^{1/(1+2\alpha)}$, and for $\hat g_{1,m}$ it is obtained for $2^{m_1} \sim (n\Delta)^{1/(3+2\alpha)}$. If $\alpha > 1$, the restrictions on $\mathcal{M}_{0,n}$ and $\mathcal{M}_{1,n}$ do not modify the rate of convergence of our estimators. Let us consider the estimator
$$\tilde b = \frac{\tilde g}{2\tilde f} \quad\text{if } \tilde g \leq 2n\Delta\,\tilde f, \quad\text{and}\quad \tilde b = 0 \text{ otherwise}.$$
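A minimal sketch of the quotient step, given plug-in estimates of $f$ and $f'$ (the vectorised callables and the exact Ornstein-Uhlenbeck check below are assumptions of this illustration):

```python
import numpy as np

def quotient_drift(f_hat, g_hat, n, Delta):
    """Quotient drift estimator: b~(x) = g~(x) / (2 f~(x)) when the truncation
    condition g~(x) <= 2*n*Delta*f~(x) holds (and f~(x) > 0), else b~(x) = 0."""
    def b_hat(x):
        f, g = np.asarray(f_hat(x)), np.asarray(g_hat(x))
        ok = (g <= 2.0 * n * Delta * f) & (f > 0)
        return np.where(ok, g / np.where(f > 0, 2.0 * f, 1.0), 0.0)
    return b_hat
```

With the exact Ornstein-Uhlenbeck densities $f(x) = \sqrt{1/\pi}\,e^{-x^2}$ and $f'(x) = -2xf(x)$ (drift $b(x) = -x$), the quotient returns $-x$ wherever the truncation is inactive.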
Theorem 4.1. If $b \in B^{\alpha}_{2,\infty}$ with $\alpha > 1$, then
$$\mathbb{E}\|\tilde b - b\|^2_{L^2} \leq c\left(\mathbb{E}\|\tilde f - f\|^2_{L^2} + \mathbb{E}\|\tilde g - g\|^2_{L^2} + \frac{1}{n\Delta}\right)$$
where the constant $c$ depends neither on $n$ nor on $\Delta$. Then, by Theorem 2.4,
$$\mathbb{E}\|\tilde b - b\|^2_{L^2} \leq c(n\Delta)^{-2\alpha/(2\alpha+3)}.$$
So $\tilde b$ converges towards $b$ with the minimax rate defined by Gobet et al. [12].

5 Simulations
5.1 Models

Ornstein-Uhlenbeck: Let us consider the SDE $dX_t = -bX_t\,dt + dW_t$ with $b > 0$. The stationary density is a Gaussian distribution $\mathcal{N}(0, (2b)^{-1})$ and its derivative is
$$f'(x) = -\frac{2b^{3/2}}{\sqrt{\pi}}\,x\,e^{-bx^2}.$$

Hyperbolic tangent: We consider a process $(X_t)$ satisfying the SDE $dX_t = -a\tanh(aX_t)\,dt + dW_t$. The stationary density related to this SDE is
$$f(x) = \frac{a}{2\cosh^2(ax)} \quad\text{and}\quad f'(x) = -\frac{a^2\tanh(ax)}{\cosh^2(ax)}.$$

Square root: Let us consider the diffusion with parameters
$$b(x) = -\frac{ax}{\sqrt{1+x^2}} \quad\text{and}\quad \sigma = 1.$$
The stationary density is $f(x) = c\exp\left(-2a\sqrt{1+x^2}\right)$ and $f'(x) = 2b(x)f(x)$.

Model 4: Let us consider the SDE
$$dX_t = -\frac{2aX_t}{1+X_t^2}\,dt + dW_t.$$
The process $(X_t)_{t\geq 0}$ satisfies neither Assumption M6 nor the sufficient conditions to be exponentially $\beta$-mixing. If $a > 1/2$, it admits the stationary density
$$f(x) = c_a\left(1+x^2\right)^{-2a} \quad\text{and}\quad f'(x) = -\frac{4c_a\,a\,x}{(1+x^2)^{1+2a}}.$$

Sine function: Let us consider the diffusion with parameters
$$b(x) = \sin(ax) - \frac{x}{\sqrt{1+x^2}} \quad\text{and}\quad \sigma = 1.$$
Its stationary density $f$ satisfies
$$f(x) = c_a\exp\left(-2a^{-1}\cos(ax) - 2\sqrt{1+x^2}\right) \quad\text{and}\quad f'(x) = 2b(x)f(x).$$
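The Ornstein-Uhlenbeck model can be simulated exactly with Gaussian variables (as in the experiments), since its transition over a step $\Delta$ is the AR(1) recursion $X_{(k+1)\Delta} = e^{-b\Delta}X_{k\Delta} + \sqrt{(1-e^{-2b\Delta})/(2b)}\,\varepsilon_k$ with $\varepsilon_k \sim \mathcal{N}(0,1)$. A minimal sketch (the seed and tolerances in the check are assumptions of the illustration):

```python
import numpy as np

def simulate_ou(b, n, Delta, rng):
    """Exact simulation of the stationary OU process dX_t = -b X_t dt + dW_t
    at times 0, Delta, ..., n*Delta: start from N(0, 1/(2b)) and apply the
    exact Gaussian AR(1) transition."""
    rho = np.exp(-b * Delta)
    s = np.sqrt((1.0 - rho ** 2) / (2.0 * b))      # conditional standard deviation
    X = np.empty(n + 1)
    X[0] = rng.normal(0.0, np.sqrt(1.0 / (2.0 * b)))
    eps = rng.normal(size=n)
    for k in range(n):
        X[k + 1] = rho * X[k] + s * eps[k]
    return X
```

The simulated path has stationary variance $1/(2b)$ and lag-one autocorrelation $e^{-b\Delta}$, which a long run reproduces to Monte-Carlo accuracy.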
5.2 Estimation of the first derivative $f'$

Here, we estimate the first derivative $f'$ of the stationary density on a compact set and we compare the two estimators $\tilde g_1$ and $\tilde h$ defined in Sections 2 and 3. The subspaces $S_m$ are generated by trigonometric polynomials: those functions are orthonormal, very regular and enable very fast computations: to compute $\hat g_{1,m}$ (resp. $\hat h_m$) when $\hat g_{1,m-1}$ (resp. $\hat h_{m-1}$) is known, it is only necessary to compute one or two coefficients.

Figures 1-5 show the differences between the two estimators: $\tilde g_1$ converges whatever the sampling interval, and $\tilde h$ converges only if $\Delta$ is small. In that case, $\tilde h$ is better than $\tilde g_1$: the variance term is greater for $\hat g_{1,m}$ (it is proportional to $D_m^3/(n\Delta)$) than for $\hat h_m$ (it is proportional to $D_m/(n\Delta)$).

In Tables 1-3, for each value of $n$ and $\Delta$, 50 exact simulations of a diffusion process are realized using the retrospective exact algorithm of Beskos et al. [3] (except for the Ornstein-Uhlenbeck process, which is simulated using Gaussian variables). For each path, we compute the empirical risks of the estimators $\tilde g_1$ and $\tilde h$:
$$\|\tilde g_1 - g_1\|^2_E := \frac{1}{M}\sum_{k=1}^M \left(\tilde g_1(x_k) - g_1(x_k)\right)^2 \quad\text{and}\quad \|\tilde h - h\|^2_E := \frac{1}{M}\sum_{k=1}^M \left(\tilde h(x_k) - h(x_k)\right)^2,$$
where the points $x_k$ are equidistributed over $A$. To check that the estimator is adaptive, the oracles
$$or_g = \frac{\|\tilde g_1 - g_1\|^2_E}{\min_{m \in \mathcal{M}_n}\|\hat g_{1,m} - g_1\|^2_E} \quad\text{and}\quad or_h = \frac{\|\tilde h - h\|^2_E}{\min_{m \in \mathcal{M}_n}\|\hat h_m - h\|^2_E}$$
are computed. The mean time of simulation $t_{sim}$ of a process is measured, and for each kind of estimator, the means of the empirical risks $ris_g$ or $ris_h$, of the oracles $\overline{or}_g$ or $\overline{or}_h$ and of the computation times $t_g$ or $t_h$ are computed.

The complexity of the retrospective exact algorithm of Beskos et al. [3] is proportional to $n e^{c\Delta}$ where $c$ depends on the model. Table 3 shows that for Model 4, $t_{sim}$ increases when $n$ or $\Delta$ increases. For the hyperbolic tangent, the time of simulation only depends on $n$ because the constant $c$ is exactly equal to 0. The Ornstein-Uhlenbeck process is not simulated thanks to the retrospective algorithm, so its time of simulation does not depend on $\Delta$.

Tables 1-3 show that the first estimator $\tilde g_1$ is always faster to compute than the second one $\tilde h$. This is mainly because we have fewer models to test: for the first estimator, the maximal dimension $D_n$ is bounded by $(n\Delta)^{1/4}$, whereas for the second estimator, $D_n \leq (n\Delta)^{1/2}$. When $\Delta = 1$, $\tilde g_1$ is better than $\tilde h$. If not, the estimators are similar and become better when $n\Delta$ increases. For the Ornstein-Uhlenbeck process and the hyperbolic tangent, the process $(X_t)_{t\geq 0}$ is exponentially $\beta$-mixing and $\tilde g_1$ is in general better than $\tilde h$. For Model 4, the process $(X_t)$ is not exponentially $\beta$-mixing and, when $\Delta < 1$, $\tilde h$ is (in general) better than $\tilde g_1$.
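The empirical risk and oracle ratios used in the tables can be computed as follows (a minimal sketch; the estimator and target are assumed to be vectorised callables):

```python
import numpy as np

def empirical_risk(est, truth, xs):
    """||est - truth||_E^2 = (1/M) sum_k (est(x_k) - truth(x_k))^2 over M points x_k of A."""
    return float(np.mean((est(xs) - truth(xs)) ** 2))

def oracle_ratio(adaptive_risk, collection_risks):
    """Empirical risk of the adaptive estimator over the best risk in the collection;
    a ratio close to 1 indicates that the penalty selects a near-optimal model."""
    return adaptive_risk / min(collection_risks)
```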
Figure 1: Ornstein-Uhlenbeck: estimation of $f'$ (two panels: $n = 10^4$, $\Delta = 1$ and $n = 10^5$, $\Delta = 10^{-2}$).

Figure 2: Hyperbolic tangent: estimation of $f'$ (two panels: $n = 10^4$, $\Delta = 1$ and $n = 10^5$, $\Delta = 10^{-2}$).

Figure 3: Square root: estimation of $f'$ (two panels: $n = 10^4$, $\Delta = 1$ and $n = 10^4$, $\Delta = 10^{-1}$).

Legend for Figures 1-5: solid line: true derivative; dotted line: estimator $\tilde g_1$ (differentiating an estimator of $f$); dash-dotted line: estimator $\tilde h$ (using $f' = 2bf$).

Figure 4: Model 4: estimation of $f'$ (two panels: $n = 10^4$, $\Delta = 1$ and $n = 10^4$, $\Delta = 10^{-1}$).

Figure 5: Sine function: estimation of $f'$ (two panels: $n = 10^4$, $\Delta = 1$ and $n = 10^5$, $\Delta = 10^{-2}$).
Table 1: Estimation of $f'$ for Ornstein-Uhlenbeck

                          first estimator                   second estimator
  n      Delta    t_sim    ris_g     or_g    t_g     ris_h     or_h    t_h
  10^4   1        0.10     0.00025   2.5     0.33    0.0090    1.0     0.73
  10^4   10^-1    0.10     0.0010    1.8     0.17    0.00091   1.2     0.68
  10^4   10^-2    0.099    0.0060    2.6     0.097   0.0067    2.3     0.66
  10^3   1        0.0027   0.0023    4.2     0.034   0.0097    1.0     0.12
  10^3   10^-1    0.0025   0.0058    3.0     0.020   0.0077    2.3     0.12
  10^3   10^-2    0.0026   0.037     3.0     0.0070  0.078     4.0     0.035
  10^2   1        0.00022  0.0080    2.0     0.013   0.019     1.5     0.062
  10^2   10^-1    0.00021  0.035     2.4     0.0046  0.078     5.5     0.019
  10^2   10^-2    0.00023  0.067     2.1     0.0048  0.11      1.4     0.0068

Table 2: Hyperbolic tangent: estimation of $f'$

                          first estimator                   second estimator
  n      Delta    t_sim    ris_g     or_g    t_g     ris_h     or_h    t_h
  10^4   1        6.2      0.0027    1.1     0.33    0.0087    1.03    0.71
  10^4   10^-1    1.2      0.0018    3.7     0.17    0.0014    1.4     0.68
  10^4   10^-2    1.7      0.0065    2.8     0.10    0.0056    1.8     0.65
  10^3   1        0.61     0.0040    1.5     0.034   0.0097    1.1     0.12
  10^3   10^-1    0.19     0.0067    2.8     0.020   0.0087    2.1     0.12
  10^3   10^-2    0.16     0.022     2.5     0.0068  0.036     2.6     0.03
  10^2   1        0.066    0.011     1.7     0.014   0.021     1.80    0.063
  10^2   10^-1    0.020    0.023     2.3     0.0048  0.044     3.4     0.020
  10^2   10^-2    0.018    0.033     1.6     0.0054  0.078     1.2     0.0080

Table 3: Model 4: estimation of $f'$

                          first estimator                   second estimator
  n      Delta    t_sim    ris_g     or_g    t_g     ris_h     or_h    t_h
  10^4   1        6.6      0.00073   1.8     0.33    0.020     1.0     0.71
  10^4   10^-1    2.3      0.0032    4.2     0.17    0.0019    1.3     0.70
  10^4   10^-2    2.1      0.016     3.8     0.10    0.0090    1.7     0.68
  10^3   1        0.67     0.0049    2.4     0.035   0.022     1.1     0.12
  10^3   10^-1    0.24     0.017     3.6     0.021   0.013     2.0     0.12
  10^3   10^-2    0.18     0.043     2.0     0.0071  0.094     3.5     0.035
  10^2   1        0.071    0.048     8.1     0.014   0.041     1.6     0.065
  10^2   10^-1    0.022    0.046     1.91    0.0049  0.077     3.1     0.02
  10^2   10^-2    0.019    0.070     1.4     0.005   0.12      1.1     0.0069

$ris_g$ and $ris_h$: average empirical risks for $\tilde g_1$ and $\tilde h$; $\overline{or}_g$ and $\overline{or}_h$: average oracles (empirical risk of $\tilde g_1$ (resp. $\tilde h$) over the empirical risk of the best estimator $\hat g_{1,m}$ (resp. $\hat h_m$)); $t_g$ and $t_h$: average time of computation of $\tilde g_1$ and $\tilde h$ (times in seconds).

Two drift estimators are compared: the estimator by quotient defined in Section 4, denoted here by $\tilde b_{quot}$, and a penalized least-square estimator denoted by $\tilde b_{pls}$. The construction of the last estimator is done in Comte et al. [6]. It only converges when the sampling interval $\Delta$ is small, but in that case, it reaches the minimax rate of convergence: if $b$ belongs to a Besov space $B^{\alpha}_{2,\infty}$, then the risk of the estimator $\tilde b_{pls}$ is bounded by
$$\mathbb{E}\|\tilde b_{pls} - b\|^2_{L^2} \leq C\left((n\Delta)^{-2\alpha/(2\alpha+1)} + \Delta\right).$$
Figures 6-10 show that, for low-frequency data, the quotient estimator $\tilde b_{quot}$ is better than $\tilde b_{pls}$. For various values of $n$ and $\Delta$, 50 exact simulations of $(X_0, \ldots, X_{n\Delta})$ are realized and the estimators $\tilde b_{quot}$ and $\tilde b_{pls}$ are computed. Tables 4 and 5 give the average empirical risk for these estimators and the average computation times. The lowest risk is set in bold.

Tables 4 and 5 underline that the first estimator is always faster than the second one: to compute $\tilde b_{pls}$, we have to invert an $m \times m$ matrix over each space $S_m$. When $\Delta$ is small and the time of observation $n\Delta$ is large, the penalized least square contrast estimator converges better than the quotient estimator. Of course, when $\Delta$ is fixed, $\tilde b_{quot}$ converges faster than $\tilde b_{pls}$.
quotientestimator least-squareestimatorn
∆
ris
quot
t
quot
ris
pls
t
pls
10
4
1 0.0022 3.6 0.089 7.310
4
10
−1
0.0086 1.2 0.0049 1.710
4
10
−2
0.069 0.4 0.031 0.710
3
1 0.011 0.2 0.090 0.710
3
10
−1
0.061 0.06 0.022 0.310
3
10
−2
0.31 0.02 0.50 0.00410
2
1 0.073 0.03 0.085 0.310
2
10
−1
0.25 0.01 0.34 0.003Table5: Hyperboli tangent: estimationof
b
quotientestimator least-squareestimatorn
∆
ris
quot
t
quot
ris
pls
t
pls
10
4
1 0.0023 3.6 0.086 7.210
4
10
−1
0.019 1.2 0.017 1.810
4
10
−2
0.078 0.4 0.052 0.710
3
1 0.036 0.2 0.18 0.710
3
10
−1
0.12 0.06 0.065 0.310
3
10
−2
0.17 0.02 0.61 0.00410
2
1 0.24 0.03 0.10 0.310
2
10
−1
0.20 0.01 0.53 0.003ris
quot
andris
pls
: averageempiri alrisksfor˜b
quot
and˜b
pls
t
quot
andt
pls
: average omputationtimesof˜b
quot
and˜b
pls
(timesin se onds)6 Proofs
6.1 Important lemmas

Lemma 6.1: Variance of $\beta$-mixing variables. Let us set
$$A = \frac{1}{n}\sum_{k=1}^n \left[g(X_{k\Delta}) - \mathbb{E}\left(g(X_{k\Delta})\right)\right].$$
Figure 6: Ornstein-Uhlenbeck: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Figure 7: Hyperbolic tangent: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Figure 8: Square root: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Legend for Figures 6-10: solid line: true drift $b$; dashed line: estimation of $b$ by quotient, $\tilde b_{quot}$; dotted line: estimation of $b$ as in Comte et al. [6], $\tilde b_{pls}$.

Figure 9: Model 4: estimation of $b$ ($n = 10^4$, $\Delta = 10^{-1}$).

Figure 10: Sine function: estimation of $b$