Nonparametric estimation of the derivatives of the stationary density for stationary processes


HAL Id: hal-00507025

https://hal.archives-ouvertes.fr/hal-00507025

Submitted on 30 Jul 2010

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Nonparametric estimation of the derivatives of the stationary density for stationary processes

Emeline Schmisser

To cite this version:

Emeline Schmisser. Nonparametric estimation of the derivatives of the stationary density for stationary processes. ESAIM: Probability and Statistics, EDP Sciences, 2013, 17, pp. 33-69. ⟨10.1080/02331888.2011.591931⟩. ⟨hal-00507025⟩


Emeline Schmisser
Université Paris Descartes, Laboratoire MAP5
Emeline.Schmisser@math-info.univ-paris5.fr

Abstract

In this article, our aim is to estimate the successive derivatives of the stationary density f of a strictly stationary and β-mixing process (X_t)_{t≥0}. This process is observed at discrete times t = 0, ∆, ..., n∆. The sampling interval ∆ can be fixed or small. We use a penalized least-square approach to compute adaptive estimators. If the derivative f^{(j)} belongs to the Besov space B^α_{2,∞}, then our estimator converges at rate (n∆)^{−α/(2α+2j+1)}. Then we consider a diffusion with known diffusion coefficient. We use the particular form of the stationary density to compute an adaptive estimator of its first derivative f′. When the sampling interval ∆ tends to 0, and when the diffusion coefficient is known, the convergence rate of our estimator is (n∆)^{−α/(2α+1)}. When the diffusion coefficient is known, we also construct a quotient estimator of the drift for low-frequency data.

Keywords: derivatives of the stationary density, diffusion processes, mixing processes, nonparametric estimation, stationary processes

AMS Classification: 62G05, 60G10

1 Introduction

In this article, we consider a strictly stationary, ergodic and β-mixing process (X_t, t ≥ 0) observed at discrete times with sampling interval ∆. The jth order derivatives f^{(j)} (j ≥ 0) of the stationary density f are estimated by model selection. Adaptive estimators of f^{(j)} are constructed thanks to a penalized least-square method and the L² risk of these estimators is computed.

Numerous articles deal with nonparametric estimation of the stationary density (or of the derivatives of the stationary density) for a strictly stationary and mixing process observed in continuous time. For instance, Bosq [4] uses a kernel estimator, Comte and Merlevède [5] realize a projection estimation and Leblanc [16] utilizes wavelets. Under Castellana and Leadbetter's conditions, when f belongs to a Besov space B^α_{2,∞}, the estimator of f converges at the parametric rate T^{−1/2} (where T is the time of observation). The nonparametric estimation of the stationary density of a stationary and mixing process observed at discrete times t = 0, ∆, ..., n∆ has also been studied, especially when the sampling interval ∆ is fixed. For example, Masry [19] constructs wavelet estimators, and Comte and Merlevède [7] and Lerasle [17] use a penalized least-square contrast method. The L² rate of convergence of the estimator is in that case n^{−α/(2α+1)}. Comte and Merlevède [5] demonstrate that, if the sampling interval ∆ → 0, the penalized estimator of f converges with rate (n∆)^{−α/(2α+1)} and, under the conditions of Castellana and Leadbetter, the parametric rate of convergence is reached.

There are fewer papers about the estimation of the derivatives of the stationary density, and the main results are for independent and identically distributed random variables. For instance, Rao [22] estimates the successive derivatives f^{(j)} of a multi-dimensional process by a wavelet method. He bounds the L² risk of his estimator and computes the rate of convergence on Sobolev spaces. This estimator converges with rate n^{−α/(2α+2j+1)}. Hosseinioun et al. [13] estimate the partial derivatives of the stationary density of a mixing process by a wavelet method, and their estimators converge with rate (n∆)^{−α/(2α+1+2j)}.

Classical examples of β-mixing processes are diffusions: if (X_t) is solution of the stochastic differential equation

dX_t = b(X_t) dt + σ(X_t) dW_t  and  X_0 = η,

then, with some classical additional conditions on b and σ, (X_t) is exponentially β-mixing. Dalalyan and Kutoyants [9] estimate the first derivative of the stationary density for a diffusion process observed at continuous time. They prove that the minimax rate of convergence is T^{−2α/(2α+1)}, where T is the time of observation. This is the same rate of convergence as for the nonparametric estimator of f.

A possible application is, for diffusion processes, the estimation of the drift function b by quotient. Indeed, when σ = 1, we have that f′ = 2bf. The drift estimation is well known when the diffusion is observed at continuous time or for high-frequency data (see Comte et al. [6] for instance), but it is far more difficult when ∆ is fixed. Gobet et al. [12] build nonparametric estimators of b and σ when ∆ is fixed and prove that their estimators reach the minimax L² risk. However, their estimators are built with eigenvalues of the infinitesimal generator and are difficult to implement.

In this paper, in a first step, we consider a strictly stationary and β-mixing process (X_t)_{t≥0} observed at discrete times t = 0, ∆, ..., n∆. The successive derivatives f^{(j)} (0 ≤ j ≤ k) of the stationary density f are estimated either on a compact set, or on R, thanks to a penalized least-square method. We introduce a sequence of increasing linear subspaces (S_m) and, for each m, we construct an estimator of f^{(j)} by minimising a contrast function over S_m. Then, a penalty function pen(m) is introduced to select an estimator of f^{(j)} in the collection. When f^{(j)} ∈ B^α_{2,∞}, the L² risk of this estimator converges with rate (n∆)^{−2α/(2α+2j+1)} and the procedure does not require the knowledge of α. When j = 0, this is the rate of convergence obtained by Comte and Merlevède [7, 5]. Moreover, when α is known, Rao [22] obtained a rate of convergence n^{−2α/(2α+2j+1)} for independent variables.

In a second step, we assume that the process (X_t) is solution of a stochastic differential equation with known diffusion coefficient σ. Then f′ can be estimated by estimating 2bf and f. An estimator of 2bf is built either on a compact set, or on R, by a penalized least-square contrast method. It only converges when the sampling interval ∆ → 0, but in this case, its rate of convergence is better than for the previous estimator: it is (n∆)^{−2α/(2α+1)} when f′ ∈ B^α_{2,∞} (and not (n∆)^{−2α/(2α+3)}). This is the minimax rate obtained by Dalalyan and Kutoyants [9] with continuous observations. Then, an estimator by quotient of the drift function b is constructed. When ∆ is fixed, it reaches the minimax rate obtained by Gobet et al. [12].

In Section 2, an adaptive estimator of the successive derivatives f^{(j)} of the stationary density f of a stationary and β-mixing process is computed by a penalized least-square method. In Section 3, only diffusions with known diffusion coefficients are considered. An adaptive estimator of f′ (in fact, an estimator of 2bf) is built. In Section 4, a quotient estimator of b is constructed. In Section 5, the theoretical results are illustrated via various simulations using several models. Processes (X_t) are simulated by the exact retrospective algorithm of Beskos et al. [3]. The proofs are given in Section 6. In the Appendix, the spaces of functions are introduced.

2 Estimation of the successive derivatives of the stationary density

2.1 Model and assumptions

In this section, a stationary process (X_t)_{t≥0} is observed at discrete times t = 0, ∆, ..., n∆ and the successive derivatives f^{(j)} of the stationary density f = f^{(0)} are estimated for 0 ≤ j ≤ k. The sampling interval ∆ is fixed or tends to 0. The estimation set A is either a compact [a_0, a_1], or R. Let us consider the norms

‖·‖_∞ = sup_A |·|,  ‖·‖_{L²} = ‖·‖_{L²(A)}  and  ⟨·, ·⟩ = ⟨·, ·⟩_{L²(A)}.   (2.1)

Assumption M1. The process (X_t) is ergodic, strictly stationary and arithmetically or exponentially β-mixing.

A process is arithmetically β-mixing if its β-mixing coefficient satisfies

β_X(t) ≤ β_0 (1 + t)^{−(1+θ)}   (2.2)

where θ and β_0 are some positive constants. A process is exponentially (or geometrically) β-mixing if there exist two positive constants β_0 and θ such that

β_X(t) ≤ β_0 exp(−θt).   (2.3)

Assumption M2. The stationary density f is k times differentiable and, for each j ≤ k, its derivatives f^{(j)} belong to L²(A) ∩ L¹(A). Moreover, f^{(j)} satisfies ∫_A x² (f^{(j)}(x))² dx < +∞.

Remark 2.1. If A = [a_0, a_1], Assumption M2 is only: ∀j ≤ k, f^{(j)} ∈ L²(A).

Our aim is to estimate f^{(j)} by model selection. Therefore an increasing sequence of finite dimensional linear subspaces (S_m) is needed. On each of these subspaces, an estimator of f^{(j)} is computed, and thanks to a penalty function depending on m, the best possible estimator is chosen. Let us denote by C^l the space of functions l times differentiable on A with a continuous lth derivative, and by C^l_m the set of the piecewise C^l functions. To estimate f^{(j)}, 0 ≤ j ≤ k, on a compact set, we need a sequence of linear subspaces that satisfies the assumption:

Assumption S1: Estimation on a compact set.
1. The subspaces S_m are increasing, of finite dimension D_m and included in L²(A).
2. For any m, any function t ∈ S_m is k times differentiable (belongs to C^{k−1} ∩ C^k_m) and satisfies: ∀j ≤ k, t^{(j)}(a_0) = t^{(j)}(a_1) = 0.
3. There exists a norm connection: for any j ≤ k, there exists a constant ψ_j such that: ∀m, ∀t ∈ S_m, ‖t^{(j)}‖²_∞ ≤ ψ_j D_m^{2j+1} ‖t‖²_{L²}. Let us consider (φ_{λ,m}, λ ∈ Λ_m) an orthonormal basis of S_m with |Λ_m| = D_m. We have that Ψ²_{j,m}(x) ≤ ψ_j D_m^{2j+1}, where Ψ²_{j,m}(x) = Σ_{λ∈Λ_m} (φ^{(j)}_{λ,m}(x))².
4. There exists a constant c such that, for any m ∈ N, any function t ∈ S_m: ‖t^{(j)}‖²_{L²} ≤ c D_m^{2j} ‖t‖²_{L²}.
5. For any function t belonging to the unit ball of a Besov space B^α_{2,∞}: ‖t − t_m‖²_{L²} ≤ D_m^{−2} ∨ D_m^{−2α}, where t_m is the orthogonal (L²) projection of t over S_m.

Remark 2.2. Because of Point 2, the projection t_m converges very slowly to t on the boundaries of the compact A = [a_0, a_1] and the inequality ‖t − t_m‖²_{L²} ≤ D_m^{−2α} cannot be satisfied for any t ∈ B^α_{2,∞}.

In the Appendix, several sequences of linear subspaces satisfying this property are given. To estimate f^{(j)} on R, slightly different assumptions are needed: let us consider an increasing sequence of linear subspaces S_m generated by an orthonormal basis (φ_{λ,m}, λ ∈ Z). We have that dim(S_m) = ∞, so to build estimators, we use the restricted spaces S_{m,N} = Vect(φ_{λ,m}, λ ∈ Λ_{m,N}) with |Λ_{m,N}| < +∞. The following assumption involves the sequences of linear subspaces (S_m) and (S_{m,N}).

Assumption S2: Estimation on R.
1. The sequence of linear subspaces (S_m) is increasing.
2. We have that |Λ_{m,N}| := dim(S_{m,N}) = 2^{m+1}N + 1.
3. ∀m, N ∈ N, ∀t ∈ S_{m,N}: t ∈ C^{k−1} ∩ C^k_m and, ∀j < k, lim_{|x|→∞} t^{(j)}(x) = 0.
4. ∃ψ_j ∈ R^+, ∀m ∈ N, ∀t ∈ S_m, ∀j ≤ k, ‖t^{(j)}‖²_∞ ≤ ψ_j 2^{(2j+1)m} ‖t‖²_{L²}. Particularly, Ψ²_{j,m}(x) = Σ_{λ∈Z} (φ^{(j)}_{λ,m}(x))² ≤ ψ_j 2^{(2j+1)m}.
5. ∃c, ∀m ∈ N, ∀t ∈ S_m, ∀j ≤ k: ‖t^{(j)}‖²_{L²} ≤ c 2^{2jm} ‖t‖²_{L²}.
6. For any function t ∈ L² ∩ L¹(R) such that ∫ x² t²(x) dx < +∞, ‖t_m − t_{m,N}‖²_{L²} ≤ c 2^m/N, where t_m is the orthogonal (L²) projection of t over S_m and t_{m,N} its projection over S_{m,N}.
7. There exists r ≥ 1 such that, for any function t belonging to the unit ball of a Besov space B^α_{2,∞} (with α < r), ‖t − t_m‖²_{L²} ≤ 2^{−2mα}.

Proposition 2.1. If the function φ generates a r-regular multiresolution analysis of L², with r ≥ k, then the subspaces S_m = Vect{φ_{λ,m}, λ ∈ Z} and S_{m,N} = Vect{φ_{λ,m}, λ ∈ Λ_{m,N}} (where φ_{λ,m}(x) = 2^{m/2} φ(2^m x − λ) and Λ_{m,N} = {λ ∈ Z, |λ| ≤ 2^m N}) satisfy S2.

For the definition of the multi-resolution analysis, see Meyer [20], chapter 2.

2.2 Risk of the estimator for fixed m

An estimator ĝ_{j,m} of g_j := f^{(j)} is computed by minimising the contrast function

γ_{j,n}(t) = ‖t‖²_{L²} − (2(−1)^j/n) Σ_{k=1}^n t^{(j)}(X_{k∆}).

Under Assumptions S1 or S2:

E(γ_{j,n}(t)) = ‖t‖²_{L²} − 2(−1)^j ⟨t^{(j)}, f⟩ = ‖t‖²_{L²} − 2⟨t, f^{(j)}⟩ = ‖t − f^{(j)}‖²_{L²} − C

where C = ‖f^{(j)}‖²_{L²}.
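Over an orthonormal basis (φ_λ)_{λ≤D_m} of S_m, this contrast is minimised coordinatewise, with explicit coefficients â_λ = ((−1)^j/n) Σ_k φ_λ^{(j)}(X_{k∆}). A minimal Python sketch of the resulting projection estimator; the sine basis √2 sin(πλx) on A = [0, 1], the i.i.d. Beta sample and the dimension are illustrative choices, not prescribed by the paper (for j = 0 this is the usual projection density estimator):

```python
import math
import random

def phi_deriv(lam, j, x):
    # j-th derivative of the basis function phi_lam(x) = sqrt(2)*sin(pi*lam*x) on [0, 1]
    w = math.pi * lam
    cycle = (math.sin, math.cos, lambda u: -math.sin(u), lambda u: -math.cos(u))
    return math.sqrt(2.0) * w ** j * cycle[j % 4](w * x)

def g_hat(sample, j, D):
    # minimiser of gamma_{j,n} over span(phi_1, ..., phi_D):
    # coefficients a_lam = ((-1)^j / n) * sum_k phi_lam^{(j)}(X_k)
    n = len(sample)
    coeffs = [((-1) ** j / n) * sum(phi_deriv(lam, j, x) for x in sample)
              for lam in range(1, D + 1)]
    return lambda x: sum(a * phi_deriv(lam, 0, x)
                         for a, lam in zip(coeffs, range(1, D + 1)))

random.seed(0)
sample = [random.betavariate(2, 2) for _ in range(20000)]  # true density f(x) = 6x(1-x)
f_hat = g_hat(sample, j=0, D=3)
```

For j ≥ 1 the same coefficient formula applies with the differentiated basis functions, at the price of a larger variance, of order D_m^{2j+1}/n.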

If Assumption S1 is satisfied, let us denote

ĝ_{j,m} = arg inf_{t∈S_m} γ_{j,n}(t),

and, under Assumption S2,

ĝ_{j,m,N} = arg inf_{t∈S_{m,N}} γ_{j,n}(t).

We have the two following theorems:

Theorem 2.1: Estimation on a compact set. Under Assumptions M1-M2 and S1, the estimator risk satisfies, for any j ≤ k and m ∈ N:

E(‖ĝ_{j,m} − g_j‖²_{L²}) ≤ ‖g_{j,m} − g_j‖²_{L²} + 8cβ_0 ψ_j (D_m^{2j+1}/n)(1 ∨ 1/(θ∆))

where g_{j,m} is the orthogonal (L²) projection of g_j over S_m. The constants β_0 and θ are defined in (2.2) or (2.3), ψ_j is defined in Assumption S1 and c is a universal constant.


Theorem 2.2: Estimation on R. Under Assumptions M1-M2 and S2, for any j ≤ k and m ∈ N:

E(‖ĝ_{j,m,N} − g_j‖²_{L²}) ≤ ‖g_{j,m} − g_j‖²_{L²} + C 2^m/N + 8cβ_0 ψ_j (2^{(2j+1)m}/n)(1 ∨ 1/(θ∆))

where C depends on ∫ x² g_j²(x) dx and on the chosen sequence of linear subspaces (S_{m,N}). According to Assumption S2.6, if N ≥ (n ∧ nθ∆),

E(‖ĝ_{j,m,N} − g_j‖²_{L²}) ≤ ‖g_{j,m} − g_j‖²_{L²} + cβ_0 (2^{(2j+1)m}/n)(1 ∨ 1/(θ∆)).

If the random variables (X_0, ..., X_n) are independent, the derivatives of the density can be estimated in the same way and the two previous theorems (as well as the theorems for the adaptive risk) can be applied if we set θ = +∞.

When ∆ = 1, the risk bound is the same as in Hosseinioun et al. [13].

2.3 Optimisation of the choice of m

Under Assumption S1, if g_j belongs to the unit ball of a Besov space B^α_{2,∞} with α ≥ 1, then ‖g_{j,m} − g_j‖²_{L²} ≤ cD_m^{−2} and the best bias-variance compromise is obtained for D_m ∼ (n(1 ∧ θ∆))^{1/(2j+3)}. In that case,

E(‖ĝ_{j,m} − g_j‖²_{L²}) ≤ (n ∧ nθ∆)^{−2/(2j+3)}.

If Assumption S2 is satisfied and if g_j belongs to B^α_{2,∞}, with r ≥ α, then ‖g_{j,m} − g_j‖²_{L²} ≤ c2^{−2mα}. If N ≥ n(1 ∧ θ∆), the best bias-variance compromise is obtained for

m ∼ (1/(2j + 1 + 2α)) log_2(n(1 ∧ θ∆))

and then

E(‖ĝ_{j,m,N} − g_j‖²_{L²}) ≤ (n ∧ nθ∆)^{−2α/(2α+2j+1)}.

Rao [22] builds estimators of the successive derivatives f^{(j)} for independent variables. These estimators converge with rate n^{−2α/(2α+2j+1)}.

2.4 Risk of the adaptive estimator on a compact set

An additional assumption for the process (X_t) is needed:

Assumption M3. If the process (X_t)_{t≥0} is arithmetically β-mixing, then the constant θ defined in (2.2) is such that θ > 3.

Let us set M_{j,n} = {m, D_m ≤ D_{j,n}}, where D_{j,n} ≤ (n∆ ∧ n)^{1/(2j+2)} is the maximal dimension. For any m ∈ M_{j,n}, an estimator ĝ_{j,m} ∈ S_m of g_j = f^{(j)} is computed. Let us introduce a penalty function pen_j(m) depending on D_m and n:

pen_j(m) ≥ κβ_0 ψ_j (D_m^{2j+1}/n)(1 ∨ 1/(θ∆)).

Then we construct an adaptive estimator: choose m̂_j such that g̃_j := ĝ_{j,m̂_j}, where

m̂_j = arg min_{m∈M_{j,n}} [γ_{j,n}(ĝ_{j,m}) + pen_j(m)].
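This selection step can be sketched in a few lines. Illustrative assumptions, not taken from the paper: sine basis on [0, 1], i.i.d. data (so θ = +∞ and the factor 1 ∨ 1/(θ∆) disappears), and all penalty constants collapsed into a single hypothetical κ = 2; in practice κ would be calibrated, e.g. by the slope heuristic. The key fact used is that, in an orthonormal basis, γ_{j,n}(ĝ_{j,m}) equals minus the sum of the squared estimated coefficients.

```python
import math
import random

def select_dimension(sample, j, D_max, kappa):
    """Choose m minimising gamma_{j,n}(g_hat_{j,m}) + pen_j(m).
    In an orthonormal basis, gamma_{j,n}(g_hat_{j,m}) = -(sum of squared
    coefficients); the penalty is taken proportional to D_m^(2j+1)/n."""
    n = len(sample)
    cycle = (math.sin, math.cos, lambda u: -math.sin(u), lambda u: -math.cos(u))

    def phi_d(lam, jj, x):
        # jj-th derivative of phi_lam(x) = sqrt(2)*sin(pi*lam*x)
        w = math.pi * lam
        return math.sqrt(2.0) * w ** jj * cycle[jj % 4](w * x)

    coeffs = [((-1) ** j / n) * sum(phi_d(lam, j, x) for x in sample)
              for lam in range(1, D_max + 1)]
    best_D, best_crit, cum = 1, float("inf"), 0.0
    for D in range(1, D_max + 1):
        cum += coeffs[D - 1] ** 2
        crit = -cum + kappa * D ** (2 * j + 1) / n  # contrast + penalty
        if crit < best_crit:
            best_D, best_crit = D, crit
    return best_D, coeffs

random.seed(1)
sample = [random.betavariate(2, 2) for _ in range(20000)]  # f(x) = 6x(1-x)
best_D, coeffs = select_dimension(sample, j=0, D_max=10, kappa=2.0)
```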

Theorem 2.3: Adaptive estimation on a compact set. There exists a universal constant κ such that, if Assumptions M1-M3 and S1 are satisfied:

E(‖g̃_j − g_j‖²_{L²}) ≤ C inf_{m∈M_{j,n}} (‖g_{j,m} − g_j‖²_{L²} + pen_j(m)) + (c ln²(n)/n)(1 ∨ 1/(θ∆)).

Comte and Merlevède [5] obtain similar results when j = 0 and the sampling interval ∆ is fixed, and their remainder term is smaller: it is 1/n and not ln²(n)/n.

The penalty function depends on β_0 and θ. Unfortunately, these two constants are difficult to estimate. However, the slope heuristic defined in Arlot and Massart [1] enables us to choose automatically a constant λ such that the penalty λD_m^{2j+1}/(n∆) is good. It is also possible to use the resampling penalties of Lerasle [18].

2.5 Risk of the adaptive estimator on R

Let us denote M_{j,n} = {m, 2^m ≤ D_{j,n}} with D_{j,n}^{2j+2} ≤ n∆ ∧ n, and fix N = N_n = (n ∧ n∆). For any m ∈ M_{j,n}, an estimator ĝ_{j,m,N_n} ∈ S_{m,N_n} of g_j is computed. The best dimension m̂_j is chosen such that

m̂_j = arg min_{m∈M_{j,n}} [γ_{j,n}(ĝ_{j,m,N_n}) + pen_j(m)]

where

pen_j(m) = cψ_j (2^{(2j+1)m}/n ∨ 2^{(2j+1)m}/(nθ∆))

and the resulting estimator is denoted by g̃_j := ĝ_{j,m̂_j,N_n}.

Theorem 2.4: Adaptive estimation on R. Under Assumptions M1-M3 and S2,

E(‖g̃_j − g_j‖²_{L²}) ≤ C inf_{m∈M_{j,n}} (‖g_{j,m} − g_j‖²_{L²} + pen_j(m)) + (c/n)(1 ∨ 1/(θ∆))

where c depends on ψ_j, β_0 and θ.

3 Case of stationary diffusion processes

Let us consider the stochastic differential equation (SDE):

dX_t = b(X_t) dt + σ(X_t) dW_t,  X_0 = η,   (3.1)

where η is a random variable and (W_t)_{t≥0} a Brownian motion independent of η. The drift function b : R → R is unknown and the diffusion coefficient σ : R → R^{+∗} is known. The process (X_t)_{t≥0} is assumed to be strictly stationary, ergodic and β-mixing. Obviously, we can construct estimators of the successive derivatives of the stationary density using the previous section. But in this section, we use the properties of a diffusion process to compute a new estimator of the first derivative of the stationary density. If the sampling interval ∆ is small, this new estimator converges faster than the previous one.

3.1 Model and Assumptions

The process (X_t)_{t≥0} is observed at discrete times t = 0, ∆, ..., n∆.

Assumption M4. The functions b and σ are globally Lipschitz and σ ∈ C¹.

Assumption M4 ensures the existence and uniqueness of a solution of the SDE (3.1).

Assumption M5. The diffusion coefficient σ belongs to C¹, is bounded and positive: there exist constants σ_0 and σ_1 such that: ∀x ∈ R, 0 < σ_1 ≤ σ(x) ≤ σ_0.

Assumption M6. There exist constants r > 0 and 1 ≤ α ≤ 2 such that:

Under Assumptions M4-M6, there exists a stationary density f for the SDE (3.1), and

f(x) ∝ σ^{−2}(x) exp(2 ∫_0^x b(s)σ^{−2}(s) ds).   (3.2)

Then f has moments of any order and:

∫ |f′(x)|² dx < ∞,  ∀m ∈ N, ∫ |x|^m |f′(x)| dx < ∞,   (3.3)

∀m ∈ N, ‖x^m f(x)‖_∞ < ∞,  ‖b⁴(x)f(x)‖_∞ < ∞  and  ∫ exp(|b(x)|) f(x) dx < ∞.   (3.4)

Assumption M7. The process is stationary: η ∼ f.

According to Pardoux and Veretennikov [21], Proposition 1 p. 1063, under Assumptions M5-M6, the process (X_t) is exponentially β-mixing: there exist constants β_0 and θ such that β_X(t) ≤ β_0 e^{−θt}. Moreover, Gloter [11] proves the following property:

Proposition 3.1. Let us set F_t = σ(η, W_s, s ≤ t). Under Assumptions M4 and M7, for any k ≥ 1, there exists a constant c(k) depending on b and σ such that:

∀h, 0 < h ≤ 1, ∀t ≥ 0:  E( sup_{s∈[t,t+h]} |b(X_s) − b(X_t)|^k | F_t ) ≤ c(k) h^{k/2} (1 + |X_t|^k).

Remark 3.1. To estimate f′, it is enough to have an estimator of 2bf and an estimator of f. Indeed, according to equation (3.2), the first derivative f′ satisfies:

f′(x)/f(x) = 2b(x)/σ²(x) − 2σ′(x)/σ(x).

By assumption, the diffusion coefficient σ is known. Besides, according to Assumptions M4 and M5, σ and σ^{−1} are bounded. As we have already constructed an estimator of f = g_0 in Section 2, it remains to estimate 2bf.

In this section, we construct an estimator h̃ of h := 2bf, either on a compact set [a_0, a_1], or on R.

3.2 Sequence of linear subspaces

Like in the previous section, estimators ĥ_m of h are computed on some linear subspaces S_m or S_{m,N}, then a penalty function pen(m) is introduced to choose the best possible estimator h̃. If h is estimated on a compact set A = [a_0, a_1], the following assumption is needed:

Assumption S3: Estimation on a compact set.
1. The sequence of linear subspaces S_m is increasing, D_m = dim(S_m) < ∞ and, ∀m, S_m ⊆ L²(A).
2. There exists a norm connection: for any m ∈ N, any function t ∈ S_m satisfies ‖t‖²_∞ ≤ φ_0 D_m ‖t‖²_{L²}. Particularly, if we note Φ²_m(x) = Σ_{λ∈Λ_m} (φ_{λ,m}(x))², where (φ_{λ,m}, λ ∈ Λ_m) is an orthonormal basis of S_m, then Φ²_m(x) ≤ φ_0 D_m.
3. There exists r ≥ 1 such that, for any function t belonging to B^α_{2,∞} with α ≤ r, ‖t − t_m‖²_{L²} ≤ D_m^{−2α}, where t_m is the orthogonal projection of t over S_m.

In the Appendix, several sequences of linear subspaces satisfying this assumption are given. To estimate h on R, an increasing sequence of linear subspaces S_m = Vect(φ_{λ,m}, λ ∈ Z) (where {φ_{λ,m}}_{λ∈Z} is an orthonormal basis of S_m) is considered. As the dimension of those subspaces is infinite, the truncated subspaces S_{m,N} = Vect(φ_{λ,m}, λ ∈ Λ_{m,N}) are used.

Assumption S4: Estimation on R.
1. The sequence of linear subspaces (S_m) is increasing.
2. The dimension of the subspace S_{m,N} is 2^{m+1}N + 1.
3. ∃φ_0, ∀m, ∀t ∈ S_m, ‖t‖²_∞ ≤ φ_0 2^m ‖t‖²_{L²}. Let us set Φ²_m(x) = Σ_{λ∈Z} (φ_{λ,m}(x))²; then Φ²_m(x) ≤ φ_0 2^m, where φ_0 is a constant independent of N.
4. For any function t ∈ L² ∩ L¹(R) such that ∫ x² t²(x) dx < +∞, ‖t_m − t_{m,N}‖²_{L²} ≤ c 2^m/N, where t_m is the orthogonal (L²) projection of t over S_m and t_{m,N} its projection over S_{m,N}.
5. There exists r ≥ 1 such that, for any function t belonging to the unit ball of a Besov space B^α_{2,∞} with α ≤ r, ‖t − t_m‖²_{L²} ≤ c 2^{−2mα}.

Proposition 3.2. Let us consider a function φ generating a r-regular multi-resolution analysis of L² with r ≥ 0. Let us set S_m = Vect{φ_{λ,m}, λ ∈ Z} and S_{m,N} = Vect{φ_{λ,m}, λ ∈ Λ_{m,N}}, where φ_{λ,m}(x) = 2^{m/2} φ(2^m x − λ) and Λ_{m,N} = {λ ∈ Z, |λ| ≤ 2^m N}. Then the subspaces S_{m,N} satisfy Assumption S4.

The function φ(x) = sin(x)/x also generates a multi-resolution analysis of L²(R), but it is not even 0-regular. However, it satisfies Assumption S4 if Sobolev spaces take the place of Besov spaces in Point 5. The definition of Sobolev spaces of regularity α is recalled here:

W^α = { g, ∫ |g^∗(x)|² (x² + 1)^α dx < ∞ }

where g^∗ is the Fourier transform of g.

3.3 Risk of the estimator with m fixed

For any m ∈ M_n, where M_n = {m, D_m ≤ D_n}, an estimator ĥ_m of h = 2bf is computed. The maximal dimension D_n is specified later. The following contrast function is considered:

Γ_n(t) = ‖t‖²_{L²} − (4/(n∆)) Σ_{k=1}^n (X_{(k+1)∆} − X_{k∆}) t(X_{k∆}).

As

∆^{−1}(X_{(k+1)∆} − X_{k∆}) = I_{k∆} + Z_{k∆} + b(X_{k∆})

with

I_{k∆} = (1/∆) ∫_{k∆}^{(k+1)∆} (b(X_s) − b(X_{k∆})) ds  and  Z_{k∆} = (1/∆) ∫_{k∆}^{(k+1)∆} σ(X_s) dW_s,   (3.5)

we have that

E(Γ_n(t)) = ‖t‖²_{L²} − 4⟨bf, t⟩ − 4E(I_∆ t(X_∆)).

According to Lemma 6.4, |E(I_{k∆} t(X_{k∆}))| ≤ c∆^{1/2}. Moreover, h = 2bf, so

E(Γ_n(t)) = ‖t‖²_{L²} − 2⟨h, t⟩ + O(∆^{1/2}).

This inequality justifies the choice of the contrast function if the sampling interval ∆ is small. If Assumption S3 is satisfied, we consider the estimator

ĥ_m = arg min_{t∈S_m} Γ_n(t),

and, under Assumption S4,

ĥ_{m,N} = arg min_{t∈S_{m,N}} Γ_n(t).
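Over an orthonormal basis of S_m, the minimiser of Γ_n again has explicit coefficients, here â_λ = (2/(n∆)) Σ_k (X_{(k+1)∆} − X_{k∆}) φ_λ(X_{k∆}). A sketch under illustrative assumptions (sine basis on the compact A = [−2, 2], fixed dimension D, and an exactly simulated Ornstein-Uhlenbeck path with b(x) = −x and σ = 1, for which h(x) = −2x e^{−x²}/√π; none of these choices are imposed by the paper):

```python
import math
import random

def estimate_h(path, delta, a0, a1, D):
    """Minimiser of Gamma_n over span(phi_1, ..., phi_D), sine basis on [a0, a1]:
    a_lam = (2/(n*delta)) * sum_k (X_{(k+1)delta} - X_{k delta}) * phi_lam(X_{k delta})."""
    L = a1 - a0
    def phi(lam, x):
        return math.sqrt(2.0 / L) * math.sin(lam * math.pi * (x - a0) / L)
    n = len(path) - 1
    coeffs = [(2.0 / (n * delta)) * sum((path[k + 1] - path[k]) * phi(lam, path[k])
                                        for k in range(n))
              for lam in range(1, D + 1)]
    return lambda x: sum(a * phi(lam, x) for a, lam in zip(coeffs, range(1, D + 1)))

# exactly simulated Ornstein-Uhlenbeck path, b(x) = -x, sigma = 1,
# so that h(x) = 2 b(x) f(x) = -2x * exp(-x**2) / sqrt(pi)
random.seed(2)
n, delta = 100_000, 0.01
rho, s = math.exp(-delta), math.sqrt((1 - math.exp(-2 * delta)) / 2)
x = random.gauss(0.0, math.sqrt(0.5))
path = [x]
for _ in range(n):
    x = rho * x + s * random.gauss(0.0, 1.0)
    path.append(x)
h_hat = estimate_h(path, delta, a0=-2.0, a1=2.0, D=10)
```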

Theorem 3.1: Estimation on a compact set. Under Assumptions M4-M7 and S3,

E(‖ĥ_m − h‖²_{L²}) ≤ ‖h_m − h‖²_{L²} + c∆ + (σ_0²‖f‖_∞ + β_0φ_0/θ) D_m/(n∆)

where h_m is the orthogonal projection of h over S_m and c a constant depending on b and σ. We remind that the β-mixing coefficient of the process (X_t) is such that β_X(t) ≤ β_0 e^{−θt}.

Theorem 3.2: Estimation on R. Under Assumptions M4-M7 and S4,

E(‖ĥ_{m,N} − h‖²_{L²}) ≤ ‖h_{m,N} − h‖²_{L²} + c 2^m/N + c∆ + (σ_0²‖f‖_∞ + β_0φ_0/θ) 2^m/(n∆)

where h_{m,N} is the orthogonal projection of h on the space S_{m,N}. If N = N_n = n∆, then

E(‖ĥ_{m,N_n} − h‖²_{L²}) ≤ ‖h_m − h‖²_{L²} + c∆ + (σ_0²‖f‖_∞ + β_0φ_0/θ) 2^m/(n∆)

where h_m is the orthogonal projection of h over S_m.

3.4 Optimisation of the choice of m

Under Assumption S3, if h1_A belongs to the unit ball of a Besov space B^α_{2,∞}, then ‖h − h_m‖²_{L²} ≤ D_m^{−2α}. To minimise the bias-variance compromise, one has to choose D_m ∼ (n∆)^{1/(1+2α)}, and in that case the estimator risk satisfies:

E(‖ĥ_m − h‖²_{L²}) ≤ C(n∆)^{−2α/(1+2α)} + c∆.

Under Assumption S4, if h belongs to B^α_{2,∞}, then ‖h − h_m‖²_{L²} ≤ 2^{−2mα} and

E(‖ĥ_{m,n∆} − h‖²_{L²}) ≤ C(n∆)^{−2α/(1+2α)} + c∆.

Remark 3.2. Dalalyan and Kutoyants [9] estimate the first derivative of the stationary density observed at continuous time (they observe X_t for t ∈ [0, T]). In that framework, the diffusion coefficient σ² is known. The minimax rate of convergence of the estimator is T^{−α/(1+2α)}. It is the rate that we obtain when ∆ tends to 0. Let us set ∆ ∼ n^{−β}. We obtain the following convergence table:

β                      principal term of the bound    rate of convergence of the estimator
0 < β ≤ 2α/(4α+1)      ∆                              n^{−β}
2α/(4α+1) ≤ β < 1      (n∆)^{−2α/(1+2α)}              n^{−2α(1−β)/(1+2α)}

Those rates of convergence are the same as for the estimator of the drift. If β ≥ 1/2, the dominating term in the risk bound is always (n∆)^{−2α/(1+2α)}. The rate of convergence is always smaller than n^{−1/2}. If (n, ∆) is fixed and if ∆ ≤ n^{−2α/(4α+3)}, then the second estimator ĥ_m converges faster than the first one ĝ_{1,m}. However, if the sampling interval ∆ is larger than n^{−2α/(4α+3)}, the first estimator is preferable.
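The threshold ∆ ≤ n^{−2α/(4α+3)} of Remark 3.2 can be checked numerically up to constants. In the small sketch below, α, n and the two sampling intervals are chosen purely for illustration, and all multiplicative constants in the bounds are set to 1:

```python
def risk_bounds(n, delta, alpha):
    """Dominant terms of the two risk bounds for f' (all constants set to 1):
    first estimator (Section 2, j = 1): (n*delta)^(-2a/(2a+3));
    second estimator (Section 3): (n*delta)^(-2a/(1+2a)) + delta."""
    nd = n * delta
    first = nd ** (-2 * alpha / (2 * alpha + 3))
    second = nd ** (-2 * alpha / (1 + 2 * alpha)) + delta
    return first, second

# crossover: the second estimator wins when delta <= n^(-2a/(4a+3))
alpha, n = 2.0, 10 ** 5
threshold = n ** (-2 * alpha / (4 * alpha + 3))
below = risk_bounds(n, 0.5 * threshold, alpha)  # delta below the threshold
above = risk_bounds(n, 1.0, alpha)              # delta above the threshold
```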

3.5 Risk of the adaptive estimator on a compact set

For any m ∈ M_{n,A} = {m, D_m ≤ D_n}, where the maximal dimension D_n is specified later, an estimator ĥ_m ∈ S_m of h is computed. Let us set

pen(m) ≥ κ (D_m/(n∆))(1 + β_0/θ)

and

m̂ = arg inf_{m∈M_{n,A}} {Γ_n(ĥ_m) + pen(m)}.

The resulting estimator is denoted by h̃ := ĥ_{m̂}. Let us consider the asymptotic framework:

Assumption S5. n∆/ln²(n) → ∞ and D_n² ≤ n∆/ln²(n).

Theorem 3.3: Adaptive estimation on a compact set. There exists a constant κ depending only on the chosen sequence of linear subspaces (S_m) such that, under Assumptions M4-M7, S3 and S5,

E(‖h̃ − h‖²_{L²}) ≤ C inf_{m∈M_{n,A}} {‖h_m − h‖²_{L²} + pen(m)} + c∆ + c′/(n∆)

where C is a numerical constant, c′ depends on φ_0 and ‖f‖_∞, and c depends on b.

Remark 3.3. The estimator is only consistent if ∆ → 0. Moreover, the adaptive estimator h̃ automatically realises the bias-variance compromise.

3.6 Risk of the adaptive estimator on R

An estimator ĥ_{m,n∆} ∈ S_{m,n∆} is computed for any m ∈ M_{n,R} = {m, 2^m ≤ D_n}. The following penalty function is introduced:

pen(m) ≥ κ (2^m/(n∆))(1 + β_0/θ)

and we set

m̂ = arg inf_{m∈M_{n,R}} {Γ_n(ĥ_{m,n∆}) + pen(m)}.

Let us denote by h̃_{n∆} the resulting estimator.

Theorem 3.4: Adaptive estimation on R. There exists a constant κ depending only on the sequence of linear subspaces (S_m) such that, if Assumptions M4-M7, S4 and S5 are satisfied:

E(‖h̃_{n∆} − h‖²_{L²}) ≤ C inf_{m∈M_{n,R}} {‖h_m − h‖²_{L²} + pen(m)} + c∆ + c′/(n∆).

4 Drift estimation by quotient

If the process (X_t)_{t≥0} is the solution of the stochastic differential equation (SDE)

dX_t = b(X_t) dt + dW_t

and satisfies Assumptions M4-M7, then

b = f′/(2f).

An estimator of the drift by quotient can therefore be constructed. For high-frequency data, Comte et al. [6] build an adaptive drift estimator thanks to a penalized least-square method. Their estimator converges with the minimax rate (n∆)^{−2α/(2α+1)} if b belongs to the Besov space B^α_{2,∞}. On the contrary, there exist few results on the drift estimation when the sampling interval ∆ is fixed. Gobet et al. [12] build a drift estimator for low-frequency data; however, their estimator is not easy to implement. In this section, a drift estimator by quotient is constructed and its risk is computed.

We estimate f and f′ on R in order to avoid convergence problems on the boundaries of the compact. Let us consider two sequences of linear subspaces (S_{0,m}, m ∈ M_{0,n}) and (S_{1,m}, m ∈ M_{1,n}) satisfying Assumption S2 for k = 1 and such that

M_{0,n} = {m_0, log(n) ≤ 2^{m_0} ≤ η n∆/log(n∆)}  and  M_{1,n} = {m_1, 2^{m_1} ≤ (n∆)^{1/5}}

where the constant η depends neither on b nor on σ.

As in Section 2, adaptive estimators f̃ := g̃_{0,n∆} and g̃ := g̃_{1,n∆} of f = g_0 and f′ = g_1 are computed. As b belongs to B^α_{2,∞}, f and f′ also belong to B^α_{2,∞}, and the best bias-variance compromise for ĝ_{0,m} is obtained for 2^{m_0} ∼ (n∆)^{1/(1+2α)}, and for ĝ_{1,m} it is obtained for 2^{m_1} ∼ (n∆)^{1/(3+2α)}. If α > 1, the restrictions on M_{0,n} and M_{1,n} do not modify the rate of convergence of our estimators. Let us consider the estimator

b̃ = g̃/(2f̃) if |g̃| ≤ 2n∆ f̃, and b̃ = 0 otherwise.
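The truncated quotient above can be sketched directly. The sanity check below feeds in the exact f and f′ of the Ornstein-Uhlenbeck model of Section 5 (drift b(x) = −x), for which the quotient recovers the drift exactly; in a real use the two arguments would be the adaptive estimators f̃ and g̃:

```python
import math

def drift_by_quotient(f_tilde, g_tilde, n, delta):
    """Quotient drift estimator: b_tilde = g_tilde/(2*f_tilde) wherever the
    quotient is stable (|g_tilde| <= 2*n*delta*f_tilde), and 0 otherwise."""
    def b_tilde(x):
        f_val, g_val = f_tilde(x), g_tilde(x)
        if f_val > 0 and abs(g_val) <= 2 * n * delta * f_val:
            return g_val / (2 * f_val)
        return 0.0
    return b_tilde

# sanity check with the exact f and f' of the Ornstein-Uhlenbeck model, b(x) = -x:
f = lambda x: math.exp(-x * x) / math.sqrt(math.pi)
f_prime = lambda x: -2.0 * x * math.exp(-x * x) / math.sqrt(math.pi)
b_tilde = drift_by_quotient(f, f_prime, n=10_000, delta=0.1)
```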

Theorem 4.1. If b ∈ B^α_{2,∞} with α > 1, then

E(‖b̃ − b‖²_{L²}) ≤ c (E(‖f̃ − f‖²_{L²}) + E(‖g̃ − g‖²_{L²}) + 1/(n∆))

where the constant c depends neither on n nor on ∆. Then, by Theorem 2.4,

E(‖b̃ − b‖²_{L²}) ≤ c(n∆)^{−2α/(2α+3)}.

So b̃ converges towards b with the minimax rate defined by Gobet et al. [12].

5 Simulations

5.1 Models

Ornstein-Uhlenbeck: Let us consider the SDE dX_t = −bX_t dt + dW_t with b > 0. The stationary density is a Gaussian distribution N(0, (2b)^{−1}) and its derivative is

f′(x) = −(2b^{3/2}/√π) x e^{−bx²}.

Hyperbolic tangent: We consider a process (X_t) satisfying the SDE

dX_t = −a tanh(aX_t) dt + dW_t.

The stationary density related to this SDE is

f(x) = a/(2 cosh²(ax))  and  f′(x) = −a² tanh(ax)/cosh²(ax).

Square root: Let us consider the diffusion with parameters

b(x) = −ax/√(1+x²)  and  σ = 1.

The stationary density is

f(x) = c exp(−2a√(1+x²))  and  f′(x) = 2b(x)f(x).

Model 4: Let us consider the SDE

dX_t = −2aX_t/(1+X_t²) dt + dW_t.

The process (X_t)_{t≥0} satisfies neither Assumption M6 nor the sufficient conditions to be exponentially β-mixing. If a > 1/2, it admits the stationary density

f(x) = c_a (1+x²)^{−2a}  and  f′(x) = −4c_a ax/(1+x²)^{1+2a}.

Sine function: Let us consider the diffusion with parameters:

b(x) = sin(ax) − x/√(1+x²)  and  σ = 1.

Its stationary density f satisfies:

f(x) = c_a exp(−2a^{−1} cos(ax) − 2√(1+x²))  and  f′(x) = 2b(x)f(x).
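For the Ornstein-Uhlenbeck model, exact simulation at the sampling times reduces to a Gaussian AR(1) recursion (this is the model simulated with Gaussian variables rather than the retrospective algorithm, see Section 5.2). A minimal sketch; the parameter values in the usage line are illustrative:

```python
import math
import random

def simulate_ou(b, n, delta, seed=0):
    """Exact simulation of dX_t = -b*X_t dt + dW_t at times 0, delta, ..., n*delta,
    started from the stationary law N(0, 1/(2b)): a Gaussian AR(1) recursion."""
    rng = random.Random(seed)
    rho = math.exp(-b * delta)                      # autoregression coefficient
    s = math.sqrt((1.0 - rho * rho) / (2.0 * b))    # conditional standard deviation
    x = rng.gauss(0.0, math.sqrt(1.0 / (2.0 * b)))  # stationary start
    path = [x]
    for _ in range(n):
        x = rho * x + s * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou(b=1.0, n=20000, delta=0.5)
```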

5.2 Estimation of the first derivative $f'$

Here, we estimate the first derivative $f'$ of the stationary density on a compact set and we compare the two estimators $\tilde g_1$ and $\tilde h$ defined in Sections 2 and 3. The subspaces $S_m$ are generated by trigonometric polynomials: these functions are orthonormal, very regular, and enable very fast computations: to compute $\hat g_{1,m}$ (resp. $\hat h_m$) when $\hat g_{1,m-1}$ (resp. $\hat h_{m-1}$) is known, it is only necessary to compute one or two coefficients.

Figures 1-5 show the differences between the two estimators: $\tilde g_1$ converges whatever the sampling interval, whereas $\tilde h$ converges only if $\Delta$ is small. In that case, $\tilde h$ is better than $\tilde g_1$: the variance term is greater for $\hat g_{1,m}$ (it is proportional to $D_m^3/(n\Delta)$) than for $\hat h_m$ (it is proportional to $D_m/(n\Delta)$).

In Tables 1-3, for each value of $n$ and $\Delta$, 50 exact simulations of a diffusion process are realized using the retrospective exact algorithm of Beskos et al. [3] (except for the Ornstein-Uhlenbeck process, which is simulated using Gaussian variables). For each path, we compute the empirical risks of the estimators $\tilde g_1$ and $\tilde h$:
$$ \|\tilde g_1 - g_1\|_E^2 := \frac{1}{M}\sum_{k=1}^{M} \left(\tilde g_1(x_k) - g_1(x_k)\right)^2 \quad\text{and}\quad \|\tilde h - h\|_E^2 := \frac{1}{M}\sum_{k=1}^{M} \left(\tilde h(x_k) - h(x_k)\right)^2, $$
where the points $x_k$ are equidistributed over $A$. To check that the estimator is adaptive, the oracles
$$ or_g = \frac{\|\tilde g_1 - g_1\|_E^2}{\min_{m\in\mathcal{M}_n} \|\hat g_{1,m} - g_1\|_E^2} \quad\text{and}\quad or_h = \frac{\|\tilde h - h\|_E^2}{\min_{m\in\mathcal{M}_n} \|\hat h_m - h\|_E^2} $$
are computed. The mean simulation time $t_{sim}$ of a process is measured and, for each kind of estimator, the means of the empirical risks $ris_g$ and $ris_h$, of the oracles $\overline{or}_g$ and $\overline{or}_h$, and of the computation times $t_g$ and $t_h$ are computed.

The complexity of the retrospective exact algorithm of Beskos et al. [3] is proportional to $ne^{c\Delta}$, where $c$ depends on the model. Table 3 shows that, for Model 4, $t_{sim}$ increases when $n$ or $\Delta$ increases. For the hyperbolic tangent, the simulation time depends only on $n$, because the constant $c$ is exactly equal to 0. The Ornstein-Uhlenbeck process is not simulated with the retrospective algorithm, so its simulation time does not depend on $\Delta$. Tables 1-3 show that the first estimator $\tilde g_1$ is always faster to compute than the second one, $\tilde h$. This is mainly because we have fewer models to test: for the first estimator, the maximal dimension $D_n$ is bounded by $(n\Delta)^{1/4}$, whereas for the second estimator, $D_n \le (n\Delta)^{1/2}$.

When $\Delta = 1$, $\tilde g_1$ is better than $\tilde h$. Otherwise, the two estimators are similar and improve when $n\Delta$ increases. For the Ornstein-Uhlenbeck process and the hyperbolic tangent, the process $(X_t)_{t\ge 0}$ is exponentially $\beta$-mixing and $\tilde g_1$ is in general better than $\tilde h$. For Model 4, the process $(X_t)$ is not exponentially $\beta$-mixing and, when $\Delta < 1$, $\tilde h$ is in general better than $\tilde g_1$.

Figure 1: Ornstein-Uhlenbeck: estimation of $f'$. Left panel: $n = 10^4$, $\Delta = 1$; right panel: $n = 10^5$, $\Delta = 10^{-2}$.

Figure 2: Hyperbolic tangent: estimation of $f'$. Left panel: $n = 10^4$, $\Delta = 1$; right panel: $n = 10^5$, $\Delta = 10^{-2}$.

Figure 3: Square root: estimation of $f'$. Left panel: $n = 10^4$, $\Delta = 1$; right panel: $n = 10^4$, $\Delta = 10^{-1}$.

Solid line: true derivative; dotted line: estimator $\tilde g_1$ (differentiating an estimator of $f$); dash-dotted line: estimator $\tilde h$ (using $f' = 2bf$).

Figure 4: Model 4: estimation of $f'$. Left panel: $n = 10^4$, $\Delta = 1$; right panel: $n = 10^4$, $\Delta = 10^{-1}$.

Figure 5: Sine function: estimation of $f'$. Left panel: $n = 10^4$, $\Delta = 1$; right panel: $n = 10^5$, $\Delta = 10^{-2}$.

Solid line: true derivative; dotted line: estimator $\tilde g_1$ (differentiating an estimator of $f$); dash-dotted line: estimator $\tilde h$ (using $f' = 2bf$).

Table 1: Estimation of $f'$ for Ornstein-Uhlenbeck

                       first estimator                  second estimator
n      Delta   t_sim     ris_g     or_g   t_g      ris_h     or_h   t_h
10^4   1       0.10      0.00025   2.5    0.33     0.0090    1.0    0.73
10^4   10^-1   0.10      0.0010    1.8    0.17     0.00091   1.2    0.68
10^4   10^-2   0.099     0.0060    2.6    0.097    0.0067    2.3    0.66
10^3   1       0.0027    0.0023    4.2    0.034    0.0097    1.0    0.12
10^3   10^-1   0.0025    0.0058    3.0    0.020    0.0077    2.3    0.12
10^3   10^-2   0.0026    0.037     3.0    0.0070   0.078     4.0    0.035
10^2   1       0.00022   0.0080    2.0    0.013    0.019     1.5    0.062
10^2   10^-1   0.00021   0.035     2.4    0.0046   0.078     5.5    0.019
10^2   10^-2   0.00023   0.067     2.1    0.0048   0.11      1.4    0.0068

Table 2: Hyperbolic tangent: estimation of $f'$

                       first estimator                  second estimator
n      Delta   t_sim     ris_g     or_g   t_g      ris_h     or_h   t_h
10^4   1       6.2       0.0027    1.1    0.33     0.0087    1.03   0.71
10^4   10^-1   1.2       0.0018    3.7    0.17     0.0014    1.4    0.68
10^4   10^-2   1.7       0.0065    2.8    0.10     0.0056    1.8    0.65
10^3   1       0.61      0.0040    1.5    0.034    0.0097    1.1    0.12
10^3   10^-1   0.19      0.0067    2.8    0.020    0.0087    2.1    0.12
10^3   10^-2   0.16      0.022     2.5    0.0068   0.036     2.6    0.03
10^2   1       0.066     0.011     1.7    0.014    0.021     1.80   0.063
10^2   10^-1   0.020     0.023     2.3    0.0048   0.044     3.4    0.020
10^2   10^-2   0.018     0.033     1.6    0.0054   0.078     1.2    0.0080

Table 3: Model 4: estimation of $f'$

                       first estimator                  second estimator
n      Delta   t_sim     ris_g     or_g   t_g      ris_h     or_h   t_h
10^4   1       6.6       0.00073   1.8    0.33     0.020     1.0    0.71
10^4   10^-1   2.3       0.0032    4.2    0.17     0.0019    1.3    0.70
10^4   10^-2   2.1       0.016     3.8    0.10     0.0090    1.7    0.68
10^3   1       0.67      0.0049    2.4    0.035    0.022     1.1    0.12
10^3   10^-1   0.24      0.017     3.6    0.021    0.013     2.0    0.12
10^3   10^-2   0.18      0.043     2.0    0.0071   0.094     3.5    0.035
10^2   1       0.071     0.048     8.1    0.014    0.041     1.6    0.065
10^2   10^-1   0.022     0.046     1.91   0.0049   0.077     3.1    0.02
10^2   10^-2   0.019     0.070     1.4    0.005    0.12      1.1    0.0069

ris_g and ris_h: average empirical risks for $\tilde g_1$ and $\tilde h$.
or_g and or_h: average oracles (empirical risk of $\tilde g_1$ (resp. $\tilde h$) over the empirical risk of the best estimator $\hat g_{1,m}$ (resp. $\hat h_m$)).
t_g and t_h: average computation times of $\tilde g_1$ and $\tilde h$ (times in seconds).

Two drift estimators are compared: the quotient estimator defined in Section 4, denoted here by $\tilde b_{quot}$, and a penalized least-squares estimator denoted by $\tilde b_{pls}$. The construction of the latter estimator is done in Comte et al. [6]. It converges only when the sampling interval $\Delta$ is small, but in that case it reaches the minimax rate of convergence: if $b$ belongs to a Besov space $B^\alpha_{2,\infty}$, then the risk of the estimator $\tilde b_{pls}$ is bounded by
$$ E\left(\|\tilde b_{pls} - b\|^2_{L^2}\right) \le C\left((n\Delta)^{-2\alpha/(2\alpha+1)} + \Delta\right). $$

Figures 6-10 show that, for low-frequency data, the quotient estimator $\tilde b_{quot}$ is better than $\tilde b_{pls}$. For various values of $n$ and $\Delta$, 50 exact simulations of $(X_0, \ldots, X_{n\Delta})$ are realized, and the estimators $\tilde b_{quot}$ and $\tilde b_{pls}$ are computed. Tables 4 and 5 give the average empirical risk for these estimators and the average computation times.

Tables 4 and 5 underline that the first estimator is always faster than the second one: to compute $\tilde b_{pls}$, we have to invert an $m \times m$ matrix for each space $S_m$. When $\Delta$ is small and the observation time $n\Delta$ is large, the penalized least-squares estimator converges better than the quotient estimator. Of course, when $\Delta$ is fixed, $\tilde b_{quot}$ converges faster than $\tilde b_{pls}$.
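As noted in Section 5.2, the Ornstein-Uhlenbeck process can be sampled exactly at times $0, \Delta, \ldots, n\Delta$ using Gaussian variables, since its skeleton is a Gaussian AR(1). A minimal sketch of that exact scheme; the parameters $\theta = 1$ and $\sigma = 1$ are illustrative (the paper's model parameters are not specified in this excerpt):

```python
import numpy as np

# Exact sampling of dX_t = -theta*X_t dt + sigma dW_t at step Delta:
# X_{(k+1)Delta} = e^{-theta*Delta} X_{k Delta}
#                  + sqrt(sigma^2 (1 - e^{-2 theta Delta}) / (2 theta)) * xi_k,
# with xi_k i.i.d. standard Gaussian.
def simulate_ou(n, delta, theta=1.0, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    rho = np.exp(-theta * delta)
    sd = sigma * np.sqrt((1.0 - rho ** 2) / (2.0 * theta))
    x = np.empty(n + 1)
    # Start from the stationary law N(0, sigma^2 / (2 theta)).
    x[0] = rng.normal(0.0, sigma / np.sqrt(2.0 * theta))
    for k in range(n):
        x[k + 1] = rho * x[k] + sd * rng.normal()
    return x

path = simulate_ou(10_000, 0.1)
# The empirical variance should be close to the stationary variance sigma^2/(2*theta) = 0.5.
print(round(path.var(), 3))
```

This is why the Ornstein-Uhlenbeck simulation time does not depend on $\Delta$, unlike the retrospective exact algorithm used for the other models.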

Table 4: Ornstein-Uhlenbeck: estimation of $b$

               quotient estimator     least-squares estimator
n      Delta   ris_quot   t_quot      ris_pls   t_pls
10^4   1       0.0022     3.6         0.089     7.3
10^4   10^-1   0.0086     1.2         0.0049    1.7
10^4   10^-2   0.069      0.4         0.031     0.7
10^3   1       0.011      0.2         0.090     0.7
10^3   10^-1   0.061      0.06        0.022     0.3
10^3   10^-2   0.31       0.02        0.50      0.004
10^2   1       0.073      0.03        0.085     0.3
10^2   10^-1   0.25       0.01        0.34      0.003

Table 5: Hyperbolic tangent: estimation of $b$

               quotient estimator     least-squares estimator
n      Delta   ris_quot   t_quot      ris_pls   t_pls
10^4   1       0.0023     3.6         0.086     7.2
10^4   10^-1   0.019      1.2         0.017     1.8
10^4   10^-2   0.078      0.4         0.052     0.7
10^3   1       0.036      0.2         0.18      0.7
10^3   10^-1   0.12       0.06        0.065     0.3
10^3   10^-2   0.17       0.02        0.61      0.004
10^2   1       0.24       0.03        0.10      0.3
10^2   10^-1   0.20       0.01        0.53      0.003

ris_quot and ris_pls: average empirical risks for $\tilde b_{quot}$ and $\tilde b_{pls}$.
t_quot and t_pls: average computation times of $\tilde b_{quot}$ and $\tilde b_{pls}$ (times in seconds).

6 Proofs

6.1 Important lemmas

Lemma 6.1 (Variance of $\beta$-mixing variables). Let us set
$$ A = \frac{1}{n}\sum_{k=1}^{n} g(X_{k\Delta}) - E\left(g(X_{k\Delta})\right). $$

Figure 6: Ornstein-Uhlenbeck: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Figure 7: Hyperbolic tangent: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Figure 8: Square root: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Solid line: true drift $b$; dashed line: estimation of $b$ by quotient, $\tilde b_{quot}$; dotted line: estimation of $b$ as in Comte et al. [6], $\tilde b_{pls}$.

Figure 9: Model 4: estimation of $b$ ($n = 10^4$, $\Delta = 10^{-1}$).

Figure 10: Sine function: estimation of $b$ ($n = 10^4$, $\Delta = 1$).

Solid line: true drift $b$; dashed line: estimation of $b$ by quotient, $\tilde b_{quot}$; dotted line: estimation of $b$ as in Comte et al. [6], $\tilde b_{pls}$.

