www.imstat.org/aihp 2008, Vol. 44, No. 2, 191–213

DOI: 10.1214/07-AIHP105

© Association des Publications de l’Institut Henri Poincaré, 2008

Estimation in models driven by fractional Brownian motion

Corinne Berzin^a and José R. León^b

^a LabSAD, Université Pierre Mendès-France, 1251 Avenue centrale, BP 47, 38040 Grenoble cedex 9, France. E-mail: Corinne.Berzin@upmf-grenoble.fr

^b Escuela de Matemáticas, Facultad de Ciencias, Universidad Central de Venezuela, Paseo Los Ilustres, Los Chaguaramos, A.P. 47197, Caracas 1041-A, Venezuela. E-mail: jleon@euler.ciens.ucv.ve

Received 6 March 2006; accepted 25 July 2006

Abstract. Let {b_H(t), t ∈ ℝ} be the fractional Brownian motion with parameter 0 < H < 1. When 1/2 < H, we consider diffusion equations of the type

\[
X(t) = c + \int_0^t \sigma(X(u))\,db_H(u) + \int_0^t \mu(X(u))\,du.
\]

In different particular models where σ(x) = σ or σ(x) = σx and μ(x) = μ or μ(x) = μx, we propose a central limit theorem for estimators of H and of σ based on regression methods. Then we give tests of the hypothesis on σ for these models. We also consider functional estimation of σ(·) in the above more general models, based on the asymptotic behavior of functionals of the second-order increments of the fBm.

Résumé. Soit {b_H(t), t ∈ ℝ} le mouvement brownien fractionnaire de paramètre 0 < H < 1. Lorsque 1/2 < H, nous considérons des équations de diffusion de la forme

\[
X(t) = c + \int_0^t \sigma(X(u))\,db_H(u) + \int_0^t \mu(X(u))\,du.
\]

Nous proposons, dans des modèles particuliers où σ(x) = σ ou σ(x) = σx et μ(x) = μ ou μ(x) = μx, un théorème central limite pour des estimateurs de H et de σ, obtenus par une méthode de régression. Ensuite, pour ces modèles, nous proposons des tests d'hypothèses sur σ. Enfin, dans les modèles plus généraux ci-dessus, nous proposons des estimateurs fonctionnels pour la fonction σ(·), dont les propriétés sont obtenues via la convergence de fonctionnelles des accroissements doubles du mBf.

AMS 2000 subject classifications: 60F05; 60G15; 60G18; 60H10; 62F03; 62F12; 33C45

Keywords: Central limit theorem; Estimation; Fractional Brownian motion; Gaussian processes; Hermite polynomials

1. Introduction

Let {b_H(t), t ∈ ℝ} be the fractional Brownian motion with Hurst coefficient H, 0 < H < 1. For H > 1/2, the integral with respect to fBm can be defined pathwise as the limit of Riemann sums (see [8] and [9]). This allows us to consider, under certain restrictions on σ(·) and μ(·), the "pseudo-diffusion" equations with respect to fBm, that is,

\[
X(t) = c + \int_0^t \sigma(X(u))\,db_H(u) + \int_0^t \mu(X(u))\,du. \tag{1}
\]

Our main interest in this work is to provide estimators for the function σ(·).


In Section 3.1 we propose simultaneous estimators of H and of σ in models such that σ(x) = σ or σ(x) = σx and μ(x) = μ or μ(x) = μx. Following [4,7] and [8], we can give an explicit expression for the unique solution to each equation as a function of the trajectory of b_H(·). Instead of the original process, we use a mollified version and assume we observe a smoothed-by-convolution process, defined as X_ε(t) = φ_ε * X(t). Here ε, which tends to zero, is the smoothing parameter and φ_ε(·) is a convolution kernel such that φ_ε(·) = (1/ε)φ(·/ε), with φ(·) a C² positive kernel with L¹ norm equal to one. We then observe functionals of the type ∫₀¹ h(X_ε(u)) |Ẍ_ε(u)|^k du, with h(x) = 1/|x|^k in the case of linear σ(·) and h(x) = 1 in the case of constant σ(·). Such an observation could seem unusual, but note that in the case where φ(·) = 1_{[−1,0]} * 1_{[−1,0]}(·), then

\[
\varepsilon^2\,\ddot X_\varepsilon(u) = X(u+2\varepsilon) - 2X(u+\varepsilon) + X(u).
\]

Although φ(·) is not a C² function, all results obtained in this paper can be rewritten with this particular approximation. So our method can be related to variation methods and consists in obtaining least squares estimators for H and σ in certain regression models. This method can be compared to that of [3], where the model was the fBm, i.e. σ(·) ≡ 1 and μ(·) ≡ 0, and the purpose was to estimate H. Indeed, we prove that the asymptotic behaviors of such estimators, that is, of (Ĥ_k − H)/√ε and (σ̂_k − σ)/(√ε log(ε)), are both equivalent to those of certain non-linear functionals of the Gaussian process b_H^ε(·) = φ_ε * b_H(·). As in [3], we show that they satisfy a central limit theorem using the method of moments, via the diagram formula. It is interesting to note that the rates of convergence of these estimators are not the same. Furthermore, the asymptotic law of (Ĥ_k, σ̂_k) is a degenerate Gaussian; hence we cannot provide simultaneous confidence intervals for H and σ. Finally, we prove that the best estimators for H and σ in the sense of minimal variance are obtained for k = 2.

In Section 3.2, we get back to more general models of the form (1), and our goal is to provide functional estimation of σ(·) as in [2]. Indeed, in [2] we considered the case where μ(·) ≡ 0 and we proved that, if N_ε^X(x) denotes the number of times the regularized process X_ε(·) crosses level x before time 1, then for 1/2 < H < 1 and any continuous function h,

\[
\sqrt{\frac{\pi}{2}}\,\frac{\varepsilon^{1-H}}{\tilde\sigma_{2H}} \int_{-\infty}^{+\infty} h(x)\,N_\varepsilon^X(x)\,dx \;\xrightarrow[\varepsilon\to 0]{\text{a.s.}}\; \int_0^1 h(X(u))\,\sigma(X(u))\,du. \tag{2}
\]

Furthermore, we obtained the following result about the rate of convergence, proving that there exist a Brownian motion W(·) independent of b_H(·) and a constant σ_{g_1} such that, for 1/2 < H < 3/4,

\[
\frac{1}{\sqrt\varepsilon}\left[\sqrt{\frac{\pi}{2}}\,\frac{\varepsilon^{1-H}}{\tilde\sigma_{2H}} \int_{-\infty}^{+\infty} h(x)\,N_\varepsilon^X(x)\,dx - \int_0^1 h(X(u))\,\sigma(X(u))\,du\right] \;\underset{\varepsilon\to 0}{\Longrightarrow}\; \sigma_{g_1}\int_0^1 h(X(u))\,\sigma(X(u))\,dW(u), \tag{3}
\]

under some assumptions on the function h.

The proofs of these last two convergences rest on two facts. On the one hand, when μ(·) ≡ 0, because fBm has zero quadratic variation when H > 1/2 and σ(·) ∈ C¹(ℝ), the solution to Eq. (1) is given by X(t) = K(b_H(t)), where K is the solution to the ordinary differential equation K̇(t) = σ(K(t)) with K(0) = c (see [8]). On the other hand, by the Banach formula, we have

\[
\int_{-\infty}^{+\infty} h(x)\,N_\varepsilon^X(x)\,dx = \int_0^1 h(X_\varepsilon(u))\,|\dot X_\varepsilon(u)|\,du,
\]

and since Ẋ_ε(u) ≃ K̇(b_H^ε(u)) ḃ_H^ε(u) = σ(K(b_H^ε(u))) ḃ_H^ε(u), we needed to study the asymptotic behavior of a particular non-linear functional of the regularized fBm b_H^ε(·); Theorems 1.1 and 3.4 of [2] gave the result.
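The Banach formula itself can be illustrated on a deterministic smooth function, where level crossings can be counted exactly on a grid. This is only a sanity check of the identity ∫ h(x) N_f(x) dx = ∫₀¹ h(f(u)) |f′(u)| du; the function f and the weight h below are arbitrary illustrative choices, not related to the stochastic setting of the paper.

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule (avoids depending on np.trapz/np.trapezoid)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Banach's formula: integral over x of h(x)*N_f(x) equals the weighted
# total variation integral of h(f(u))*|f'(u)| over [0, 1].
f  = lambda u: np.sin(6.0 * u)
df = lambda u: 6.0 * np.cos(6.0 * u)
h  = lambda x: x**2

u = np.linspace(0.0, 1.0, 20001)
rhs = trapezoid(h(f(u)) * np.abs(df(u)), u)

# left-hand side: count the crossings of each level by sign changes on the grid
fu = f(u)
levels = np.linspace(-1.0, 1.0, 1001)
counts = np.array([np.count_nonzero(np.diff(np.sign(fu - x)) != 0)
                   for x in levels])
lhs = trapezoid(h(levels) * counts, levels)
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")   # both close to 1.326
```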

We recall these two convergence results because they motivate the statements of Theorems 3.8 and 3.9: in those two theorems we use the same type of approximations and convergences.

The convergences in (2) and (3) also motivate the main point of the present paper, namely working with the second derivative of the smoothed process instead of the first one. Indeed, in the second convergence result (3) we obtained the restrictive condition H < 3/4 because we used the first derivative of X_ε(·).

Thus, in this work, aiming to cover the whole range H < 1, we consider the case where μ(·) is not necessarily null and, instead of functionals of |Ẋ_ε(·)|, we work with functionals of |Ẍ_ε(·)|^k. This approach allows us to provide functional estimation of σ(·) as in (2) and to exhibit the rate of convergence as in (3) for any value of H in ]1/2, 1[, using a generalization of the two last convergence results when μ(·) ≡ 0 and then applying the Girsanov theorem (see [5] and [6]).

We observe that the limit in (2) is ∫₀¹ h(X(u)) σ(X(u)) du, which becomes ∫₀¹ σ^k(X(u)) du in the present work in the cases where h(·) ≡ 1. Thus, going back to the four models of Section 3.1 and taking into account the form of σ(·), that is σ(x) = σ or σ(x) = σx, the limit integral is now a function of σ. Hence, in the second part of Section 3.1, we propose estimators of σ^k when H is known. This supplementary information about H leads to estimators of σ performing better than those of the first part of Section 3.1, because the rate of convergence is 1/√ε instead of 1/(√ε log(ε)) as before.

Finally, in Section 3.3, a result similar to that of the second part of Section 3.1 is obtained under contiguous alternatives for σ, and it provides a test of hypothesis on this coefficient. We work with the techniques described in the previous sections, but without using the Girsanov theorem, which is trickier to apply here since, under the alternative hypothesis, σ depends on ε.

2. Hypothesis and notation

Let {b_H(t), t ∈ ℝ} be a fractional Brownian motion (fBm) with parameter 0 < H < 1 (see for instance [10]), i.e. b_H(·) is a centered Gaussian process with covariance function

\[
E\big[b_H(t)\,b_H(s)\big] = \tfrac12\,v_{2H}^2\big(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\big),
\]

with v_{2H}² := [Γ(2H+1) sin(πH)]^{−1}. We define, for a C² density φ with compact support included in [−1, 1], for each t ≥ 0 and ε > 0, the regularized processes

\[
b_H^\varepsilon(t) := \frac{1}{\varepsilon}\int_{-\infty}^{+\infty} \varphi\!\left(\frac{t-x}{\varepsilon}\right) b_H(x)\,dx \quad\text{and}\quad Z_\varepsilon(t) := \frac{\varepsilon^{2-H}\,\ddot b_H^\varepsilon(t)}{\sigma_{2H}},
\]

with

\[
\sigma_{2H}^2 := \operatorname{Var}\big(\varepsilon^{2-H}\,\ddot b_H^\varepsilon(t)\big) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} |x|^{3-2H}\,|\hat\varphi(-x)|^2\,dx.
\]

We shall use Hermite polynomials, defined by

\[
e^{tx - t^2/2} = \sum_{n=0}^{\infty} H_n(x)\,\frac{t^n}{n!}.
\]

They form an orthogonal system for the standard Gaussian measure φ(x)dx and, if h ∈ L²(φ(x)dx), then h(x) = Σ_{n=0}^∞ ĥ_n H_n(x) and ‖h‖²_{2,φ} := Σ_{n=0}^∞ ĥ_n² n!.

Let g be a function in L²(φ(x)dx) such that g(x) = Σ_{n=1}^∞ ĝ_n H_n(x), with ‖g‖²_{2,φ} = Σ_{n=1}^∞ ĝ_n² n! < +∞.
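The Hermite coefficients ĥ_n = E[h(N)H_n(N)]/n! can be checked numerically. The sketch below evaluates Gaussian expectations by a fine quadrature and verifies, for the centered functions g_k(x) = |x|^k/E|N|^k − 1 used later, that ĝ_0 = 0 and ĝ_2 = k/2; the grid and the truncation at |x| ≤ 12 are illustrative choices.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite H_n

# expectations under the standard Gaussian law by a fine Riemann sum on
# [-12, 12]; the tail mass beyond is negligible (~1e-32)
x = np.linspace(-12.0, 12.0, 400001)
w = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi) * (x[1] - x[0])

def E(fx):
    """E[f(N)] for f given by its values fx on the grid."""
    return float(np.sum(w * fx))

def hermite_coeff(fx, n):
    """n-th Hermite coefficient E[f(N) H_n(N)] / n!"""
    return E(fx * hermeval(x, [0.0] * n + [1.0])) / factorial(n)

coeffs = {}
for k in (1, 2, 3):
    Ek = E(np.abs(x)**k)                     # E|N|^k
    gk = np.abs(x)**k / Ek - 1.0             # g_k on the grid
    coeffs[k] = (hermite_coeff(gk, 0), hermite_coeff(gk, 2))
    print(k, coeffs[k])                      # (about 0, about k/2)
```

The value ĝ_{2,k} = k/2 follows from the Gaussian moment recursion E|N|^{p+2} = (p+1)E|N|^p, and is the n = 1 case of the product formula for ĝ_{2n,k} appearing in Theorem 3.2.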

The symbol “⇒” will mean weak convergence of measures.

At this point it will be helpful to state several theorems obtained in [3], in order to clarify the notation and make this paper more self-contained. We proved the two following theorems.

Theorem 2.1. For all 0 < H < 1 and k ∈ ℕ,

\[
\int_0^1 Z_\varepsilon^k(u)\,du \;\xrightarrow[\varepsilon\to 0]{\text{a.s.}}\; E[N^k],
\]

where N denotes a standard Gaussian random variable.


Remark. This theorem implies that Z_ε(·) ⇒ N when ε goes to zero, the random variable Z_ε(·) being considered as a variable on ([0,1], λ), where λ is the Lebesgue measure. This last convergence implies that, for all 0 < H < 1 and real k > 0, almost surely, ∫₀¹ |Z_ε(u)|^k du → E|N|^k.

For ε > 0, define

\[
S_g(\varepsilon) := \varepsilon^{-1/2}\int_0^1 g\big(Z_\varepsilon(u)\big)\,du.
\]

Theorem 2.2. For all 0 < H < 1,

\[
S_g(\varepsilon\,\cdot) \;\underset{\varepsilon\to 0}{\Longrightarrow}\; X(\cdot),
\]

where the above convergence is in the sense of finite-dimensional distributions and X(·) is a cylindrical centered Gaussian process with covariance ρ_g(b, c) := E[X(b)X(c)], where, for b, c > 0,

\[
\rho_g(b,c) := \frac{1}{\sqrt{bc}} \sum_{n=1}^{\infty} \hat g_n^2\,n! \int_{-\infty}^{+\infty} \rho_H^n(x,b,c)\,dx,
\]

and, for x ∈ ℝ,

\[
\rho_H(x,b,c) := E\big[Z_{\varepsilon b}(\varepsilon x+u)\,Z_{\varepsilon c}(u)\big] = \frac{(bc)^{2-H}}{2\pi\,\sigma_{2H}^2} \int_{-\infty}^{+\infty} |y|^{3-2H}\,e^{ixy}\,\hat\varphi(by)\,\hat\varphi(cy)\,dy.
\]

Note that, for fixed c > 0, ρ_H(x, c, c) = ρ_H(x/c), where, for x ∈ ℝ,

\[
\rho_H(x) := E\big[Z_\varepsilon(\varepsilon x+u)\,Z_\varepsilon(u)\big] = \frac{1}{2\pi\,\sigma_{2H}^2} \int_{-\infty}^{+\infty} |y|^{3-2H}\,e^{ixy}\,|\hat\varphi(y)|^2\,dy,
\]

so that ρ_g(c, c) = σ_g² := Σ_{n=1}^∞ ĝ_n² n! ∫_{−∞}^{+∞} ρ_H^n(x) dx. We furthermore obtained the following remark, which was also shown in Corollary 3.2(i) of [2].

Remark. If c > 0 is fixed, S_g(εc) ⇒ σ_g N.

For all m ∈ ℕ, all c₁ > 0, c₂ > 0, …, c_m > 0 and all d₁, d₂, …, d_m ∈ ℝ, we will denote

\[
\sigma_{g,m}^2(\mathbf c, \mathbf d) := \sum_{i=1}^m \sum_{j=1}^m d_i d_j\,\rho_g(c_i, c_j) = E\Big[\Big(\sum_{i=1}^m d_i X(c_i)\Big)^2\Big].
\]

We also proved in [3] the following lemma.

Lemma 2.1.

\[
\lim_{\varepsilon\to 0} E\Big[\sum_{i=1}^m d_i\,S_g(\varepsilon c_i)\Big]^2 = \sigma_{g,m}^2(\mathbf c, \mathbf d).
\]

Throughout the paper, C (resp. C(ω)) will stand for a generic constant (resp. a generic constant depending on ω, which lives in the space of trajectories), whose value may change during a proof, and log(·) for the Napierian logarithm.

For k ≥ 1, we write ‖N‖_k^k := E[|N|^k] and, if C(·) is a measurable function, ‖C(·)‖_k^k := ∫₀¹ |C(u)|^k du, ‖C(·)‖_{k,ε}^k := ∫₀^ε |C(u)|^k du and ‖C(·)‖_{k,ε^c}^k := ∫_ε^1 |C(u)|^k du, for ε > 0.


3. Results

3.1. Simultaneous estimators of H and of σ

As mentioned in the Introduction, we are interested in providing simultaneous estimators of H and σ in the four following models. For H > 1/2 and t ≥ 0,

\[
dX(t) = \sigma\,db_H(t) + \mu\,dt, \tag{4}
\]
\[
dX(t) = \sigma\,db_H(t) + \mu X(t)\,dt, \tag{5}
\]
\[
dX(t) = \sigma X(t)\,db_H(t) + \mu X(t)\,dt, \tag{6}
\]
\[
dX(t) = \sigma X(t)\,db_H(t) + \mu\,dt, \tag{7}
\]

with X(0) = c.

The solutions to these equations are, respectively:

(4) X(t) = σ b_H(t) + μt + c (see [8]);
(5) X(t) = σ b_H(t) + exp(μt)[σμ ∫₀ᵗ b_H(s) exp(−μs) ds + c];
(6) X(t) = c exp(μt + σ b_H(t)) (see [4] and [7]);
(7) X(t) = exp(σ b_H(t))(c + μ ∫₀ᵗ exp(−σ b_H(s)) ds).
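The closed form for model (6) can be checked against a pathwise Euler scheme, which is legitimate here because the integral is a pathwise (Young) integral for H > 1/2 and fBm has zero quadratic variation. The simulation method (Cholesky) and all numerical parameters below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def fbm_cholesky(n, H, seed=0):
    """Standard fBm on t = i/n by Cholesky factorization of its covariance."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

H, sigma, mu, c, n = 0.7, 0.5, 0.3, 1.0, 2000
b = fbm_cholesky(n, H, seed=2)
dt = 1.0 / n

# forward Euler for dX = sigma*X db_H + mu*X dt, pathwise (Young) sense:
# no Ito correction appears, consistent with the closed-form solution of (6)
X = np.empty(n + 1)
X[0] = c
for i in range(n):
    X[i + 1] = X[i] * (1.0 + sigma * (b[i + 1] - b[i]) + mu * dt)

closed = c * np.exp(mu * 1.0 + sigma * b[-1])   # X(1) = c exp(mu + sigma b_H(1))
print(f"Euler: {X[-1]:.4f}, closed form: {closed:.4f}")
```

The two values agree to about a percent here; the discrepancy is driven by the sum of squared increments, which vanishes as n^{1−2H} for H > 1/2.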

We consider the problem of estimating H and σ > 0 simultaneously. Suppose we observe, instead of X(t), a smoothed-by-convolution process X_ε(t) = φ_ε * X(t), where φ_ε(·) = (1/ε) φ(·/ε), with φ(·) as before and where we have extended X(·) by setting X(t) = c for t < 0.

For model (7) we make the additional hypothesis that μ and c have the same sign, with μ possibly null.

From now on, we write, for each t ≥ 0 and ε > 0,

\[
Z_X^\varepsilon(t) :=
\begin{cases}
\dfrac{\varepsilon^{2-H}\,\ddot X_\varepsilon(t)}{\sigma_{2H}} & \text{for the first two models,}\\[2ex]
\dfrac{\varepsilon^{2-H}\,\ddot X_\varepsilon(t)}{\sigma_{2H}\,X_\varepsilon(t)} & \text{for the other two.}
\end{cases} \tag{8}
\]

For k ≥ 1, let us denote

\[
A_k(\varepsilon) := \frac{1}{\sigma^k\,\|N\|_k^k}\int_0^1 \big|Z_X^\varepsilon(u)\big|^k\,du \;-\; 1. \tag{9}
\]

The remark following Theorem 2.1 allows us to state the following theorem.

Theorem 3.1. For k ≥ 1,

(1) A_k(ε) →a.s. 0 as ε → 0.

(2) Furthermore,

\[
\frac{1}{\sqrt\varepsilon}\,A_k(\varepsilon) = S_{g_k}(\varepsilon) + o_{\text{a.s.}}(1), \quad\text{where } g_k(x) := \frac{|x|^k}{\|N\|_k^k} - 1. \tag{10}
\]

At this point we can propose estimators of H and σ by observing X_ε(u) at several scales of the parameter ε, i.e. h_i = εc_i, c_i > 0, i = 1, …, ℓ. To this end, let us define

\[
M_k(\varepsilon) :=
\begin{cases}
\displaystyle\int_0^1 \big|\ddot X_\varepsilon(u)\big|^k\,du & \text{for the first two models,}\\[2ex]
\displaystyle\int_0^1 \Big|\frac{\ddot X_\varepsilon(u)}{X_\varepsilon(u)}\Big|^k\,du & \text{for the other two.}
\end{cases} \tag{11}
\]


Using assertion (1) of Theorem 3.1, we get

\[
\frac{\varepsilon^{k(2-H)}\,M_k(\varepsilon)}{\sigma_{2H}^k\,\sigma^k\,\|N\|_k^k} \;\xrightarrow[\varepsilon\to 0]{\text{a.s.}}\; 1,
\]

from which we obtain

\[
\log M_k(\varepsilon) = k(H-2)\log(\varepsilon) + \log\big(\sigma_{2H}^k\,\sigma^k\,\|N\|_k^k\big) + o_{\text{a.s.}}(1). \tag{12}
\]

The following regression model can be written for each scale h_i:

\[
Y_i = a_k X_i + b_k + \xi_i, \qquad i = 1,\dots,\ell,
\]

where a_k := k(H−2), b_k := log(σ_{2H}^k σ^k ‖N‖_k^k) and, for i = 1, …, ℓ, Y_i := log(M_k(h_i)), X_i := log(h_i). Hence the least squares estimators Ĥ_k of H and B̂_k of b_k are defined by

\[
k(\hat H_k - 2) := \sum_{i=1}^{\ell} z_i \log\big(M_k(h_i)\big) \tag{13}
\]

and

\[
\hat B_k := \frac{1}{\ell}\sum_{i=1}^{\ell} \log\big(M_k(h_i)\big) - k(\hat H_k - 2)\,\frac{1}{\ell}\sum_{i=1}^{\ell} \log(h_i), \tag{14}
\]

where

\[
z_i := \frac{y_i}{\sum_{i=1}^{\ell} y_i^2} \quad\text{and}\quad y_i := \log(c_i) - \frac{1}{\ell}\sum_{i=1}^{\ell} \log(c_i).
\]

Note the following property:

\[
\sum_{i=1}^{\ell} z_i = 0 \quad\text{and}\quad \sum_{i=1}^{\ell} z_i y_i = 1. \tag{15}
\]

Then we propose σ_{2Ĥ_k}^k as estimator of σ_{2H}^k, and

\[
\widehat{\sigma^k} := \frac{\exp(\hat B_k)}{\sigma_{2\hat H_k}^k\,\|N\|_k^k} \tag{16}
\]

as estimator of σ^k. Finally, we propose as estimator of σ

\[
\hat\sigma_k := \big(\widehat{\sigma^k}\big)^{1/k}. \tag{17}
\]
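The whole regression procedure (11)–(17) can be sketched numerically for model (4) and k = 2, replacing Ẍ_ε by the second-difference approximation discussed in the Introduction. With that scheme the normalizing constant σ²_{2H} becomes 4 − 2^{2H} (a property of the double difference of standard fBm, not the kernel constant of Section 2); every numerical choice below is an illustrative assumption.

```python
import numpy as np

def fbm_cholesky(n, H, seed=0):
    """Standard fBm on t = i/n by Cholesky factorization of its covariance."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

# model (4): X(t) = sigma*b_H(t) + mu*t + c; the linear drift cancels exactly
# in second differences
H, sigma, mu, c, n, k = 0.7, 0.5, 0.3, 1.0, 2500, 2
t = np.arange(n + 1) / n
X = sigma * fbm_cholesky(n, H, seed=3) + mu * t + c

# M_k(h_i) at scales h_i = m/n, with h^2 * Xddot ~ X(u+2h) - 2X(u+h) + X(u)
ms = np.array([2, 4, 8, 16])
hs = ms / n
logM = np.array([np.log(np.mean(np.abs((X[2*m:] - 2*X[m:-m] + X[:-2*m])
                                        / (m / n)**2)**k)) for m in ms])

# weights z_i of (13)-(15): y_i = log c_i - mean log c_i, z_i = y_i / sum y_j^2
y = np.log(ms) - np.log(ms).mean()
z = y / np.sum(y**2)
H_hat = 2.0 + np.sum(z * logM) / k          # regression slope is k(H - 2)
B_hat = logM.mean() - k * (H_hat - 2.0) * np.log(hs).mean()

# invert b_2 = log(sigma^2 (4 - 2^(2H))), with H replaced by H_hat (||N||_2 = 1)
sigma_hat = np.sqrt(np.exp(B_hat) / (4.0 - 2.0**(2.0 * H_hat)))
print(f"H_hat = {H_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```

Consistent with Theorem 3.2, the estimate of σ is noticeably rougher than that of H: the intercept error is amplified by the log(ε) factor in the rate.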

Theorem 3.1 and Theorem 2.2 imply the following theorem for the whole range of H in ]1/2, 1[.

Theorem 3.2. For k ≥ 1,

(1) Ĥ_k is a strongly consistent estimator of H and

\[
\frac{1}{\sqrt\varepsilon}\,(\hat H_k - H) \;\underset{\varepsilon\to 0}{\Longrightarrow}\; N\Big(0,\; \sigma_{g_k,\ell}^2\Big(\mathbf c,\; \frac{\sqrt{\mathbf c}\,\mathbf z}{k}\Big)\Big),
\]

where g_k(·) is defined by (10) and

\[
g_k(x) = \sum_{n=1}^{\infty} \hat g_{2n,k}\,H_{2n}(x), \quad\text{with } \hat g_{2n,k} = \frac{1}{(2n)!}\prod_{i=0}^{n-1}(k-2i).
\]

(2) σ̂_k is a weakly consistent estimator of σ and

\[
\frac{1}{\sqrt\varepsilon\,\log(\varepsilon)}\,(\hat\sigma_k - \sigma) \;\underset{\varepsilon\to 0}{\Longrightarrow}\; N\Big(0,\; \sigma^2\,\sigma_{g_k,\ell}^2\Big(\mathbf c,\; \frac{\sqrt{\mathbf c}\,\mathbf z}{k}\Big)\Big).
\]

Here √c z/k denotes the vector with components √c_i z_i/k.

Remark. As in [3], the variance σ²_{g_k,ℓ}(c, √c z/k) is minimal for k = 2, so the best estimators of H and σ in the sense of minimal variance are obtained for k = 2.

Now let us suppose that H, 1/2 < H < 1, is known. Theorem 3.1 also provides estimators of σ. Indeed, if for k ≥ 1 we set

\[
\tilde\sigma_k := \frac{\|Z_X^\varepsilon(\cdot)\|_k}{\|N\|_k}
\]

(see (8) for the definition of Z_X^ε(·)), then Theorem 3.1 and the remark following Theorem 2.2 imply the following theorem.

Theorem 3.3. For k ≥ 1 and H known with 1/2 < H < 1:

(1) σ̃_k is a strongly consistent estimator of σ, and

(2)

\[
\frac{1}{\sqrt\varepsilon}\,(\tilde\sigma_k - \sigma) \;\underset{\varepsilon\to 0}{\Longrightarrow}\; N\Big(0,\; \frac{\sigma^2}{k^2}\,\sigma_{g_k}^2\Big),
\]

where g_k(·) is defined by (10).

Remark 1. Note that the rate of convergence in assertion (2) is 1/√ε instead of 1/(√ε log(ε)) as in assertion (2) of Theorem 3.2. This is due to the fact that here H is known.

Remark 2. The variance σ²_{g_k}/k² is minimal for k = 2, so the best estimator of σ in the sense of minimal variance is obtained for k = 2.

Proof of Theorem 3.1.

(1) We need the following lemma, whose proof is given in the Appendix.

Lemma 3.1. For 0 ≤ t ≤ 1,

\[
\ddot X_\varepsilon(t) =
\begin{cases}
\sigma\,\ddot b_H^\varepsilon(t) + a_\varepsilon(t) & \text{for the first two models,}\\
\sigma\,X_\varepsilon(t)\,\ddot b_H^\varepsilon(t) + X_\varepsilon(t)\,a_\varepsilon(t) & \text{for the other two,}
\end{cases}
\]

with

\[
|a_\varepsilon(t)| \le C(\omega)\big(\varepsilon^{H-2-\delta}\,\mathbf 1_{\{0\le t\le\varepsilon\}} + \varepsilon^{2H-2-\delta}\,\mathbf 1_{\{\varepsilon\le t\le 1\}}\big), \quad\text{for any } \delta > 0.
\]

Remark. Indeed, for the first model one has

\[
|a_\varepsilon(t)| \le C(\omega)\,\varepsilon^{H-2-\delta}\,\mathbf 1_{\{0\le t\le\varepsilon\}},
\]

and for the second one,

\[
|a_\varepsilon(t)| \le C(\omega)\big(\varepsilon^{H-2-\delta}\,\mathbf 1_{\{0\le t\le\varepsilon\}} + \varepsilon^{H-1-\delta}\,\mathbf 1_{\{\varepsilon\le t\le 1\}}\big).
\]

We have to prove that, almost surely, ‖Z_X^ε(·)‖_k converges to σ‖N‖_k when ε goes to zero. For this, we write ‖Z_X^ε(·)‖_k as

\[
\|Z_X^\varepsilon(\cdot)\|_k = \sigma\|Z_\varepsilon(\cdot)\|_k + \big(\|Z_X^\varepsilon(\cdot)\|_k - \sigma\|Z_\varepsilon(\cdot)\|_k\big).
\]

By the remark following Theorem 2.1, we know that ‖Z_ε(·)‖_k converges almost surely to ‖N‖_k when ε goes to zero. Thus we just have to prove that |‖Z_X^ε(·)‖_k − σ‖Z_ε(·)‖_k| converges almost surely to zero with ε. By Lemma 3.1 and Minkowski's inequality, one has

\[
\big|\|Z_X^\varepsilon(\cdot)\|_k - \sigma\|Z_\varepsilon(\cdot)\|_k\big| \le \|Z_X^\varepsilon(\cdot) - \sigma Z_\varepsilon(\cdot)\|_k = \frac{\varepsilon^{2-H}}{\sigma_{2H}}\,\|a_\varepsilon(\cdot)\|_k \le C(\omega)\big(\varepsilon^{1/k-\delta} + \varepsilon^{H-\delta}\big).
\]

Choosing δ small enough, i.e. 0 < δ < min(1/k, H), the last term in the above inequality tends almost surely to zero with ε, and assertion (1) follows.

(2) We write

\[
\frac{1}{\sqrt\varepsilon}\,A_k(\varepsilon) = S_{g_k}(\varepsilon) + \frac{1}{\sqrt\varepsilon\,\|N\|_k^k\,\sigma^k}\big(\|Z_X^\varepsilon(\cdot)\|_k^k - \|\sigma Z_\varepsilon(\cdot)\|_k^k\big).
\]

Let us now prove that ‖Z_X^ε(·)‖_k^k − ‖σZ_ε(·)‖_k^k = o_{a.s.}(√ε). Using the bound

\[
\big||x+y|^k - |x|^k\big| \le 2^{k-1}\,k\,|y|\big(|x|^{k-1} + |y|^{k-1}\big), \qquad k \ge 1,
\]

and Hölder's inequality, we obtain

\[
\big|\|f\|_k^k - \|g\|_k^k\big| \le \big\||f(\cdot)|^k - |g(\cdot)|^k\big\|_1 \le 2^{k-1}\,k\,\|f-g\|_k\big(\|g\|_k^{k-1} + \|f-g\|_k^{k-1}\big). \tag{18}
\]

Let us apply this inequality to f(·) := Z_X^ε(·) and g(·) := σZ_ε(·), successively with the norm ‖·‖_{k,ε} and the norm ‖·‖_{k,ε^c}.

On the one hand, applying Lemma 3.1, one obtains ‖Z_X^ε(·) − σZ_ε(·)‖_{k,ε} ≤ C(ω)ε^{1/k−δ} and ‖Z_X^ε(·) − σZ_ε(·)‖_{k,ε^c} ≤ C(ω)ε^{H−δ}.

On the other hand, the trajectories of b_H(·) are (H−δ)-Hölder continuous; in other words, for any δ > 0,

\[
|b_H(u+\varepsilon) - b_H(u)| \le C(\omega)\,\varepsilon^{H-\delta}. \tag{19}
\]

Using this fact, we get

\[
|\ddot b_H^\varepsilon(u)| = \Big|\frac{1}{\varepsilon^2}\int_{-\infty}^{+\infty} \ddot\varphi(v)\big(b_H(u-\varepsilon v) - b_H(u)\big)\,dv\Big| \le C(\omega)\,\varepsilon^{H-2-\delta}. \tag{20}
\]

We deduce ‖Z_ε(·)‖_{k,ε} ≤ C(ω)ε^{1/k−δ} and ‖Z_ε(·)‖_{k,ε^c} ≤ C(ω)ε^{−δ}.

Finally, taking δ small enough, i.e. 0 < δ < (1/k)(H − 1/2), we proved that

\[
\big|\|Z_X^\varepsilon(\cdot)\|_k^k - \|\sigma Z_\varepsilon(\cdot)\|_k^k\big| \le C(\omega)\big(\varepsilon^{1-\delta k} + \varepsilon^{H-\delta k} + \varepsilon^{k(H-\delta)}\big) \le C(\omega)\,\varepsilon^{H-\delta k} = o_{\text{a.s.}}(\sqrt\varepsilon),
\]

and assertion (2) follows.

Proof of Theorem 3.2.

(1) By using (13) and (12) we obtain

\[
k(\hat H_k - 2) = \sum_{i=1}^{\ell} z_i\big[k(H-2)\log(\varepsilon c_i) + b_k\big] + o_{\text{a.s.}}(1),
\]

and property (15) gives

\[
k(\hat H_k - 2) = k(H-2) + o_{\text{a.s.}}(1).
\]

We proved that Ĥ_k is a strongly consistent estimator of H.

Now, by using the definitions of A_k(ε) and M_k(ε) (see (9) and (11)), one obtains

\[
A_k(\varepsilon) = \varepsilon^{k(2-H)}\exp(-b_k)\,M_k(\varepsilon) - 1.
\]

With this definition and using a Taylor expansion of the logarithm function, one has

\[
\log M_k(\varepsilon) = \log\big(\varepsilon^{k(H-2)}\exp(b_k)\big) + \log\big(1 + A_k(\varepsilon)\big) = k(H-2)\log(\varepsilon) + b_k + A_k(\varepsilon) + A_k^2(\varepsilon)\Big(-\frac12 + \varepsilon_1\big(A_k(\varepsilon)\big)\Big), \tag{21}
\]

where ε₁(x) → 0 as x → 0. Let us see that

\[
A_k^2(\varepsilon)\Big(-\frac12 + \varepsilon_1\big(A_k(\varepsilon)\big)\Big) = o_P(\sqrt\varepsilon). \tag{22}
\]

By assertion (2) of Theorem 3.1 we know that

\[
\frac{1}{\sqrt\varepsilon}\,A_k(\varepsilon) = S_{g_k}(\varepsilon) + o_{\text{a.s.}}(1), \tag{23}
\]

where g_k(·) is defined by (10), and by Lemma 2.1,

\[
E\big[S_{g_k}^2(\varepsilon)\big] = O(1), \tag{24}
\]

so (1/√ε)A_k²(ε) = o_P(1), and (22) is proved.

By using (21)–(23) we obtain

\[
\log M_k(\varepsilon) = k(H-2)\log(\varepsilon) + b_k + \sqrt\varepsilon\,S_{g_k}(\varepsilon) + o_P(\sqrt\varepsilon). \tag{25}
\]

Thus (13), (25) and property (15) entail that

\[
k(\hat H_k - 2) = k(H-2) + \sum_{i=1}^{\ell} z_i\,\sqrt{\varepsilon c_i}\,S_{g_k}(\varepsilon c_i) + o_P(\sqrt\varepsilon).
\]

Then

\[
\frac{\hat H_k - H}{\sqrt\varepsilon} = \frac{1}{k}\sum_{i=1}^{\ell} z_i\,\sqrt{c_i}\,S_{g_k}(\varepsilon c_i) + o_P(1).
\]

Theorem 2.2 gives the required result (the computation of the coefficients in the Hermite expansion of the function g_k(·) is explicitly made in the proof of Corollary 3.2 of [3]).

(2) Let us see that B̂_k is a weakly consistent estimator of b_k. By using (14) and (25), one has

\[
\hat B_k - b_k = k(H - \hat H_k)\,\frac{1}{\ell}\sum_{i=1}^{\ell}\log(h_i) + \sqrt\varepsilon\,\frac{1}{\ell}\sum_{i=1}^{\ell}\sqrt{c_i}\,S_{g_k}(\varepsilon c_i) + \sqrt\varepsilon\,o_P(1).
\]

Thus

\[
\hat B_k - b_k = k\log(\varepsilon)(H - \hat H_k) + k(H - \hat H_k)\,\frac{1}{\ell}\sum_{i=1}^{\ell}\log(c_i) + \sqrt\varepsilon\,\frac{1}{\ell}\sum_{i=1}^{\ell}\sqrt{c_i}\,S_{g_k}(\varepsilon c_i) + \sqrt\varepsilon\,o_P(1).
\]

Using assertion (1) of Theorem 3.2 and (24), we obtain

\[
\frac{\hat B_k - b_k}{\sqrt\varepsilon\,\log(\varepsilon)} = k\,\frac{H - \hat H_k}{\sqrt\varepsilon} + o_P(1), \tag{26}
\]

and then, using again assertion (1) of Theorem 3.2, we proved that B̂_k is a weakly consistent estimator of b_k.

Now, using a first-order Taylor expansion of the exponential function, equality (26), the fact that B̂_k is a weakly consistent estimator of b_k and assertion (1) of Theorem 3.2, we finally get

\[
\frac{\exp(\hat B_k) - \exp(b_k)}{\sqrt\varepsilon\,\log(\varepsilon)} = k\exp(b_k)\,\frac{H - \hat H_k}{\sqrt\varepsilon} + o_P(1). \tag{27}
\]

Thus, if we go back to the definition of \widehat{σ^k} (see (16)) and use the last equality (27), we get

\[
\frac{\widehat{\sigma^k} - \sigma^k}{\sqrt\varepsilon\,\log(\varepsilon)} = \frac{\sigma^k\exp(-b_k)}{\sqrt\varepsilon\,\log(\varepsilon)}\left[\big(\exp(\hat B_k) - \exp(b_k)\big) + \exp(\hat B_k)\Big(\frac{1}{\sigma_{2\hat H_k}^k} - \frac{1}{\sigma_{2H}^k}\Big)\sigma_{2H}^k\right]
\]
\[
= k\sigma^k\,\frac{H - \hat H_k}{\sqrt\varepsilon} + \Big(\frac{\sigma}{\sigma_{2\hat H_k}}\Big)^{\!k}\exp(-b_k)\exp(\hat B_k)\,\frac{\sigma_{2H}^k - \sigma_{2\hat H_k}^k}{\sqrt\varepsilon\,\log(\varepsilon)} + o_P(1).
\]

At this step of the proof we are going to show that

\[
\frac{\widehat{\sigma^k} - \sigma^k}{\sqrt\varepsilon\,\log(\varepsilon)} = k\sigma^k\,\frac{H - \hat H_k}{\sqrt\varepsilon} + o_P(1). \tag{28}
\]

Using the fact that B̂_k is a weakly consistent estimator of b_k, it is enough to prove the convergence

\[
\frac{\sigma_{2H}^k - \sigma_{2\hat H_k}^k}{\sqrt\varepsilon\,\log(\varepsilon)} \;\xrightarrow[\varepsilon\to 0]{P}\; 0,
\]

which is the same as showing

\[
\frac{\sigma_{2H}^2 - \sigma_{2\hat H_k}^2}{\sqrt\varepsilon\,\log(\varepsilon)} \;\xrightarrow[\varepsilon\to 0]{P}\; 0. \tag{29}
\]


We write

\[
\frac{\sigma_{2H}^2 - \sigma_{2\hat H_k}^2}{\sqrt\varepsilon\,\log(\varepsilon)} = \frac{1}{2\pi\sqrt\varepsilon\,\log(\varepsilon)}\int_{-\infty}^{+\infty} |x|^{3-2H}\,|\hat\varphi(-x)|^2\Big(1 - \exp\big(2(H - \hat H_k)\log|x|\big)\Big)\,dx.
\]

Making a Taylor expansion of the exponential function, we obtain

\[
\frac{\sigma_{2H}^2 - \sigma_{2\hat H_k}^2}{\sqrt\varepsilon\,\log(\varepsilon)} = \frac{\hat H_k - H}{\pi\sqrt\varepsilon\,\log(\varepsilon)}\int_{-\infty}^{+\infty} |x|^{3-2H}\,|\hat\varphi(-x)|^2\,\log|x|\,\exp\big(\theta_\varepsilon(x)\big)\,dx,
\]

where θ_ε(x) is a point between 0 and 2(H − Ĥ_k)log(|x|). By using assertion (1) of Theorem 3.2 and the inequality

\[
\exp\big(\theta_\varepsilon(x)\big) \le \exp\big(2|H - \hat H_k|\,\big|\log|x|\big|\big),
\]

we will get the convergence in (29) by showing the following convergence result:

\[
\int_{-\infty}^{+\infty} |x|^{3-2H}\,|\hat\varphi(-x)|^2\,\big|\log|x|\big|\,\exp\big(2|H - \hat H_k|\,\big|\log|x|\big|\big)\,dx \;\xrightarrow[\varepsilon\to 0]{\text{a.s.}}\; \int_{-\infty}^{+\infty} |x|^{3-2H}\,|\hat\varphi(-x)|^2\,\big|\log|x|\big|\,dx < +\infty. \tag{30}
\]

To prove the convergence in (30), we use the fact that H − Ĥ_k = o_{a.s.}(1), so that for x ≠ 0,

\[
|x|^{3-2H}\,|\hat\varphi(-x)|^2\,\big|\log|x|\big|\,\exp\big(2|H - \hat H_k|\,\big|\log|x|\big|\big) \;\xrightarrow[\varepsilon\to 0]{\text{a.s.}}\; |x|^{3-2H}\,|\hat\varphi(-x)|^2\,\big|\log|x|\big|.
\]

Now let 0 < δ < min(2H, 4 − 2H); then, almost surely for all ω, there exists ε(ω) such that 2|H − Ĥ_k(ω)| ≤ δ when ε ≤ ε(ω). Furthermore, using the fact that φ is a density, one has |φ̂(x)|² ≤ 1, and, for x ≠ 0, |φ̂(x)|² ≤ C|x|^{−4}. Then, almost surely for all ω, for all ε ≤ ε(ω) and x ≠ 0, one obtains

\[
|x|^{3-2H}\,|\hat\varphi(-x)|^2\,\big|\log|x|\big|\,\exp\big(2|H - \hat H_k|\,\big|\log|x|\big|\big) \le C\Big(|x|^{3-2(H+\delta/2)}\,\big|\log|x|\big|\,\mathbf 1_{\{|x|\le 1\}} + |x|^{-1-2(H-\delta/2)}\,\big|\log|x|\big|\,\mathbf 1_{\{|x|>1\}}\Big).
\]

Since H − δ/2 > 0 and 4 − 2(H + δ/2) > 0, we can apply Lebesgue's dominated convergence theorem to obtain the convergence in (30); thus we proved the convergence in (29), and equality (28) follows.

Now, if we remark that the asymptotic behavior of (σ̂_k/σ − 1) (see (17) for the definition of σ̂_k) is the same as that of (\widehat{σ^k}/σ^k − 1)/k, then by (28) and assertion (1) of Theorem 3.2 the proof of assertion (2) is complete.

Remark. The last step of the proof shows that the asymptotic law of ((Ĥ_k − H)/√ε, (σ̂_k − σ)/(√ε log(ε))) is a degenerate Gaussian.

Proof of Theorem 3.3.

(1) Assertion (1) follows from assertion (1) of Theorem 3.1.

(2) Assertion (2) of Theorem 3.1 and the remark following Theorem 2.2 imply that (1/√ε)([σ̃_k/σ]^k − 1) converges weakly to σ_{g_k} N(0, 1), which yields assertion (2).

Remark 2 follows from the fact that, since ĝ_{2,k} = k/2, one has

\[
\frac{\sigma_{g_k}^2}{k^2} = \frac{1}{k^2}\sum_{n=1}^{\infty} \hat g_{2n,k}^2\,(2n)! \int_{-\infty}^{+\infty} \rho_H^{2n}(x)\,dx \;\ge\; \frac{2}{k^2}\,\hat g_{2,k}^2 \int_{-\infty}^{+\infty} \rho_H^2(x)\,dx = \frac12\int_{-\infty}^{+\infty} \rho_H^2(x)\,dx = \frac{\sigma_{g_2}^2}{4}.
\]


3.2. Functional estimation of σ(·)

Under certain regularity conditions on μ(·) and σ(·), we consider the "pseudo-diffusion" equations (1) with respect to b_H(·), that is,

\[
X(t) = c + \int_0^t \sigma(X(u))\,db_H(u) + \int_0^t \mu(X(u))\,du,
\]

for t ≥ 0, H > 1/2 and positive σ(·). We consider the problem of estimating σ(·). Suppose we observe, as before, instead of X(t) the smoothed-by-convolution process X_ε(t) = (1/ε)∫_{−∞}^{+∞} φ((t−x)/ε) X(x) dx, with φ(·) as in Section 2, where we have extended X(·) by setting X(t) = c for t < 0.

In a previous paper [2], in the case where μ(·) ≡ 0, estimation of σ(·) was carried out using the first increments of X(·) or, more generally, the first derivative of X_ε(·). Namely, we proved the two following theorems.

Theorem 3.4. Let 1/2 < H < 1. If h(·) ∈ C⁰ and σ(·) ∈ C¹, then

\[
\sqrt{\frac{\pi}{2}}\,\frac{\varepsilon^{1-H}}{\tilde\sigma_{2H}}\int_0^1 h(X_\varepsilon(u))\,|\dot X_\varepsilon(u)|\,du \;\xrightarrow[\varepsilon\to 0]{\text{a.s.}}\; \int_0^1 h(X(u))\,\sigma(X(u))\,du,
\]

where σ̃_{2H} is defined by

\[
\tilde\sigma_{2H}^2 := \operatorname{Var}\big(\varepsilon^{1-H}\,\dot b_H^\varepsilon(t)\big) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} |x|^{1-2H}\,|\hat\varphi(-x)|^2\,dx.
\]

The rate of convergence is given by Theorem 3.5.

Theorem 3.5. Let us suppose that 1/2 < H < 3/4, h(·) ∈ C⁴, σ(·) ∈ C⁴, σ′(·) is bounded and sup{|σ⁽⁴⁾(x)|, |h⁽⁴⁾(x)|} ≤ P(|x|), where P(·) is a polynomial. Then

\[
\frac{1}{\sqrt\varepsilon}\left[\sqrt{\frac{\pi}{2}}\,\frac{\varepsilon^{1-H}}{\tilde\sigma_{2H}}\int_0^1 h(X_\varepsilon(u))\,|\dot X_\varepsilon(u)|\,du - \int_0^1 h(X(u))\,\sigma(X(u))\,du\right]
\]

converges stably towards

\[
\sigma_{g_1}\int_0^1 h(X(u))\,\sigma(X(u))\,dW(u).
\]

Here W(·) is a standard Brownian motion independent of b_H(·), and g₁(x) = √(π/2)|x| − 1.

We give an outline of the proof of the last two theorems in order to generalize these results to our setting, considering the case where μ(·) is not necessarily null. Indeed, because b_H(·) has zero quadratic variation when H > 1/2, Lin (see [8]) proved that, when σ(·) ∈ C¹ and μ(·) ≡ 0, the solution to the stochastic differential equation (1) can be expressed as X(t) = K(b_H(t)) for t ≥ 0, where K(t) is the solution to the ordinary differential equation (ODE)

\[
\dot K(t) = \sigma\big(K(t)\big), \qquad K(0) = c \tag{31}
\]

(for t < 0, X(t) = c).

Heuristic proof of Theorem 3.4. We have shown in [2] that

\[
\sqrt{\frac{\pi}{2}}\,\frac{\varepsilon^{1-H}}{\tilde\sigma_{2H}}\int_0^1 h(X_\varepsilon(u))\,|\dot X_\varepsilon(u)|\,du \;\simeq\; \sqrt{\frac{\pi}{2}}\,\frac{\varepsilon^{1-H}}{\tilde\sigma_{2H}}\int_0^1 h\big(K(b_H^\varepsilon(u))\big)\,\sigma\big(K(b_H^\varepsilon(u))\big)\,|\dot b_H^\varepsilon(u)|\,du,
\]
