
Annales de l'Institut Henri Poincaré - Probabilités et Statistiques, 2013, Vol. 49, No. 1, 252–269. www.imstat.org/aihp

DOI: 10.1214/12-AIHP492

© Association des Publications de l’Institut Henri Poincaré, 2013

Characterizations of processes with stationary and independent increments under G-expectation¹

Yongsheng Song

Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China. E-mail: yssong@amss.ac.cn

Received 26 October 2010; revised 8 April 2012; accepted 17 April 2012

Abstract. Our purpose is to investigate properties for processes with stationary and independent increments under G-expectation. As applications, we prove the martingale characterization of G-Brownian motion and present a pathwise decomposition theorem for generalized G-Brownian motion.

Résumé. Notre but est d'étudier des propriétés de processus à accroissements stationnaires et indépendants sous une G-espérance. Comme application, nous démontrons la caractérisation de la martingale de G-mouvement Brownien et fournissons un théorème de décomposition trajectorielle pour le G-mouvement Brownien généralisé.

MSC: 60G10; 60G17; 60G48; 60G51

Keywords: Stationary increments; Independent increments; Martingale characterization; Decomposition theorem; G-Brownian motion; G-expectation

1. Introduction

Recently, motivated by the modelling of dynamic risk measures, Shige Peng ([3–5]) introduced the notion of a G-expectation space. It is a generalization of probability spaces (with their associated linear expectation) to spaces endowed with a nonlinear expectation. As the counterpart of Wiener space in the linear case, the notion of G-Brownian motion was introduced under the nonlinear G-expectation.

Recall that if $\{A_t\}$ is a continuous process over a probability space $(\Omega, \mathcal{F}, P)$ with stationary, independent increments and finite variation, then there exists some constant $c$ such that $A_t = ct$. However, it is not the case in the G-expectation space $(\Omega_T, L_G^1(\Omega_T), \hat{E})$. A counterexample is $\{\langle B\rangle_t\}$, the quadratic variation process of the coordinate process $\{B_t\}$, which is a G-Brownian motion. We know that $\{\langle B\rangle_t\}$ is a continuous, increasing process with stationary and independent increments, but it is not deterministic.

The process $\{\langle B\rangle_t\}$ is very important in the theory of G-expectation; it shows, in many respects, the difference between probability spaces and G-expectation spaces. For example, we know that on a probability space continuous local martingales with finite variation are trivial processes. However, [4] proved that in a G-expectation space all processes of the form $\int_0^t \eta_s\,\mathrm{d}\langle B\rangle_s - \int_0^t 2G(\eta_s)\,\mathrm{d}s$, $\eta \in M_G^1(0, T)$ (see Section 2 for the definitions of the function $G(\cdot)$ and the space $M_G^1(0, T)$), are nontrivial G-martingales with finite variation (in fact, they are even nonincreasing) and continuous paths. [4] also conjectured that any G-martingale with finite variation should have such a representation.

¹Supported by NCMIS; Youth Grant of National Science Foundation (No. 11101406/A011002); the National Basic Research Program of China (973 Program) (No. 2007CB814902); Key Lab of Random Complex Structures and Data Science, Chinese Academy of Sciences (Grant No. 2008DP173182).


Up to now, some properties of the process $\{\langle B\rangle_t\}$ remain unknown. For example, we know that, if $G(x) = \frac{1}{2}\sup_{\underline{\sigma} \le \sigma \le \bar{\sigma}} \sigma^2 x$ generates the G-expectation, then $\underline{\sigma}^2(t - s) \le \langle B\rangle_t - \langle B\rangle_s \le \bar{\sigma}^2(t - s)$ for all $s < t$, but we do not know whether $\{\mathrm{d}\langle B\rangle_s/\mathrm{d}s\}$ belongs to $M_G^1(0, T)$. This is a very important question, since $\{\mathrm{d}\langle B\rangle_s/\mathrm{d}s\} \in M_G^1(0, T)$ would imply that the representation of G-martingales with finite variation mentioned above is not unique.

For the case of a probability space, a continuous local martingale $\{M_t\}$ is a standard Brownian motion if and only if its quadratic variation process satisfies $\langle M\rangle_t = t$. However, this is not the case for G-Brownian motion, since its quadratic variation process is only an increasing process with stationary and independent increments. How can we give a characterization of G-Brownian motion?

In this article, we shall prove that if $A_t = \int_0^t h_s\,\mathrm{d}s$ (respectively $A_t = \int_0^t h_s\,\mathrm{d}\langle B\rangle_s$) is a process with stationary, independent increments and $h \in M_G^1(0, T)$ (respectively $h \in M_G^{\beta,+}(0, T)$ for some $\beta > 1$), then there exists some constant $c$ such that $h \equiv c$. As applications, we prove the following conclusions (Questions 1 and 3 were put forward by Prof. Shige Peng in private communications):

1. $\{\mathrm{d}\langle B\rangle_s/\mathrm{d}s\} \notin M_G^1(0, T)$.

2. (Martingale characterization)

A symmetric G-martingale $\{M_t\}$ is a G-Brownian motion if and only if its quadratic variation process $\{\langle M\rangle_t\}$ has stationary and independent increments;

A symmetric G-martingale $\{M_t\}$ is a G-Brownian motion if and only if its quadratic variation process satisfies $\langle M\rangle_t = c\langle B\rangle_t$ for some $c \ge 0$.

The sufficiency of the second assertion is trivial, but not the necessity.

3. Let $\{X_t\}$ be a generalized G-Brownian motion with zero mean; then we have the following decomposition:

$X_t = M_t + L_t,$

where $\{M_t\}$ is a (symmetric) G-Brownian motion, and $\{L_t\}$ is a nonpositive, nonincreasing G-martingale with stationary and independent increments.

This article is organized as follows: In Section 2 we recall some basic notions and results of G-expectation and the related space of random variables. In Section 3 we characterize processes with stationary and independent increments. In Section 4, as application, we prove the martingale characterization of G-Brownian motion and present a decomposition theorem for generalized G-Brownian motion. In Section 5 we present some properties of G-martingales with finite variation.

2. Preliminary

We recall some basic notions and results of G-expectation and the related space of random variables. More details of this section can be found in [3–8].

Definition 2.1. Let $\Omega$ be a given set and let $\mathcal{H}$ be a vector lattice of real valued functions defined on $\Omega$ with $c \in \mathcal{H}$ for all constants $c$. $\mathcal{H}$ is considered as the space of "random variables." A sublinear expectation $\hat{E}$ on $\mathcal{H}$ is a functional $\hat{E}: \mathcal{H} \to R$ satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have

(a) Monotonicity: if $X \ge Y$ then $\hat{E}(X) \ge \hat{E}(Y)$.

(b) Constant preserving: $\hat{E}(c) = c$.

(c) Sub-additivity: $\hat{E}(X) - \hat{E}(Y) \le \hat{E}(X - Y)$.

(d) Positive homogeneity: $\hat{E}(\lambda X) = \lambda\hat{E}(X)$, $\lambda \ge 0$.

$(\Omega, \mathcal{H}, \hat{E})$ is called a sublinear expectation space.
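As a simple illustrative example (a special case of the representation recalled in Theorem 2.12 below): given any nonempty family $\{E_P\}_{P\in\mathcal{Q}}$ of linear expectations on $\mathcal{H}$ (linear, monotone, constant preserving functionals), the functional

$\hat{E}(X) := \sup_{P \in \mathcal{Q}} E_P(X), \qquad X \in \mathcal{H},$

satisfies (a)–(d); sub-additivity, for instance, follows from $E_P(X) = E_P(X - Y) + E_P(Y) \le \hat{E}(X - Y) + \hat{E}(Y)$ for every $P \in \mathcal{Q}$, and the other properties are preserved when passing to the supremum.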

Definition 2.2. Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined respectively in sublinear expectation spaces $(\Omega_1, \mathcal{H}_1, \hat{E}_1)$ and $(\Omega_2, \mathcal{H}_2, \hat{E}_2)$. They are called identically distributed, denoted by $X_1 \sim X_2$, if $\hat{E}_1[\varphi(X_1)] = \hat{E}_2[\varphi(X_2)]$ for all $\varphi \in C_{l,Lip}(R^n)$, where $C_{l,Lip}(R^n)$ is the space of real continuous functions defined on $R^n$ such that

$|\varphi(x) - \varphi(y)| \le C(1 + |x|^k + |y|^k)|x - y|$ for all $x, y \in R^n$,

where $k$ and $C$ depend only on $\varphi$.

Definition 2.3. In a sublinear expectation space $(\Omega, \mathcal{H}, \hat{E})$ a random vector $Y = (Y_1, \ldots, Y_n)$, $Y_i \in \mathcal{H}$, is said to be independent of another random vector $X = (X_1, \ldots, X_m)$, $X_i \in \mathcal{H}$, under $\hat{E}(\cdot)$, denoted by $Y \perp X$, if for every test function $\varphi \in C_{b,Lip}(R^m \times R^n)$ we have $\hat{E}[\varphi(X, Y)] = \hat{E}[\hat{E}[\varphi(x, Y)]_{x=X}]$.

Definition 2.4 (G-normal distribution). A $d$-dimensional random vector $X = (X_1, \ldots, X_d)$ in a sublinear expectation space $(\Omega, \mathcal{H}, \hat{E})$ is called G-normal distributed if for every $a, b \in R^+$ we have

$aX + b\hat{X} \sim \sqrt{a^2 + b^2}\,X,$

where $\hat{X}$ is an independent copy of $X$. Here the letter $G$ denotes the function

$G(A) := \tfrac{1}{2}\hat{E}\bigl[(AX, X)\bigr]: S_d \to R,$

where $S_d$ denotes the collection of $d \times d$ symmetric matrices.

The function $G(\cdot): S_d \to R$ is a monotonic, sublinear mapping on $S_d$, and $G(A) = \tfrac{1}{2}\hat{E}[(AX, X)] \le \tfrac{1}{2}|A|\hat{E}[|X|^2] =: \tfrac{1}{2}|A|\bar{\sigma}^2$ implies that there exists a bounded, convex and closed subset $\Gamma \subset S_d^+$ such that

$G(A) = \frac{1}{2}\sup_{\gamma \in \Gamma} \mathrm{Tr}(\gamma A).$  (2.1)

If there exists some $\beta > 0$ such that $G(A) - G(B) \ge \beta\,\mathrm{Tr}(A - B)$ for any $A \ge B$, we call the G-normal distribution nondegenerate. This is the case we consider throughout this article.
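For instance, in the one-dimensional case $d = 1$ (the setting of Section 3), the set $\Gamma$ reduces to the interval $[\underline{\sigma}^2, \bar{\sigma}^2]$ and (2.1) becomes the explicit formula recalled in Remark 2.13(ii) below:

$G(a) = \frac{1}{2}\sup_{\underline{\sigma}^2 \le \gamma \le \bar{\sigma}^2} \gamma a = \frac{1}{2}\bigl(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-\bigr), \qquad a \in R.$

In particular $G(a) - G(b) \ge \tfrac{1}{2}\underline{\sigma}^2(a - b)$ for $a \ge b$, so this G-normal distribution is nondegenerate with $\beta = \underline{\sigma}^2/2$ whenever $\underline{\sigma} > 0$.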

Definition 2.5. (i) Let $\Omega_T = C_0([0, T]; R^d)$ be endowed with the supremum norm and let $\{B_t\}$ be the coordinate process. Set $\mathcal{H}_T^0 := \{\varphi(B_{t_1}, \ldots, B_{t_n}) \mid n \ge 1,\ t_1, \ldots, t_n \in [0, T],\ \varphi \in C_{l,Lip}(R^{d\times n})\}$. G-expectation is a sublinear expectation defined by

$\hat{E}[X] = \tilde{E}\bigl[\varphi\bigl(\sqrt{t_1 - t_0}\,\xi_1, \ldots, \sqrt{t_m - t_{m-1}}\,\xi_m\bigr)\bigr],$

for all $X = \varphi(B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \ldots, B_{t_m} - B_{t_{m-1}})$, where $\xi_1, \ldots, \xi_m$ are identically distributed $d$-dimensional G-normally distributed random vectors in a sublinear expectation space $(\tilde{\Omega}, \tilde{\mathcal{H}}, \tilde{E})$ such that $\xi_{i+1}$ is independent of $(\xi_1, \ldots, \xi_i)$ for every $i = 1, \ldots, m - 1$. $(\Omega_T, \mathcal{H}_T^0, \hat{E})$ is called a G-expectation space.

(ii) Let us define the conditional G-expectation $\hat{E}_t$ of $\xi \in \mathcal{H}_T^0$ knowing $\mathcal{H}_t^0$, for $t \in [0, T]$. Without loss of generality we can assume that $\xi$ has the representation $\xi = \varphi(B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \ldots, B_{t_m} - B_{t_{m-1}})$ with $t = t_i$, for some $1 \le i \le m$, and we put

$\hat{E}_{t_i}\bigl[\varphi(B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \ldots, B_{t_m} - B_{t_{m-1}})\bigr] = \tilde{\varphi}(B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \ldots, B_{t_i} - B_{t_{i-1}}),$

where

$\tilde{\varphi}(x_1, \ldots, x_i) = \hat{E}\bigl[\varphi(x_1, \ldots, x_i, B_{t_{i+1}} - B_{t_i}, \ldots, B_{t_m} - B_{t_{m-1}})\bigr].$

Define $\|\xi\|_{p,G} = [\hat{E}(|\xi|^p)]^{1/p}$ for $\xi \in \mathcal{H}_T^0$ and $p \ge 1$. Then for all $t \in [0, T]$, $\hat{E}_t(\cdot)$ is a continuous mapping on $\mathcal{H}_T^0$ with respect to the norm $\|\cdot\|_{1,G}$ and therefore can be extended continuously to the completion $L_G^1(\Omega_T)$ of $\mathcal{H}_T^0$ under the norm $\|\cdot\|_{1,G}$.

Let $Lip(\Omega_T) := \{\varphi(B_{t_1}, \ldots, B_{t_n}) \mid n \ge 1,\ t_1, \ldots, t_n \in [0, T],\ \varphi \in C_{b,Lip}(R^{d\times n})\}$, where $C_{b,Lip}(R^{d\times n})$ denotes the set of bounded Lipschitz functions on $R^{d\times n}$. [1] proved that the completions of $C_b(\Omega_T)$, $\mathcal{H}_T^0$ and $Lip(\Omega_T)$ under $\|\cdot\|_{p,G}$ are the same; we denote them by $L_G^p(\Omega_T)$.
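As a simple illustration of the conditional expectation defined in (ii), take $\xi = (B_t - B_s)^2$ with $s < t$: by the construction above and the stationarity and scaling of the increments,

$\hat{E}_s\bigl((B_t - B_s)^2\bigr) = \hat{E}\bigl((B_t - B_s)^2\bigr) = \bar{\sigma}^2(t - s), \qquad -\hat{E}_s\bigl(-(B_t - B_s)^2\bigr) = \underline{\sigma}^2(t - s),$

where $\bar{\sigma}^2 = \hat{E}(B_1^2)$ and $\underline{\sigma}^2 = -\hat{E}(-B_1^2)$ as in Remark 2.13 below. In particular, the conditional G-expectation of a function of a single future increment is deterministic; this is the property isolated in Remark 2.7(i) below and used repeatedly in Section 3.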


Definition 2.6. (i) We say that $\{X_t\}$ on $(\Omega_T, L_G^1(\Omega_T), \hat{E})$ is a process with independent increments if for any $0 < t < T$ and $s_0 \le \cdots \le s_m \le t \le t_0 \le \cdots \le t_n \le T$,

$(X_{t_1} - X_{t_0}, \ldots, X_{t_n} - X_{t_{n-1}}) \perp (X_{s_1} - X_{s_0}, \ldots, X_{s_m} - X_{s_{m-1}}).$

(ii) We say that $\{X_t\}$ on $(\Omega_T, L_G^1(\Omega_T), \hat{E})$ with $X_t \in L_G^1(\Omega_t)$ for every $t \in [0, T]$ is a process with independent increments w.r.t. the filtration if for any $0 < s < T$ and $s_0 \le \cdots \le s_m \le s \le t_0 \le \cdots \le t_n \le T$,

$(X_{t_1} - X_{t_0}, \ldots, X_{t_n} - X_{t_{n-1}}) \perp (B_{s_1} - B_{s_0}, \ldots, B_{s_m} - B_{s_{m-1}}).$

Remark 2.7. (i) Let $\xi \in L_G^1(\Omega_T)$. If there exists $s \in [0, T]$ such that $\xi \perp (B_{s_1} - B_{s_0}, \ldots, B_{s_m} - B_{s_{m-1}})$ for any $s_0 \le \cdots \le s_m \le s$, then we have $\hat{E}_s(\xi) = \hat{E}(\xi)$. In fact, without loss of generality, we may assume $\hat{E}(\xi) = 1$ and $C \ge \xi \ge \varepsilon$ for some $C, \varepsilon > 0$. Set $\eta = \hat{E}_s(\xi)$. For any $n \in N$, we have

$\hat{E}(\eta^{n+1}) = \hat{E}(\eta^n \xi).$

Since $\xi \perp \eta^n$, we have

$\hat{E}(\eta^{n+1}) = \hat{E}(\eta^n) = \cdots = \hat{E}(\eta) = 1.$

By this, we have $\eta \le 1$, q.s.

On the other hand, we have

$\hat{E}\bigl((\eta - 1)^2\bigr) = \hat{E}\bigl(\eta(\eta - 2)\bigr) + 1 = \hat{E}\bigl(\eta(\xi - 2)\bigr) + 1.$

Since $\xi - 2 \perp \eta$, we have

$\hat{E}\bigl((1 - \eta)^2\bigr) = \hat{E}(1 - \eta).$

By Theorem 2.12 below, there exists $P \in \mathcal{P}$ such that

$E_P\bigl((1 - \eta)^2\bigr) = \hat{E}\bigl((1 - \eta)^2\bigr).$

Noting that

$E_P(1 - \eta) \le \hat{E}(1 - \eta) = \hat{E}\bigl((1 - \eta)^2\bigr) = E_P\bigl((1 - \eta)^2\bigr) \le E_P(1 - \eta),$

we have

$E_P\bigl((1 - \eta)^2\bigr) = E_P(1 - \eta).$

By this, we have $\eta^2 = \eta$, $P$-a.s. Since $\eta \ge \varepsilon$, we have $\eta = 1$, $P$-a.s. So we have

$\hat{E}\bigl((1 - \eta)^2\bigr) = E_P\bigl((1 - \eta)^2\bigr) = 0.$

(ii) Let $\{X_t\}$ on $(\Omega_T, L_G^1(\Omega_T), \hat{E})$ be a process with stationary and independent increments and let $c = \hat{E}(X_T)/T$. If $\hat{E}(X_t) \to 0$ as $t \downarrow 0$, then for any $0 \le s < t \le T$ we have $\hat{E}(X_t - X_s) = c(t - s)$.
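A short justification of (ii), under the convention $X_0 = 0$ (as in Definition 2.8 below): set $g(u) := \hat{E}(X_{s+u} - X_s)$, which does not depend on $s$ by the stationarity of the increments. For $u, v \ge 0$ with $s + u + v \le T$,

$g(u + v) = \hat{E}\bigl((X_{s+u+v} - X_{s+u}) + (X_{s+u} - X_s)\bigr) = g(v) + g(u),$

since the later increment is independent of the earlier one and, for such an independent pair, $\hat{E}(X + Y) = \hat{E}(X) + \hat{E}(Y)$ by the definition of independence (Definition 2.3). Hence $g$ is additive, and the assumption $\hat{E}(X_t) \to 0$ as $t \downarrow 0$ gives continuity, so $g(u) = cu$ with $c = g(T)/T = \hat{E}(X_T)/T$.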


Definition 2.8. Let $\{X_t\}$ be a $d$-dimensional process defined on $(\Omega_T, L_G^1(\Omega_T), \hat{E})$ such that:

(i) $X_0 = 0$;

(ii) $\{X_t\}$ is a process with stationary and independent increments w.r.t. the filtration;

(iii) $\lim_{t \to 0} \hat{E}[|X_t|^3]\,t^{-1} = 0$.

Then $\{X_t\}$ is called a generalized G-Brownian motion.

If in addition $\hat{E}(X_t) = \hat{E}(-X_t) = 0$ for all $t \in [0, T]$, $\{X_t\}$ is called a (symmetric) G-Brownian motion.

Remark 2.9. (i) Clearly, the coordinate process $\{B_t\}$ is a (symmetric) G-Brownian motion and its quadratic variation process $\{\langle B\rangle_t\}$ is a process with stationary and independent increments (w.r.t. the filtration).

(ii) [4] gave a characterization of the generalized G-Brownian motion: let $\{X_t\}$ be a generalized G-Brownian motion. Then

$X_{t+s} - X_t \sim \sqrt{s}\,\xi + s\,\eta$ for $t, s \ge 0$,  (2.2)

where $(\xi, \eta)$ is G-distributed (see, e.g., [6] for the definition of G-distributed random vectors). In fact, this characterization presents a decomposition of the generalized G-Brownian motion in the sense of distribution. In this article, we shall give a pathwise decomposition of the generalized G-Brownian motion.

Let $H_G^0(0, T)$ be the collection of processes of the following form: for a given partition $\{t_0, \ldots, t_N\} = \pi_T$ of $[0, T]$, $N \ge 1$,

$\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega)\,1_{]t_j, t_{j+1}]}(t),$

where $\xi_i \in Lip(\Omega_{t_i})$, $i = 0, 1, 2, \ldots, N - 1$. For every $\eta \in H_G^0(0, T)$, let $\|\eta\|_{H_G^p} = \{\hat{E}[(\int_0^T |\eta_s|^2\,\mathrm{d}s)^{p/2}]\}^{1/p}$ and $\|\eta\|_{M_G^p} = \{\hat{E}(\int_0^T |\eta_s|^p\,\mathrm{d}s)\}^{1/p}$, and denote by $H_G^p(0, T)$, $M_G^p(0, T)$ the completions of $H_G^0(0, T)$ under the norms $\|\cdot\|_{H_G^p}$, $\|\cdot\|_{M_G^p}$ respectively.

Definition 2.10. For every $\eta \in H_G^0(0, T)$ with the form

$\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega)\,1_{]t_j, t_{j+1}]}(t),$

we define

$I(\eta) = \int_0^T \eta(s)\,\mathrm{d}B_s := \sum_{j=0}^{N-1} \xi_j (B_{t_{j+1}} - B_{t_j}).$

By the B–D–G inequality (see Proposition 4.3 in [10] for this inequality under G-expectation), the mapping $I: H_G^0(0, T) \to L_G^p(\Omega_T)$ is continuous under $\|\cdot\|_{H_G^p}$ and thus can be continuously extended to $H_G^p(0, T)$.
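As a simple illustration of this continuity (a single-step integrand with $p = 2$): for $\eta_t = \xi\,1_{]u, v]}(t)$ with $\xi \in Lip(\Omega_u)$, Definition 2.10 gives $\int_0^T \eta_s\,\mathrm{d}B_s = \xi(B_v - B_u)$, and

$\hat{E}\Bigl[\Bigl(\int_0^T \eta_s\,\mathrm{d}B_s\Bigr)^2\Bigr] = \hat{E}\bigl[\xi^2\,\hat{E}_u\bigl((B_v - B_u)^2\bigr)\bigr] = \bar{\sigma}^2(v - u)\,\hat{E}[\xi^2] = \bar{\sigma}^2\,\|\eta\|_{H_G^2}^2,$

with $\bar{\sigma}^2 = \hat{E}(B_1^2)$, so on such integrands $I$ is bounded from $(H_G^0(0, T), \|\cdot\|_{H_G^2})$ to $L_G^2(\Omega_T)$; the B–D–G inequality extends this kind of estimate to general simple integrands and general $p$.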

Definition 2.11. (i) A process $\{M_t\}$ with values in $L_G^1(\Omega_T)$ is called a G-martingale if $\hat{E}_s(M_t) = M_s$ for any $s \le t$. If $\{M_t\}$ and $\{-M_t\}$ are both G-martingales, we call $\{M_t\}$ a symmetric G-martingale.

(ii) A random variable $\xi \in L_G^1(\Omega_T)$ is called symmetric if $\hat{E}(\xi) + \hat{E}(-\xi) = 0$.

A G-martingale $\{M_t\}$ is symmetric if and only if $M_T$ is symmetric.


Theorem 2.12 ([1,2]). There exists a tight subset $\mathcal{P} \subset M_1(\Omega_T)$ such that

$\hat{E}(\xi) = \max_{P \in \mathcal{P}} E_P(\xi)$ for all $\xi \in \mathcal{H}_T^0$.

$\mathcal{P}$ is called a set that represents $\hat{E}$.

Remark 2.13. (i) Let $(\Omega^0, \mathcal{F}^0, P^0)$ be a probability space and $\{W_t\}$ be a $d$-dimensional Brownian motion under $P^0$. Let $F^0 = \{\mathcal{F}_t^0\}$ be the augmented filtration generated by $W$. [1] proved that

$\mathcal{P}_M := \Bigl\{P_h \Bigm| P_h = P^0 \circ X^{-1},\ X_t = \int_0^t h_s\,\mathrm{d}W_s,\ h \in L^2_{F^0}\bigl([0, T]; \Gamma^{1/2}\bigr)\Bigr\}$

is a set that represents $\hat{E}$, where $\Gamma^{1/2} := \{\gamma^{1/2} \mid \gamma \in \Gamma\}$ and $\Gamma$ is the set in the representation of $G(\cdot)$ in formula (2.1).

(ii) For the 1-dimensional case, i.e., $\Omega_T = C_0([0, T]; R)$,

$L^2_{F^0} := L^2_{F^0}\bigl([0, T]; \Gamma^{1/2}\bigr) = \bigl\{h \bigm| h \text{ is adapted w.r.t. } F^0 \text{ and } \underline{\sigma} \le h_s \le \bar{\sigma}\bigr\},$

where $\bar{\sigma}^2 = \hat{E}(B_1^2)$ and $\underline{\sigma}^2 = -\hat{E}(-B_1^2)$. In this case,

$G(a) = \tfrac{1}{2}\hat{E}\bigl(aB_1^2\bigr) = \tfrac{1}{2}\bigl(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-\bigr)$ for $a \in R$.

(iii) Set $c(A) = \sup_{P \in \mathcal{P}_M} P(A)$ for $A \in \mathcal{B}(\Omega_T)$. We say $A \in \mathcal{B}(\Omega_T)$ is a polar set if $c(A) = 0$. If an event happens except on a polar set, we say the event happens q.s. (quasi-surely).
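To illustrate the representation in the 1-dimensional case, consider $\xi = B_1^2$: for $h \in L^2_{F^0}$, Itô's isometry gives

$E_{P_h}\bigl(B_1^2\bigr) = E_{P^0}\Bigl[\Bigl(\int_0^1 h_s\,\mathrm{d}W_s\Bigr)^2\Bigr] = E_{P^0}\Bigl[\int_0^1 h_s^2\,\mathrm{d}s\Bigr] \in [\underline{\sigma}^2, \bar{\sigma}^2],$

with the two bounds attained by the constant choices $h \equiv \underline{\sigma}$ and $h \equiv \bar{\sigma}$. Maximizing and minimizing over $\mathcal{P}_M$ thus recovers $\hat{E}(B_1^2) = \bar{\sigma}^2$ and $-\hat{E}(-B_1^2) = \underline{\sigma}^2$, consistent with (ii) above.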

3. Characterization of processes with stationary and independent increments

In what follows, we only consider the G-expectation space $(\Omega_T, L_G^1(\Omega_T), \hat{E})$ with $\Omega_T = C_0([0, T]; R)$ and $\bar{\sigma}^2 = \hat{E}(B_1^2) > -\hat{E}(-B_1^2) = \underline{\sigma}^2 > 0$.

Lemma 3.1. For $\zeta \in M_G^1(0, T)$ and $\varepsilon > 0$, let

$\zeta_t^\varepsilon = \frac{1}{\varepsilon}\int_{(t-\varepsilon)^+}^{t} \zeta_s\,\mathrm{d}s$

and

$\zeta_t^{\varepsilon,0} = \sum_{k=1}^{k_\varepsilon - 1} \frac{1}{\varepsilon}\int_{(k-1)\varepsilon}^{k\varepsilon} \zeta_s\,\mathrm{d}s\ 1_{]k\varepsilon, (k+1)\varepsilon]}(t),$

where $t \in [0, T]$ and $k_\varepsilon\varepsilon \le T < (k_\varepsilon + 1)\varepsilon$. Then, as $\varepsilon \to 0$,

$\|\zeta^\varepsilon - \zeta\|_{M_G^1(0,T)} \to 0$ and $\|\zeta^{\varepsilon,0} - \zeta\|_{M_G^1(0,T)} \to 0.$

Proof. The proofs of the two cases are similar. Here we only prove the second case. Our proof starts with the observation that for any $\zeta, \zeta' \in M_G^1(0, T)$,

$\|\zeta^{\varepsilon,0} - \zeta'^{\varepsilon,0}\|_{M_G^1(0,T)} \le \|\zeta - \zeta'\|_{M_G^1(0,T)}.$  (3.1)

By the definition of the space $M_G^1(0, T)$, we know that for every $\zeta \in M_G^1(0, T)$ there exists a sequence of processes $\{\zeta^n\}$ with

$\zeta_t^n = \sum_{k=0}^{m_n - 1} \xi_{t_k^n}^n 1_{]t_k^n, t_{k+1}^n]}(t)$

and $\xi_{t_k^n}^n \in Lip(\Omega_{t_k^n})$ such that

$\|\zeta - \zeta^n\|_{M_G^1(0,T)} \to 0$ as $n \to \infty$.  (3.2)

It is easily seen that for every $n$,

$\|\zeta^{n;\varepsilon,0} - \zeta^n\|_{M_G^1(0,T)} \to 0$ as $\varepsilon \to 0$.  (3.3)

Thus we get

$\|\zeta^{\varepsilon,0} - \zeta\|_{M_G^1(0,T)} \le \|\zeta^{\varepsilon,0} - \zeta^{n;\varepsilon,0}\|_{M_G^1(0,T)} + \|\zeta^n - \zeta^{n;\varepsilon,0}\|_{M_G^1(0,T)} + \|\zeta^n - \zeta\|_{M_G^1(0,T)} \le 2\|\zeta^n - \zeta\|_{M_G^1(0,T)} + \|\zeta^n - \zeta^{n;\varepsilon,0}\|_{M_G^1(0,T)}.$

The second inequality follows from (3.1). Combining (3.2) and (3.3), first letting $\varepsilon \to 0$ and then letting $n \to \infty$, we have

$\|\zeta^{\varepsilon,0} - \zeta\|_{M_G^1(0,T)} \to 0$ as $\varepsilon \to 0$.

Theorem 3.2. Let $A_t = \int_0^t h_s\,\mathrm{d}s$ with $h \in M_G^1(0, T)$ be a process with stationary and independent increments (w.r.t. the filtration). Then we have $h \equiv c$ for some constant $c$.

Proof. Let $\bar{c} := \hat{E}(A_T)/T \ge -\hat{E}(-A_T)/T =: \underline{c}$. For $n \in N$, set $\varepsilon = T/(2n)$ and define $h^{T/(2n),0}$ as in Lemma 3.1. Then we have

$\|h - h^{T/(2n),0}\|_{M_G^1(0,T)} = \hat{E}\Bigl[\sum_{k=0}^{2n-1}\int_{kT/(2n)}^{(k+1)T/(2n)}\bigl|h_s - h_s^{T/(2n),0}\bigr|\,\mathrm{d}s\Bigr] \ge \hat{E}\Bigl[\sum_{k=1}^{n-1}\int_{2kT/(2n)}^{(2k+1)T/(2n)}\bigl|h_s - h_s^{T/(2n),0}\bigr|\,\mathrm{d}s\Bigr]$

$\ge \hat{E}\Bigl[\sum_{k=1}^{n-1}\Bigl(\int_{2kT/(2n)}^{(2k+1)T/(2n)} h_s\,\mathrm{d}s - \int_{(2k-1)T/(2n)}^{2kT/(2n)} h_s\,\mathrm{d}s\Bigr)\Bigr] = \hat{E}\Bigl[\sum_{k=1}^{n-1}\bigl((A_{(2k+1)T/2n} - A_{2kT/2n}) - (A_{2kT/2n} - A_{(2k-1)T/2n})\bigr)\Bigr].$

Consequently, from the condition of independence of the increments and their stationarity, we have

$\|h - h^{T/(2n),0}\|_{M_G^1(0,T)} \ge \sum_{k=1}^{n-1}\hat{E}\bigl[(A_{(2k+1)T/2n} - A_{2kT/2n}) - (A_{2kT/2n} - A_{(2k-1)T/2n})\bigr] = \sum_{k=1}^{n-1}(\bar{c} - \underline{c})T/(2n) = (\bar{c} - \underline{c})(n-1)T/(2n).$

So by Lemma 3.1, letting $n \to \infty$, we have $\bar{c} = \underline{c} =: c$. Furthermore, we note that $M_t := A_t - ct$ is a G-martingale. In fact, for $t > s$, we see

$\hat{E}_s(M_t) = \hat{E}_s(M_t - M_s) + M_s = \hat{E}(M_t - M_s) + M_s = M_s.$

The second equality is due to the independence of the increments of $M$ w.r.t. the filtration. So $\{M_t\}$ is a symmetric G-martingale with finite variation, from which we conclude that $M_t \equiv 0$, hence that $A_t = ct$.

Corollary 3.3. Assume $\bar{\sigma} > \underline{\sigma} > 0$. Then we have that $\{\mathrm{d}\langle B\rangle_s/\mathrm{d}s\} \notin M_G^1(0, T)$.

Proof. The proof is straightforward from Theorem 3.2.

Corollary 3.4. There is no symmetric G-martingale $\{M_t\}$ which is a standard Brownian motion under G-expectation (i.e. $\langle M\rangle_t = t$).

Proof. Let $\{M_t\}$ be a symmetric G-martingale. If $\{M_t\}$ is also a standard Brownian motion, by Theorem 4.8 in [10] or Corollary 5.2 in [11], there exists $\{h_s\} \in M_G^2(0, T)$ such that

$M_t = \int_0^t h_s\,\mathrm{d}B_s$ and $\int_0^t h_s^2\,\mathrm{d}\langle B\rangle_s = t.$

Thus we have $\mathrm{d}\langle B\rangle_s/\mathrm{d}s = h_s^{-2} \in M_G^1(0, T)$, which contradicts the conclusion of Corollary 3.3.

Proposition 3.5. Let $A_t = \int_0^t h_s\,\mathrm{d}s$ with $h \in M_G^1(0, T)$ be a process with independent increments. Then $A_t$ is symmetric for every $t \in [0, T]$.

Proof. By arguments similar to those in the proof of Theorem 3.2, we have

$\|h - h^{T/(2n),0}\|_{M_G^1(0,T)} \ge \hat{E}\Bigl[\sum_{k=0}^{n-1}\bigl((A_{(2k+1)T/2n} - A_{2kT/2n}) - (A_{2kT/2n} - A_{(2k-1)^+T/2n})\bigr)\Bigr] = \sum_{k=0}^{n-1}\Bigl(\hat{E}(A_{(2k+1)T/2n} - A_{2kT/2n}) + \hat{E}\bigl(-(A_{2kT/2n} - A_{(2k-1)^+T/2n})\bigr)\Bigr).$

The right side of the first inequality is only the sum of the odd terms. Summing up the even terms only, we have

$\|h - h^{T/(2n),0}\|_{M_G^1(0,T)} \ge \sum_{k=0}^{n-1}\Bigl(\hat{E}(A_{(2k+2)T/2n} - A_{(2k+1)T/2n}) + \hat{E}\bigl(-(A_{(2k+1)T/2n} - A_{2kT/2n})\bigr)\Bigr).$

Combining the above inequalities, we have

$2\|h - h^{T/(2n),0}\|_{M_G^1(0,T)} \ge \sum_{k=0}^{2n-1}\Bigl(\hat{E}(A_{(k+1)T/2n} - A_{kT/2n}) + \hat{E}\bigl(-(A_{(k+1)T/2n} - A_{kT/2n})\bigr)\Bigr) \ge \hat{E}\Bigl[\sum_{k=0}^{2n-1}(A_{(k+1)T/2n} - A_{kT/2n})\Bigr] + \hat{E}\Bigl[\sum_{k=0}^{2n-1}-(A_{(k+1)T/2n} - A_{kT/2n})\Bigr] = \hat{E}(A_T) + \hat{E}(-A_T).$

Thus by Lemma 3.1, letting $n \to \infty$, we have $\hat{E}(A_T) + \hat{E}(-A_T) = 0$, which means that $A_T$ is symmetric.

For $n \in N$, define $\delta_n(s)$ in the following way:

$\delta_n(s) = \sum_{i=0}^{n-1} (-1)^i\,1_{]\frac{iT}{n}, \frac{(i+1)T}{n}]}(s)$ for all $s \in [0, T]$.

In [12] we proved that $\lim_{n\to\infty}\hat{E}\bigl(\int_0^T \delta_n(s)h_s\,\mathrm{d}s\bigr) = 0$ for $h \in M_G^1(0, T)$.

Let $\mathcal{F}_t = \sigma\{B_s \mid s \le t\}$ and $F = \{\mathcal{F}_t\}_{t\in[0,T]}$.

In the following, we shall use some notations introduced in Remark 2.13.

For every $P \in \mathcal{P}_M$ and $t \in [0, T]$, set $\mathcal{A}_{t,P} := \{Q \in \mathcal{P}_M \mid Q|_{\mathcal{F}_t} = P|_{\mathcal{F}_t}\}$. Proposition 3.4 in [9] gave the following result: for $t \in [0, T]$, assume $\xi \in L_G^1(\Omega_T)$ and $\eta \in L_G^1(\Omega_t)$. Then $\eta = \hat{E}_t(\xi)$ if and only if for every $P \in \mathcal{P}_M$,

$\eta = \mathrm{ess\,sup}^P_{Q\in\mathcal{A}_{t,P}}\,E_Q(\xi \mid \mathcal{F}_t), \quad P\text{-a.s.},$

where $\mathrm{ess\,sup}^P$ denotes the essential supremum under $P$.

Theorem 3.6. Let $A_t = \int_0^t h_s\,\mathrm{d}\langle B\rangle_s$ be a process with stationary and independent increments (w.r.t. the filtration) and $h \in M_G^{1,+}(0, T)$. If $A_T \in L_G^\beta(\Omega_T)$ for some $\beta > 1$, we have $A_t = c\langle B\rangle_t$ for some constant $c \ge 0$.

Proof. For readability, we divide the proof into several steps.

Step 1. Set $K_t := \int_0^t h_s\,\mathrm{d}s$. We claim that $K_T$ is symmetric.

Step 1.1. Let $\bar{\mu} = \hat{E}(A_T)/T$ and $\underline{\mu} = -\hat{E}(-A_T)/T$. First, we shall prove that $\bar{\mu}/\bar{\sigma}^2 = \underline{\mu}/\underline{\sigma}^2$. Actually, for any $0 \le s < t \le T$, we have

$\hat{E}_s\Bigl(\int_s^t h_r\,\mathrm{d}r\Bigr) = \hat{E}_s\Bigl(\int_s^t \theta_r^{-1}\,\mathrm{d}A_r\Bigr) \ge \frac{1}{\bar{\sigma}^2}\hat{E}_s\Bigl(\int_s^t \mathrm{d}A_r\Bigr) = \frac{\bar{\mu}}{\bar{\sigma}^2}(t-s)$ q.s.,

where the inequality holds due to $\theta_s := \mathrm{d}\langle B\rangle_s/\mathrm{d}s \le \bar{\sigma}^2$, q.s. Noting that $\underline{\mu}t - A_t$ is nonincreasing by Lemma 4.3 in Section 4, since it is a G-martingale with finite variation, we have, for every $\eta \in L^2_{F^0}$, $P_\eta$-a.s.,

$\hat{E}_s\Bigl(\int_s^t h_r\,\mathrm{d}r\Bigr) = \mathrm{ess\,sup}^{P_\eta}_{Q\in\mathcal{A}_{s,P_\eta}}\,E_Q\Bigl(\int_s^t h_r\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) = \mathrm{ess\,sup}^{P_\eta}_{Q\in\mathcal{A}_{s,P_\eta}}\,E_Q\Bigl(\int_s^t \theta_r^{-1}\,\mathrm{d}A_r \Bigm| \mathcal{F}_s\Bigr) \ge \underline{\mu}\,\mathrm{ess\,sup}^{P_\eta}_{Q\in\mathcal{A}_{s,P_\eta}}\,E_Q\Bigl(\int_s^t \theta_r^{-1}\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) = \frac{\underline{\mu}}{\underline{\sigma}^2}(t-s).$

So $\hat{E}_s(\int_s^t h_r\,\mathrm{d}r) \ge \max\{\bar{\mu}/\bar{\sigma}^2,\ \underline{\mu}/\underline{\sigma}^2\}(t-s) =: \bar{\lambda}(t-s)$, q.s.

On the other hand,

$\hat{E}_s\Bigl(-\int_s^t h_r\,\mathrm{d}r\Bigr) = \hat{E}_s\Bigl(-\int_s^t \theta_r^{-1}\,\mathrm{d}A_r\Bigr) \ge \frac{1}{\underline{\sigma}^2}\hat{E}_s\Bigl(-\int_s^t \mathrm{d}A_r\Bigr) = -\frac{\underline{\mu}}{\underline{\sigma}^2}(t-s)$, q.s.,

and for every $\eta \in L^2_{F^0}$, $P_\eta$-a.s.,

$\hat{E}_s\Bigl(-\int_s^t h_r\,\mathrm{d}r\Bigr) = \mathrm{ess\,sup}^{P_\eta}_{Q\in\mathcal{A}_{s,P_\eta}}\,E_Q\Bigl(-\int_s^t h_r\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) = \mathrm{ess\,sup}^{P_\eta}_{Q\in\mathcal{A}_{s,P_\eta}}\,E_Q\Bigl(-\int_s^t \theta_r^{-1}\,\mathrm{d}A_r \Bigm| \mathcal{F}_s\Bigr) \ge \mathrm{ess\,sup}^{P_\eta}_{Q\in\mathcal{A}_{s,P_\eta}}\,E_Q\Bigl(-\bar{\mu}\int_s^t \theta_r^{-1}\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) = -\frac{\bar{\mu}}{\bar{\sigma}^2}(t-s),$

since $A_t - \bar{\mu}t$ is nonincreasing. So

$\hat{E}_s\Bigl(-\int_s^t h_r\,\mathrm{d}r\Bigr) \ge -\min\{\bar{\mu}/\bar{\sigma}^2,\ \underline{\mu}/\underline{\sigma}^2\}(t-s) =: -\underline{\lambda}(t-s)$, q.s.

Noting that

$\hat{E}\Bigl(\int_0^T \delta_{2n}(s)h_s\,\mathrm{d}s\Bigr) = \hat{E}\Bigl(\int_0^{(2n-1)T/(2n)} \delta_{2n}(s)h_s\,\mathrm{d}s + \hat{E}_{(2n-1)T/(2n)}\Bigl(-\int_{(2n-1)T/(2n)}^T h_s\,\mathrm{d}s\Bigr)\Bigr)$

$\ge (-\underline{\lambda})\frac{T}{2n} + \hat{E}\Bigl(\int_0^{(2n-2)T/(2n)} \delta_{2n}(s)h_s\,\mathrm{d}s + \hat{E}_{(2n-2)T/(2n)}\Bigl(\int_{(2n-2)T/(2n)}^{(2n-1)T/(2n)} h_s\,\mathrm{d}s\Bigr)\Bigr)$

$\ge \frac{\bar{\lambda} - \underline{\lambda}}{2n}T + \hat{E}\Bigl(\int_0^{(2n-2)T/(2n)} \delta_{2n}(s)h_s\,\mathrm{d}s\Bigr),$

we have, iterating this estimate,

$\hat{E}\Bigl(\int_0^T \delta_{2n}(s)h_s\,\mathrm{d}s\Bigr) \ge \frac{\bar{\lambda} - \underline{\lambda}}{2}T.$

So

$0 = \lim_{n\to\infty}\hat{E}\Bigl(\int_0^T \delta_{2n}(s)h_s\,\mathrm{d}s\Bigr) \ge \frac{\bar{\lambda} - \underline{\lambda}}{2}T$

and $\bar{\mu}/\bar{\sigma}^2 = \underline{\mu}/\underline{\sigma}^2 =: \lambda$.

Step 1.2. For every $\eta \in L^2_{F^0}$, $E_{P_\eta}(K_T) = \lambda T$, which implies that $K_T$ is symmetric.

Step 1.2.1. We now introduce some notations: for $0 \le s < t \le T$ and $\eta \in L^2_{F^0}$, set $\bar{\eta} = \bar{\sigma}$, $\underline{\eta} = \underline{\sigma}$, $\hat{\eta} = \bigl(\frac{\bar{\sigma}^2 + \underline{\sigma}^2}{2}\bigr)^{1/2}$ on $]s, t]$ and $\bar{\eta} = \underline{\eta} = \hat{\eta} = \eta$ on $]s, t]^c$. For $n \in N$, set $\eta^n_r = \sum_{i=0}^{n-1}\bigl(\underline{\sigma}\,1_{]t_{2i}, t_{2i+1}]}(r) + \bar{\sigma}\,1_{]t_{2i+1}, t_{2i+2}]}(r)\bigr)$ on $]s, t]$ and $\eta^n = \eta$ on $]s, t]^c$, where $t_j = s + \frac{j}{2n}(t - s)$, $j = 0, \ldots, 2n$.

Step 1.2.2. $E_{P_{\eta^n}}\bigl(\int_s^t (h_r - \lambda)\,\mathrm{d}r \bigm| \mathcal{F}_s\bigr) \to 0$, $P_\eta$-a.s., as $n \to \infty$.

Actually, we have, $P_\eta$-a.s.,

$\bar{\mu}(t-s) = \hat{E}_s\Bigl(\int_s^t h_r\,\mathrm{d}\langle B\rangle_r\Bigr) \ge E_{P_{\bar{\eta}}}\Bigl(\int_s^t h_r\,\mathrm{d}\langle B\rangle_r \Bigm| \mathcal{F}_s\Bigr) = \bar{\sigma}^2\,E_{P_{\bar{\eta}}}\Bigl(\int_s^t h_r\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr).$

So

$E_{P_{\bar{\eta}}}\Bigl(\int_s^t h_r\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) \le \lambda(t-s), \quad P_\eta\text{-a.s.}$  (3.4)

By similar arguments we have that

$E_{P_{\underline{\eta}}}\Bigl(\int_s^t h_r\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) \ge \lambda(t-s), \quad P_\eta\text{-a.s.}$  (3.5)

Let us compute the following conditional expectations:

$E_{P_{\eta^n}}\Bigl(\int_s^t (h_r - \lambda)\delta_{2n}(r)\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) = E_{P_{\eta^n}}\Bigl(\sum_{i=0}^{n-1}\Bigl(E_{P_{\eta^n}}\Bigl(\int_{t_{2i}}^{t_{2i+1}}(h_r - \lambda)\,\mathrm{d}r \Bigm| \mathcal{F}_{t_{2i}}\Bigr) + E_{P_{\eta^n}}\Bigl(\int_{t_{2i+1}}^{t_{2i+2}}(\lambda - h_r)\,\mathrm{d}r \Bigm| \mathcal{F}_{t_{2i+1}}\Bigr)\Bigr) \Bigm| \mathcal{F}_s\Bigr) =: E_{P_{\eta^n}}\Bigl(\sum_{i=0}^{n-1}(A_i + B_i) \Bigm| \mathcal{F}_s\Bigr),$

where $\delta_{2n}(r) = \sum_{i=0}^{n-1}\bigl(1_{]t_{2i}, t_{2i+1}]}(r) - 1_{]t_{2i+1}, t_{2i+2}]}(r)\bigr)$, $t_j = s + \frac{j}{2n}(t-s)$, $j = 0, \ldots, 2n$;

$E_{P_{\eta^n}}\Bigl(\int_s^t (h_r - \lambda)\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) = E_{P_{\eta^n}}\Bigl(\sum_{i=0}^{n-1}(A_i - B_i) \Bigm| \mathcal{F}_s\Bigr).$

By (3.4) and (3.5) (noting that $\eta$ and $s, t$ are all arbitrary), we conclude that $A_i, B_i \ge 0$, $P_{\eta^n}$-a.s. So

$E_{P_{\eta^n}}\Bigl(\int_s^t (h_r - \lambda)\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) \le E_{P_{\eta^n}}\Bigl(\int_s^t (h_r - \lambda)\delta_{2n}(r)\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr), \quad P_\eta\text{-a.s.}$

Noting that

$E_{P_{\eta^n}}\Bigl(\int_s^t (h_r - \lambda)\delta_{2n}(r)\,\mathrm{d}r \Bigm| \mathcal{F}_s\Bigr) \le \hat{E}_s\Bigl(\int_s^t (h_r - \lambda)\delta_{2n}(r)\,\mathrm{d}r\Bigr), \quad P_\eta\text{-a.s.}$
