Upper bound of a volume exponent for directed polymers in a random environment
Olivier Mejane
Laboratoire de statistiques et probabilités, Université Paul Sabatier, 118, route de Narbonne, 31062 Toulouse, France Received 18 March 2003; received in revised form 16 October 2003; accepted 21 October 2003
Available online 13 February 2004
Abstract
We consider the model of directed polymers in a random environment introduced by Petermann: the random walk is $\mathbb{R}^d$-valued and has independent $N(0,I_d)$-increments, and the random medium is a stationary centered Gaussian process $(g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$ with covariance matrix $\operatorname{cov}(g(i,x),g(j,y))=\delta_{ij}\Gamma(x-y)$, where $\Gamma$ is a bounded integrable function on $\mathbb{R}^d$. For this model, we establish an upper bound on the volume exponent in all dimensions $d$.
2004 Elsevier SAS. All rights reserved.
Résumé
We consider the model of directed polymers in a random environment introduced by Petermann: the underlying random walk takes values in $\mathbb{R}^d$, its increments are independent $N(0,I_d)$-distributed variables, and the random medium is a stationary centered Gaussian process $(g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$ with covariance matrix $\operatorname{cov}(g(i,x),g(j,y))=\delta_{ij}\Gamma(x-y)$, where $\Gamma$ is a bounded integrable function on $\mathbb{R}^d$. For this model, we establish an upper bound on the volume exponent, in every dimension $d$.
2004 Elsevier SAS. All rights reserved.
MSC: 60K37; 60G42; 82D30
Keywords: Directed polymers in random environment; Gaussian environment
1. Introduction
The model of directed polymers in a random environment was introduced by Imbrie and Spencer [7]. We focus here on a particular model studied by Petermann [8] in his thesis: let $(S_n)_{n\ge 0}$ be a random walk in $\mathbb{R}^d$ starting from the origin, with independent $N(0,I_d)$-increments, defined on a probability space $(\Omega,\mathcal{F},P)$, and let $g=(g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$ be a stationary centered Gaussian process with covariance matrix
$$\operatorname{cov}\bigl(g(i,x),g(j,y)\bigr)=\delta_{ij}\,\Gamma(x-y),$$
E-mail address: olivier.mejane@lsp.ups-tlse.fr (O. Mejane).
doi:10.1016/j.anihpb.2003.10.007
where $\Gamma$ is a bounded integrable function on $\mathbb{R}^d$. We suppose that this random medium $g$ is defined on a probability space $(\Omega^g,\mathcal{G},P)$, where $(\mathcal{G}_n)_{n\ge 0}$ is the natural filtration:
$$\mathcal{G}_n=\sigma\bigl(g(k,x),\ 1\le k\le n,\ x\in\mathbb{R}^d\bigr)\quad\text{for } n\ge 1$$
($\mathcal{G}_0$ being the trivial $\sigma$-algebra). Expectations with respect to the law of the walk and with respect to the law of the environment are both denoted by $E$; the context will make clear which is meant. We define the Gibbs measure $\langle\cdot\rangle^{(n)}$ by
$$\langle f\rangle^{(n)}=\frac{1}{Z_n}\,E\Bigl[f(S_1,\dots,S_n)\,e^{\beta\sum_{k=1}^n g(k,S_k)}\Bigr]$$
for any bounded function $f$ on $(\mathbb{R}^d)^n$, where $\beta>0$ is a fixed parameter and $Z_n$ is the partition function:
$$Z_n=E\bigl[e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr].$$
Following Piza [9] we define the volume exponent
$$\xi=\inf\Bigl\{\alpha>0:\ \bigl\langle\mathbf{1}_{\{\max_{k\le n}|S_k|\le n^\alpha\}}\bigr\rangle^{(n)}\xrightarrow[n\to\infty]{}1\ \text{in }P\text{-probability}\Bigr\}.$$
Here and in the sequel, $|x|=\max_{1\le i\le d}|x_i|$ for any $x=(x_1,\dots,x_d)\in\mathbb{R}^d$. Petermann obtained a result of superdiffusivity in dimension one, in the particular case where $\Gamma(x)=\frac{1}{2\lambda}e^{-\lambda|x|}$ for some $\lambda>0$: he proved that $\xi\ge 3/5$ for all $\beta>0$ (for another result of superdiffusivity, see [6]).
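As an illustration of the model (not part of the paper), the Gibbs measure can be approximated by Monte Carlo in dimension $d=1$: sample walks with $N(0,1)$-increments, sample the environment at the visited sites (its finite-dimensional laws are determined by $\Gamma$), and reweight. The covariance $\Gamma(x)=e^{-x^2}$, the seed, and all parameters below are arbitrary choices for the sketch.

```python
import numpy as np

# Illustrative sketch, not from the paper: Monte Carlo approximation of
# the Gibbs measure <.>^(n) in dimension d = 1, with the (arbitrary)
# bounded integrable covariance Gamma(x) = exp(-x^2).
rng = np.random.default_rng(0)

def gamma(x):
    """Covariance function Gamma (bounded, integrable)."""
    return np.exp(-x ** 2)

n, n_paths, beta = 10, 200, 0.5

# n_paths independent walks with N(0,1) increments, length n
S = np.cumsum(rng.standard_normal((n_paths, n)), axis=1)

# Sample the environment at the visited points: for each time k, the
# vector (g(k, S_k^(m)))_m is centered Gaussian with covariance
# Gamma(S_k^(l) - S_k^(m)); distinct times are independent (delta_ij).
H = np.zeros(n_paths)  # H[m] = sum_{k=1}^n g(k, S_k^(m))
for k in range(n):
    x = S[:, k]
    C = gamma(x[:, None] - x[None, :]) + 1e-9 * np.eye(n_paths)
    H += np.linalg.cholesky(C) @ rng.standard_normal(n_paths)

# Gibbs weights of the sampled paths under <.>^(n)
w = np.exp(beta * H)
w /= w.sum()
```

The weights `w` define the (approximate) Gibbs measure on the sampled paths; quantities such as $\langle\mathbf{1}_{|S_n|\ge n^\alpha}\rangle^{(n)}$ are then weighted averages.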
Our main result gives, on the contrary, an upper bound for the volume exponent, in all dimensions:
$$\forall d\ge 1,\ \forall\beta>0,\qquad \xi\le\tfrac{3}{4}. \tag{1}$$
This paper is organized as follows:
• In Section 2, we first extend exponential inequalities concerning independent Gaussian variables, proved by Carmona and Hu [1], to the case of a stationary Gaussian process. Then, following Comets, Shiga and Yoshida [2], we combine these inequalities with martingale methods and obtain a concentration inequality.
• In Section 3, we obtain an upper bound for $\xi$ when we consider only the position of the walk $S$ at time $n$, and not the maximal one before $n$. In fact we prove a stronger result, namely a large deviation principle for $(\langle\mathbf{1}_{S_n/n^\alpha\in\,\cdot}\rangle^{(n)},\ n\ge 1)$ when $\alpha>3/4$. This result and its proof are an adaptation of the work of Comets and Yoshida on a continuous model of directed polymers [3].
• In Section 4, we establish (1).
• Appendix A is devoted to the proof of Lemma 2.4, used in Section 2, which gives a large deviation estimate for a sum of martingale-differences. It is a slight extension of a result of Lesigne and Volný [5, Theorem 3.2].
2. Preliminary: a concentration inequality

2.1. Exponential inequalities
Lemma 2.1. Let $(g(x),\ x\in\mathbb{R}^d)$ be a family of centered Gaussian random variables with common variance $\sigma^2>0$. Fix $q,\beta>0$, $(x_1,\dots,x_n)\in(\mathbb{R}^d)^n$ and $(\lambda_1,\dots,\lambda_n)\in\mathbb{R}^n$. Then for any probability measure $\mu$ on $\mathbb{R}^d$:
$$e^{-\frac{\beta^2\sigma^2}{2}q}\;\le\;E\left[\frac{e^{\beta\sum_{i=1}^n\lambda_i g(x_i)}}{\bigl(\int_{\mathbb{R}^d}e^{\beta g(x)}\,\mu(dx)\bigr)^q}\right]\;\le\;e^{\frac{\beta^2\sigma^2}{2}\bigl(q+\sum_{i=1}^n|\lambda_i|\bigr)^2}.$$
The proof is identical to the one given by Carmona and Hu in a discrete framework ($\mu$ a sum of Dirac masses), and is therefore omitted.
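As a numerical sanity check (an illustration, not in the paper), one can take $g(x_1),\dots,g(x_m)$ i.i.d. $N(0,\sigma^2)$ and $\mu$ the uniform measure on $\{x_1,\dots,x_m\}$, and compare a Monte Carlo estimate of the middle expectation with the two deterministic bounds; the parameter values below are arbitrary.

```python
import numpy as np

# Illustrative check of the two-sided bound of Lemma 2.1 with i.i.d.
# N(0, sigma^2) variables and mu uniform on the same m points.
rng = np.random.default_rng(1)
beta, sigma, q = 0.3, 1.0, 1.0
lam = np.array([0.5, 0.5])      # lambda_1, lambda_2 (other lambda_i = 0)
m, n_mc = 5, 200_000

g = sigma * rng.standard_normal((n_mc, m))          # environment draws
num = np.exp(beta * g[:, :2] @ lam)                 # e^{beta sum lambda_i g(x_i)}
den = np.mean(np.exp(beta * g), axis=1) ** q        # (int e^{beta g} dmu)^q
est = np.mean(num / den)

lower = np.exp(-0.5 * beta ** 2 * sigma ** 2 * q)
upper = np.exp(0.5 * beta ** 2 * sigma ** 2 * (q + np.abs(lam).sum()) ** 2)
```

With these parameters the bounds are roughly $0.956$ and $1.197$, and the estimate falls comfortably between them.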
Lemma 2.2. Let $(g(x),\ x\in\mathbb{R}^d)$ be a centered Gaussian process with covariance matrix $\operatorname{cov}(g(x),g(y))=\Gamma(x-y)$. Let $\sigma^2=\Gamma(0)$, and let $\mu$ be a probability measure on $\mathbb{R}^d$. Then for all $\beta>0$, there are constants $c_1=c_1(\beta,\sigma^2)>0$ and $c_2=c_2(\beta,\sigma^2)>0$ such that:
$$-c_1\iint_{\mathbb{R}^d\times\mathbb{R}^d}\Gamma(x-y)\,\mu(dx)\,\mu(dy)\;\le\;E\left[\log\int_{\mathbb{R}^d}e^{\beta g(x)-\frac{\beta^2\sigma^2}{2}}\,\mu(dx)\right]\;\le\;-c_2\iint_{\mathbb{R}^d\times\mathbb{R}^d}\Gamma(x-y)\,\mu(dx)\,\mu(dy).$$
In particular,
$$-c_1\sigma^2\;\le\;E\left[\log\int_{\mathbb{R}^d}e^{\beta g(x)-\frac{\beta^2\sigma^2}{2}}\,\mu(dx)\right]\;\le\;0.$$
Proof. Let $\{B_x(t),\ t\ge 0\}_{x\in\mathbb{R}^d}$ be a family of centered Gaussian processes such that
$$E\bigl[B_x(t)B_y(s)\bigr]=\inf(s,t)\,\Gamma(x-y),$$
with $B_x(0)=0$ for all $x\in\mathbb{R}^d$. Define
$$X(t)=\int_{\mathbb{R}^d}M_x(t)\,\mu(dx),\qquad t\ge 0,$$
where $M_x(t)=e^{\beta B_x(t)-\beta^2\sigma^2 t/2}$. Since $dM_x(t)=\beta M_x(t)\,dB_x(t)$, one has
$$d\langle M_x,M_y\rangle_t=\beta^2 M_x(t)M_y(t)\,d\langle B_x,B_y\rangle_t=\beta^2 e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}\,\Gamma(x-y)\,dt,$$
and
$$d\langle X,X\rangle_t=\iint_{\mathbb{R}^d\times\mathbb{R}^d}\beta^2 e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}\,\Gamma(x-y)\,\mu(dx)\,\mu(dy)\,dt.$$
Thus, by Itô's formula,
$$E(\log X_1)=-\frac{\beta^2}{2}\iint_{\mathbb{R}^d\times\mathbb{R}^d}\mu(dx)\,\mu(dy)\,\Gamma(x-y)\int_0^1 E\left[\frac{e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}}{X_t^2}\right]dt.$$
By Lemma 2.1 (applied with $q=2$, $\lambda_1=\lambda_2=1$ and variance $\sigma^2 t$), we have for all $t$:
$$e^{-\beta^2\sigma^2 t}\;\le\;E\left[\frac{e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}}{X_t^2}\right]=E\left[\frac{e^{\beta(B_x(t)+B_y(t))}}{\bigl(\int_{\mathbb{R}^d}e^{\beta B_x(t)}\,\mu(dx)\bigr)^2}\right]\;\le\;e^{8\beta^2\sigma^2 t}.$$
Hence:
$$-\frac{e^{8\beta^2\sigma^2}-1}{16\sigma^2}\iint\Gamma(x-y)\,\mu(dx)\,\mu(dy)\;\le\;E(\log X_1)\;\le\;-\frac{1-e^{-\beta^2\sigma^2}}{2\sigma^2}\iint\Gamma(x-y)\,\mu(dx)\,\mu(dy),$$
which concludes the proof since $X_1\overset{d}{=}\int_{\mathbb{R}^d}e^{\beta g(x)-\frac{\beta^2\sigma^2}{2}}\,\mu(dx)$. □

2.2. A concentration result
Proposition 2.3. Let $\nu>1/2$. For $n\in\mathbb{N}$, $j\le n$ and $f_n$ a nonnegative bounded function such that $E(f_n(S_j))>0$, write $W_{n,j}=E\bigl(f_n(S_j)\,e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)$. Then for $n\ge n_0(\beta,\nu)$,
$$P\Bigl(\bigl|\log W_{n,j}-E(\log W_{n,j})\bigr|\ge n^\nu\Bigr)\;\le\;\exp\Bigl(-\tfrac{1}{4}n^{(2\nu-1)/3}\Bigr).$$
Proof. We use the following lemma, whose proof is postponed to Appendix A.
Lemma 2.4. Let $(X_i^n,\ 1\le i\le n)$ be a martingale difference sequence and let $M_n=\sum_{i=1}^n X_i^n$. Suppose that there exists $K>0$ such that $E(e^{|X_i^n|})\le K$ for all $i$ and $n$. Then for any $\nu>1/2$, and for $n\ge n_0(K,\nu)$,
$$P\bigl(|M_n|>n^\nu\bigr)\;\le\;\exp\Bigl(-\tfrac{1}{4}n^{(2\nu-1)/3}\Bigr).$$
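The bound of Lemma 2.4 can be illustrated numerically (this is not in the paper) on the simplest martingale difference sequence, i.i.d. Rademacher signs, for which $E\,e^{|X_i^n|}=e$, so $K=e$ works; the bound is far from sharp in this case, and the point is only that it holds.

```python
import numpy as np

# Illustration of Lemma 2.4 for i.i.d. Rademacher martingale differences.
rng = np.random.default_rng(2)
n, nu, trials = 400, 0.8, 5000

X = rng.choice([-1.0, 1.0], size=(trials, n))   # martingale differences
M = X.sum(axis=1)                               # M_n

emp = np.mean(np.abs(M) > n ** nu)              # empirical P(|M_n| > n^nu)
bound = np.exp(-0.25 * n ** ((2 * nu - 1) / 3))
```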
• We first assume that $f_n>0$.
To apply Lemma 2.4, we define $X_i^{n,j}=E(\log W_{n,j}\mid\mathcal{G}_i)-E(\log W_{n,j}\mid\mathcal{G}_{i-1})$, so that
$$\log W_{n,j}-E(\log W_{n,j})=\sum_{i=1}^n X_i^{n,j}.$$
It is sufficient to prove that there exists $K>0$ such that $E(e^{|X_i^{n,j}|})\le K$ for all $i$ and $(n,j)$. For this, we introduce:
$$e_i^{n,j}=f_n(S_j)\exp\Bigl(\beta\sum_{1\le k\le n,\ k\ne i}g(k,S_k)\Bigr),\qquad W_i^{n,j}=E\bigl(e_i^{n,j}\bigr).$$
$W_i^{n,j}>0$ since we assumed that $f_n>0$. If $E_i$ is the conditional expectation with respect to $\mathcal{G}_i$, then $E_i(\log W_i^{n,j})=E_{i-1}(\log W_i^{n,j})$, so that:
$$X_i^{n,j}=E_i\bigl(\log Y_i^{n,j}\bigr)-E_{i-1}\bigl(\log Y_i^{n,j}\bigr), \tag{2}$$
with
$$Y_i^{n,j}=e^{-\beta^2/2}\,\frac{W_{n,j}}{W_i^{n,j}}=\int_{\mathbb{R}^d}e^{\beta g(i,x)-\beta^2/2}\,\mu_i^{n,j}(dx), \tag{3}$$
$\mu_i^{n,j}$ being the random probability measure:
$$\mu_i^{n,j}(dx)=\frac{1}{W_i^{n,j}}\,E\bigl(e_i^{n,j}\mid S_i=x\bigr)\,P(S_i\in dx).$$
Since $\mu_i^{n,j}$ is measurable with respect to $\mathcal{G}^{n,i}=\sigma(g(k,x),\ 1\le k\le n,\ k\ne i,\ x\in\mathbb{R}^d)$, we deduce from Lemma 2.2 that there exists a constant $c=c(\beta)>0$, which does not depend on $(n,j,i)$, such that:
$$-c\;\le\;E\Bigl(\log\int_{\mathbb{R}^d}e^{\beta g(i,x)-\beta^2/2}\,\mu_i^{n,j}(dx)\,\Bigm|\,\mathcal{G}^{n,i}\Bigr)\;\le\;0,$$
and since $\mathcal{G}_{i-1}\subset\mathcal{G}^{n,i}$, we obtain:
$$0\;\le\;-E_{i-1}\bigl(\log Y_i^{n,j}\bigr)\;\le\;c. \tag{4}$$
Thus we deduce from (2) and (4) that for all $\theta\in\mathbb{R}$,
$$E\bigl(e^{\theta X_i^{n,j}}\bigr)\;\le\;e^{c\theta_+}\,E\bigl(e^{\theta E_i(\log Y_i^{n,j})}\bigr),\qquad\theta_+:=\max(\theta,0).$$
By Jensen's inequality, $e^{\theta E_i(\log Y_i^{n,j})}\le E_i\bigl((Y_i^{n,j})^\theta\bigr)$, so that
$$E\bigl(e^{\theta X_i^{n,j}}\bigr)\;\le\;e^{c\theta_+}\,E\bigl((Y_i^{n,j})^\theta\bigr)=e^{c\theta_+}\,E\Bigl(E\bigl((Y_i^{n,j})^\theta\mid\mathcal{G}^{n,i}\bigr)\Bigr).$$
Assume now that $\theta\in\{-1,1\}$; in both cases the function $x\mapsto x^\theta$ is convex, so using (3) we obtain
$$(Y_i^{n,j})^\theta\;\le\;\int_{\mathbb{R}^d}e^{\theta(\beta g(i,x)-\beta^2/2)}\,\mu_i^{n,j}(dx),$$
so that:
$$E\bigl((Y_i^{n,j})^\theta\mid\mathcal{G}^{n,i}\bigr)\;\le\;\int_{\mathbb{R}^d}E\bigl(e^{\theta(\beta g(i,x)-\beta^2/2)}\mid\mathcal{G}^{n,i}\bigr)\,\mu_i^{n,j}(dx)=\int_{\mathbb{R}^d}E\bigl(e^{\theta(\beta g(i,x)-\beta^2/2)}\bigr)\,\mu_i^{n,j}(dx)=E\bigl(e^{\theta(\beta g(1,0)-\beta^2/2)}\bigr),$$
using that $g(i,x)$ is independent of $\mathcal{G}^{n,i}$, and is distributed as $g(1,0)$ for all $i$ and $x$. We conclude that for all $n$ and $1\le i,j\le n$,
$$E\bigl(e^{|X_i^{n,j}|}\bigr)\;\le\;E\bigl(e^{X_i^{n,j}}\bigr)+E\bigl(e^{-X_i^{n,j}}\bigr)\;\le\;K:=e^c+e^{\beta^2}.$$
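The two Gaussian moments behind $K=e^c+e^{\beta^2}$ can be checked directly (an illustration, not in the paper; it assumes, as in the exponent $\beta^2/2$ above, that $g(1,0)$ has unit variance): for $g\sim N(0,1)$ and $\theta\in\{-1,1\}$, $E\,e^{\theta(\beta g-\beta^2/2)}=e^{(\theta^2-\theta)\beta^2/2}$, i.e. $1$ for $\theta=+1$ and $e^{\beta^2}$ for $\theta=-1$.

```python
import numpy as np

# Monte Carlo check of E e^{theta(beta g - beta^2/2)} for g ~ N(0,1):
# theta = +1 gives 1, theta = -1 gives e^{beta^2}.
rng = np.random.default_rng(3)
beta = 0.7
g = rng.standard_normal(2_000_000)

m_plus = np.mean(np.exp(beta * g - beta ** 2 / 2))      # theta = +1
m_minus = np.mean(np.exp(-(beta * g - beta ** 2 / 2)))  # theta = -1
```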
• In the general case where $f_n\ge 0$, we introduce $h_n=f_n+\delta$ for some $0<\delta<1$. The first part of the proof applies to $h_n$: writing $W_{n,j}^\delta=E\bigl(h_n(S_j)\,e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)$, it remains to show that, $P$-a.s.,
$$\log W_{n,j}^\delta-E(\log W_{n,j}^\delta)\;\xrightarrow[\delta\to 0]{}\;\log W_{n,j}-E(\log W_{n,j}).$$
Since $f_n$ is bounded by some constant $C_n>0$, the following inequality holds for all $0<\delta<1$: $\log W_{n,j}\le\log W_{n,j}^\delta\le\log\bigl((C_n+1)Z_n\bigr)$. Since $0\le E\log Z_n\le\log E Z_n=n\beta^2\Gamma(0)/2<\infty$, the conclusion follows from dominated convergence. □
Corollary 2.5. Let $\nu>1/2$, and fix a sequence of Borel sets $(B(j,n),\ n\ge 1,\ j\le n)$. Then $P$-almost surely, there exists $n_0$ such that for every $n\ge n_0$ and every $j\le n$,
$$\bigl|\log\langle\mathbf{1}_{S_j\in B(j,n)}\rangle^{(n)}-E\bigl[\log\langle\mathbf{1}_{S_j\in B(j,n)}\rangle^{(n)}\bigr]\bigr|\;\le\;2n^\nu.$$
Proof. Write $A_{n,j}=\bigl\{\bigl|\log E(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)})-E\bigl[\log E(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)})\bigr]\bigr|\ge n^\nu\bigr\}$. Proposition 2.3 implies that
$$P\Bigl(\bigcup_{j\le n}A_{n,j}\Bigr)\;\le\;n\exp\Bigl(-\tfrac{1}{4}n^{(2\nu-1)/3}\Bigr).$$
Hence, by Borel–Cantelli, $P$-almost surely there exists $n_0$ such that for every $n\ge n_0$ and every $j\le n$:
$$\bigl|\log E\bigl(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)-E\bigl[\log E\bigl(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)\bigr]\bigr|\;\le\;n^\nu.$$
One then applies this result to $f_n(x)=\mathbf{1}_{x\in B(j,n)}$ and to $f_n(x)=1$. □
3. A first result
In this section, we prove that a large deviation principle holds $P$-almost surely for the sequence of measures $(\langle\mathbf{1}_{S_n/n^\alpha\in\,\cdot}\rangle^{(n)},\ n\ge 1)$ if $\alpha>3/4$. This was first proved by Comets and Yoshida [3, Theorem 2.4.4] for a model of directed polymers in which the random walk is replaced by a Brownian motion and the environment is given by a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}^d$.

Theorem 3.1. Let $\alpha>3/4$. Then a large deviation principle for $(\langle\mathbf{1}_{S_n/n^\alpha\in\,\cdot}\rangle^{(n)},\ n\ge 1)$ holds $P$-a.s., with rate function $I(\lambda)=\|\lambda\|^2/2$ and speed $n^{2\alpha-1}$, $\|\cdot\|$ denoting the Euclidean norm on $\mathbb{R}^d$. In particular, for all $\varepsilon>0$,
$$\lim_{n\to\infty}-\frac{1}{n^{2\alpha-1}}\log\bigl\langle\mathbf{1}_{\|S_n\|\ge\varepsilon n^\alpha}\bigr\rangle^{(n)}=\frac{\varepsilon^2}{2}\quad P\text{-a.s.}$$

Remark 3.2. In particular this result implies that for all $\alpha>3/4$,
$$\bigl\langle\mathbf{1}_{|S_n|\ge n^\alpha}\bigr\rangle^{(n)}\xrightarrow[n\to\infty]{}0\quad P\text{-a.s.} \tag{5}$$
Proof. Let us fix $\lambda\in\mathbb{R}^d$ and $n\ge 1$, and introduce the following martingale:
$$M_p^{\lambda,n}=\begin{cases}e^{\lambda\cdot S_p-p\|\lambda\|^2/2}&\text{if }p\le n,\\ e^{\lambda\cdot S_n-n\|\lambda\|^2/2}&\text{if }p>n,\end{cases}$$
with $x\cdot y$ denoting the scalar product of two vectors $x$ and $y$ in $\mathbb{R}^d$. If $Q^{\lambda,n}$ is the probability defined by the Girsanov change of measure associated to this positive martingale, Girsanov's formula ensures that, under $Q^{\lambda,n}$, the process $(\tilde S_p:=S_p-\lambda(p\wedge n),\ p\ge 1)$ has the same distribution as $S$ under $P$. Therefore:
$$Z_n\bigl\langle e^{\lambda\cdot S_n}\bigr\rangle^{(n)}=e^{n\|\lambda\|^2/2}\,E\bigl(M_n^{\lambda,n}\,e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)=e^{n\|\lambda\|^2/2}\,E\bigl(e^{\beta\sum_{k=1}^n g(k,S_k+k\lambda)}\bigr)=e^{n\|\lambda\|^2/2}\,E\bigl(e^{\beta\sum_{k=1}^n g^{\lambda,n}(k,S_k)}\bigr),$$
where we denote by $g^{\lambda,n}$ the translated environment
$$g^{\lambda,n}(k,x):=g\bigl(k,\ x+\lambda(k\wedge n)\bigr).$$
By stationarity, this environment has the same distribution as $(g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$, hence
$$E\bigl(e^{\beta\sum_{k=1}^n g^{\lambda,n}(k,S_k)}\bigr)\overset{d}{=}E\bigl(e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr),$$
thus
$$E\log\bigl\langle e^{\lambda\cdot S_n}\bigr\rangle^{(n)}=n\|\lambda\|^2/2. \tag{6}$$
Now let us fix $\alpha>3/4$. With $n^{\alpha-1}\lambda$ in place of $\lambda$, (6) gives
$$E\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)}=n^{2\alpha-1}\|\lambda\|^2/2. \tag{7}$$
Let us define $f_n(x)=e^{n^{\alpha-1}\lambda\cdot x}$. This function is positive and $E\bigl(f_n(S_n)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)<\infty$, so the result of Corollary 2.5 remains true with $f_n(x)$ in place of $\mathbf{1}_{x\in B(j,n)}$. Since $2\alpha-1>1/2$, this implies
$$\lim_{n\to\infty}\frac{1}{n^{2\alpha-1}}\Bigl[\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)}-E\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)}\Bigr]=0\quad P\text{-a.s.} \tag{8}$$
From (7) and (8), we get:
$$\lim_{n\to\infty}\frac{1}{n^{2\alpha-1}}\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)}=\|\lambda\|^2/2\quad P\text{-a.s.}$$
Let us define $h_n(\lambda)=\frac{1}{n^{2\alpha-1}}\log\langle e^{n^{\alpha-1}\lambda\cdot S_n}\rangle^{(n)}$ and $h(\lambda)=\|\lambda\|^2/2$. From what we proved, we deduce the existence of $A\subset\Omega^g$ with $P(A)=1$, on which $h_n(\lambda)\to h(\lambda)$ for all $\lambda\in\mathbb{Q}^d$. We now show that on $A$ the convergence actually holds for all $\lambda\in\mathbb{R}^d$, using that the functions $h_n$ are convex and $h$ is continuous. To this end, we can reduce the proof to the case $d=1$: indeed, if $d\ge 2$, we fix $(d-1)$ coordinates in $\mathbb{Q}^{d-1}$, use that $h_n$ is still convex as a function of the last coordinate, and then repeat the argument. So let us assume $d=1$ and fix $\lambda\in\mathbb{R}^*$. There exist two sequences $(a_i,\ i\ge 0)$ and $(b_i,\ i\ge 0)$ of rationals converging to $\lambda$, $(a_i,\ i\ge 0)$ increasing and $(b_i,\ i\ge 0)$ decreasing. Fix $i\ge 1$ and $n\ge 1$. Since $h_n$ is convex and satisfies $h_n(0)=0$, the function $x\mapsto h_n(x)/x$ is increasing. Hence the following inequalities hold:
$$\frac{h_n(a_i)}{a_i}\;\le\;\frac{h_n(\lambda)}{\lambda}\;\le\;\frac{h_n(b_i)}{b_i}.$$
Since $h_n$ converges to $h$ on $\mathbb{Q}$, it follows that
$$\frac{h(a_i)}{a_i}\;\le\;\liminf_{n\to\infty}\frac{h_n(\lambda)}{\lambda}\;\le\;\limsup_{n\to\infty}\frac{h_n(\lambda)}{\lambda}\;\le\;\frac{h(b_i)}{b_i}.$$
The limit function $h$ being continuous, we obtain by letting $i\to+\infty$ that the limit of $h_n(\lambda)/\lambda$ exists and equals $h(\lambda)/\lambda$, which proves that $h_n(\lambda)\to h(\lambda)$ for all $\lambda\in\mathbb{R}$.
One then concludes by the Gärtner–Ellis–Baldi theorem (see [4]). □
4. Upper bound of the volume exponent
We now extend the result (5) to the maximal deviation from the origin:
Theorem 4.1. For all $d\ge 1$ and $\alpha>3/4$,
$$\bigl\langle\mathbf{1}_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)}\xrightarrow[n\to\infty]{}0\quad P\text{-a.s.}$$
Proof. We will use the following notations: for $x\in\mathbb{R}^d$ and $r\ge 0$, $B(x,r)=\{y\in\mathbb{R}^d:\ |y-x|\le r\}$; for $\alpha\ge 0$ and $j=(j_1,\dots,j_d)\in\mathbb{Z}^d$, $B_j^\alpha=B(jn^\alpha,n^\alpha)$. We will use the fact that the balls $(B_j^\alpha,\ j\in(2\mathbb{Z})^d\setminus\{0\})$ form a partition of $\mathbb{R}^d\setminus B(0,n^\alpha)$.
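This covering fact is easy to verify numerically (an illustration, not in the paper): for the sup-norm, any $x$ with $|x|>n^\alpha$ lies in the cube $B_j^\alpha$ whose center $jn^\alpha$ has $j_i=2\operatorname{round}(x_i/(2n^\alpha))$, and that $j$ is nonzero.

```python
import numpy as np

# Check that the cubes B_j^alpha = B(j n^alpha, n^alpha), j in (2Z)^d \ {0},
# cover R^d \ B(0, n^alpha) for the sup-norm.
rng = np.random.default_rng(4)
d, r = 3, 2.5                        # r plays the role of n^alpha

all_covered = True
for _ in range(10_000):
    x = rng.uniform(-20.0, 20.0, size=d)
    if np.max(np.abs(x)) <= r:       # x in B(0, n^alpha): nothing to check
        continue
    j = 2.0 * np.round(x / (2.0 * r))          # nearest even-integer center
    covered = (np.max(np.abs(x - j * r)) <= r) and bool(np.any(j != 0))
    all_covered = all_covered and covered
```

Indeed $|x_i-j_ir|\le r$ by construction, and if all $j_i$ were $0$ then $|x_i|\le r$ for every $i$, contradicting $|x|>r$.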
We first prove the following upper bound:
Proposition 4.2. Let $n\ge 0$ and $k\le n$. Then for any $j\in\mathbb{Z}^d$ and $\alpha>0$,
$$E\log\bigl\langle\mathbf{1}_{S_k\in B_j^\alpha}\bigr\rangle^{(n)}\;\le\;-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2,$$
where $\varepsilon_i=\operatorname{sgn}(j_i)$ ($=0$ if $j_i=0$).
Proof. Write $a_{k,j}^\alpha=E\bigl(\mathbf{1}_{S_k\in B_j^\alpha}\,e^{\beta\sum_{i=1}^n g(i,S_i)}\bigr)$, so that $\langle\mathbf{1}_{S_k\in B_j^\alpha}\rangle^{(n)}=a_{k,j}^\alpha/Z_n$. Let $\lambda=\tilde\lambda/k$ with $\tilde\lambda_i=(j_i-\varepsilon_i)n^\alpha$ for $1\le i\le d$, and define the martingale
$$M_p^{\lambda,k}=\begin{cases}e^{\lambda\cdot S_p-p\|\lambda\|^2/2}&\text{if }p\le k,\\ e^{\lambda\cdot S_k-k\|\lambda\|^2/2}&\text{if }p>k,\end{cases}$$
where $x\cdot y$ denotes the usual scalar product in $\mathbb{R}^d$ and $\|x\|$ the associated Euclidean norm. Under the probability $Q^{\lambda,k}$ associated to this martingale, $(S_p)_{p\ge 0}$ has the law of the following shifted random walk under $P$:
$$\tilde S_p=S_p+\tilde\lambda\Bigl(\frac{p}{k}\wedge 1\Bigr).$$
It follows that:
$$a_{k,j}^\alpha=E\Bigl(e^{-\frac{1}{k}(\tilde\lambda\cdot S_k+\|\tilde\lambda\|^2/2)}\,\mathbf{1}_{S_k\in B_j^\alpha-\tilde\lambda}\,e^{\beta\sum_{i=1}^n\tilde g(i,S_i)}\Bigr), \tag{9}$$
where $\tilde g(i,x)=g\bigl(i,\ x+\tilde\lambda(i/k\wedge 1)\bigr)$.
Now we notice that on the event $\{S_k\in B_j^\alpha-\tilde\lambda\}$, one has $\tilde\lambda\cdot S_k\ge 0$: indeed, writing $S_k=(S_k^1,\dots,S_k^d)$, for any $1\le i\le d$ we have $|S_k^i-j_in^\alpha+\tilde\lambda_i|\le n^\alpha$, hence:
• for $j_i\ge 1$, $\tilde\lambda_i=(j_i-1)n^\alpha\ge 0$ and $0\le S_k^i\le 2n^\alpha$;
• for $j_i\le -1$, $\tilde\lambda_i=(j_i+1)n^\alpha\le 0$ and $-2n^\alpha\le S_k^i\le 0$;
• for $j_i=0$, $\tilde\lambda_i=0$;
so that in all cases $\tilde\lambda_iS_k^i\ge 0$ and thus $\tilde\lambda\cdot S_k\ge 0$. Therefore, on the event $\{S_k\in B_j^\alpha-\tilde\lambda\}$, since $k\le n$,
$$e^{-\frac{1}{k}(\tilde\lambda\cdot S_k+\|\tilde\lambda\|^2/2)}\;\le\;e^{-\frac{\|\tilde\lambda\|^2}{2n}}=e^{-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2},$$
and (9) leads to:
$$a_{k,j}^\alpha\;\le\;e^{-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2}\,E\Bigl(\mathbf{1}_{S_k\in B_j^\alpha-\tilde\lambda}\,e^{\beta\sum_{i=1}^n\tilde g(i,S_i)}\Bigr).$$
On the other hand, $Z_n\ge E\bigl(\mathbf{1}_{S_k\in B_j^\alpha-\tilde\lambda}\,e^{\beta\sum_{i=1}^n g(i,S_i)}\bigr)$, and since by stationarity the environment $\tilde g$ has the same distribution as $g$, it follows that for all $j\in\mathbb{Z}^d$,
$$E\log\bigl\langle\mathbf{1}_{S_k\in B_j^\alpha}\bigr\rangle^{(n)}\;\le\;-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2.\qquad\square$$
Let $\nu>1/2$. We deduce from Proposition 4.2 and from Corollary 2.5 (with $B(k,n)=B_j^\alpha$) that, $P$-a.s., for $n\ge n_0$, $k\le n$ and all $j\in\mathbb{Z}^d$:
$$\log\bigl\langle\mathbf{1}_{S_k\in B_j^\alpha}\bigr\rangle^{(n)}\;\le\;2n^\nu-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2.$$
So, $P$-a.s., for $n\ge n_0$,
$$\bigl\langle\mathbf{1}_{|S_k|\ge n^\alpha}\bigr\rangle^{(n)}\;\le\;\sum_{j\in(2\mathbb{Z})^d\setminus\{0\}}e^{2n^\nu-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2},$$
and
$$\bigl\langle\mathbf{1}_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)}\;\le\;\sum_{k=1}^n\bigl\langle\mathbf{1}_{|S_k|\ge n^\alpha}\bigr\rangle^{(n)}\;\le\;\sum_{j\in(2\mathbb{Z})^d\setminus\{0\}}n\,e^{2n^\nu-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2}.$$
i=1(ji−εi)2. But by symmetry, for anyC >0,
j∈(2Z)d\{0}
e−Cdi=1(ji−εi)22d
j12
e−C(j1−1)2 d
i=2
ji∈2Z
e−C(ji−εi)2
and using that
j2e−C(j−1)2
j1e−Cj =1−e−Ce−C,we conclude that,P-a.s., for some constantC(d) >0, and fornn0:
$$\bigl\langle\mathbf{1}_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)}\;\le\;C(d)\,n\,e^{2n^\nu}\,\frac{e^{-n^{2\alpha-1}/2}}{1-e^{-n^{2\alpha-1}/2}}.$$
Thus for all $\alpha>\frac{\nu+1}{2}$, $P$-a.s.,
$$\bigl\langle\mathbf{1}_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)}\xrightarrow[n\to\infty]{}0.$$
This is true for all $\nu>1/2$, which ends the proof. □
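The elementary comparison of the Gaussian-type series with a geometric series used in the proof of Theorem 4.1 can be checked numerically (an illustration, not in the paper): $\sum_{j\ge 2}e^{-C(j-1)^2}\le\sum_{j\ge 1}e^{-Cj}=e^{-C}/(1-e^{-C})$ for every $C>0$.

```python
import numpy as np

# Numerical check of sum_{j>=2} e^{-C(j-1)^2} <= e^{-C} / (1 - e^{-C}),
# for a few values of C (the series is truncated; the tail is negligible).
margins = []
for C in (0.1, 0.5, 2.0, 10.0):
    j = np.arange(2, 200)
    lhs = np.sum(np.exp(-C * (j - 1) ** 2))
    rhs = np.exp(-C) / (1 - np.exp(-C))
    margins.append(rhs - lhs)
```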
Appendix A. Proof of Lemma 2.4
The beginning of the proof is identical to the one given by Lesigne and Volný in [5, pp. 148–149]; only the last ten lines differ.
Let us denote by $(\mathcal{F}_i^n)_{1\le i\le n}$ the filtration of $(X_i^n)_{1\le i\le n}$. The hypothesis that it is a martingale difference sequence means that, for each $i$, $X_i^n$ is $\mathcal{F}_i^n$-measurable and, if $i\ge 2$, $E[X_i^n\mid\mathcal{F}_{i-1}^n]=0$.
Let us fix $a>0$ and, for $1\le i\le n$, define
$$Y_i^n=X_i^n\mathbf{1}_{|X_i^n|\le an^{1/3}}-E\bigl[X_i^n\mathbf{1}_{|X_i^n|\le an^{1/3}}\bigm|\mathcal{F}_{i-1}^n\bigr]$$
and
$$Z_i^n=X_i^n\mathbf{1}_{|X_i^n|>an^{1/3}}-E\bigl[X_i^n\mathbf{1}_{|X_i^n|>an^{1/3}}\bigm|\mathcal{F}_{i-1}^n\bigr],$$
and then define $M_n'=\sum_{i=1}^n Y_i^n$ and $M_n''=\sum_{i=1}^n Z_i^n$. Since $(X_i^n)_{1\le i\le n}$ is a martingale difference sequence, $(Y_i^n)_{1\le i\le n}$ and $(Z_i^n)_{1\le i\le n}$ are martingale difference sequences and $X_i^n=Y_i^n+Z_i^n$ ($1\le i\le n$).
Let us fix $t\in(0,1)$. For every $x>0$,
$$P\bigl(|M_n|>nx\bigr)\;\le\;P\bigl(|M_n'|>nxt\bigr)+P\bigl(|M_n''|>nx(1-t)\bigr). \tag{A.1}$$
Since $|Y_i^n|\le 2an^{1/3}$ for $1\le i\le n$, Azuma's inequality implies:
$$P\bigl(|M_n'|>nxt\bigr)=P\left(\frac{|M_n'|}{2an^{1/3}}>\frac{nxt}{2an^{1/3}}\right)\;\le\;2\exp\left(-\frac{t^2x^2}{8a^2}\,n^{1/3}\right). \tag{A.2}$$
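Azuma's inequality itself, the ingredient behind (A.2), is easy to illustrate by Monte Carlo (not in the paper): if a martingale $M_n$ has increments bounded by $c$, then $P(|M_n|>s)\le 2\exp(-s^2/(2nc^2))$.

```python
import numpy as np

# Monte Carlo illustration of Azuma's inequality for +-c increments.
rng = np.random.default_rng(5)
n, c, s, trials = 1000, 1.0, 80.0, 20_000

M = rng.choice([-c, c], size=(trials, n)).sum(axis=1)
emp = np.mean(np.abs(M) > s)                        # empirical P(|M_n| > s)
azuma = 2.0 * np.exp(-s ** 2 / (2.0 * n * c ** 2))
```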
To control the second term in (A.1), we notice that $E\bigl((M_n'')^2\bigr)=\sum_{i=1}^n E\bigl((Z_i^n)^2\bigr)$. For each $1\le i\le n$, writing $F_i^n(x)=P(|X_i^n|>x)$:
$$E\bigl((Z_i^n)^2\bigr)=E\Bigl[\bigl(X_i^n\mathbf{1}_{|X_i^n|>an^{1/3}}\bigr)^2\Bigr]-E\Bigl[E\bigl[X_i^n\mathbf{1}_{|X_i^n|>an^{1/3}}\bigm|\mathcal{F}_{i-1}^n\bigr]^2\Bigr]\;\le\;E\Bigl[\bigl(X_i^n\mathbf{1}_{|X_i^n|>an^{1/3}}\bigr)^2\Bigr]=-\int_{an^{1/3}}^{+\infty}x^2\,dF_i^n(x).$$
Since $E\,e^{|X_i^n|}\le K$, we have $F_i^n(x)\le Ke^{-x}$ for all $x\ge 0$, hence:
$$-\int_{an^{1/3}}^{+\infty}x^2\,dF_i^n(x)\;\le\;Ka^2n^{2/3}e^{-an^{1/3}}+2K\int_{an^{1/3}}^{+\infty}xe^{-x}\,dx=K\bigl(a^2n^{2/3}+2an^{1/3}+2\bigr)e^{-an^{1/3}}.$$
It follows that $E\bigl((M_n'')^2\bigr)\le nK\bigl(a^2n^{2/3}+2an^{1/3}+2\bigr)e^{-an^{1/3}}$, and by Chebyshev's inequality:
$$P\bigl(|M_n''|>nx(1-t)\bigr)\;\le\;\frac{K}{x^2(1-t)^2}\bigl(a^2n^{-1/3}+2an^{-2/3}+2n^{-1}\bigr)e^{-an^{1/3}}. \tag{A.3}$$
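The closed-form tail integral used in this estimate, $\int_b^{+\infty}x^2e^{-x}\,dx=(b^2+2b+2)e^{-b}$ (here with $b=an^{1/3}$, combined with $F_i^n(x)\le Ke^{-x}$), can be checked numerically; this is an illustration, not part of the paper.

```python
import numpy as np

# Numerical check of int_b^inf x^2 e^{-x} dx = (b^2 + 2b + 2) e^{-b}
# via the trapezoid rule on a truncated range (the tail is negligible).
results = []
for b in (0.5, 2.0, 5.0):
    x = np.linspace(b, b + 60.0, 400_001)
    y = x ** 2 * np.exp(-x)
    numeric = np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x))   # trapezoid rule
    closed = (b ** 2 + 2.0 * b + 2.0) * np.exp(-b)
    results.append((numeric, closed))
```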
We choose $a=\frac{1}{2}(tx)^{2/3}$, so that $\frac{t^2x^2}{8a^2}=a$. From (A.1), (A.2) and (A.3), we deduce:
$$P\bigl(|M_n|>nx\bigr)\;\le\;\Bigl(2+\frac{K}{(1-t)^2}f(t,x,n)\Bigr)\exp\Bigl(-\tfrac{1}{2}(tx)^{2/3}n^{1/3}\Bigr), \tag{A.4}$$
with $f(t,x,n)=\frac{1}{4}t^{4/3}x^{-2/3}n^{-1/3}+t^{2/3}x^{-4/3}n^{-2/3}+2x^{-2}n^{-1}$. Now, taking $x=n^{\nu-1}$, we have:
$$P\bigl(|M_n|>n^\nu\bigr)=P\bigl(|M_n|>nx\bigr)\;\le\;\Bigl(2+\frac{K}{(1-t)^2}g(t,n)\Bigr)\exp\Bigl(-\tfrac{1}{2}t^{2/3}n^{(2\nu-1)/3}\Bigr), \tag{A.5}$$
with $g(t,n)=f(t,n^{\nu-1},n)=\frac{1}{4}t^{4/3}n^{-(2\nu-1)/3}+t^{2/3}n^{-2(2\nu-1)/3}+2n^{-(2\nu-1)}$. Now we fix $\varepsilon>0$ and choose $t_0\in(0,1)$ such that $0<1-t_0^{2/3}<\varepsilon/2$. Then (A.5) implies:
$$P\bigl(|M_n|>n^\nu\bigr)\exp\Bigl(\tfrac{1}{2}(1-\varepsilon)n^{(2\nu-1)/3}\Bigr)\;\le\;\Bigl(2+\frac{K}{(1-t_0)^2}g(t_0,n)\Bigr)\exp\Bigl(-\tfrac{\varepsilon}{4}n^{(2\nu-1)/3}\Bigr).$$
But, since $\nu>1/2$, $\bigl(2+\frac{K}{(1-t_0)^2}g(t_0,n)\bigr)\exp\bigl(-\frac{\varepsilon}{4}n^{(2\nu-1)/3}\bigr)\xrightarrow[n\to\infty]{}0$. Therefore there exists $n_0(\varepsilon)$ such that, for all $n\ge n_0(\varepsilon)$,
$$P\bigl(|M_n|>n^\nu\bigr)\;\le\;\exp\Bigl(-\tfrac{1}{2}(1-\varepsilon)n^{(2\nu-1)/3}\Bigr). \tag{A.6}$$
Taking $\varepsilon=1/2$ gives exactly the statement of Lemma 2.4.
References
[1] P. Carmona, Y. Hu, On the partition function of a directed polymer in a Gaussian random environment, Probab. Theory Related Fields 124 (3) (2002) 431–457.
[2] F. Comets, T. Shiga, N. Yoshida, Directed polymers in random environment: path localization and strong disorder, Bernoulli 9 (4) (2003) 705–723.
[3] F. Comets, N. Yoshida, Brownian directed polymers in random environment, Comm. Math. Phys. (2003), submitted for publication.
[4] A. Dembo, O. Zeitouni, Large Deviations Techniques and Applications, second ed., in: Applications of Mathematics, vol. 38, Springer- Verlag, New York, 1998.
[5] E. Lesigne, D. Volný, Large deviations for martingales, Stochastic Process. Appl. 96 (1) (2001) 143–159.
[6] C. Licea, C.M. Newman, M.S.T. Piza, Superdiffusivity in first-passage percolation, Probab. Theory Related Fields 106 (4) (1996) 559–591.
[7] J.Z. Imbrie, T. Spencer, Diffusion of directed polymers in a random environment, J. Statist. Phys. 52 (3–4) (1988) 609–626.
[8] M. Petermann, Superdiffusivity of directed polymers in random environment, Part of thesis, 2000.
[9] M. Piza, Directed polymers in a random environment: Some results on fluctuations, J. Statist. Phys. 89 (3–4) (1997) 581–603.