JULIEN BARRAL, JACQUES PEYRIÈRE†‡, AND ZHI-YING WEN†&

Abstract. Mandelbrot multiplicative cascades provide a construction of a dynamical system on a set of probability measures defined by inequalities on moments. To be more specific, beyond the first iteration, the trajectories take values in the set of fixed points of smoothing transformations (i.e., some generalized stable laws).

Studying this system leads to a central limit theorem and to its functional version. The limit Gaussian process can also be obtained as the limit of an 'additive cascade' of independent normal variables.

1. A dynamical system

Consider the set $A=\{0,\dots,b-1\}$, where $b\ge2$. Set $A^*=\bigcup_{n\ge0}A^n$, where, by convention, $A^0$ is the singleton $\{\epsilon\}$ whose only element is the empty word $\epsilon$. If $w\in A^*$, we denote by $|w|$ the integer such that $w\in A^{|w|}$. If $n\ge1$ and $w=w_1\cdots w_n\in A^n$, then for $1\le k\le n$ the word $w_1\cdots w_k$ is denoted by $w|_k$. By convention, $w|_0=\epsilon$.

Given $v$ and $w$ in $A^n$, $v\wedge w$ is defined to be the longest prefix common to both $v$ and $w$, i.e., $v|_{n_0}$, where $n_0=\sup\{0\le k\le n : v|_k=w|_k\}$.

Let $A^\omega$ stand for the set of infinite sequences $w=w_1w_2\cdots$ of elements of $A$. Also, for $x\in A^\omega$ and $n\ge0$, let $x|_n$ stand for the projection of $x$ on $A^n$.

If $w\in A^*$, we consider the cylinder $[w]$ consisting of the infinite words in $A^\omega$ of which $w$ is a prefix.

We index the closed $b$-adic subintervals of $[0,1]$ by $A^*$: for $w\in A^*$, we set
\[
I_w=\Bigg[\sum_{1\le k\le|w|}w_k b^{-k},\ \sum_{1\le k\le|w|}w_k b^{-k}+b^{-|w|}\Bigg].
\]

If $f:[0,1]\to\mathbb{R}$ is bounded, for every subinterval $I=[\alpha,\beta]$ of $[0,1]$ we denote by $\Delta(f,I)$ the increment $f(\beta)-f(\alpha)$ of $f$ over the interval $I$.

Let $\mathcal P$ be the set of Borel probability measures on $\mathbb{R}_+$. If $\mu\in\mathcal P$ and $p>0$, we denote by $m_p(\mu)$ the moment of order $p$ of $\mu$, i.e.,
\[
m_p(\mu)=\int_{\mathbb{R}_+}x^p\,\mu(dx).
\]

2000 Mathematics Subject Classification. 37C99; 60F05, 60F17; 60G15, 60G17, 60G42.

Key words and phrases. Multiplicative cascades, Mandelbrot martingales, additive cascades, dynamical systems, functional central limit theorem, Gaussian processes, random fractals.

INRIA Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France. Julien.Barral@inria.fr.

Dept. Math., Tsinghua University, Beijing, 100084, P. R. China. wenzy@mail.tsinghua.edu.cn.

Université Paris-Sud, Mathématique bât. 425, CNRS UMR 8628, 91405 Orsay Cedex, France. Jacques.Peyriere@math.u-psud.fr.

& Partially supported by the National Basic Research Program of China (973 Program) (2007CB814800).


Then let $\mathcal P_1$ be the set of elements of $\mathcal P$ whose first moment equals 1:
\[
\mathcal P_1=\{\mu\in\mathcal P : m_1(\mu)=1\}.
\]

The smoothing transformation $S_\mu$ associated with $\mu\in\mathcal P$ is the mapping from $\mathcal P$ to itself defined as follows: if $\nu\in\mathcal P$, one considers $2b$ independent random variables $Y(0),Y(1),\dots,Y(b-1)$, whose common probability distribution is $\nu$, and $W(0),W(1),\dots,W(b-1)$, whose common probability distribution is $\mu$; then $S_\mu\nu$ is the probability distribution of
\[
b^{-1}\sum_{0\le j<b}W(j)Y(j).
\]
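To make the definition concrete, here is a minimal Monte Carlo sketch (ours, not from the paper) of a single application of $S_\mu$: it draws the $2b$ independent variables and returns samples of $b^{-1}\sum_{0\le j<b}W(j)Y(j)$. The function names and the particular choice of $\mu$ (a lognormal normalized so that $m_1(\mu)=1$) are illustrative assumptions.

```python
import numpy as np

def smoothing_transform_samples(sample_nu, sample_mu, b, size, rng):
    """Return `size` samples of S_mu(nu), i.e. of b^{-1} * sum_j W(j) Y(j),
    where Y(0), ..., Y(b-1) ~ nu and W(0), ..., W(b-1) ~ mu are all independent."""
    Y = sample_nu((size, b), rng)
    W = sample_mu((size, b), rng)
    return (W * Y).sum(axis=1) / b

def sample_mu(shape, rng, s=0.3):
    # illustrative choice of mu: lognormal normalized so that m_1(mu) = 1
    return rng.lognormal(mean=-s**2 / 2, sigma=s, size=shape)

def sample_delta1(shape, rng):
    # Dirac mass at 1
    return np.ones(shape)

rng = np.random.default_rng(0)
x = smoothing_transform_samples(sample_delta1, sample_mu, b=3, size=100_000, rng=rng)
print(x.mean())  # close to 1, as expected when mu is in P_1
```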

This transformation and its fixed points have been considered in several contexts, in particular by B. Mandelbrot, who introduced it to construct a model for turbulence and intermittency (see [9, 10, 6, 11, 7, 4, 5]).

In this latter case, the measure $\mu$ is in $\mathcal P_1$, so that $S_\mu$ maps $\mathcal P_1$ into itself. It is known that the condition $\int x\log(x)\,\mu(dx)<\log b$ is then necessary and sufficient for the weak convergence of the sequence $S_\mu^n\delta_1$ (where $\delta_1$ stands for the Dirac mass at point 1) towards a probability measure $\nu$, which therefore is a fixed point of $S_\mu$ (see [9, 10, 6, 7, 4]). In other words, if $\int x\log(x)\,\mu(dx)<\log b$ and if $\big(W(w)\big)_{w\in A^*}$ is a family of independent random variables whose probability distribution is $\mu$, then the non-negative martingale
\[
Y_n=b^{-n}\sum_{w\in A^n}W(w|_1)W(w|_2)\cdots W(w|_n) \tag{1}
\]
is uniformly integrable and converges to a random variable $Y$ whose probability distribution $\nu$ belongs to $\mathcal P_1$ and satisfies $S_\mu\nu=\nu$. This means that there exist $b$ copies $W(0),\dots,W(b-1)$ of $W$ and $b$ copies $Y(0),\dots,Y(b-1)$ of $Y$ such that these $2b$ random variables are independent and
\[
Y=b^{-1}\sum_{k=0}^{b-1}W(k)Y(k). \tag{2}
\]
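As an illustration (not part of the paper), the martingale (1) can be simulated along one realization of the cascade by extending the weight products generation by generation; the lognormal weight below is an arbitrary choice satisfying $\mathbb{E}W=1$ and $\mathbb{E}(W\log W)<\log b$.

```python
import numpy as np

def sample_W(shape, rng, s=0.3):
    # illustrative lognormal weight with E[W] = 1 and E[W log W] = s^2/2 < log b
    return rng.lognormal(mean=-s**2 / 2, sigma=s, size=shape)

def mandelbrot_martingale(b, n_max, rng):
    """Return [Y_1, ..., Y_{n_max}] along one realization of the cascade (Formula (1))."""
    prods, traj = np.ones(1), []
    for _ in range(n_max):
        W = sample_W((prods.size, b), rng)     # fresh weights for the b children of each node
        prods = (prods[:, None] * W).ravel()   # products W(w|1)...W(w|n) over all words of length n
        traj.append(prods.mean())              # Y_n = b^{-n} * sum over A^n of the products
    return traj

rng = np.random.default_rng(1)
print([round(y, 4) for y in mandelbrot_martingale(3, 10, rng)])
# the successive Y_n of this single realization stabilize near the limit Y
```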

In this case, we denote the measure $\nu$ by $T\mu$. It is natural to try and iterate $T$. But, in general, this is not possible because $\nu=T\mu$ may not inherit the property $\int x\log(x)\,\nu(dx)<\log b$. So, we have to find a domain stable under the action of $T$. This will be done by imposing conditions on moments.

Indeed, it is easily seen that the sequence $(Y_n)_{n\ge1}$ defined by (1) remains bounded in $L^2$ norm if and only if $\mathbb{E}(W^2)=m_2(\mu)<b$, and that in this case Formula (2) yields
\[
\mathbb{E}Y^2=\frac{b-1}{b-\mathbb{E}W^2} \tag{3}
\]
(since the random variables $W(j)$ and $Y(j)$ are independent and of expectation 1, squaring both sides of Formula (2) yields $b^2\,\mathbb{E}Y^2=b\,\mathbb{E}W^2\,\mathbb{E}Y^2+b(b-1)$). It follows that if $b\ge3$ and $1\le\mathbb{E}W^2<b-1$, we have $\mathbb{E}Y^2\le\mathbb{E}W^2$ (the equality holding only if $W=1$). Therefore, since the condition $\mathbb{E}W^2<b$ is stronger than $\mathbb{E}(W\log W)<\log b$ when $\mathbb{E}W=1$ (because the function $t\mapsto\log\mathbb{E}W^t$ is convex), $T$ is a transformation on the subset of $\mathcal P_1$ defined by
\[
\mathcal P_b=\big\{\mu\in\mathcal P_1 : 1<m_2(\mu)<b-1\big\}.
\]
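For the reader's convenience, here is the second-moment computation behind (3) written out; it uses only the independence and the normalization $\mathbb{E}W(k)=\mathbb{E}Y(k)=1$ recalled above:
\[
b^2\,\mathbb{E}Y^2=\mathbb{E}\Bigg(\sum_{k=0}^{b-1}W(k)Y(k)\Bigg)^{2}
=\sum_{k=0}^{b-1}\mathbb{E}W(k)^2\,\mathbb{E}Y(k)^2+\sum_{k\neq l}\mathbb{E}\big(W(k)Y(k)\big)\,\mathbb{E}\big(W(l)Y(l)\big)
=b\,\mathbb{E}W^2\,\mathbb{E}Y^2+b(b-1),
\]
which gives $\mathbb{E}Y^2=(b-1)/(b-\mathbb{E}W^2)$, i.e. Formula (3).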

If $\mu\in\mathcal P_b$, due to (2), we can associate with each $n\ge0$ a random variable $W_{n+1}$ as well as $2b$ independent random variables $W_n(0),\dots,W_n(b-1)$ and $W_{n+1}(0),\dots,W_{n+1}(b-1)$ such that
\[
W_{n+1}=\frac1b\sum_{k=0}^{b-1}W_n(k)W_{n+1}(k), \tag{4}
\]
where $T^n\mu$ is the probability distribution of $W_n(k)$ for every $k$ such that $0\le k\le b-1$, and $T^{n+1}\mu$ is the probability distribution of $W_{n+1}$ and of $W_{n+1}(k)$ for every $0\le k\le b-1$.

We advise the reader that if one writes Formula (4) with $n-1$ instead of $n$, the variables $W_n(k)$ which then appear are different from the previous ones.

In Mandelbrot [9, 10], the random variable $Y$ represents the increment between 0 and 1 of the non-decreasing continuous function $\phi$ on $[0,1]$ obtained as the almost sure uniform limit of the sequence of non-decreasing continuous functions $\phi_n$ defined by
\[
\phi_n(u)=\int_0^u\prod_{k=1}^{n}W(\tilde t|_k)\,dt, \tag{5}
\]
where $\tilde t$ stands for the sequence of digits in the base-$b$ expansion of $t$ (of course the ambiguity for countably many $t$'s is harmless). In other words, for $w\in A^*$, we have
\[
\Delta(\phi,I_w)=b^{-|w|}Y(w)\prod_{1\le j\le|w|}W(w|_j), \tag{6}
\]
where $Y(w)$ has the same distribution as $Y$ and is independent of the variables $W(w|_j)$.
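The following sketch (an illustration, not taken from the paper) builds one realization of $\phi_n$ on the $b$-adic grid of generation $n$: by (5), the increment of $\phi_n$ over $I_w$ with $|w|=n$ is $b^{-n}\prod_{k=1}^{n}W(w|_k)$, and these increments are cumulated from left to right. The lognormal weight is again an arbitrary admissible choice.

```python
import numpy as np

def sample_W(shape, rng, s=0.3):
    # illustrative lognormal weight with E[W] = 1
    return rng.lognormal(mean=-s**2 / 2, sigma=s, size=shape)

def phi_n_on_grid(b, n, rng):
    """One realization of phi_n at the b-adic points k * b^{-n}, k = 0, ..., b^n.

    By (5), the increment of phi_n over I_w with |w| = n is b^{-n} * W(w|1)...W(w|n)."""
    prods = np.ones(1)
    for _ in range(n):
        W = sample_W((prods.size, b), rng)
        prods = (prods[:, None] * W).ravel()   # words of length n, ordered from left to right
    increments = prods * b**(-n)
    return np.concatenate(([0.0], np.cumsum(increments)))

rng = np.random.default_rng(2)
vals = phi_n_on_grid(3, 8, rng)
print(vals[-1])  # phi_n(1), an approximation of the total increment Y
```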

Let us denote by $F(\mu)$ the probability distribution of the limit $\phi$, considered as a random continuous function.

We are going to study the dynamical system $(\mathcal P_b,T)$. This will lead to a description of the asymptotic behavior of $\big(T^n\mu,\,F(T^{n-1}\mu)\big)_{n\ge1}$ as $n$ goes to $\infty$.

We need some more definitions. For $b\ge3$, set
\[
w_2(b)=\min\left(b-1,\ \frac{b\,(b^4-4b^2+12b-8)}{b^4+8b^2-12b+4}\right)
\]
and, for $t$ such that $1<t<w_2(b)$,
\[
w_3(b,t)=\frac{b^2}{2}+\frac12\sqrt{\frac{b\,(b^4-4b^2+12b-8)-t\,(b^4+8b^2-12b+4)}{b-t}}.
\]
One always has $w_3(b,t)<b^2-1$.

Also set
\[
\mathcal D_b=\Big\{\mu\in\mathcal P : m_1(\mu)=1,\ 1<m_2(\mu)<w_2(b),\ \text{and}\ m_3(\mu)<w_3\big(b,m_2(\mu)\big)\Big\}.
\]

Theorem 1 (Central limit theorem). Suppose $b\ge3$. Let $\mu\in\mathcal P_b$, and, for $n\ge0$, define
\[
\sigma_n=\left(\int(x-1)^2\,T^n\mu(dx)\right)^{1/2}.
\]
Then

(1) The limit of $(b-1)^{n/2}\sigma_n$ exists and is positive; so $\lim_{n\to\infty}T^n\mu=\delta_1$.

(2) If $\mu\in\mathcal D_b$, then
\[
\sup_{n\ge0}\int\left(\frac{|x-1|}{\sigma_n}\right)^{3}T^n\mu(dx)<\infty.
\]

(3) Suppose that there exists $p>2$ such that
\[
\sup_{n\ge0}\int\left(\frac{|x-1|}{\sigma_n}\right)^{p}T^n\mu(dx)<\infty.
\]
Then, if $W_n$ is a variable whose distribution is $T^n\mu$, $\dfrac{W_n-1}{\sigma_n}$ converges in distribution towards $N(0,1)$.

Theorem 2 (Functional CLT). Suppose $b\ge3$. Let $\mu\in\mathcal P_b$. Then

(1) The probability distributions $F(T^n\mu)$ weakly converge towards $\delta_{\mathrm{Id}}$.

(2) Suppose that there exists $p>2$ such that
\[
\sup_{n\ge1}\int\left(\frac{|x-1|}{\sigma_n}\right)^{p}T^n\mu(dx)<\infty.
\]
(In particular, this holds if $\mu$ lies in the domain of attraction $\mathcal D_b$.) Then, if $h_n$ is a random function distributed according to $F(T^{n-1}\mu)$, the distribution of $\dfrac{h_n-\mathrm{Id}}{\sigma_n}$ weakly converges towards the distribution of the unique continuous Gaussian process $(X_t)_{t\in[0,1]}$ such that $X(0)=0$ and, for all $j\ge1$, the covariance matrix $M_j$ of the vector $\big(\Delta(X,I_w)\big)_{w\in A^j}$ is given by
\[
M_j(w,w')=\begin{cases}b^{-2j}\big(1+(b-1)|w|\big)&\text{if }w=w',\\ b^{-2j}(b-1)\,|w\wedge w'|&\text{otherwise.}\end{cases}
\]
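As a small illustration (ours, not from the paper), the covariance matrix $M_j$ can be assembled directly from the formula above and used to sample the Gaussian vector $\big(\Delta(X,I_w)\big)_{w\in A^j}$; the word indexing and the Cholesky-based sampling are implementation choices.

```python
import numpy as np
from itertools import product

def covariance_Mj(b, j):
    """M_j(w, w') = b^{-2j} * (1 + (b-1)*j) if w = w', and
    b^{-2j} * (b-1) * (length of the common prefix of w and w') otherwise."""
    words = list(product(range(b), repeat=j))          # words of length j, lexicographic order
    def common_prefix_length(u, v):
        k = 0
        while k < j and u[k] == v[k]:
            k += 1
        return k
    M = np.empty((b**j, b**j))
    for p, u in enumerate(words):
        for q, v in enumerate(words):
            M[p, q] = 1 + (b - 1) * j if p == q else (b - 1) * common_prefix_length(u, v)
    return M * b**(-2 * j)

b, j = 3, 3
M = covariance_Mj(b, j)
rng = np.random.default_rng(3)
increments = np.linalg.cholesky(M) @ rng.standard_normal(b**j)   # one sample of (Delta(X, I_w))_w
print(increments.sum())   # = X(1) - X(0) for this sample
```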

In Section 4, we will give an alternate construction of this Gaussian process $X$: it will be obtained as the almost sure limit of an additive cascade of normal variables.

Remark 1. It would be interesting to know whether the new limit process and central limit theorems provided in this paper could be useful for modeling in any area.

2. Proof of Theorem 1

Throughout this section and the next one, we assume $b\ge3$.

Proposition 3. If $\mu\in\mathcal P_b$ and $\sigma_n^2=\int(x-1)^2\,T^n\mu(dx)$, the sequence $(b-1)^{n/2}\sigma_n$ converges to
\[
\sigma_0\sqrt{\frac{b-2}{b-2-\sigma_0^2}}.
\]

Proof. Equations (4) and (3) yield $\mathbb{E}W_{n+1}^2=\dfrac{b-1}{b-\mathbb{E}W_n^2}$, from which we get the formula
\[
\sigma_{n+1}^2=\mathbb{E}W_{n+1}^2-1=\frac{\sigma_n^2}{b-1-\sigma_n^2}, \tag{7}
\]
which can be written as
\[
\frac{\sigma_{n+1}^2}{b-2-\sigma_{n+1}^2}=(b-1)^{-1}\,\frac{\sigma_n^2}{b-2-\sigma_n^2}.
\]
This yields
\[
\frac{\sigma_n^2}{b-2-\sigma_n^2}=\frac{\sigma_0^2}{b-2-\sigma_0^2}\,(b-1)^{-n}. \tag{8}
\]

Proposition 4. If $\mu\in\mathcal D_b$, then both sequences $\big(m_2(T^n\mu)\big)_{n\ge1}$ and $\big(m_3(T^n\mu)\big)_{n\ge1}$ are non-increasing and converge to 1 as $n$ goes to $\infty$.

Proof. For $n\ge0$, we set $u_n=m_2(T^n\mu)$ and $v_n=m_3(T^n\mu)$ and deduce from (4) that, for all $n\ge0$, we have
\[
u_{n+1}=\frac{b-1}{b-u_n} \tag{9}
\]
\[
v_{n+1}=\frac{(b-1)\big(3u_nu_{n+1}+b-2\big)}{b^2-v_n} \tag{10}
\]
if $u_n<b$ and $v_n<b^2$ (Formula (9) is a restatement of (3), and taking cubes in Formula (4) yields $b^3\,\mathbb{E}W_{n+1}^3=b\,\mathbb{E}W_n^3\,\mathbb{E}W_{n+1}^3+3b(b-1)\,\mathbb{E}W_n^2\,\mathbb{E}W_{n+1}^2+b(b-1)(b-2)$, hence Formula (10)). Since $1\le u_0<b-1$, as we already saw, Equation (9) implies that $u_n$ decreases, except in the trivial case $\mu=\delta_1$. Moreover $u_n$ converges towards 1, the stable fixed point of $t\mapsto(b-1)/(b-t)$.

The conditions $m_2(\mu)\le w_2(b)$ and $m_3(\mu)<w_3\big(b,m_2(\mu)\big)$ are optimal to ensure that $v_1\le v_0$, and they also impose $v_0<b^2-1$. We conclude by recursion: if $v_{n+1}\le v_n<b^2$, then we have
\[
v_{n+2}\le\frac{(b-1)\big(3u_{n+1}u_{n+2}+b-2\big)}{b^2-v_n}\le\frac{(b-1)\big(3u_nu_{n+1}+b-2\big)}{b^2-v_n}=v_{n+1}.
\]
Thus $(v_n)_{n\ge0}$ is non-increasing and $1\le v_n<b^2-1$, so we deduce from (10) and the fact that $u_n$ converges to 1 that $v_n$ converges to the smallest fixed point of the mapping $x\mapsto(b^2-1)/(b^2-x)$, namely 1.
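A quick numerical check of the recursions (7), (9) and (10) (an illustration only; the starting moments below are arbitrary values satisfying the constraints for $b=3$):

```python
b = 3
u, v = 1.4, 2.5            # u_0 = m_2(mu), v_0 = m_3(mu): arbitrary admissible values
sigma2 = u - 1.0           # sigma_0^2 = m_2(mu) - 1
for n in range(12):
    u_next = (b - 1) / (b - u)                             # Formula (9)
    v = (b - 1) * (3 * u * u_next + b - 2) / (b**2 - v)    # Formula (10)
    u = u_next
    sigma2 = sigma2 / (b - 1 - sigma2)                     # Formula (7)
    print(n + 1, round(u, 6), round(v, 6), round((b - 1) ** ((n + 1) / 2) * sigma2**0.5, 6))
# u_n and v_n decrease to 1, and (b-1)^{n/2} * sigma_n approaches
# sigma_0 * sqrt((b-2) / (b-2-sigma_0^2)), as in Propositions 3 and 4.
```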

Proposition 5. There exists $C>0$ such that, for $\mu\in\mathcal D_b$ and $n\ge1$, we have
\[
\big(b^2-\mathbb{E}W_n^3\big)\,\mathbb{E}|Z_{n+1}|^3\le(b-1)^{3/2}\,\mathbb{E}|Z_n|^3+C\Big(\big(\mathbb{E}|Z_n|^3\big)^{2/3}+\big(\mathbb{E}|Z_n|^3\big)^{1/3}+1\Big),
\]
where $Z_n=\dfrac{W_n-1}{\sigma_n}$.

Proof. We use the following simplified notations: $W=W_n$, $Y=W_{n+1}$; $\sigma_Y$ and $\sigma_W$ stand for the standard deviations of $Y$ and $W$, $Z_Y=\sigma_Y^{-1}|Y-1|$, $Z_W=\sigma_W^{-1}|W-1|$, and $r=\sigma_W/\sigma_Y$.

Then Equation (2) becomes
\[
b\,|Y-1|\le\sum_{i=0}^{b-1}W(i)\,|Y(i)-1|+\sum_{i=0}^{b-1}|W(i)-1|,
\]
i.e.,
\[
b\,Z_Y\le\sum_{i=0}^{b-1}W(i)Z_Y(i)+r\sum_{i=0}^{b-1}Z_W(i), \tag{11}
\]
which yields
\[
\big(b^2-\mathbb{E}(W^3)\big)\,\mathbb{E}(Z_Y^3)\le r^3\,\mathbb{E}(Z_W^3)+\sum_{i=0}^{3}\binom{3}{i}r^iT_i,
\]
where
\begin{align*}
T_0&=3(b-1)\,\mathbb{E}W^2\,\mathbb{E}Z_Y^2\,\mathbb{E}Z_Y+(b-1)(b-2)(\mathbb{E}Z_Y)^3,\\
T_1&=\mathbb{E}(W^2Z_W)\,\mathbb{E}Z_Y^2+2(b-1)\,\mathbb{E}(WZ_W)(\mathbb{E}Z_Y)^2+(b-1)\,\mathbb{E}Z_W\,\mathbb{E}W^2\,\mathbb{E}Z_Y^2+(b-1)(b-2)\,\mathbb{E}Z_W(\mathbb{E}Z_Y)^2,\\
T_2&=\mathbb{E}Z_Y\,\mathbb{E}(WZ_W^2)+2(b-1)\,\mathbb{E}(WZ_W)\,\mathbb{E}Z_Y\,\mathbb{E}Z_W+(b-1)\,\mathbb{E}Z_Y\,\mathbb{E}Z_W^2+(b-1)(b-2)\,\mathbb{E}Z_Y(\mathbb{E}Z_W)^2,\\
T_3&=3(b-1)\,\mathbb{E}Z_W\,\mathbb{E}Z_W^2+(b-1)(b-2)(\mathbb{E}Z_W)^3.
\end{align*}
As, for $X\in\{W,Y\}$, we have $\mathbb{E}Z_X\le(\mathbb{E}Z_X^2)^{1/2}=1$ and $\mathbb{E}X^2<b-1$, we get the simpler bound
\[
\big(b^2-\mathbb{E}W^3\big)\,\mathbb{E}Z_Y^3\le r^3\,\mathbb{E}Z_W^3+\sum_{i=0}^{3}\binom{3}{i}r^iT_i',
\]
where
\begin{align*}
T_0'&=(b-1)(4b-5),\\
T_1'&=\mathbb{E}(W^2Z_W)+2(b-1)\,\mathbb{E}(WZ_W)+(b-1)(2b-3),\\
T_2'&=\mathbb{E}(WZ_W^2)+2(b-1)\,\mathbb{E}(WZ_W)+(b-1)^2,\\
T_3'&=b^2-1.
\end{align*}
Since $\mathbb{E}W^3<b^2-1$, the Hölder inequality yields
\[
\mathbb{E}(W^2Z_W)\le(\mathbb{E}W^3)^{2/3}(\mathbb{E}Z_W^3)^{1/3}\le(b^2-1)^{2/3}(\mathbb{E}Z_W^3)^{1/3}
\]
and
\[
\mathbb{E}(WZ_W^2)\le(\mathbb{E}W^3)^{1/3}(\mathbb{E}Z_W^3)^{2/3}\le(b^2-1)^{1/3}(\mathbb{E}Z_W^3)^{2/3}.
\]
Furthermore, $\mathbb{E}(WZ_W)\le\big(\mathbb{E}W^2\,\mathbb{E}Z_W^2\big)^{1/2}\le\sqrt{b-1}$.

We know from (7) that $r<\sqrt{b-1}$. Therefore, there exists a constant $C>0$ independent of $\mu$ such that
\[
\big(b^2-\mathbb{E}W^3\big)\,\mathbb{E}Z_Y^3\le(b-1)^{3/2}\,\mathbb{E}Z_W^3+C\Big((\mathbb{E}Z_W^3)^{2/3}+(\mathbb{E}Z_W^3)^{1/3}+1\Big).
\]

Corollary 6. If $\mu\in\mathcal D_b$ then
\[
\sup_{n\ge1}\int\sigma_n^{-3}|x-1|^3\,T^n\mu(dx)<\infty.
\]

Proof. Since $b^2-\mathbb{E}W_n^3$ converges towards $b^2-1$, and $b^2-1>(b-1)^{3/2}$, the bound in the last proposition yields that $Z_n$ is bounded in $L^3$.

This accounts for the first two assertions of Theorem 1. Proving its last assertion requires a careful iteration of Formula (4). Recall that we set $Z_n=\dfrac{W_n-1}{\sigma_n}$. Equation (4) yields
\[
Z_{n+1}=\frac1b\sum_{k=0}^{b-1}\Big(\sigma_nZ_n(k)Z_{n+1}(k)+\frac{\sigma_n}{\sigma_{n+1}}Z_n(k)+Z_{n+1}(k)\Big). \tag{12}
\]

If we set
\[
R_n=\frac1b\sum_{j=0}^{b-1}Z_n(j)Z_{n-1}(j)\,\sigma_{n-1}+\frac1b\left(\frac{\sigma_{n-1}}{\sigma_n}-\sqrt{b-1}\right)\sum_{j=0}^{b-1}Z_{n-1}(j), \tag{13}
\]
then Equation (12) rewrites as
\[
Z_{n+1}=R_{n+1}+\frac{\sqrt{b-1}}{b}\sum_{k=0}^{b-1}Z_n(k)+\frac1b\sum_{k=0}^{b-1}Z_{n+1}(k). \tag{14}
\]
We are going to use Formula (14) repeatedly. Let $\epsilon$ stand for the empty word on any alphabet. Fix $n>1$, define $R_n(\epsilon,\epsilon)=R_n$ as well as $Z_n(\epsilon,\epsilon)=Z_n$, and write, using (14),
\[
Z_n=Z_n(\epsilon,\epsilon)=R_n(\epsilon,\epsilon)+\frac{\sqrt{b-1}}{b}\sum_{j\in A}Z_{n-1}(j,0)+\frac1b\sum_{j\in A}Z_n(j,1). \tag{15}
\]
Since we are interested in distributions only, we can take copies of these variables so that we can write

\[
Z_n(j,1)=R_n(j,1)+\frac{\sqrt{b-1}}{b}\sum_{k\in A}Z_{n-1}(jk,10)+\frac1b\sum_{k\in A}Z_n(jk,11),
\]
\[
Z_{n-1}(j,0)=R_{n-1}(j,0)+\frac{\sqrt{b-1}}{b}\sum_{k\in A}Z_{n-2}(jk,00)+\frac1b\sum_{k\in A}Z_{n-1}(jk,01).
\]
Notice that, by definition, in Formula (15) the random variables of the form $Z_{n-1}(j,w)$ and $Z_n(j,w)$, $(j,w)\in A\times\{0,1\}$, are mutually independent, and the same holds for the random variables $R_n(j,w)$ and $R_{n-1}(j,w)$, $(j,w)\in A\times\{0,1\}$, as well as for the random variables $Z_{n-2}(jk,w)$, $Z_{n-1}(jk,w)$ and $Z_n(jk,w)$, $(jk,w)\in A^2\times\{0,1\}^2$.

Then Formula (15) rewrites as
\[
Z_n(\epsilon,\epsilon)=R_n(\epsilon,\epsilon)+b^{-1}\sum_{j\in A}\Big(\sqrt{b-1}\,R_{n-1}(j,0)+R_n(j,1)\Big)+b^{-2}\sum_{w\in A^2}\Big((b-1)Z_{n-2}(w,00)+\sqrt{b-1}\big(Z_{n-1}(w,01)+Z_{n-1}(w,10)\big)+Z_n(w,11)\Big),
\]
and so on. At last we get $Z_n=T_{1,n}+T_{2,n}$, with
\[
T_{1,n}=\sum_{k=0}^{n-1}b^{-k}\sum_{\substack{m\in\{0,1\}^k\\ w\in A^k}}(b-1)^{(k-\varsigma(m))/2}\,R_{n-k+\varsigma(m)}(w,m), \tag{16}
\]
\[
T_{2,n}=b^{-n}\sum_{\substack{m\in\{0,1\}^n\\ w\in A^n}}(b-1)^{(n-\varsigma(m))/2}\,Z_{\varsigma(m)}(w,m), \tag{17}
\]
where $\varsigma(m)$ stands for the sum of the components of $m$. Moreover, all the variables in Equation (17) are independent, and in Equation (16), the variables corresponding to the same $k$ are independent.

For reader’s convenience, we also provide a more constructive approach to ob- tain the previous decomposition of Zn. At first, we notice that the meaning of Equation (14) is the following: given independent variablesZn(k) andZn+1(k) (for 0 ≤k < b) equidistributed with Zn andZn+1, if we define Rn by Equation (13), then the right hand side of Equation (14) is equidistributed withZn+1.

Letnbe fixed larger than 2. One starts with a collection Zl(w, m) 0≤l≤n, w∈An,{0,1}n

of independent random variables such that the Zl(·,·) have the same distribution as Zl.

One defines by recursion Rl(w, m) = 1

b

b−1

X

j=0

Zl−1(wj, m0)Zl(wj, m1)σl−1+1 b

σl−1 σl −√

b−1 b−1

X

j=0

Zl−1(wj, m0)

and

Zl(w, m) =Rl(w, m) +

√b−1 b

b−1

X

j=0

Zl−1(wj, m0) +1 b

b−1

X

j=0

Zl(wj, m1), for 0≤l≤n, (w, m)∈Aj× {0,1}j withj≥n−l.

Due to (14), all these new variables Zl(·.·) are equidistributed withZl, and we getZn(, ) =T1,n+T2,n.

Proposition 7. We have $\lim_{n\to\infty}\mathbb{E}T_{1,n}^2=0$, so $T_{1,n}$ converges in distribution to 0.

Proof. Set $r_n^2=\mathbb{E}R_n^2$. We have
\[
b\,r_n^2=\sigma_{n-1}^2+\left(\frac{\sigma_{n-1}}{\sigma_n}-\sqrt{b-1}\right)^{2},
\]
which together with Formulae (7) and (8) implies that there exists $C>0$ such that $r_n^2\le C(b-1)^{-n}$ for all $n\ge1$. By using the independence properties of the random variables in (16) as well as the triangle inequality, we obtain
\begin{align*}
\big(\mathbb{E}T_{1,n}^2\big)^{1/2}&\le\sum_{0\le k<n}b^{-k}\Bigg(\sum_{0\le j\le k}\binom{k}{j}b^k(b-1)^j\,r_{n-j}^2\Bigg)^{1/2}\\
&\le C\sum_{0\le k<n}b^{-k}\Bigg(\sum_{0\le j\le k}\binom{k}{j}b^k(b-1)^j(b-1)^{j-n}\Bigg)^{1/2}\\
&\le C\sum_{0\le k<n}b^{-k/2}\big((b-1)^2+1\big)^{k/2}(b-1)^{-n/2}\\
&=C(b-1)^{-n/2}\sum_{0\le k<n}\left(\frac{(b-1)^2+1}{b}\right)^{k/2}\\
&=O\!\left(\Big(1-\frac{b-2}{b(b-1)}\Big)^{n/2}\right).
\end{align*}

Proposition 8. If there exists $p>2$ such that
\[
\sup_{n\ge1}\int\left(\frac{|x-1|}{\sigma_n}\right)^{p}T^n\mu(dx)<\infty
\]
(i.e., $(|Z_n|)_{n\ge1}$ is bounded in $L^p$), then $T_{2,n}$ converges in distribution to $N(0,1)$.

Proof. If $Y$ is a positive random variable and $a$, $p$ and $\varepsilon$ are positive numbers with $p>2$, we have
\[
\mathbb{E}\big(a^2Y^2\,\mathbf 1_{\{aY>\varepsilon\}}\big)\le a^2(\mathbb{E}Y^p)^{2/p}\,\mathbb P(aY>\varepsilon)^{1-2/p}\le a^2(\mathbb{E}Y^p)^{2/p}\big(\varepsilon^{-p}a^p\,\mathbb{E}Y^p\big)^{1-2/p}=a^p\varepsilon^{2-p}\,\mathbb{E}Y^p.
\]
So, we have
\begin{align*}
\sum_{\substack{m\in\{0,1\}^n\\ w\in A^n}}b^{-2n}(b-1)^{n-\varsigma(m)}\,
\mathbb{E}\Big(Z_{\varsigma(m)}(w,m)^2\,\mathbf 1_{\{b^{-n}(b-1)^{(n-\varsigma(m))/2}|Z_{\varsigma(m)}(w,m)|>\varepsilon\}}\Big)
&\le\sum_{k=0}^{n}\binom{n}{k}b^{n-np}(b-1)^{p(n-k)/2}\varepsilon^{2-p}\,\mathbb{E}|Z_k|^p\\
&\le\varepsilon^{2-p}\left(\frac{(b-1)^{p/2}+1}{b^{p-1}}\right)^{n}\sup_{k\ge0}\mathbb{E}|Z_k|^p,
\end{align*}
and this last expression converges towards 0 as $n$ goes to $\infty$. But, as we have
\[
\mathbb{E}T_{2,n}^2=\sum_{\substack{m\in\{0,1\}^n\\ w\in A^n}}b^{-2n}(b-1)^{n-\varsigma(m)}=\sum_{k=0}^{n}\binom{n}{k}b^{-n}(b-1)^{n-k}=1,
\]
the Lindeberg theorem yields the conclusion.

3. Proof of Theorem 2

We begin with the following observation: for any real function $f$ on $[0,1]$, one has
\[
\omega(f,\delta)\le2(b-1)\sum_{j\ge-\log_b\delta}\ \sup_{w\in A^j}\big|\Delta(f,I_w)\big|, \tag{18}
\]
where $\omega(f,\delta)$ stands for the modulus of continuity of a function $f$ on $[0,1]$:
\[
\omega(f,\delta)=\sup_{\substack{t,s\in[0,1]\\ |t-s|\le\delta}}|f(t)-f(s)|.
\]


Proposition 9. Suppose that $\mu\in\mathcal P_b$. If $h_n$ is a random continuous function distributed according to $F(T^{n-1}\mu)$, set $\mathcal Z_n=\dfrac{h_n-\mathrm{Id}}{\sigma_n}$. The probability distributions of the random continuous functions $\mathcal Z_n$, $n\ge1$, form a tight sequence.

Proof. By Theorem 7.3 of [3], since $(h_n-\mathrm{Id})(0)=0$ almost surely for all $n\ge1$, it is enough to show that for each positive $\varepsilon$
\[
\lim_{\delta\to0}\limsup_{n\to\infty}\mathbb P\big(\omega(\mathcal Z_n,\delta)\ge2(b-1)\varepsilon\big)=0. \tag{19}
\]
We first establish the following lemma.

Lemma 10. Let $\gamma$ and $H$ be two positive numbers such that $2H+\gamma-1<0$. Also let $n_0\ge1$ be such that $\sup_{n\ge n_0-1}\mathbb{E}W_n^2\le b^{\gamma}$. For $j\ge1$, $n\ge n_0$ and $t>0$ we have
\[
\mathbb P\Big(\sup_{w\in A^j}\big|\Delta(\mathcal Z_n,I_w)\big|\ge t\,b^{-jH}\Big)\le(b-1)\,t^{-2}(j+1)^3\,b^{j(2H+\gamma-1)}.
\]

Proof. Let $j\ge1$, $w\in A^j$ and $n\ge n_0$. Formula (6) shows that the increment $\Delta_n(w)=\Delta(\mathcal Z_n,I_w)$ takes the form
\begin{align*}
\Delta_n(w)&=b^{-j}\sigma_n^{-1}\Bigg(W_n(w)\prod_{k=1}^{j}W_{n-1}(w|_k)-1\Bigg)\\
&=b^{-j}Z_n(w)\prod_{k=1}^{j}W_{n-1}(w|_k)+b^{-j}\sum_{l=1}^{j}\frac{\sigma_{n-1}}{\sigma_n}Z_{n-1}(w|_l)\prod_{k=1}^{l-1}W_{n-1}(w|_k). \tag{20}
\end{align*}
Consequently,
\[
\mathbb P\big(|\Delta_n(w)|\ge t\,b^{-jH}\big)\le\mathbb P\Bigg(b^{-j}\big|Z_n(w)\big|\prod_{k=1}^{j}W_{n-1}(w|_k)\ge\frac{t\,b^{-jH}}{j+1}\Bigg)+\sum_{l=1}^{j}\mathbb P\Bigg(b^{-j}\frac{\sigma_{n-1}}{\sigma_n}\big|Z_{n-1}(w|_l)\big|\prod_{k=1}^{l-1}W_{n-1}(w|_k)\ge\frac{t\,b^{-jH}}{j+1}\Bigg).
\]
By using the Markov inequality, the equality $\mathbb{E}Z_k^2=1$, and the fact that $\mathbb{E}W_{n-1}^2\ge1$, we obtain that each probability in the previous sum is less than
\[
(b-1)\,t^{-2}(j+1)^2\,b^{-2(1-H)j}\big(\mathbb{E}W_{n-1}^2\big)^{j},
\]
so that the sum of these probabilities is bounded by $(b-1)t^{-2}(j+1)^3b^{j(\gamma-2(1-H))}$. Consequently,
\[
\mathbb P\big(\exists\,w\in A^j,\ |\Delta_n(w)|\ge t\,b^{-jH}\big)\le(b-1)\,t^{-2}(j+1)^3\,b^{j(\gamma-2(1-H)+1)}=(b-1)\,t^{-2}(j+1)^3\,b^{j(2H+\gamma-1)}.
\]

Now, we can continue the proof of Proposition 9. Fix $H$, $\gamma$, and $n_0$ as in Lemma 10, set $j_\delta=-\log_b\delta$, and assume that $n\ge n_0$. Due to (18) and Lemma 10, we have
\[
\big\{\omega(\mathcal Z_n,\delta)\ge2(b-1)\varepsilon\big\}\subset\Bigg\{\sum_{j\ge j_\delta}\sup_{w\in A^j}\big|\Delta(\mathcal Z_n,I_w)\big|>\varepsilon\Bigg\}\subset\bigcup_{j\ge j_\delta}\Big\{\sup_{w\in A^j}\big|\Delta(\mathcal Z_n,I_w)\big|>(1-b^{-H})\,b^{j_\delta H}\varepsilon\,b^{-jH}\Big\},
\]
so
\[
\mathbb P\big(\omega(\mathcal Z_n,\delta)\ge2(b-1)\varepsilon\big)\le\frac{(b-1)\,b^{-2j_\delta H}}{(1-b^{-H})^2\varepsilon^2}\sum_{j\ge j_\delta}(j+1)^3b^{(2H+\gamma-1)j}.
\]
Consequently,
\[
\lim_{\delta\to0}\ \sup_{n\ge n_0}\mathbb P\big(\omega(\mathcal Z_n,\delta)>2(b-1)\varepsilon\big)=0.
\]

Proposition 11. Suppose that $\mu\in\mathcal D_b$. For every $n\ge1$ let $h_n$ be a random continuous function whose probability distribution is $F(T^n\mu)$. Fix $j\ge1$. The probability distribution of the vector
\[
\Big(\Delta\Big(\frac{h_n-\mathrm{Id}}{\sigma_n},I_w\Big)\Big)_{w\in A^j}
\]
converges, as $n$ goes to $\infty$, to that of a Gaussian vector whose covariance matrix $M_j$ is given by
\[
M_j(w,w')=\begin{cases}b^{-2j}\big(1+(b-1)|w|\big)&\text{if }w=w',\\ b^{-2j}(b-1)\,|w\wedge w'|&\text{otherwise.}\end{cases}
\]

Proof. We use the same notations as in the proof of Lemma 10. Let $j\ge1$ and $w\in A^j$. In the right hand side of (20), the random variables $Z_n(w)$ and $Z_{n-1}(w|_l)$, $1\le l\le j$, are independent and their probability distributions converge weakly to $N(0,1)$, while the common probability distribution of the $W_{n-1}(w|_l)$, $1\le l\le j$, converges to $\delta_1$, and $\sigma_{n-1}/\sigma_n$ converges to $\sqrt{b-1}$.

This implies that there exist two families $\big(N(v)\big)_{v\in\bigcup_{k=1}^{j}A^k}$ and $\big(\widetilde N(w)\big)_{w\in A^j}$ of $N(0,1)$ random variables, all the random variables involved in these families being independent, such that
\[
\lim_{n\to\infty}\big(\Delta_n(w)\big)_{w\in A^j}\overset{\mathrm{dist}}{=}\Bigg(b^{-j}\Big(\widetilde N(w)+\sqrt{b-1}\sum_{k=1}^{j}N(w|_k)\Big)\Bigg)_{w\in A^j}. \tag{21}
\]
The fact that the vector in the right hand side of (21) is Gaussian is an immediate consequence of the independence of the normal variables involved in its definition. The computation of the covariance matrix is left to the reader.

4. The limit process as the limit of an additive cascade

Recall that, if $v\in A^*$, $[v]$ stands for the cylinder in $A^\omega$ consisting of the sequences beginning with $v$. Let $A^+$ stand for the set of non-empty words on the alphabet $A$. We are going to show that there exists a finitely additive random measure $M$ on $A^\omega$ satisfying, almost surely, for all $w\in A^+$,
\[
M([w])=b^{-j}\Big(\zeta(w)+\sqrt{b-1}\sum_{k=1}^{j}\xi(w|_k)\Big),\quad\text{where }j=|w|, \tag{22}
\]
instead of (21), where the variables $\big(\xi(w)\big)_{w\in A^+}$ are independent with common distribution $N(0,1)$ and the variable $\zeta(w)$ is $N(0,1)$ and independent of $\big(\xi(w|_j)\big)_{1\le j\le|w|}$.

Indeed, if we set $S(w)=\sum_{k=1}^{j}\xi(w|_k)$, since $M([w])=\sum_{\ell\in A}M([w\ell])$ we should have
\begin{align*}
b\Big(\zeta(w)+\sqrt{b-1}\,S(w)\Big)&=\sum_{\ell\in A}\Bigg(\zeta(w\ell)+\sqrt{b-1}\sum_{k=1}^{j+1}\xi\big((w\ell)|_k\big)\Bigg)\\
&=\sum_{\ell\in A}\Big(\zeta(w\ell)+\sqrt{b-1}\,\xi(w\ell)\Big)+b\sqrt{b-1}\,S(w).
\end{align*}

Iterating this last formula gives
\[
\zeta(w)=b^{-n}\sum_{v\in A^n}\zeta(wv)+\sqrt{b-1}\sum_{j=1}^{n}b^{-j}\sum_{v\in A^j}\xi(wv).
\]
The first term of the right hand side converges to 0 with probability 1, since its $L^2$ norm is $b^{-n/2}$. The second term is a martingale bounded in $L^2$ norm. Therefore its limit, an $N(0,1)$ variable, is a.s. equal to $\zeta(w)$.

Finally, we get a finitely additive Gaussian random measure defined on the cylinders of $A^\omega$ by
\[
M([w])=b^{-|w|}\sqrt{b-1}\Bigg(\lim_{n\to\infty}\sum_{v\in\bigcup_{k=1}^{n}A^k}b^{-|v|}\xi(wv)+\sum_{1\le k\le|w|}\xi(w|_k)\Bigg). \tag{23}
\]
Then, the limit process of the previous sections can be seen as the primitive of the projection of $M$ on $[0,1]$.

Of course (23) makes sense even for $b=2$.
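Here is a minimal simulation sketch of the additive cascade (ours, not from the paper): it generates the $\xi(w)$ down to a finite depth, approximates each $\zeta(w)$ by the truncated series appearing in (23), and returns the increments $M([w])$, $|w|=j$, of (22); their cumulative sum is a discrete approximation of the process $X$. The truncation depth is an arbitrary accuracy parameter.

```python
import numpy as np

def additive_cascade_increments(b, j, depth, rng):
    """Approximate (M([w]))_{|w|=j} from (22):
    M([w]) = b^{-j} * ( zeta(w) + sqrt(b-1) * sum_{k<=j} xi(w|k) ),
    with zeta(w) replaced by the series in (23) truncated after `depth` further generations."""
    xi = [rng.standard_normal(b**l) for l in range(1, j + depth + 1)]   # xi(w) for |w| = l

    S = np.zeros(1)                      # S(w) = sum of xi along the prefixes of w, |w| = j
    for l in range(j):
        S = np.repeat(S, b) + xi[l]

    g = np.zeros(b**(j + depth))         # g(w) = sum_{1<=|v|<=depth} b^{-|v|} xi(wv), bottom-up
    for l in range(j + depth, j, -1):
        g = (xi[l - 1] + g).reshape(-1, b).sum(axis=1) / b
    zeta = np.sqrt(b - 1) * g

    return b**(-j) * (zeta + np.sqrt(b - 1) * S)

rng = np.random.default_rng(4)
incr = additive_cascade_increments(b=3, j=5, depth=7, rng=rng)
X = np.concatenate(([0.0], np.cumsum(incr)))   # X at the b-adic points of generation j
print(X[-1], incr.std())
```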

It is easy to compute covariances:
\[
\mathbb{E}\big(M([v])M([w])\big)=\begin{cases}b^{-2|v|}\big(1+(b-1)|v|\big)&\text{if }w=v,\\ (b-1)\,b^{-(|v|+|w|)}\,|v\wedge w|&\text{otherwise.}\end{cases}
\]
It is then straightforward to check that, with probability 1, for all $\varepsilon>0$ we have $\sup_{v\in A^n}|M([v])|=o\big(b^{-n(1-\varepsilon)}\big)$. This can be refined, in particular thanks to the multifractal analysis of the branching random walk $S(w)=\sum_{1\le j\le|w|}\xi(w|_j)$. In terms of the associated Gaussian process $(X_t)_{t\in[0,1]}$, it is natural to consider for all $\alpha\in\mathbb{R}$ the sets
\[
\overline E_\alpha=\Big\{t\in[0,1) : \limsup_{n\to\infty}\frac{\Delta(X,I_n(t))}{n\,b^{-n}}=\alpha\sqrt{b-1}\Big\},
\]
\[
\underline E_\alpha=\Big\{t\in[0,1) : \liminf_{n\to\infty}\frac{\Delta(X,I_n(t))}{n\,b^{-n}}=\alpha\sqrt{b-1}\Big\},
\]
and
\[
E_\alpha=\overline E_\alpha\cap\underline E_\alpha,
\]
where $I_n(t)$ stands for the $b$-adic interval of generation $n$, semi-open to the right, containing $t$.

In the next statement, $\dim E$ stands for the Hausdorff dimension of the set $E$.

Theorem 12. With probability 1,

(1) the modulus of continuity of $X$ is $O\big(\delta\log(1/\delta)\big)$,

(2) $X$ does not belong to the Zygmund class,

(3) the set $E_0$ contains a set of full Lebesgue measure at each point of which $X$ is not differentiable,

(4) $\dim\overline E_\alpha=\dim\underline E_\alpha=\dim E_\alpha=1-\dfrac{\alpha^2}{2\log b}$ if $|\alpha|\le\sqrt{2\log b}$, and $E_\alpha=\varnothing$ if $|\alpha|>\sqrt{2\log b}$. Furthermore, $E_{\sqrt{2\log b}}$ and $E_{-\sqrt{2\log b}}$ are nonempty.

Remark 2. We do not know whether there are points in $E_0$ at which $X$ is differentiable. Nor do we know whether the pointwise regularity of $X$ is 1 everywhere.

Proof. For $w\in A^*$, as above we set
\[
\zeta(w)=\sqrt{b-1}\,\lim_{n\to\infty}\sum_{v\in\bigcup_{k=1}^{n}A^k}b^{-|v|}\xi(wv).
\]
We have $\sum_{n\ge1}\mathbb P\big(\exists\,w\in A^n,\ |\zeta(w)|>2\sqrt{2\log b}\,\sqrt n\big)<\infty$, hence, with probability 1, $\sup_{w\in A^n}|\zeta(w)|=O(\sqrt n)$.

Also, $\sum_{n\ge1}\mathbb P\big(\exists\,w\in A^n,\ |S(w)|>2n\sqrt{2\log b}\big)<\infty$ and, with probability 1, $\sup_{w\in A^n}|S(w)|=O(n)$. This yields the property regarding the modulus of continuity thanks to (18). This proves the first assertion.

To see that $X$ is not in the Zygmund class, it is enough to find $t\in(0,1)$ such that
\[
\limsup_{h\to0,\,h\neq0}\frac{f(t+h)+f(t-h)-2f(t)}{h}=\infty.
\]
Take $t=b^{-1}$ and $h=b^{-n}$. Let $\underline t$ and $\overline t$ stand for the infinite words $0\,(b-1)(b-1)\cdots(b-1)\cdots$ and $1\,0\,0\cdots0\cdots$. We have
\[
\frac{f(t+h)+f(t-h)-2f(t)}{h\sqrt{b-1}}=\frac{\zeta(\overline t|_n)-\zeta(\underline t|_n)}{\sqrt{b-1}}+S(\overline t|_n)-S(\underline t|_n).
\]
Since $|\zeta(\overline t|_n)-\zeta(\underline t|_n)|=O(\sqrt n)$ and since the random walks $S(\overline t|_n)$ and $S(\underline t|_n)$ are independent, the law of the iterated logarithm yields the desired behavior as $h=b^{-n}$ goes to 0.

The fact that $E_0$ contains a set of full Lebesgue measure on which $X$ is nowhere differentiable is a consequence of the Fubini theorem combined with the property $|\zeta(\tilde t|_n)|=O(\sqrt n)$ and the law of the iterated logarithm, which almost surely holds for the random walk $\big(S(\tilde t|_n)\big)_{n\ge1}$ for each $t\in[0,1]$.

Since, with probability 1, we have $\sup_{w\in A^n}|\zeta(w)|=O(\sqrt n)$, we only have to take into account the term $S(w)$ in the asymptotic behavior of $\dfrac{\Delta(X,I_w)}{|w|\,b^{-|w|}}$ as $|w|$ goes to $\infty$. Thus, in the definition of the sets $\overline E_\alpha$, $\underline E_\alpha$ and $E_\alpha$, $\dfrac{\Delta(X,I_w)}{\sqrt{b-1}\,|w|\,b^{-|w|}}$ can be replaced by $\dfrac{S(w)}{|w|}$. Then the result is mainly a consequence of the work [2] on the multifractal analysis of Mandelbrot measures.

To get an upper bound for the Hausdorff dimensions, we set
\[
\beta(q)=\liminf_{n\to\infty}\ -\frac{1}{n\log b}\log\sum_{w\in A^n}\exp\big(qS(w)\big)
\]
for $q\in\mathbb{R}$. Standard large deviation estimates show that
\[
\dim F_\alpha\le\inf_{q\in\mathbb{R}}\Big(-\frac{\alpha q}{\log b}-\beta(q)\Big)
\]
for all $\alpha\in\mathbb{R}$ and $F\in\{\overline E,\underline E,E\}$ (the occurrence of a negative dimension meaning that the corresponding set is empty). Also, using the fact that $\beta(q)$ is the supremum of those numbers $t$ such that $\limsup_{n\to\infty}b^{nt}\sum_{w\in A^n}\exp(qS(w))<\infty$ yields
\[
\beta(q)\ge\lim_{n\to\infty}-\frac{1}{n\log b}\log\mathbb{E}\sum_{w\in A^n}\exp\big(qS(w)\big)=-1-\frac{q^2}{2\log b}.
\]
Since both sides of this inequality are concave functions, we actually have, with probability 1, $\beta(q)\ge-1-q^2/(2\log b)$ for all $q\in\mathbb{R}$. Consequently, the upper bound for the dimension used with $\alpha=-\beta'(q)\log b=q$ yields, with probability 1, $\dim F_q\le1-\dfrac{q^2}{2\log b}$ for all $q\in[-\sqrt{2\log b},\sqrt{2\log b}]$ and $F_q=\varnothing$ if $|q|>\sqrt{2\log b}$.
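Spelling out the optimization used in the last step: since $\beta(q)\ge-1-q^2/(2\log b)$, for every $q\in\mathbb{R}$,
\[
\dim F_\alpha\le-\frac{\alpha q}{\log b}-\beta(q)\le1-\frac{\alpha q}{\log b}+\frac{q^2}{2\log b},
\]
and minimizing the right-hand side over $q$ (the minimum is attained at $q=\alpha$) gives $\dim F_\alpha\le1-\dfrac{\alpha^2}{2\log b}$; this bound is negative, hence $F_\alpha=\varnothing$, when $|\alpha|>\sqrt{2\log b}$.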

For the lower bounds, we only have to consider the sets $E_\alpha$. If $q\in[-\sqrt{2\log b},\sqrt{2\log b}]$, let $\phi_q$ be the non-decreasing continuous function associated with the family $\big(W_q(w)=\exp(q\xi(w)-q^2/2)\big)_{w\in A^+}$ as $\phi$ was with $\big(W(w)\big)_{w\in A^+}$ in Section 1. We learn from [2] that, with probability 1, all the functions $\phi_q$, $q\in(-\sqrt{2\log b},\sqrt{2\log b})$, are simultaneously defined; their derivatives (in the sense of distributions) are positive measures denoted by $\mu_q$. Then, computations very similar to those used to perform the multifractal analysis of $\mu_1$ in [2] show that, with probability 1, for all $q\in(-\sqrt{2\log b},\sqrt{2\log b})$ the dimension of $\mu_q$ is $1-q^2/(2\log b)$ and $\mu_q(E_q)>0$.

For $q\in\{-\sqrt{2\log b},\sqrt{2\log b}\}$ it turns out (see [8, 2]) that the formula
\[
\mu_q(I_w)=\lim_{p\to\infty}-\sum_{v\in A^p}\Pi(wv)\log\Pi(wv),
\quad\text{where}\quad
\Pi(wv)=b^{-(|w|+p)}\prod_{k=1}^{|w|}W_q(w|_k)\prod_{j=1}^{p}W_q(w\,v|_j),
\]
defines almost surely a positive measure carried by $E_q$.

5. Other random Gaussian measures and processes

This time $b\ge2$, $\big(\xi(w)\big)_{w\in A^+}$ is a sequence of independent $N(0,1)$ variables, and $\big(\alpha(w)\big)_{w\in A^+}$ and $\big(\beta(w)\big)_{w\in A^+}$ are sequences of numbers subject to the conditions
\[
\alpha(w)=\sum_{\ell\in A}\alpha(w\ell),
\qquad
\sum_{v\in A^+}|\alpha(wv)|^p|\beta(wv)|^p<\infty\ \text{for some }p\in(1,2].
\]
Then, for all $w\in A^+$, the martingale $\sum_{v\in A^n}\alpha(wv)\beta(wv)\xi(wv)$ is bounded in $L^p$ norm (if $p<2$ this uses an inequality from [1]) and the formula
\[
M([w])=\lim_{n\to\infty}\sum_{v\in\bigcup_{k=1}^{n}A^k}\alpha(wv)\beta(wv)\xi(wv)+\alpha(w)\sum_{1\le j\le|w|}\beta(w|_j)\xi(w|_j) \tag{24}
\]
almost surely defines a random measure which generalizes the one considered in the previous sections. Here again, the primitive of the projection on $[0,1]$ of this measure defines a continuous process, which is Gaussian if $p=2$.

Remark 3. The hypotheses under which this last construction can be performed can be relaxed: if the random variables $\xi(w)$, $w\in A^+$, are independent, centered, and $\sum_{v\in A^+}|\alpha(wv)|^p|\beta(wv)|^p\,\mathbb{E}(|\xi(wv)|^p)<\infty$, Formula (24) still yields a random measure.

The fine study of the associated process, as well as some improvements of Theorem 12, will be achieved in further work.

References

[1] B. von Bahr, C.-G. Esseen, Inequalities for the rth absolute moment of a sum of random variables, 1 ≤ r ≤ 2, Ann. Math. Statist. 36 (1965), 299–303.

[2] J. Barral, Continuity of the multifractal spectrum of a statistically self-similar measure, J. Theoret. Probab. 13 (2000), 1027–1060.

[3] P. Billingsley, Convergence of Probability Measures, Wiley Series in Probability and Statistics, second edition, 1999.

[4] R. Durrett and T. Liggett, Fixed points of the smoothing transformation, Z. Wahrsch. Verw. Gebiete 64 (1983), 275–301.

[5] Y. Guivarc'h, Sur une extension de la notion de loi semi-stable, Ann. Inst. H. Poincaré Probab. Statist. 26 (1990), 261–285.

[6] J.-P. Kahane, Sur le modèle de turbulence de Benoît Mandelbrot, C. R. Acad. Sci. Paris 278 (1974), 621–623.

[7] J.-P. Kahane, J. Peyrière, Sur certaines martingales de Benoît Mandelbrot, Adv. Math. 22 (1976), 131–145.

[8] Q. Liu, On generalized multiplicative cascades, Stoch. Process. Appl. 86 (2000), 263–286.

[9] B. B. Mandelbrot, Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire, C. R. Acad. Sci. Paris 278 (1974), 289–292 and 355–358.

[10] B. B. Mandelbrot, Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier, J. Fluid Mech. 62 (1974), 331–358.

[11] J. Peyrière, Turbulence et dimension de Hausdorff, C. R. Acad. Sci. Paris 278 (1974), 567–569.
