
www.imstat.org/aihp
2013, Vol. 49, No. 3, 873–884
DOI: 10.1214/11-AIHP452

© Association des Publications de l’Institut Henri Poincaré, 2013

Persistence of iterated partial sums^1

Amir Dembo^a, Jian Ding^b,2 and Fuchang Gao^c,3

^a Department of Mathematics, Stanford University, Stanford, CA 94305, USA. E-mail: adembo@stat.stanford.edu
^b Department of Statistics, University of Chicago, 5734 S. University Avenue, Chicago, IL 60637, USA. E-mail: jianding@galton.uchicago.edu
^c Department of Mathematics, University of Idaho, 875 Perimeter Drive MS 1103, Moscow, ID 83844, USA. E-mail: fuchang@uidaho.edu

Received 30 January 2011; revised 27 July 2011; accepted 29 July 2011

Abstract. Let $S_n^{(2)}$ denote the iterated partial sums, that is, $S_n^{(2)} = S_1 + S_2 + \cdots + S_n$, where $S_i = X_1 + X_2 + \cdots + X_i$. Assuming $X_1, X_2, \ldots, X_n$ are integrable, zero-mean, i.i.d. random variables, we show that the persistence probabilities satisfy
$$p_n^{(2)} := \mathbb{P}\Big(\max_{1 \le i \le n} S_i^{(2)} < 0\Big) \le c\,\sqrt{\frac{\mathbb{E}|S_{n+1}|}{(n+1)\,\mathbb{E}|X_1|}},$$
with $c \le 6\sqrt{30}$ (and $c = 2$ whenever $X_1$ is symmetric). The converse inequality holds whenever the non-zero $\min(X_1, 0)$ is bounded, or when it has only finite third moment and in addition $X_1$ is square integrable. Furthermore, $p_n^{(2)} \asymp n^{-1/4}$ for any non-degenerate, square integrable, i.i.d., zero-mean $X_i$. In contrast, we show that for any $0 < \gamma < 1/4$ there exist integrable, zero-mean random variables for which the rate of decay of $p_n^{(2)}$ is $n^{-\gamma}$.

Résumé. Let $S_n^{(2)}$ denote the iterated partial sum, that is, $S_n^{(2)} = S_1 + S_2 + \cdots + S_n$, where $S_i = X_1 + X_2 + \cdots + X_i$. For integrable, zero-mean i.i.d. random variables $X_1, X_2, \ldots, X_n$, we show that the persistence probabilities satisfy
$$p_n^{(2)} := \mathbb{P}\Big(\max_{1 \le i \le n} S_i^{(2)} < 0\Big) \le c\,\sqrt{\frac{\mathbb{E}|S_{n+1}|}{(n+1)\,\mathbb{E}|X_1|}},$$
with $c \le 6\sqrt{30}$ (and $c = 2$ as soon as $X_1$ is symmetric). Moreover, the converse inequality holds when $\mathbb{P}(-X_1 > t) \le e^{-\alpha t}$ for some $\alpha > 0$, or when $\mathbb{P}(X_1 > t)^{1/t} \to 0$ as $t \to \infty$. For these variables we thus have $p_n^{(2)} \asymp n^{-1/4}$ if $X_1$ has a finite second moment. In contrast, we show that for every $0 < \gamma < 1/4$ there exist integrable, zero-mean variables for which $p_n^{(2)}$ decays as $n^{-\gamma}$.

MSC: 60G50; 60F10

Keywords: First passage time; Iterated partial sums; Persistence; Lower tail probability; One-sided probability; Random walk

1 The research was supported in part by NSF grants DMS-0806211 and DMS-1106627.

2 This research was partially done when Jian Ding was at the MSRI program in 2012.

3 This research was partially done when Fuchang Gao visited Stanford University in March 2010.


1. Introduction

The estimation of probabilities of rare events is one of the central themes of research in the theory of probability. Of particular note are *persistence* probabilities, formulated as
$$q_n = \mathbb{P}\Big(\max_{1 \le k \le n} Y_k < y\Big), \qquad (1.1)$$

where $\{Y_k\}_{k=1}^n$ is a sequence of zero-mean random variables. For independent $Y_i$ the persistence probability is easily determined to be the product of $\mathbb{P}(Y_k < y)$, and to a large extent this extends to the case of sufficiently weakly dependent and similarly distributed $Y_i$, where typically $q_n$ decays exponentially in $n$. In contrast, in the classical case of partial sums, namely $Y_k = S_k = \sum_{i=1}^k X_i$ with $\{X_j\}$ i.i.d. zero-mean random variables, it is well known that $q_n = O(n^{-1/2})$ decays as a power law. This seems to be one of the very few cases in which a power law decay for $q_n$ can be proved and its exponent is explicitly known. Indeed, within the large class of similar problems where dependence between the $Y_i$ is strong enough to rule out exponential decay, the behavior of $q_n$ is very sensitive to the precise structure of dependence between the variables $Y_i$, and even merely determining its asymptotic rate can be very challenging (for example, see [4] for recent results in case $Y_k = \sum_{i=1}^n X_i (1 - c_{k,n})^i$ are the values of random Kac polynomials evaluated at certain non-random $\{c_{k,n}\}$).

We focus here on iterated sums of i.i.d. zero-mean random variables $\{X_i\}$. That is, with $S_n = \sum_{k=1}^n X_k$ and
$$S_n^{(2)} = \sum_{k=1}^n S_k = \sum_{i=1}^n (n - i + 1)\, X_i, \qquad (1.2)$$

we are interested in the asymptotics as $n \to \infty$ of the persistence probabilities
$$p_n^{(2)}(y) := \mathbb{P}\Big(\max_{1 \le k \le n} S_k^{(2)} < y\Big), \qquad \overline{p}_n^{(2)}(y) := \mathbb{P}\Big(\max_{1 \le k \le n} S_k^{(2)} \le y\Big), \qquad (1.3)$$
where $y \ge 0$ is independent of $n$. With $y \ll n^{3/2}$ it immediately follows from Lindeberg's CLT (when the $X_i$ are square integrable) that $p_n^{(2)}(y) \to 0$ as $n \to \infty$, and our goal is thus to find a sharp rate for this decay to zero.

Note that for any fixed $y > 0$ we have that $\overline{p}_n^{(2)}(y) \asymp p_n^{(2)}(y) \asymp p_n^{(2)}(0)$ up to a constant depending only on $y$; here and throughout the paper, $A \asymp B$ means that there exist two positive constants $C_1$ and $C_2$ such that $C_1 A \le B \le C_2 A$. Indeed, because $\mathbb{E} X_1^- > 0$, clearly $\mathbb{P}(X_1 < -\varepsilon) > 0$ for $\varepsilon = y/k$ and some integer $k \ge 1$. Now, for any $n \ge 1$ and $z \ge 0$,
$$\overline{p}_n^{(2)}(z) \ge p_n^{(2)}(z) \ge \mathbb{P}(X_1 < -\varepsilon)\, p_{n-1}^{(2)}(z + \varepsilon) \ge \mathbb{P}(X_1 < -\varepsilon)\, p_n^{(2)}(z + \varepsilon),$$
and applying this inequality for $z = i\varepsilon$, $i = 0, 1, \ldots, k-1$, we conclude that
$$p_n^{(2)}(0) \ge \mathbb{P}(X_1 < -\varepsilon)^k\, p_n^{(2)}(y). \qquad (1.4)$$
Of course, we also have the complementary trivial relations $p_n^{(2)}(0) \le \overline{p}_n^{(2)}(0) \le p_n^{(2)}(y) \le \overline{p}_n^{(2)}(y)$, so it suffices to consider only $p_n^{(2)}(0)$ and $\overline{p}_n^{(2)}(0)$, which we denote hereafter by $p_n^{(2)}$ and $\overline{p}_n^{(2)}$, respectively. Obviously, $p_n^{(2)}$ and $\overline{p}_n^{(2)}$ have the same order (with $p_n^{(2)} = \overline{p}_n^{(2)}$ whenever $X_1$ has a density), and we consider both only in order to draw the reader's attention to potential identities connecting the two sequences $\{p_n^{(2)}\}$ and $\{\overline{p}_n^{(2)}\}$.

Persistence probabilities such as $p_n^{(2)}$ appear in many applications. For example, the precise problem we consider here arises in the study of so-called sticky particle systems (cf. [12] and the references therein). In the case of standard normal $X_i$ it is also related to entropic repulsion for $\nabla^2$-Gaussian fields (cf. [3] and the references therein), though here we consider the easiest version, namely a one-dimensional $\nabla^2$-Gaussian field. In his 1992 seminal paper, Sinai [11] proved that if $\mathbb{P}(X_1 = 1) = \mathbb{P}(X_1 = -1) = 1/2$, then $p_n^{(2)} \asymp n^{-1/4}$. However, his method relies on the fact that for Bernoulli $\{X_k\}$ all local minima of $k \mapsto S_k^{(2)}$ correspond to values of $k$ where $S_k = 0$, and as such form a sequence of regeneration times. For this reason, Sinai's method cannot be readily extended to most other distributions. Using a different approach, more recently Vysotsky [13] managed to extend Sinai's result that $p_n^{(2)} \asymp n^{-1/4}$ to $X_i$ which are double-sided exponential, and a few other special types of random walks. At about the same time, Aurzada and Dereich [2] used strong approximation techniques to prove the bounds $n^{-1/4} (\log n)^{-4} \lesssim p_n^{(2)} \lesssim n^{-1/4} (\log n)^{4}$ for zero-mean random variables $\{X_i\}$ such that $\mathbb{E}[e^{\beta |X_1|}] < \infty$ for some $\beta > 0$. However, even for $X_i$ which are standard normal variables it was not known before the present results whether these logarithmic terms are needed, and if not, how to get rid of them. Our main result, stated below, fully resolves this question, requiring only that $\mathbb{E} X_1^2$ is finite and positive.
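Before stating the main result, the $n^{-1/4}$ rate is easy to see empirically. The following Monte Carlo sketch (illustrative only; the sample size, horizons, and seed are arbitrary choices, not from the paper) estimates $p_n^{(2)}$ for standard normal steps, sharing paths across horizons so that the estimates are exactly non-increasing in $n$:

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps = 5000, 1024

# S_k = X_1 + ... + X_k, then S_k^(2) = S_1 + ... + S_k, for N(0,1) steps.
X = rng.standard_normal((n_paths, n_steps))
S2 = np.cumsum(np.cumsum(X, axis=1), axis=1)
running_max = np.maximum.accumulate(S2, axis=1)

# p_n^(2) = P(max_{1<=k<=n} S_k^(2) < 0); if p_n^(2) ~ n^{-1/4}, the last
# column (n^{1/4} times the estimate) should stay roughly flat.
for n in (16, 64, 256, 1024):
    p = float(np.mean(running_max[:, n - 1] < 0.0))
    print(n, round(p, 4), round(p * n ** 0.25, 4))
```

This proves nothing, of course, but a clearly drifting last column is a quick diagnostic that a step distribution falls outside the square integrable regime of (1.8).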

Theorem 1.1. For i.i.d. $\{X_k\}$ of zero mean and $0 < \mathbb{E}|X_1| < \infty$, let $S_n^{(2)} = S_1 + S_2 + \cdots + S_n$, where $S_i = X_1 + X_2 + \cdots + X_i$. Then,
$$\sum_{k=0}^{n} p_k^{(2)}\, \overline{p}_{n-k}^{(2)} \le c_1^2\, \frac{\mathbb{E}|S_{n+1}|}{\mathbb{E}|X_1|}, \qquad (1.5)$$
where $c_1 \le 6\sqrt{30}$, and $c_1 = 2$ if $X_1$ is symmetric. The converse inequality
$$\sum_{k=0}^{n} p_k^{(2)}\, \overline{p}_{n-k}^{(2)} \ge \frac{1}{c_2}\, \frac{\mathbb{E}|S_{n+1}|}{\mathbb{E}|X_1|} \qquad (1.6)$$
holds for some finite $c_2$ whenever $X_1^-$ is bounded, or with $X_1^-$ only having finite third moment and $X_1$ square integrable. Taken together, these bounds imply that
$$\frac{1}{4 c_1 c_2}\,\sqrt{\frac{\mathbb{E}|S_{n+1}|}{(n+1)\,\mathbb{E}|X_1|}} \le \overline{p}_n^{(2)}, \qquad p_n^{(2)} \le c_1\,\sqrt{\frac{\mathbb{E}|S_{n+1}|}{(n+1)\,\mathbb{E}|X_1|}}. \qquad (1.7)$$
Furthermore, assuming only that $\mathbb{E} X_1 = 0$ and $0 < \mathbb{E}(X_1^2) < \infty$, we have that
$$p_n^{(2)} \asymp n^{-1/4}. \qquad (1.8)$$

Remark 1.2. In contrast to (1.8), for any $0 < \gamma < 1/4$ there exists an integrable, zero-mean variable $X_1$ for which $p_n^{(2)} \asymp n^{-\gamma}$. Indeed, considering $\mathbb{P}(Y_1 > y) = y^{-\alpha} 1_{\{y \ge 1\}}$ with $1 < \alpha < 2$, the bounds (1.7) hold for the bounded below, zero-mean, integrable random variable $X_1 = Y_1 - \mathbb{E}Y_1$. Setting $a_n = n^{1/\alpha}$, clearly $n\, \mathbb{P}(|X_1| > a_n x) \to x^{-\alpha}$ as $n \to \infty$, hence $a_n^{-1} S_n - b_n$ converges in distribution to a zero-mean, one-sided stable-$\alpha$ variable $Z_\alpha$, and it is further easy to check that $b_n = a_n^{-1} n\, \mathbb{E}[X_1 1_{\{|X_1| \le a_n\}}] \to b = -\mathbb{E}Y_1$. In fact, it is not hard to verify that $\{a_n^{-1} S_n\}$ is a uniformly integrable sequence, and consequently $n^{-1/\alpha}\, \mathbb{E}|S_n| \to \mathbb{E}|Z_\alpha - \mathbb{E}Y_1|$, finite and positive. From Theorem 1.1 we then deduce that $p_n^{(2)} \asymp n^{-\gamma}$ for $\gamma = (1 - 1/\alpha)/2$. This rate matches the corresponding one for the integrated Lévy $\alpha$-stable process, cf. [10].
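The construction in Remark 1.2 can be probed numerically. The sketch below (an illustration with arbitrary sample sizes and seed, not part of the paper) samples $Y_1$ with $\mathbb{P}(Y_1 > y) = y^{-\alpha}$ for $\alpha = 3/2$, so that $\gamma = (1 - 1/\alpha)/2 = 1/6$, and estimates $p_n^{(2)}$ for $X_1 = Y_1 - \mathbb{E}Y_1$ on shared paths:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                  # 1 < alpha < 2, so gamma = (1 - 1/alpha)/2 = 1/6
n_paths, n_steps = 5000, 1024

# Inverse-CDF sampling: P(Y > y) = y^(-alpha) for y >= 1 gives Y = U^(-1/alpha)
# for U uniform on (0, 1]; E[Y] = alpha/(alpha - 1), so X = Y - E[Y] is
# zero-mean and integrable (but has infinite variance).
U = 1.0 - rng.random((n_paths, n_steps))
Y = U ** (-1.0 / alpha)
X = Y - alpha / (alpha - 1.0)

S2 = np.cumsum(np.cumsum(X, axis=1), axis=1)     # iterated partial sums S_k^(2)
running_max = np.maximum.accumulate(S2, axis=1)

# If p_n^(2) ~ n^{-1/6}, the rescaled column n^{1/6} * p_n^(2) stays roughly
# flat, in contrast to the n^{-1/4} rate of the square integrable case.
for n in (16, 64, 256, 1024):
    p = float(np.mean(running_max[:, n - 1] < 0.0))
    print(n, round(p, 4), round(p * n ** (1.0 / 6.0), 4))
```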

The sequences $\{S_k\}$ and $\{S_k^{(2)}\}$ are special cases of the class of auto-regressive processes $Y_k = \sum_{\ell=1}^{L} a_\ell Y_{k-\ell} + X_k$ with zero initial conditions, i.e. $Y_k \equiv 0$ when $k \le 0$ (where $S_k$ corresponds to $L = a_1 = 1$ and $S_k^{(2)}$ corresponds to $L = a_1 = 2$, $a_2 = -1$). While for such stochastic processes $(Y_k, \ldots, Y_{k-L+1})$ is a time-homogeneous Markov chain with state space $\mathbb{R}^L$ and $q_n = \mathbb{P}(\tau > n)$ is merely the upper tail of the first hitting time $\tau$ of $[y, \infty)$ by the first coordinate of the chain, the general theory of Markov chains does not provide the precise decay of $q_n$. Even in case $L = 1$ this decay ranges from exponential in $n$ for $a_1 > 0$ small enough (which can be proved by comparing with the O-U process, cf. [1]), via the $O(n^{-1/2})$ decay for $a_1 = 1$, to $q_n$ constant in $n$ in the limit $a_1 \uparrow \infty$. While we do not pursue this here, we believe that the approach we develop for proving Theorem 1.1 can potentially determine the asymptotic behavior of $q_n$ for a large collection of auto-regressive processes. This is of much interest since, for example, as shown in [7], the asymptotic tail probability that random Kac polynomials have no (or few) real roots is determined in terms of the limit as $r \to \infty$ of the power-law tail decay exponents for the iterates $S_k^{(r)} = \sum_{i=1}^k S_i^{(r-1)}$, $r \ge 3$.
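To illustrate how sensitive $q_n$ is to the coefficient in the simplest case $L = 1$, here is a small simulation sketch (the choices $a_1 \in \{1/2, 1\}$, $n = 200$, Gaussian innovations, and the sample size are arbitrary illustrations): for $a_1 = 1$ the process is a random walk and, by Proposition 1.3 below, $q_n = (2n-1)!!/(2n)!! \approx 1/\sqrt{\pi n}$, while for $a_1 = 1/2$ the contraction makes $q_n$ decay exponentially, so it is already negligible at the same $n$:

```python
import numpy as np

rng = np.random.default_rng(7)

def q_n(a1, n=200, n_paths=20000):
    # Estimate q_n = P(max_{1<=k<=n} Y_k < 0) for Y_k = a1 * Y_{k-1} + X_k,
    # Y_0 = 0, with i.i.d. standard normal innovations X_k.
    Y = np.zeros(n_paths)
    below = np.ones(n_paths, dtype=bool)
    for _ in range(n):
        Y = a1 * Y + rng.standard_normal(n_paths)
        below &= Y < 0.0
    return float(below.mean())

q_walk, q_contracting = q_n(1.0), q_n(0.5)
# For a1 = 1 the exact value is (2n-1)!!/(2n)!!, about 0.040 at n = 200.
print(q_walk, q_contracting)
```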


Our approach further suggests that there might be some identities connecting the sequences $\{p_n^{(2)}\}$ and $\{\overline{p}_n^{(2)}\}$. Note that, if we denote
$$p_n^{(1)} = \mathbb{P}\Big(\max_{1 \le k \le n} S_k < 0\Big), \qquad \overline{p}_n^{(1)} = \mathbb{P}\Big(\max_{1 \le k \le n} S_k \le 0\Big),$$
then, as we show in the proof of the following proposition, there are indeed identities connecting the sequences $\{p_n^{(1)}\}$ and $\{\overline{p}_n^{(1)}\}$.

Proposition 1.3. If the $X_i$ are mean-zero i.i.d. symmetric random variables, then for all $n \ge 1$,
$$p_n^{(1)} \le \frac{(2n-1)!!}{(2n)!!} \le \overline{p}_n^{(1)}. \qquad (1.9)$$
In particular, if $X_1$ also has a density, then
$$p_n^{(1)} = \frac{(2n-1)!!}{(2n)!!}. \qquad (1.10)$$

Remark 1.4. Proposition 1.3 is not new and can be found in [6], Section XII.8. In fact, it is shown there that for all zero-mean random variables with bounded second moment (not necessarily symmetric),
$$p_n^{(1)} \asymp n^{-1/2}. \qquad (1.11)$$
The novel point is our elegant proof, which serves as the starting point of our approach to the study of $p_n^{(2)}$.

Remark 1.5. Let $B(s)$ denote a Brownian motion starting at $B(0) = 0$ and consider the integrated Brownian motion $Y(t) = \int_0^t B(s)\,\mathrm{d}s$. Sinai [11] proved the existence of positive constants $A_1$ and $A_2$ such that for any $T \ge 1$,
$$A_1 T^{-1/4} \le \mathbb{P}\Big(\sup_{t \in [0,T]} Y(t) \le 1\Big) \le A_2 T^{-1/4}. \qquad (1.12)$$
Upon setting $\varepsilon = T^{-3/2}$ and $t = uT$, by Brownian motion scaling this is equivalent, up to a constant, to the following result that can be derived from an implicit formula of McKean [8] (cf. [5]):
$$\lim_{\varepsilon \to 0^+} \varepsilon^{-1/6}\, \mathbb{P}\Big(\sup_{u \in [0,1]} Y(u) \le \varepsilon\Big) = \frac{3\Gamma(5/4)}{4\pi}\, 2\sqrt{2\pi}.$$
Since the iterated partial sums $S_n^{(2)}$ corresponding to i.i.d. standard normal random variables $\{X_i\}$ form a "discretization" of $Y(t)$, the right-most inequality in (1.12) readily follows from Theorem 1.1. Indeed, with $\mathbb{E}[Y(k) Y(m)] = k^2 (3m - k)/6$ and $\mathbb{E}[S_k^{(2)} S_m^{(2)}] = k(k+1)(3m - k + 1)/6$ for $m \ge k$, setting $Z(k) = \sqrt{(1 + 1/k)(1 + 1/(2k))}\, Y(k)$ results in $\mathbb{E}[(S_k^{(2)})^2] = \mathbb{E}[Z(k)^2]$, and it is further not hard to show that $f(m, k) := \mathbb{E}[S_m^{(2)} S_k^{(2)}]/\mathbb{E}[Z(m) Z(k)] \ge 1$ for all $m \ne k$ (as $f(k+1, k) \ge 1$ and $\mathrm{d}f(m, k)/\mathrm{d}m > 0$ for any $m \ge k+1$). Thus, by Slepian's lemma, we have that for any $y$,
$$\mathbb{P}\Big(\max_{1 \le k \le n} Z(k) < y\Big) \le p_n^{(2)}(y),$$
and setting $n$ as the integer part of $T \ge 1$ it follows that
$$\mathbb{P}\Big(\sup_{t \in [0,T]} Y(t) \le 1\Big) \le \mathbb{P}\Big(\max_{1 \le k \le n} Y(k) \le 1\Big) \le \mathbb{P}\Big(\max_{1 \le k \le n} Z(k) < 2\Big) \le p_n^{(2)}(2).$$
Since $p_n^{(2)}(2) \le c\, p_n^{(2)}$ for some finite constant $c$ and all $n$, we conclude from Theorem 1.1 that
$$\mathbb{P}\Big(\sup_{t \in [0,T]} Y(t) \le 1\Big) \le 2c\,(n+1)^{-1/4} \le 2c\, T^{-1/4}.$$


2. Proof of Proposition 1.3

Setting $S_0 = 0$, let $M_n = \max_{0 \le j \le n} S_j$ and consider the $\{0, 1, 2, \ldots, n\}$-valued random variable
$$N = \min\{l \ge 0 : S_l = M_n\}.$$
For each $k = 1, 2, \ldots, n-1$ we have that
$$\{N = k\} = \{X_k > 0,\ X_k + X_{k-1} > 0,\ \ldots,\ X_k + X_{k-1} + \cdots + X_1 > 0;\ X_{k+1} \le 0,\ X_{k+1} + X_{k+2} \le 0,\ \ldots,\ X_{k+1} + X_{k+2} + \cdots + X_n \le 0\}.$$
By the independence of $\{X_i\}$, the latter identity implies that
$$\mathbb{P}(N = k) = \mathbb{P}(X_k > 0,\ X_k + X_{k-1} > 0,\ \ldots,\ X_k + X_{k-1} + \cdots + X_1 > 0) \times \mathbb{P}(X_{k+1} \le 0,\ X_{k+1} + X_{k+2} \le 0,\ \ldots,\ X_{k+1} + X_{k+2} + \cdots + X_n \le 0) = p_k^{(1)}\, \overline{p}_{n-k}^{(1)},$$
where the last equality follows from our assumptions that the $X_i$ are i.i.d. symmetric random variables. Also note that $\mathbb{P}(N = 0) = \overline{p}_n^{(1)}$ and
$$\mathbb{P}(N = n) = \mathbb{P}(X_n > 0,\ X_n + X_{n-1} > 0,\ \ldots,\ X_n + X_{n-1} + \cdots + X_1 > 0) = p_n^{(1)}.$$
Thus, setting $p_0^{(1)} = \overline{p}_0^{(1)} = 1$, we arrive at the identity
$$\sum_{k=0}^n p_k^{(1)}\, \overline{p}_{n-k}^{(1)} = \sum_{k=0}^n \mathbb{P}(N = k) = 1, \qquad (2.1)$$
holding for all $n \ge 0$.

Fixing $x \in [0,1)$, upon multiplying (2.1) by $x^n$ and summing over $n \ge 0$, we arrive at $P(x)\,\overline{P}(x) = \frac{1}{1-x}$, where $P(x) = \sum_{k=0}^\infty p_k^{(1)} x^k$ and $\overline{P}(x) = \sum_{k=0}^\infty \overline{p}_k^{(1)} x^k$. Now, if $X_1$ also has a density then $p_k^{(1)} = \overline{p}_k^{(1)}$ for all $k$, and so by the preceding $P(x) = \overline{P}(x) = (1-x)^{-1/2}$. Consequently, $p_n^{(1)}$ is merely the coefficient of $x^n$ in the Taylor expansion at $x = 0$ of the function $(1-x)^{-1/2}$, from which we immediately deduce the identity (1.10).

If $X_1$ does not have a density, let $\{Y_i\}$ be i.i.d. standard normal random variables, independent of the sequence $\{X_i\}$, and denote by $S_k$ and $\tilde{S}_k$ the partial sums of $\{X_i\}$ and $\{Y_i\}$, respectively. Note that for any $\varepsilon > 0$, each of the i.i.d. variables $X_i + \varepsilon Y_i$ is symmetric and has a density, with the corresponding partial sums being $S_k + \varepsilon \tilde{S}_k$. Hence, for any $\delta > 0$ we have that
$$\mathbb{P}\Big(\max_{1 \le k \le n} S_k < -\delta\Big) \le \mathbb{P}\Big(\max_{1 \le k \le n} (S_k + \varepsilon \tilde{S}_k) \le 0\Big) + \mathbb{P}\Big(\max_{1 \le k \le n} \varepsilon \tilde{S}_k \ge \delta\Big) = \frac{(2n-1)!!}{(2n)!!} + \mathbb{P}\Big(\max_{1 \le k \le n} \varepsilon \tilde{S}_k \ge \delta\Big).$$
Taking first $\varepsilon \downarrow 0$ followed by $\delta \downarrow 0$, we conclude that
$$\mathbb{P}\Big(\max_{1 \le k \le n} S_k < 0\Big) \le \frac{(2n-1)!!}{(2n)!!},$$
and a similar argument works for the remaining inequality in (1.9).


3. Proof of Theorem 1.1

By otherwise considering $X_i/\mathbb{E}|X_i|$, we assume without loss of generality that $\mathbb{E}|X_1| = 1$. To adapt the method of Section 2 for dealing with the iterated partial sums $S_n^{(2)}$, we introduce the parameter $t \in \mathbb{R}$ and consider the iterates $S_j^{(2)}(t) = S_0(t) + \cdots + S_j(t)$, $j \ge 0$, of the translated partial sums $S_k(t) = t + S_k$, $k \ge 0$. That is, $S_j^{(2)}(t) = (j+1)t + S_j^{(2)}$ for each $j \ge 0$.

Having fixed the value of $t$, we define the following $\{0, 1, 2, \ldots, n\}$-valued random variable:
$$K_t = \min\Big\{l \ge 0 : S_l^{(2)}(t) = \max_{0 \le j \le n} S_j^{(2)}(t)\Big\}.$$

Then, for each $k = 2, 3, \ldots, n-2$, we have the identity
$$\{K_t = k\} = \big\{S_k(t) > 0,\ S_k(t) + S_{k-1}(t) > 0,\ \ldots,\ S_k(t) + S_{k-1}(t) + \cdots + S_1(t) > 0;\ S_{k+1}(t) \le 0,\ S_{k+1}(t) + S_{k+2}(t) \le 0,\ \ldots,\ S_{k+1}(t) + S_{k+2}(t) + \cdots + S_n(t) \le 0\big\}$$
$$= \big\{S_k(t) > 0;\ X_k < 2 S_k(t),\ \ldots,\ (k-1) X_k + \cdots + X_2 < k\, S_k(t)\big\} \cap \big\{S_{k+1}(t) \le 0\big\} \cap \big\{X_{k+2} \le -2 S_{k+1}(t),\ \ldots,\ (n-k-1) X_{k+2} + \cdots + X_n \le -(n-k)\, S_{k+1}(t)\big\}.$$

Next, for $2 \le k \le n$ we define $Y_{k,2} \in \sigma(X_2, \ldots, X_k)$ and $\overline{Y}_{k,n} \in \sigma(X_k, \ldots, X_n)$ by
$$Y_{k,2} = \max\bigg\{\frac{X_k}{2},\ \frac{2 X_k + X_{k-1}}{3},\ \ldots,\ \frac{(k-1) X_k + \cdots + X_2}{k}\bigg\},$$
$$\overline{Y}_{k,n} = \max\bigg\{\frac{X_k}{2},\ \frac{2 X_k + X_{k+1}}{3},\ \ldots,\ \frac{(n-k+1) X_k + \cdots + X_n}{n-k+2}\bigg\}.$$

It is then not hard to verify that the preceding identities translate into
$$\{K_t = k\} = \big\{S_k(t) > 0 \ge S_{k+1}(t)\big\} \cap \big\{Y_{k,2} < S_k(t)\big\} \cap \big\{\overline{Y}_{k+2,n} \le -S_{k+1}(t)\big\} \qquad (3.1)$$
$$= \big\{-S_k + (Y_{k,2})^+ < t \le -X_{k+1} - S_k - (\overline{Y}_{k+2,n})^+\big\}, \qquad (3.2)$$
holding for each $k = 2, \ldots, n-2$. Further, for $k = 1$ and $k = n-1$ we have that
$$\{K_t = 1\} = \big\{S_1(t) > 0\big\} \cap \big\{S_2(t) \le 0\big\} \cap \big\{\overline{Y}_{3,n} \le -S_2(t)\big\}, \qquad \{K_t = n-1\} = \big\{S_{n-1}(t) > 0\big\} \cap \big\{Y_{n-1,2} < S_{n-1}(t)\big\} \cap \big\{S_n(t) \le 0\big\},$$
so upon setting $Y_{1,2} = \overline{Y}_{n+1,n} = -\infty$, the identities (3.1) and (3.2) extend to all $1 \le k \le n-1$.

For the remaining cases, that is, for $k = 0$ and $k = n$, we have instead that
$$\{K_t = 0\} = \big\{t \le -X_1 - (\overline{Y}_{2,n})^+\big\}, \qquad (3.3)$$
$$\{K_t = n\} = \big\{-S_n + (Y_{n,2})^+ < t\big\}. \qquad (3.4)$$
In contrast with the proof of Proposition 1.3, here we have events $\{(Y_{k,2})^+ < S_k(t)\}$ and $\{(\overline{Y}_{k+2,n})^+ \le -S_{k+1}(t)\}$ that are linked through $S_k(t)$, and consequently are not independent of each other. Our goal is to unhook this relation, and in fact the parameter $t$ was introduced precisely for this purpose.

3.1. Upper bound

For any integer $n > 1$, let
$$A_n = \max_{1 \le k \le n}\{-S_{k+1}\}, \qquad B_n = -\max_{1 \le k \le n}\{S_k\}.$$


By definition $A_n \ge B_n$. Further, for any $1 \le k \le n-1$, from (3.1) we have that the event $\{K_t = k\}$ implies that $\{S_k(t) > 0 \ge S_{k+1}(t)\} = \{-S_k < t \le -S_{k+1}\}$, and hence that $\{B_{n-1} < t \le A_{n-1}\}$. From (3.2) we also see that for any $1 \le k \le n-1$,
$$\int_{\mathbb{R}} 1_{\{K_t = k\}}\,\mathrm{d}t \ge (X_{k+1})^-\, 1_{\{Y_{k,2} < 0\}}\, 1_{\{\overline{Y}_{k+2,n} \le 0\}},$$
and consequently,
$$A_{n-1} - B_{n-1} = \int_{\mathbb{R}} 1_{\{B_{n-1} < t \le A_{n-1}\}}\,\mathrm{d}t \ge \sum_{k=1}^{n-1} \int_{\mathbb{R}} 1_{\{K_t = k\}}\,\mathrm{d}t \ge \sum_{k=1}^{n-1} (X_{k+1})^-\, 1_{\{Y_{k,2} < 0\}}\, 1_{\{\overline{Y}_{k+2,n} \le 0\}}. \qquad (3.5)$$

Taking the expectation of both sides, we deduce from the mutual independence of $Y_{k,2}$, $X_{k+1}$ and $\overline{Y}_{k+2,n}$ that
$$\mathbb{E}[A_{n-1} - B_{n-1}] \ge \sum_{k=1}^{n-1} \mathbb{E}\big[(X_{k+1})^-\big]\, \mathbb{P}(Y_{k,2} < 0)\, \mathbb{P}(\overline{Y}_{k+2,n} \le 0).$$

Next, observe that since the sequence $\{X_i\}$ has an exchangeable law,
$$\mathbb{P}(Y_{k,2} < 0) = \mathbb{P}\big(X_k < 0,\ 2 X_k + X_{k-1} < 0,\ \ldots,\ (k-1) X_k + \cdots + X_2 < 0\big) = \mathbb{P}\big(X_1 < 0,\ 2 X_1 + X_2 < 0,\ \ldots,\ (k-1) X_1 + \cdots + X_{k-1} < 0\big) = p_{k-1}^{(2)}. \qquad (3.6)$$
Similarly, $\mathbb{P}(\overline{Y}_{k+2,n} \le 0) = \overline{p}_{n-1-k}^{(2)}$. With $X_{k+1}$ having zero mean, we have that $\mathbb{E}[(X_{k+1})^-] = \mathbb{E}[(X_{k+1})^+] = 1/2$ (by our assumption that $\mathbb{E}|X_{k+1}| = \mathbb{E}|X_1| = 1$). Consequently, for any $n > 2$,
$$\mathbb{E}[A_{n-1} - B_{n-1}] \ge \frac{1}{2} \sum_{k=1}^{n-1} p_{k-1}^{(2)}\, \overline{p}_{n-1-k}^{(2)} = \frac{1}{2} \sum_{k=0}^{n-2} p_k^{(2)}\, \overline{p}_{n-2-k}^{(2)}.$$

With $\mathbb{E}[S_{n+1}] = 0$ and $\{X_k\}$ exchangeable, we clearly have that
$$\mathbb{E}[A_n - B_n] = \mathbb{E}\Big[\max_{1 \le k \le n}\{S_{n+1} - S_{k+1}\}\Big] + \mathbb{E}\Big[\max_{1 \le k \le n} S_k\Big] = 2\,\mathbb{E}\Big[\max_{1 \le k \le n} S_k\Big]. \qquad (3.7)$$
Recall Ottaviani's maximal inequality, by which $\mathbb{P}(\max_{1 \le k \le n} S_k \ge t) \le 2\,\mathbb{P}(S_n \ge t)$ for a symmetric random walk and any $n$, $t \ge 0$; hence in this case
$$\mathbb{E}\Big[\max_{1 \le k \le n} S_k\Big] \le 2 \int_0^\infty \mathbb{P}(S_n \ge t)\,\mathrm{d}t = \mathbb{E}|S_n|.$$

To deal with the general case, we replace Ottaviani's maximal inequality by Montgomery-Smith's inequality
$$\mathbb{P}\Big(\max_{1 \le k \le n} |S_k| \ge t\Big) \le 3 \max_{1 \le k \le n} \mathbb{P}\big(|S_k| \ge t/3\big) \le 9\,\mathbb{P}\big(|S_n| \ge t/30\big)$$
(see [9]), from which we deduce that
$$\mathbb{E}\Big[\max_{1 \le k \le n} S_k\Big] \le 9 \int_0^\infty \mathbb{P}\big(|S_n| \ge t/30\big)\,\mathrm{d}t = 270\,\mathbb{E}|S_n|, \qquad (3.8)$$
and thereby get (1.5). Finally, since $n \mapsto p_n^{(2)}$ is non-increasing and $p_n^{(2)} \le \overline{p}_n^{(2)}$, the upper bound of (1.7) is an immediate consequence of (1.5).
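As a concrete sanity check on (1.5), in the symmetric Bernoulli case one can enumerate all sign sequences and compare both sides exactly (a brute-force verification sketch, not part of the proof, under the normalization $\mathbb{E}|X_1| = 1$ and with $c_1 = 2$ since $X_1$ is symmetric):

```python
from fractions import Fraction
from itertools import product

def persistence_pair(n):
    # Exact p_n^(2) = P(max_k S_k^(2) < 0) and pbar_n^(2) = P(max_k S_k^(2) <= 0)
    # for +-1 steps, by enumerating all 2^n sign sequences.
    strict = nonstrict = 0
    for signs in product((-1, 1), repeat=n):
        s = s2 = 0
        m = float("-inf")
        for x in signs:
            s += x          # partial sum S_k
            s2 += s         # iterated partial sum S_k^(2)
            m = max(m, s2)
        strict += m < 0
        nonstrict += m <= 0
    return Fraction(strict, 2 ** n), Fraction(nonstrict, 2 ** n)

def mean_abs_s(n):
    # Exact E|S_n| for +-1 steps.
    total = sum(abs(sum(signs)) for signs in product((-1, 1), repeat=n))
    return Fraction(total, 2 ** n)

N = 10
pairs = [persistence_pair(n) for n in range(N + 1)]
p = [a for a, _ in pairs]
pbar = [b for _, b in pairs]
for n in range(N + 1):
    lhs = sum(p[k] * pbar[n - k] for k in range(n + 1))
    assert lhs <= 4 * mean_abs_s(n + 1)   # (1.5) with c_1 = 2
print("(1.5) holds exactly for all n <=", N)
```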


3.2. Lower bound

Turning to obtain the lower bound, let
$$m_n := -X_1 - (\overline{Y}_{2,n})^+, \qquad M_n := -S_n + (Y_{n,2})^+.$$

Note that for any $n \ge 2$, by using the last term of the maxima in the definitions of $Y_{n,2}$ and $\overline{Y}_{2,n}$, we have
$$Y_{n,2} + \overline{Y}_{2,n} \ge \frac{1}{n}\big((n-1) X_n + \cdots + X_2\big) + \frac{1}{n}\big((n-1) X_2 + \cdots + X_n\big) = S_n - X_1,$$
and consequently,
$$M_n - m_n \ge X_1 - S_n + (\overline{Y}_{2,n} + Y_{n,2})^+ \ge (X_1 - S_n)^+ = (X_2 + \cdots + X_n)^-. \qquad (3.9)$$
In particular, $M_n \ge m_n$. From (3.3) and (3.4) we know that if $m_n < t \le M_n$ then necessarily $1 \le K_t \le n-1$. Therefore,
$$M_n - m_n = \int_{\mathbb{R}} 1_{\{m_n < t \le M_n\}}\,\mathrm{d}t \le \sum_{k=1}^{n-1} \int_{\mathbb{R}} 1_{\{K_t = k\}}\,\mathrm{d}t. \qquad (3.10)$$

In view of (3.2) we have that for any $1 \le k \le n-1$,
$$b_k := \mathbb{E}\Big[\int_{\mathbb{R}} 1_{\{K_t = k\}}\,\mathrm{d}t\Big] = \mathbb{E}\Big[\big(X_{k+1} + (Y_{k,2})^+ + (\overline{Y}_{k+2,n})^+\big)^-\Big].$$
By the mutual independence of the three variables on the right side, and since the $\{X_k\}$ have identical distribution, we find that
$$b_k = \int_0^\infty \mathbb{P}\big(X_{k+1} < -x,\ (Y_{k,2})^+ + (\overline{Y}_{k+2,n})^+ < x\big)\,\mathrm{d}x \le \int_0^\infty \mathbb{P}(-X_1 > x)\, \mathbb{P}(Y_{k,2} < x)\, \mathbb{P}(\overline{Y}_{k+2,n} < x)\,\mathrm{d}x. \qquad (3.11)$$

Next, setting $T_{i,k}^{(2)} = T_{1,k} + \cdots + T_{i,k}$ for $T_{i,k} = X_k + \cdots + X_{k+1-i}$, $i \ge 1$, and $T_{0,k}^{(2)} := 0$, observe that for any $0 \le j \le k-1$ and $\ell \ge 1$,
$$T_{j+\ell,\,k+\ell}^{(2)} = T_{\ell-1,\,k+\ell}^{(2)} + (j+1)\, T_{\ell,\,k+\ell} + T_{j,k}^{(2)}.$$
Hence, with $A_{\ell,k} := \{T_{i,k}^{(2)} < 0,\ i = 1, \ldots, \ell-1\}$, just as we did in deriving the identity (3.6), we have that for any $\ell \ge 1$,
$$\{Y_{k+\ell,2} < 0\} = A_{\ell,k+\ell} \cap \big\{T_{\ell-1,\,k+\ell}^{(2)} + (j+1)\, T_{\ell,\,k+\ell} + T_{j,k}^{(2)} < 0,\ 0 \le j \le k-1\big\}, \qquad \{Y_{k,2} < x\} = \big\{T_{j,k}^{(2)} < (j+1) x,\ 1 \le j \le k-1\big\}.$$
Consequently,
$$\{Y_{k,2} < x\} \cap \{T_{\ell,\,k+\ell} < -x\} \cap A_{\ell,k+\ell} \subseteq \{Y_{k+\ell,2} < 0\}.$$
By exchangeability of $\{X_m\}$ we have that for any $k$, $\ell$,
$$\mathbb{P}(A_{\ell,k+\ell}) = p_{\ell-1}^{(2)} = \mathbb{P}(Y_{\ell,2} < 0).$$


Thus, applying Harris’s inequality for the non-increasing eventsA,k+ and{T,k+<x}, we get by the indepen- dence of{Xm}that

pk(2)+1=P(Yk+,2<0)≥P(Yk,2< x)P(T,k+<x)p(2)1. SinceT,k+has the same law asSwe thus get the bound

P(Yk,2< x)pk(2)+1 p(2)1P(S<x)

for any≥1. Similarly, we have that for any≥1, P(Yk+2,n< x)p(2)nk+1

p(2)1P(S≤ −x) .

Clearlykp(2)k is non-increasing, so combining these bounds we find that bkc2

2p(2)k pn(2)k, (3.12)

forc2:=2 0P(X1> x)g(x)2dx, where g(x):=sup

1

p(2)1P(S<x)

. (3.13)

For $X_1^-$ bounded it clearly suffices to show that $g(x) < \infty$ for each fixed $x > 0$, and this trivially holds by the positivity of $\mathbb{P}(X_1 < -r)$ for $r > 0$ small enough (hence, $p_\ell^{(2)} \ge \mathbb{P}(X_1 < -r)^\ell$ is also positive). Assuming instead that $X_1$ has finite (and positive) second moment, from (1.11) and the trivial bound $p_{\ell-1}^{(2)} \ge p_\ell^{(1)}$ we have that for some $\kappa > 0$ and all $x$,
$$g(x) \le \kappa\, \Big(\sup_{\ell \ge 1}\big\{\ell^{-1/2}\, \mathbb{P}(S_\ell < -x)\big\}\Big)^{-1}.$$
Further, by the CLT there exists $M < \infty$ large enough such that $\eta := \inf_{x > 0} \mathbb{P}(S_{\lceil x M \rceil^2} < -x)$ is positive. Hence, setting $\ell = \lceil x M \rceil^2$, we deduce that in this case $g(x) \le c\,(1 + x M)$ for some $c < \infty$ and all $x \ge 0$. Consequently, $c_2$ is then finite provided
$$3M \int_0^\infty (1 + x M)^2\, \mathbb{P}\big(X_1^- > x\big)\,\mathrm{d}x \le \mathbb{E}\big[\big(1 + X_1^- M\big)^3\big] < \infty,$$

i.e. whenever $X_1^-$ has finite third moment. Next, considering the expectation of both sides of (3.10), we deduce that under the above stated conditions, for any $n > 2$,
$$\mathbb{E}(M_n - m_n) \le \frac{c_2}{2} \sum_{k=1}^{n-1} p_k^{(2)}\, \overline{p}_{n-k}^{(2)}.$$
In view of (3.9) we also have that $\mathbb{E}(M_n - m_n) \ge \mathbb{E}[(S_{n-1})^-] = \frac{1}{2}\,\mathbb{E}|S_{n-1}|$, from which we conclude that (1.6) holds for all $n \ge 1$.

Turning to lower bound $\overline{p}_n^{(2)}$ as in (1.7), recall that $n \mapsto p_n^{(2)}$ is non-increasing. Hence, applying (1.6) for $n = 2m+1$ and utilizing the previously derived upper bound of (1.7), we have that
$$\frac{1}{c_2}\, \mathbb{E}|S_{2(m+1)}| \le 2\, \overline{p}_m^{(2)} \sum_{k=0}^m p_k^{(2)} \le 2 c_1\, \overline{p}_m^{(2)} \sum_{k=0}^m \sqrt{\frac{\mathbb{E}|S_{k+1}|}{k+1}} \le 4 c_1\, \overline{p}_m^{(2)}\, \sqrt{(m+1)\,\mathbb{E}|S_{m+1}|}, \qquad (3.14)$$
