

www.imstat.org/aihp 2012, Vol. 48, No. 4, 1103–1136

DOI:10.1214/11-AIHP441

© Association des Publications de l’Institut Henri Poincaré, 2012

Tail and moment estimates for chaoses generated by symmetric random variables with logarithmically concave tails 1

Radosław Adamczak^a and Rafał Latała^{a,b}

^a Institute of Mathematics, University of Warsaw, Banacha 2, 02-097 Warszawa, Poland. E-mail: R.Adamczak@mimuw.edu.pl
^b Institute of Mathematics, Polish Academy of Sciences, Śniadeckich 8, 00-956 Warszawa, Poland. E-mail: R.Latala@mimuw.edu.pl

Received 9 July 2010; revised 21 April 2011; accepted 10 June 2011

Abstract. We present two-sided estimates of moments and tails of polynomial chaoses of order at most three generated by independent symmetric random variables with log-concave tails, as well as for chaoses of arbitrary order generated by independent symmetric exponential variables. The estimates involve only deterministic quantities and are optimal up to constants depending only on the order of the chaos variable.

Résumé. We establish two-sided bounds for the moments and tails of polynomial chaoses of order at most three generated by independent symmetric random variables with log-concave tails, and for chaoses of arbitrary order generated by independent symmetric exponential random variables. These estimates involve only deterministic quantities and are optimal up to constants depending only on the order of the chaos.

MSC: Primary 60E15; secondary 60G15

Keywords: Polynomial chaoses; Tail and moment estimates; Metric entropy

1. Introduction

A (homogeneous) polynomial chaos of order $d$ is a random variable defined as

$$\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X_{i_1}\cdots X_{i_d}, \tag{1}$$

where $X_1,\ldots,X_n$ is a sequence of independent real random variables and $(a_{i_1,\ldots,i_d})_{1\le i_1,\ldots,i_d\le n}$ is a $d$-indexed symmetric array of real numbers, satisfying $a_{i_1,\ldots,i_d}=0$ whenever there exists $k\ne l$ such that $i_k=i_l$.

Random variables of this type appear in many branches of modern probability, e.g. as approximations of multiple stochastic integrals, elements of Fourier expansions in harmonic analysis on the discrete cube (when the underlying variables $X_i$ are independent Rademachers), in subgraph counting problems for random graphs (in this case the $X_i$'s are zero–one random variables) or in statistical physics.

Chaoses of order one are just linear combinations of independent random variables and their behavior is well understood. Chaoses of higher orders behave in a more complex way as the summands in (1) are no longer independent. Nevertheless, due to their simple algebraic structure, many counterparts of classical results for sums of independent random variables are available. Among well known results there are Khinchine type inequalities and tail

^1 Research partially supported by MNiSW Grant N N201 397437 and the Foundation for Polish Science.


bounds involving the variance (see e.g. [4,11,19] or Chapter 3 of [7]). There are also two-sided inequalities in terms of expectations of suprema of empirical processes, which themselves are in general difficult to estimate (see [3,5] for Gaussian chaoses and [1,18] for the general case of chaoses generated by variables with log-concave tails).

In several cases, under additional assumptions on the distribution of the $X_i$'s, even more precise results are known, which give two-sided estimates on moments of polynomial chaoses in terms of deterministic quantities involving only the coefficients $a_{i_1,\ldots,i_d}$ (the estimates are accurate up to a constant depending only on $d$). Examples include Gaussian chaoses of arbitrary order [15], chaoses generated by nonnegative random variables with log-concave tails [16] and chaoses of order at most two, generated by symmetric random variables with log-concave tails ([10] for $d=1$ and [14] for $d=2$).

The aim of this paper is to provide some extensions of these results. In particular we provide two-sided estimates for moments of chaoses of order three generated by symmetric random variables with log-concave tails (Theorems 3.1 and 3.2) and for chaoses of arbitrary order, generated by symmetric exponential variables (Theorem 3.4).

Before we formulate precisely our main results let us recall the notion of decoupled chaos and decoupling inequalities. A decoupled chaos of order $d$ is a random variable of the form

$$\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X^1_{i_1}\cdots X^d_{i_d}, \tag{2}$$

where $(a_{i_1,\ldots,i_d})_{1\le i_1,\ldots,i_d\le n}$ is a $d$-indexed array of real numbers and $X^l_i$, $i=1,\ldots,n$, $l=1,\ldots,d$, are independent random variables.

One can easily see that each decoupled chaos can be represented in the form (1) with a modified matrix and for suitably larger $n$. However it turns out that for the purpose of estimating tails or moments of chaoses it is enough to consider decoupled chaoses. More precisely, we have the following important result due to de la Peña and Montgomery-Smith [8].

Theorem 1.1. Let $(a_{i_1,\ldots,i_d})_{1\le i_1,\ldots,i_d\le n}$ be a symmetric $d$-indexed array such that $a_{i_1,\ldots,i_d}=0$ whenever there exists $k\ne l$ such that $i_k=i_l$. Let $X_1,\ldots,X_n$ be independent random variables and $(X^j_i)_{1\le i\le n}$, $j=1,\ldots,d$, be independent copies of the sequence $(X_i)_{1\le i\le n}$. Then for all $t\ge 0$,

$$L_d^{-1}\,\mathbb P\Biggl(\Biggl|\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X^1_{i_1}\cdots X^d_{i_d}\Biggr|\ge L_d t\Biggr)\le \mathbb P\Biggl(\Biggl|\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X_{i_1}\cdots X_{i_d}\Biggr|\ge t\Biggr)\le L_d\,\mathbb P\Biggl(\Biggl|\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X^1_{i_1}\cdots X^d_{i_d}\Biggr|\ge L_d^{-1}t\Biggr),$$

where $L_d\in(0,\infty)$ depends only on $d$. In particular, for all $p\ge 1$,

$$\tilde L_d^{-1}\Biggl\|\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p\le \Biggl\|\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X_{i_1}\cdots X_{i_d}\Biggr\|_p\le \tilde L_d\Biggl\|\sum_{i_1,\ldots,i_d=1}^{n} a_{i_1,\ldots,i_d} X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p,$$

where $\tilde L_d$ depends only on $d$.

If we are not interested in the values of numerical constants, the above theorem reduces estimation of tails and moments of general chaoses of order $d$ to decoupled chaoses. The importance of this result stems from the fact that the latter can be treated conditionally as chaoses of smaller order, which allows for induction with respect to $d$.

Since the reduction is straightforward, in the sequel when formulating our results we will restrict our attention to the decoupled case.
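The comparability asserted in Theorem 1.1 can be observed numerically. The sketch below is illustrative and not part of the paper; the coefficient array, dimensions, moment order and sample size are arbitrary choices. It compares empirical $p$th moments of a coupled and a decoupled Rademacher chaos of order two, which the theorem predicts to agree up to a constant factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, p = 10, 20_000, 4

# Hypothetical symmetric coefficient array with zero diagonal.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

def rademacher(shape):
    return rng.choice([-1.0, 1.0], size=shape)

X = rademacher((N, n))                            # one sequence -> coupled chaos
X1, X2 = rademacher((N, n)), rademacher((N, n))   # independent copies -> decoupled chaos

coupled = np.einsum('ij,ki,kj->k', A, X, X)       # sum_{i,j} a_ij X_i X_j (diagonal is zero)
decoupled = np.einsum('ij,ki,kj->k', A, X1, X2)   # sum_{i,j} a_ij X^1_i X^2_j

m_coupled = np.mean(np.abs(coupled) ** p) ** (1 / p)
m_decoupled = np.mean(np.abs(decoupled) ** p) ** (1 / p)
ratio = m_coupled / m_decoupled
print(f"coupled {m_coupled:.3f}, decoupled {m_decoupled:.3f}, ratio {ratio:.3f}")
```

The ratio stays within a modest constant (for $d=2$ Rademacher chaoses the $L_2$ ratio is exactly $\sqrt 2$), in line with the theorem.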

Let us finish the introduction by remarking that two-sided bounds on moments of chaoses of the form (1) can be used to give two-sided estimates for more general random variables, namely tetrahedral polynomials in $X_1,\ldots,X_n$, i.e. polynomials in which every variable appears in a power at most 1. This is thanks to the following simple observation, which to the best of our knowledge has remained unnoticed.


Proposition 1.2. For $j=0,1,\ldots,d$ let $(a^j_{i_1,\ldots,i_j})_{1\le i_1,\ldots,i_j\le n}$ be a $j$-indexed symmetric array of real numbers (or more generally, elements of some normed space), such that $a^j_{i_1,\ldots,i_j}=0$ if $i_k=i_l$ for some $1\le k<l\le j$ (for $j=0$ we have just a single number $a^0$). Let $X_1,\ldots,X_n$ be independent mean zero random variables. Then there exists a constant $L_d\in(0,\infty)$, depending only on $d$, such that for all $p\ge 1$,

$$\Biggl\|\sum_{j=0}^{d}\sum_{i_1,\ldots,i_j=1}^{n} a^j_{i_1,\ldots,i_j} X_{i_1}\cdots X_{i_j}\Biggr\|_p\ge L_d^{-1}\sum_{j=0}^{d}\Biggl\|\sum_{i_1,\ldots,i_j=1}^{n} a^j_{i_1,\ldots,i_j} X_{i_1}\cdots X_{i_j}\Biggr\|_p.$$

Note that the reverse inequality boils down just to the triangle inequality in $L_p$, and so the above proposition immediately gives two-sided estimates of moments of tetrahedral polynomials from estimates for homogeneous chaoses.

Since the details are straightforward we will not state explicitly the results which can be obtained from the inequalities we present. The easy (given general results on decoupling) but notationally involved proof of Proposition 1.2 is deferred to the Appendix.

The organization of the article is as follows. After introducing the necessary notation (Section 2) we state our main results (Section 3) and devote the rest of the paper to their quite involved proof. In the course of the proof we provide entropy estimates for special kinds of metrics on subsets of certain product sets (Section 5.2) as well as bounds on empirical processes indexed by such sets (Sections 6 and 7, where we also provide some partition theorems). We believe that these results may be of independent interest. In Section 8 we conclude the proof of our result for chaoses of order three and in Section 9 we give a proof of estimates for chaoses of arbitrary order generated by exponential variables.

2. Definitions and notation

Let $(X^j_i)_{1\le i\le n,\,1\le j\le d}$ be a matrix of independent symmetric random variables with logarithmically concave tails, i.e. such that the functions $N^j_i\colon[0,\infty)\to[0,\infty]$ defined by

$$N^j_i(t)=-\ln \mathbb P\bigl(\bigl|X^j_i\bigr|\ge t\bigr)$$

are convex. We assume that the random variables are normalized in such a way that

$$\inf\bigl\{t\ge 0\colon N^j_i(t)\ge 1\bigr\}=1. \tag{3}$$

We set

$$\hat N^j_i(t)=\begin{cases}t^2 & \text{for } |t|\le 1,\\ N^j_i(|t|) & \text{for } |t|>1.\end{cases}$$
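For a concrete instance of this normalization, consider the symmetric exponential distribution with parameter 1, for which $\mathbb P(|X|\ge t)=e^{-t}$, so $N(t)=t$ is convex and condition (3) holds with no rescaling needed. A minimal sketch (illustrative, not from the paper):

```python
import math

# Symmetric exponential with parameter 1: P(|X| >= t) = exp(-t) for t >= 0,
# hence N(t) = -ln P(|X| >= t) = t, which is convex.
def N(t):
    return t

def N_hat(t):
    # quadratic near the origin, equal to N(|t|) in the tail, as in the definition above
    return t * t if abs(t) <= 1 else N(abs(t))

# The normalizing quantity inf{t >= 0 : N(t) >= 1}, found on a grid:
inf_t = min(t / 1000 for t in range(0, 5001) if N(t / 1000) >= 1)
print(inf_t)        # 1.0, so the normalization (3) already holds
print(N_hat(0.5))   # 0.25 (quadratic regime)
print(N_hat(3.0))   # 3.0  (linear tail regime)
```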

Remark. When working with $d=1$ we will suppress the upper index $j$ and write simply $X_i$ or $N_i$. Recall that the $p$th moment of a real random variable $X$ is defined as $\|X\|_p^p=\mathbb E|X|^p$.

For a sequence $(x_i)_i$ of real numbers (sometimes multi-indexed) we will denote $\|x\|_2=(\sum_i x_i^2)^{1/2}$ and $\|x\|_1=\sum_i|x_i|$.

For $\mathbf i\in\{1,\ldots,n\}^d$ and $I\subset\{1,\ldots,d\}$ we write $\mathbf i_I=(i_k)_{k\in I}$. By $\mathcal P_d$ we will denote the family of all partitions of $\{1,\ldots,d\}$ into nonempty, pairwise disjoint subsets. For $\mathcal J=\{I_1,\ldots,I_k\}\in\mathcal P_d$, $p\ge 2$ and a multi-indexed matrix $(a_{\mathbf i})$ we define

$$\bigl\|(a_{\mathbf i})\bigr\|^{N}_{\mathcal J,p}=\sum_{s_1\in I_1,\ldots,s_k\in I_k}\sup\Biggl\{\sum_{\mathbf i}a_{\mathbf i}\prod_{l=1}^{k}x^{\mathbf i_{I_l}}_{l}\colon\ \sum_{i_{s_l}}\hat N^{s_l}_{i_{s_l}}\Bigl(\bigl\|\bigl(x^{\mathbf i_{I_l}}_{l}\bigr)_{\mathbf i_{I_l\setminus\{s_l\}}}\bigr\|_2\Bigr)\le p,\ 1\le l\le k\Biggr\}. \tag{4}$$

Remark. When $I_l$ is a singleton, i.e. $I_l=\{s_l\}$, then for any fixed value of the index $i_{s_l}$, $\|(x^{\mathbf i_{I_l}}_{l})_{\mathbf i_{I_l\setminus\{s_l\}}}\|_2=|x^{i_{s_l}}_{l}|$.


In particular for $d=3$ we have

$$\bigl\|(a_{ijk})\bigr\|^{N}_{\{1,2,3\},p}=\sup\Bigl\{\sum_{ijk}a_{ijk}x_{ijk}\colon \sum_i \hat N^1_i\bigl(\bigl\|(x_{ijk})_{j,k}\bigr\|_2\bigr)\le p\Bigr\}+\sup\Bigl\{\sum_{ijk}a_{ijk}x_{ijk}\colon \sum_j \hat N^2_j\bigl(\bigl\|(x_{ijk})_{i,k}\bigr\|_2\bigr)\le p\Bigr\}+\sup\Bigl\{\sum_{ijk}a_{ijk}x_{ijk}\colon \sum_k \hat N^3_k\bigl(\bigl\|(x_{ijk})_{i,j}\bigr\|_2\bigr)\le p\Bigr\},$$

$$\bigl\|(a_{ijk})\bigr\|^{N}_{\{1,2\}\{3\},p}=\sup\Bigl\{\sum_{ijk}a_{ijk}x_{ij}y_k\colon \sum_i \hat N^1_i\bigl(\bigl\|(x_{ij})_j\bigr\|_2\bigr)\le p,\ \sum_k \hat N^3_k(y_k)\le p\Bigr\}+\sup\Bigl\{\sum_{ijk}a_{ijk}x_{ij}y_k\colon \sum_j \hat N^2_j\bigl(\bigl\|(x_{ij})_i\bigr\|_2\bigr)\le p,\ \sum_k \hat N^3_k(y_k)\le p\Bigr\}$$

and

$$\bigl\|(a_{ijk})\bigr\|^{N}_{\{1\}\{2\}\{3\},p}=\sup\Bigl\{\sum_{ijk}a_{ijk}x_i y_j z_k\colon \sum_i \hat N^1_i(x_i)\le p,\ \sum_j \hat N^2_j(y_j)\le p,\ \sum_k \hat N^3_k(z_k)\le p\Bigr\}.$$

By $B^n_1$ and $B^n_2$ we will denote respectively the standard $\ell^n_1$ and $\ell^n_2$ balls, i.e. $B^n_1=\{x\in\mathbb R^n\colon \|x\|_1\le 1\}$ and $B^n_2=\{x\in\mathbb R^n\colon \|x\|_2\le 1\}$.

Throughout the article we will write $L_d$, $L$ to denote constants depending only on $d$ and universal constants respectively. In all cases the value of a constant may differ at each occurrence.

By $A\sim_d B$ we mean that there exists a constant $L_d\in(0,\infty)$ such that $L_d^{-1}B\le A\le L_d B$. We will also denote $X^j=(X^j_i)_{1\le i\le n}$ and write $\mathbb E_j$ for the expectation with respect to $X^j$.

3. Main results

Theorem 3.1. For any $d\ge 1$ and $p\ge 2$ we have

$$\Biggl\|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p\ge \frac{1}{L_d}\sum_{\mathcal J\in\mathcal P_d}\bigl\|(a_{\mathbf i})\bigr\|^{N}_{\mathcal J,p}. \tag{5}$$

Theorem 3.2. For $d\le 3$ and $p\ge 2$,

$$\Biggl\|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p\le L_d\sum_{\mathcal J\in\mathcal P_d}\bigl\|(a_{\mathbf i})\bigr\|^{N}_{\mathcal J,p}. \tag{6}$$

Remark 1. Let $X^j_i=cg^j_i$, where $g^j_i$ are i.i.d. $\mathcal N(0,1)$ r.v.'s and $1<c<10/9$ is a constant for which the normalization (3) holds. Then $t^2/L\le \hat N^j_i(t)\le Lt^2$ and for $\mathcal J=\{I_1,\ldots,I_k\}\in\mathcal P_d$, $p\ge 2$,

$$\bigl\|(a_{\mathbf i})\bigr\|^{N}_{\mathcal J,p}\sim_d p^{k/2}\bigl\|(a_{\mathbf i})\bigr\|_{\mathcal J},$$

where

$$\bigl\|(a_{\mathbf i})\bigr\|_{\mathcal J}=\sup\Biggl\{\sum_{\mathbf i}a_{\mathbf i}\prod_{l=1}^{k}x^{\mathbf i_{I_l}}_{l}\colon\ \bigl\|x_l\bigr\|_2\le 1,\ 1\le l\le k\Biggr\}. \tag{7}$$


Theorems 3.1 and 3.2 (for arbitrary $d$) in this case were established in [15].

A standard application of the Paley–Zygmund inequality (see e.g. Corollary 3.3.2 of [7]) and the fact that the $p$th and $2p$th moments of chaoses generated by random variables with log-concave tails are comparable up to constants depending only on the order of the chaos yield the following corollary (for details see the proof of Corollary 1 in [15]).

Corollary 3.3. For $d\le 3$ and $t>0$,

$$\frac{1}{L_d}e^{-t/L_d}\le \mathbb P\Biggl(\Biggl|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr|\ge \sum_{\mathcal J\in\mathcal P_d}\bigl\|(a_{\mathbf i})\bigr\|^{N}_{\mathcal J,t}\Biggr)\le L_d e^{-t/L_d}.$$
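The Paley–Zygmund inequality invoked above states that for a nonnegative random variable $Z$ with finite second moment and $\theta\in(0,1)$, $\mathbb P(Z>\theta\,\mathbb EZ)\ge(1-\theta)^2(\mathbb EZ)^2/\mathbb EZ^2$. A quick Monte Carlo sanity check (illustrative, not from the paper; the weights and sample size are arbitrary choices) on the square of a first-order exponential chaos:

```python
import numpy as np

rng = np.random.default_rng(1)

# Paley-Zygmund: for Z >= 0 and 0 < theta < 1,
#   P(Z > theta * E Z) >= (1 - theta)^2 * (E Z)^2 / E[Z^2].
# Example: Z = S^2 where S = sum_i a_i xi_i is a first-order chaos
# in symmetric exponential variables xi_i (hypothetical weights a).
a = np.array([1.0, 0.7, 0.5, 0.3, 0.2])
N = 200_000
signs = rng.choice([-1.0, 1.0], size=(N, a.size))
xi = rng.exponential(size=(N, a.size)) * signs    # symmetric exponential samples
Z = (xi @ a) ** 2

theta = 0.5
EZ, EZ2 = Z.mean(), (Z ** 2).mean()
bound = (1 - theta) ** 2 * EZ ** 2 / EZ2          # Paley-Zygmund lower bound
p_emp = np.mean(Z > theta * EZ)                   # empirical tail probability
print(f"empirical tail {p_emp:.3f} >= Paley-Zygmund bound {bound:.3f}")
```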

We are not able to show Theorem 3.2 for $d>3$ in the general case. However we know that it holds for exponential random variables.

Theorem 3.4. If $N^j_i(t)=t$ for all $i,j$ and $t>0$, then for any $d\ge 1$ and $p\ge 2$ the estimate (6) holds.

4. Proof of Theorem 3.1

We will proceed by induction with respect to $d$. The case $d=1$ was proved in [10]. Let us therefore assume that $d>1$ and that the theorem holds for all positive integers smaller than $d$.

Note that since we allow the constants to depend on $d$, it is enough to show that the left-hand side of (5) is minorized by each of the summands on the right-hand side.

For any $\mathcal J=\{I_1,\ldots,I_k\}\in\mathcal P_d$ with $k\ge 2$, the induction assumption applied conditionally on $(X^j)_{j\in I_1}$ gives

$$\Biggl\|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p\ge \frac{1}{L_{d-\#I_1}}\Biggl(\mathbb E_{I_1}\Biggl\|\Biggl(\sum_{\mathbf i_{I_1}}a_{\mathbf i}\prod_{r\in I_1}X^r_{i_r}\Biggr)_{\mathbf i_{I_1^c}}\Biggr\|^{N\ p}_{\mathcal J\setminus\{I_1\},p}\Biggr)^{1/p}, \tag{8}$$

where $N=(N^j_i)_{1\le i\le n,\,j\in I_1^c}$.

Let us fix arbitrary $s_1\in I_1,\ldots,s_k\in I_k$. We have

$$\mathbb E_{I_1}\Biggl\|\Biggl(\sum_{\mathbf i_{I_1}}a_{\mathbf i}\prod_{r\in I_1}X^r_{i_r}\Biggr)_{\mathbf i_{I_1^c}}\Biggr\|^{N\ p}_{\mathcal J\setminus\{I_1\},p}$$
$$\ge \mathbb E_{I_1}\sup\Biggl\{\Biggl(\sum_{\mathbf i_{I_1^c}}\sum_{\mathbf i_{I_1}}a_{\mathbf i}\prod_{r\in I_1}X^r_{i_r}\prod_{l=2}^{k}x^{\mathbf i_{I_l}}_{l}\Biggr)^{p}\colon\ \sum_{i_{s_l}}\hat N^{s_l}_{i_{s_l}}\Bigl(\bigl\|\bigl(x^{\mathbf i_{I_l}}_{l}\bigr)_{\mathbf i_{I_l\setminus\{s_l\}}}\bigr\|_2\Bigr)\le p,\ 2\le l\le k\Biggr\}$$
$$\ge \sup\Biggl\{\mathbb E_{I_1}\Biggl|\sum_{\mathbf i_{I_1^c}}\sum_{\mathbf i_{I_1}}a_{\mathbf i}\prod_{r\in I_1}X^r_{i_r}\prod_{l=2}^{k}x^{\mathbf i_{I_l}}_{l}\Biggr|^{p}\colon\ \sum_{i_{s_l}}\hat N^{s_l}_{i_{s_l}}\Bigl(\bigl\|\bigl(x^{\mathbf i_{I_l}}_{l}\bigr)_{\mathbf i_{I_l\setminus\{s_l\}}}\bigr\|_2\Bigr)\le p,\ 2\le l\le k\Biggr\}$$
$$\ge \frac{1}{L^{p}_{\#I_1}}\sup\Biggl\{\Biggl(\sum_{\mathbf i}a_{\mathbf i}\prod_{l=1}^{k}x^{\mathbf i_{I_l}}_{l}\Biggr)^{p}\colon\ \sum_{i_{s_l}}\hat N^{s_l}_{i_{s_l}}\Bigl(\bigl\|\bigl(x^{\mathbf i_{I_l}}_{l}\bigr)_{\mathbf i_{I_l\setminus\{s_l\}}}\bigr\|_2\Bigr)\le p,\ 1\le l\le k\Biggr\},$$

where the last inequality follows from another application of the induction assumption, this time to a chaos of order $\#I_1$. Since the indices $s_1,\ldots,s_k$ run over sets of cardinality not exceeding $d$, the above estimate together with (8) imply that

$$\Biggl\|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p\ge \frac{1}{L_d}\bigl\|(a_{\mathbf i})\bigr\|^{N}_{\mathcal J,p}.$$


The case $k=1$ requires a different approach. Again it is enough to show that for each $l\in\{1,\ldots,d\}$,

$$\Biggl\|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p\ge \frac{1}{L_d}\sup\Biggl\{\sum_{\mathbf i}a_{\mathbf i}x_{\mathbf i}\colon\ \sum_{i_l}\hat N^{l}_{i_l}\Biggl(\Bigl(\sum_{\mathbf i_{\{l\}^c}}x_{\mathbf i}^2\Bigr)^{1/2}\Biggr)\le p\Biggr\}.$$

Consider any $(x_{\mathbf i})$ such that $\sum_{i_l}\hat N^{l}_{i_l}((\sum_{\mathbf i_{\{l\}^c}}x_{\mathbf i}^2)^{1/2})\le p$. By the symmetry of $X^l_i$ we have

$$\Biggl\|\sum_{\mathbf i}a_{\mathbf i}X^1_{i_1}\cdots X^d_{i_d}\Biggr\|_p^{p}=\mathbb E_l\,\mathbb E_{\{l\}^c}\Biggl|\sum_{i_l}X^l_{i_l}\sum_{\mathbf i_{\{l\}^c}}a_{\mathbf i}\prod_{k\ne l}X^k_{i_k}\Biggr|^{p}\ge \mathbb E_l\Biggl|\sum_{i_l}X^l_{i_l}\,\mathbb E_{\{l\}^c}\Biggl|\sum_{\mathbf i_{\{l\}^c}}a_{\mathbf i}\prod_{k\ne l}X^k_{i_k}\Biggr|\Biggr|^{p}$$
$$\ge \frac{1}{L^{p}_{d}}\,\mathbb E_l\Biggl|\sum_{i_l}X^l_{i_l}\Bigl(\sum_{\mathbf i_{\{l\}^c}}a_{\mathbf i}^2\Bigr)^{1/2}\Biggr|^{p}\ge \frac{1}{L^{p}_{d}}\sup\Biggl\{\Biggl(\sum_{i_l}\Bigl(\sum_{\mathbf i_{\{l\}^c}}a_{\mathbf i}^2\Bigr)^{1/2}\alpha_{i_l}\Biggr)^{p}\colon\ \sum_{i_l}\hat N^{l}_{i_l}(\alpha_{i_l})\le p\Biggr\}$$
$$\ge \frac{1}{L^{p}_{d}}\Biggl(\sum_{i_l}\Bigl(\sum_{\mathbf i_{\{l\}^c}}a_{\mathbf i}^2\Bigr)^{1/2}\Bigl(\sum_{\mathbf i_{\{l\}^c}}x_{\mathbf i}^2\Bigr)^{1/2}\Biggr)^{p}\ge \frac{1}{L^{p}_{d}}\Biggl(\sum_{\mathbf i}a_{\mathbf i}x_{\mathbf i}\Biggr)^{p},$$

where the first inequality follows from Jensen's inequality, the second one from hypercontractivity of chaoses generated by log-concave random variables combined with the contraction principle, the third one from the induction assumption, and the last one from the Cauchy–Schwarz inequality.

5. Preliminary facts

In this section we present the basic notation and tools to be used in the proof of our main results.

5.1. Some additional notation

1. By $\gamma_{n,t}$ we will denote the distribution of $tG_n$, where $G_n=(g_1,\ldots,g_n)$ is the standard Gaussian vector in $\mathbb R^n$. Let also $G^i_n=(g^i_1,\ldots,g^i_n)$ be independent copies of $G_n$.

2. By $\nu_{n,t}$ we will denote the distribution of $t\mathcal E_n$, where $\mathcal E_n=(\xi_1,\ldots,\xi_n)$ is a random vector in $\mathbb R^n$ with independent coordinates distributed according to the symmetric exponential distribution with parameter 1. Thus $\nu_{n,t}$ has the density

$$d\nu_{n,t}(x)=(2t)^{-n}\exp\Biggl(-\frac{1}{t}\sum_{i=1}^{n}|x_i|\Biggr)\,dx.$$

We also put $\mathcal E^i_n=(\xi^i_1,\ldots,\xi^i_n)$ for i.i.d. copies of $\mathcal E_n$. Let us note that $\mathbb E\xi_i^2=2$.


3. For any norm $\alpha$ on $\mathbb R^{n_1\cdots n_d}=\mathbb R^{n_1}\otimes\cdots\otimes\mathbb R^{n_d}$ (which we will identify with the space of $d$-indexed matrices, i.e. $x^1\otimes\cdots\otimes x^d=(\prod_{j=1}^{d}x^j_{i_j})_{1\le i_1,\ldots,i_d\le n}$, where $x^j=(x^j_1,\ldots,x^j_{n_j})$), let $\rho_\alpha$ be the distance on $\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_d}$ (which we will identify with $\mathbb R^{n_1+\cdots+n_d}$), defined by

$$\rho_\alpha(x,y)=\alpha\bigl(x^1\otimes\cdots\otimes x^d-y^1\otimes\cdots\otimes y^d\bigr),\quad\text{where } x=\bigl(x^1,\ldots,x^d\bigr),\ y=\bigl(y^1,\ldots,y^d\bigr).$$

For $x\in\mathbb R^{n_1+\cdots+n_d}$ and $r\ge 0$ let $B_\alpha(x,r)$ be the closed ball in the metric $\rho_\alpha$ with center $x$ and radius $r$.

4. Now, for $T\subset\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_d}$, $t>0$, define

$$W^{T}_{d}(\alpha,t)=\sum_{k=1}^{d}t^{k}\sum_{I\subset\{1,\ldots,d\},\,\#I=k}W^{T}_{I}(\alpha),$$

where for $I\subset\{1,\ldots,d\}$,

$$W^{T}_{I}(\alpha)=\sup_{x\in T}\mathbb E\,\alpha\Biggl(\Biggl(\prod_{k\notin I}x^{k}_{i_k}\prod_{k\in I}g^{k}_{i_k}\Biggr)_{i_1,\ldots,i_d}\Biggr).$$

5. Similarly, for $t>0$, $T\subset\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_d}$ we put

$$V^{T}_{d}(\alpha,t):=\sum_{k=1}^{d}t^{k}\sum_{I\subset\{1,\ldots,d\}\colon \#I=k}V^{T}_{I}(\alpha),$$

where

$$V^{T}_{I}(\alpha):=\sup_{x\in T}\mathbb E\,\alpha\Biggl(\Biggl(\prod_{k\notin I}x^{k}_{i_k}\prod_{k\in I}\xi^{k}_{i_k}\Biggr)_{i_1,\ldots,i_d}\Biggr).$$

6. For $s,t>0$, $T\subset\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_d}$, we define

$$U^{T}_{d}(\alpha,s,t):=\sum_{k=1}^{d}\ \sum_{\substack{I,J\subset\{1,\ldots,d\}\\ \#(I\cup J)=k,\ I\cap J=\emptyset}}s^{\#I}t^{\#J}\,U^{T}_{I,J}(\alpha),$$

where

$$U^{T}_{I,J}(\alpha):=\sup_{x\in T}\mathbb E\,\alpha\Biggl(\Biggl(\prod_{k\notin(I\cup J)}x^{k}_{i_k}\prod_{k\in I}g^{k}_{i_k}\prod_{k\in J}\xi^{k}_{i_k}\Biggr)_{i_1,\ldots,i_d}\Biggr).$$

Remark. Let us notice that $U^{T}_{\emptyset,I}(\alpha)=V^{T}_{I}(\alpha)$, whereas $U^{T}_{I,\emptyset}(\alpha)=W^{T}_{I}(\alpha)$.

The quantity $W^{T}_{I}$ was defined in [15], where it played an important role in the analysis of moments of Gaussian chaoses. The quantities $V^{T}_{I}$ and $U^{T}_{I,J}$ will play an analogous role for chaoses generated by general random variables with logarithmically concave tails (as will become clear in the next section, they will allow us to bound the covering numbers for more general sets than those which were important in the Gaussian case).

5.2. Entropy estimates

In this section we present some general entropy estimates which will be crucial for bounding suprema of stochastic processes in the proof of Theorem 3.2.

The first lemma we will need is a reformulation of Lemma 1 in [15]. The original statement from [15] is slightly weaker; however, the proof given therein justifies the version presented below.


Lemma 5.1. For any norms $\alpha_1,\alpha_2$ on $\mathbb R^n$, $y\in B^n_2$ and $t>0$,

$$\gamma_{n,t}\bigl(\bigl\{x\colon \alpha_i(x-y)\le 4t\,\mathbb E\alpha_i(G_n),\ i=1,2\bigr\}\bigr)\ge \frac{1}{2}e^{-1/(2t^2)}.$$

Lemma 5.2. For any norms $\alpha_1,\alpha_2$ on $\mathbb R^n$, $y\in aB^n_1$ and $t>0$,

$$\nu_{n,t}\bigl(\bigl\{x\colon \alpha_i(x-y)\le 4t\,\mathbb E\alpha_i(\mathcal E_n),\ i=1,2\bigr\}\bigr)\ge \frac{1}{2}e^{-a/t}.$$

Proof. Let

$$K:=\bigl\{x\in\mathbb R^n\colon \alpha_1(x)\le 4t\,\mathbb E\alpha_1(\mathcal E_n),\ \alpha_2(x)\le 4t\,\mathbb E\alpha_2(\mathcal E_n)\bigr\}.$$

By Chebyshev's inequality,

$$1-\nu_{n,t}(K)\le \mathbb P\bigl(\alpha_1(t\mathcal E_n)>4\,\mathbb E\alpha_1(t\mathcal E_n)\bigr)+\mathbb P\bigl(\alpha_2(t\mathcal E_n)>4\,\mathbb E\alpha_2(t\mathcal E_n)\bigr)<1/2.$$

We get for any $y\in aB^n_1$,

$$\nu_{n,t}(y+K)=(2t)^{-n}\int_{K}\exp\Biggl(-\frac{1}{t}\sum_{i=1}^{n}|x_i+y_i|\Biggr)\,dx\ge \exp\Biggl(-\frac{1}{t}\sum_{i=1}^{n}|y_i|\Biggr)\int_{K}d\nu_{n,t}(x)\ge e^{-a/t}\,\nu_{n,t}(K)\ge \frac{1}{2}e^{-a/t}.$$

Finally, notice that if $x\in y+K$, then $\alpha_i(x-y)\le 4t\,\mathbb E\alpha_i(\mathcal E_n)$, $i=1,2$. □
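The Chebyshev (in fact Markov) step in the proof bounds each exceptional probability by $1/4$, since $\mathbb P(Z>4\,\mathbb EZ)\le 1/4$ for any nonnegative $Z$, applied to $Z=\alpha_i(t\mathcal E_n)$. A Monte Carlo sanity check (illustrative, not from the paper; the norms $\ell_1$ and $\ell_\infty$ and all parameters are arbitrary choices, and the empirical mean stands in for $\mathbb E\alpha$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Markov's inequality gives P(alpha(t E_n) > 4 E alpha(t E_n)) <= 1/4 for
# any norm alpha. We check the exceedance frequency empirically for two
# illustrative norms, alpha_1 = l1 and alpha_2 = l_inf.
n, N, t = 50, 100_000, 1.0
signs = rng.choice([-1.0, 1.0], size=(N, n))
E = rng.exponential(size=(N, n)) * signs      # rows are samples of E_n

norms = {
    "l1":   lambda x: np.abs(x).sum(axis=1),
    "linf": lambda x: np.abs(x).max(axis=1),
}
exceed = {}
for name, f in norms.items():
    vals = f(t * E)
    exceed[name] = float(np.mean(vals > 4 * vals.mean()))
    print(f"{name}: P(alpha(tE_n) > 4 E alpha(tE_n)) ~ {exceed[name]:.5f} (bound 1/4)")
```

In practice both frequencies are far below $1/4$; the Markov bound is crude but suffices for the proof.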

Before we formulate the next lemma, let us define $\mu_{n,s,t}$ (where $s,t>0$) as the convolution of $\gamma_{n,s}$ and $\nu_{n,t}$.

Lemma 5.3. For any norms $\alpha_1,\alpha_2$ on $\mathbb R^n$, any $a>0$, $y\in B^n_2+aB^n_1$ and $s,t>0$, let

$$K=\bigl\{x\colon \alpha_1(x-y)\le 4s\,\mathbb E\alpha_1(G_n)+4t\,\mathbb E\alpha_1(\mathcal E_n),\ \alpha_2(x)\le 4s\,\mathbb E\alpha_2(G_n)+4t\,\mathbb E\alpha_2(\mathcal E_n)+\alpha_2(y)\bigr\}.$$

Then

$$\mu_{n,s,t}(K)\ge \frac{1}{4}e^{-1/(2s^2)-a/t}.$$

Proof. We have $y=y_1+y_2$ for some $y_1\in B^n_2$, $y_2\in aB^n_1$. Define

$$K_1=\bigl\{x\in\mathbb R^n\colon \alpha_i(x-y_1)\le 4s\,\mathbb E\alpha_i(G_n),\ i=1,2\bigr\},\qquad K_2=\bigl\{x\in\mathbb R^n\colon \alpha_i(x-y_2)\le 4t\,\mathbb E\alpha_i(\mathcal E_n),\ i=1,2\bigr\}.$$

For $x=x_1+x_2$, where $x_j\in K_j$, $j=1,2$,

$$\alpha_1(x-y)\le \alpha_1(x_1-y_1)+\alpha_1(x_2-y_2)\le 4s\,\mathbb E\alpha_1(G_n)+4t\,\mathbb E\alpha_1(\mathcal E_n)$$

and similarly

$$\alpha_2(x)\le \alpha_2(x-y)+\alpha_2(y)\le 4s\,\mathbb E\alpha_2(G_n)+4t\,\mathbb E\alpha_2(\mathcal E_n)+\alpha_2(y),$$

therefore $K_1+K_2\subset K$. We thus have

$$\mu_{n,s,t}(K)\ge \mu_{n,s,t}(K_1+K_2)\ge \gamma_{n,s}(K_1)\,\nu_{n,t}(K_2)\ge \frac{1}{4}e^{-1/(2s^2)-a/t},$$

where in the last inequality we used Lemmas 5.1 and 5.2. □


Lemma 5.4. For any $s,t>0$, $a=(a_1,\ldots,a_d)\in(0,\infty)^d$ and $x\in(B^{n_1}_2+a_1B^{n_1}_1)\times\cdots\times(B^{n_d}_2+a_dB^{n_d}_1)$ we have

$$\mu_{n_1+\cdots+n_d,s,t}\Bigl(B_\alpha\bigl(x,U^{\{x\}}_{d}(\alpha,4s,4t)\bigr)\Bigr)\ge 4^{-d}\exp\Biggl(-\frac{d}{2s^2}-\frac{a_1+\cdots+a_d}{t}\Biggr). \tag{9}$$

Proof. We will proceed by induction on $d$. For $d=1$, inequality (9) follows by Lemma 5.3. Now suppose that (9) holds for $d-1$. We will show that it is also satisfied for $d$. Let us first notice that

$$\alpha\Biggl(\bigotimes_{i=1}^{d}x^i-\bigotimes_{i=1}^{d}y^i\Biggr)\le \alpha_1\bigl(x^d-y^d\bigr)+\alpha_{y^d}\Biggl(\bigotimes_{i=1}^{d-1}x^i-\bigotimes_{i=1}^{d-1}y^i\Biggr), \tag{10}$$

where $\alpha_1$ and $\alpha_y$ are norms on $\mathbb R^{n_d}$ and $\mathbb R^{n_1\cdots n_{d-1}}$ respectively, defined by

$$\alpha_1(z):=\alpha\Biggl(\bigotimes_{i=1}^{d-1}x^i\otimes z\Biggr)\quad\text{and}\quad \alpha_y(z):=\alpha(z\otimes y).$$

Then

$$s\,\mathbb E\alpha_1(G_{n_d})+t\,\mathbb E\alpha_1(\mathcal E_{n_d})=s\,U^{\{x\}}_{\{d\},\emptyset}(\alpha)+t\,U^{\{x\}}_{\emptyset,\{d\}}(\alpha). \tag{11}$$

Moreover if we put $\pi(x)=(x^1,\ldots,x^{d-1})$ and define a norm $\alpha^{s,t}_2$ on $\mathbb R^{n_d}$ by the formula

$$\alpha^{s,t}_2(y):=U^{\{\pi(x)\}}_{d-1}(\alpha_y,s,t)$$

then

$$s\,\mathbb E\alpha^{s,t}_2(G_{n_d})+t\,\mathbb E\alpha^{s,t}_2(\mathcal E_{n_d})+\alpha^{s,t}_2\bigl(x^d\bigr)=\sum_{\substack{I,J\subset\{1,\ldots,d\}\\ I\cap J=\emptyset,\ I\cup J\ne\emptyset}}s^{\#I}t^{\#J}U^{\{x\}}_{I,J}(\alpha)-\bigl(s\,U^{\{x\}}_{\{d\},\emptyset}(\alpha)+t\,U^{\{x\}}_{\emptyset,\{d\}}(\alpha)\bigr). \tag{12}$$

Notice also that by the induction assumption we have for any $z\in\mathbb R^{n_d}$,

$$\mu_{n_1+\cdots+n_{d-1},s,t}\Biggl(\Biggl\{y\in\mathbb R^{n_1+\cdots+n_{d-1}}\colon \alpha_z\Biggl(\bigotimes_{i=1}^{d-1}x^i-\bigotimes_{i=1}^{d-1}y^i\Biggr)\le \alpha^{4s,4t}_2(z)\Biggr\}\Biggr)\ge 4^{1-d}\exp\Biggl(-\frac{d-1}{2s^2}-\frac{a_1+\cdots+a_{d-1}}{t}\Biggr). \tag{13}$$

Finally let

$$A(x):=\Biggl\{y\in\mathbb R^{n_1+\cdots+n_d}\colon\ \alpha_1\bigl(x^d-y^d\bigr)\le 4s\,\mathbb E\alpha_1(G_{n_d})+4t\,\mathbb E\alpha_1(\mathcal E_{n_d}),\ \alpha^{4s,4t}_2\bigl(y^d\bigr)\le 4s\,\mathbb E\alpha^{4s,4t}_2(G_{n_d})+4t\,\mathbb E\alpha^{4s,4t}_2(\mathcal E_{n_d})+\alpha^{4s,4t}_2\bigl(x^d\bigr),\ \alpha_{y^d}\Biggl(\bigotimes_{i=1}^{d-1}x^i-\bigotimes_{i=1}^{d-1}y^i\Biggr)\le \alpha^{4s,4t}_2\bigl(y^d\bigr)\Biggr\}.$$

By (10)–(12) we get $A(x)\subset B_\alpha(x,U^{\{x\}}_{d}(\alpha,4s,4t))$ and therefore by (13), Lemma 5.3 and Fubini's theorem we get

$$\mu_{n_1+\cdots+n_d,s,t}\Bigl(B_\alpha\bigl(x,U^{\{x\}}_{d}(\alpha,4s,4t)\bigr)\Bigr)\ge \mu_{n_1+\cdots+n_d,s,t}\bigl(A(x)\bigr)$$
