Electronic Communications in Probability

DOI: 10.1214/ECP.v18-2578  ISSN: 1083-589X

Stein’s density approach and information inequalities

Christophe Ley

Yvik Swan

Abstract

We provide a new perspective on Stein's so-called density approach by introducing a new operator and a characterizing class which are valid for a much wider family of probability distributions on the real line. We prove an elementary factorization property of this operator and propose a new Stein identity which we use to derive information inequalities in terms of what we call the generalized Fisher information distance. We provide explicit bounds on the constants appearing in these inequalities for several important cases. We conclude with a comparison between our results and known results in the Gaussian case, thereby improving on several known inequalities from the literature.

Keywords: generalized Fisher information; magic factors; Pinsker's inequality; probability metrics; Stein's density approach.

AMS MSC 2010: 60F05; 94A17.

Submitted to ECP on December 30, 2011, final version accepted on January 24, 2013. Supersedes arXiv:1210.3921v1.

1 Introduction

Charles Stein's crafty exploitation of the characterization

\[
X \sim \mathcal{N}(0,1) \iff E[f'(X) - Xf(X)] = 0 \ \text{ for all bounded } f \in C^1(\mathbb{R}) \tag{1.1}
\]

has given birth to a "method" which is now an acclaimed tool both in applied and in theoretical probability. The secret of the "method" lies in the structure of the operator $T_\phi f(x) := f'(x) - xf(x)$ and in the flexibility in the choice of test functions $f$. For the origins we refer the reader to [40, 38, 37]; for an overview of the more recent achievements in this field we refer to the monographs [28, 3, 4, 12] or the review articles [27, 31].
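The characterizing identity (1.1) is easy to probe numerically. The following minimal Python sketch (the bounded $C^1$ test function $f(x) = \sin x$ is an arbitrary illustrative choice) evaluates $E[f'(X) - Xf(X)]$ for $X \sim \mathcal{N}(0,1)$ by quadrature.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Numerical check of E[f'(X) - X f(X)] = 0 for X ~ N(0,1), using the
# arbitrary bounded C^1 test function f(x) = sin(x).
f, f_prime = np.sin, np.cos

value, _ = integrate.quad(lambda x: (f_prime(x) - x * f(x)) * norm.pdf(x),
                          -np.inf, np.inf)
print(value)  # approximately 0 (up to quadrature error)
```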

Among the many ramifications and extensions that the method has known, so far the connection with information theory has gone relatively unexplored. Indeed, while it has long been known that Stein identities such as (1.1) are related to information theoretic tools and concepts (see, e.g., [20, 22, 14]), to the best of our knowledge the only references to explore this connection upfront are [5] in the context of compound Poisson approximation, and more recently [32, 33] for Poisson and Bernoulli approximation. In this paper and the companion paper [23] we extend Stein's characterization of the Gaussian (1.1) to a broad class of univariate distributions and, in doing so, provide an adequate framework in which the connection with information distances becomes transparent.

Université libre de Bruxelles, Belgium. E-mail: chrisley@ulb.ac.be
Université du Luxembourg, Luxembourg. E-mail: yvik.swan@gmail.com


The structure of the present paper is as follows. In Section 2 we provide the new perspective on the density approach from [39] which allows us to extend this construction to virtually any absolutely continuous probability distribution on the real line. In Section 3 we exploit the structure of our new operator to derive a family of Stein identities through which the connection with information distances becomes evident. In Section 4 we compute bounds on the constants appearing in our inequalities; our method of proof is, to the best of our knowledge, original. Finally, in Section 5 we discuss specific examples.

2 The density approach

Let $\mathcal{G}$ be the collection of positive real functions $x \mapsto p(x)$ such that (i) their support $S_p := \{x \in \mathbb{R} : p(x) \text{ (exists and) is positive}\}$ is an interval with closure $\bar{S}_p = [a, b]$, for some $-\infty \le a < b \le \infty$, (ii) they are differentiable (in the usual sense) at every point in $(a, b)$ with derivative $x \mapsto p'(x) := \frac{d}{dy}p(y)\big|_{y=x}$, and (iii) $\int_{S_p} p(y)\,dy = 1$. Obviously, each $p \in \mathcal{G}$ is the density (with respect to the Lebesgue measure) of an absolutely continuous random variable. Throughout we adopt the convention

\[
\frac{1}{p(x)} = \begin{cases} \dfrac{1}{p(x)} & \text{if } x \in S_p, \\ 0 & \text{otherwise;} \end{cases}
\]

this implies, in particular, that $p(x)/p(x) = I_{S_p}(x)$, the indicator function of the support $S_p$. As final notation, for $p \in \mathcal{G}$ we write $E_p[l(X)] := \int_{S_p} l(x)p(x)\,dx$.

With this setup in hand we are ready to provide the two main definitions of this paper (namely, a class of functions and an operator) and to state and prove our first main result (namely, a characterization).

Definition 2.1. To $p \in \mathcal{G}$ we associate (i) the collection $\mathcal{F}(p)$ of functions $f : \mathbb{R} \to \mathbb{R}$ such that the mapping $x \mapsto f(x)p(x)$ is differentiable on the interior of $S_p$ and $f(a^+)p(a^+) = f(b^-)p(b^-) = 0$, and (ii) the operator $T_p : f \mapsto T_p f$ defined through

\[
T_p f : \mathbb{R} \to \mathbb{R} : x \mapsto T_p f(x) := \frac{1}{p(x)}\,\frac{d}{dy}\big(f(y)p(y)\big)\Big|_{y=x}. \tag{2.1}
\]

We call $\mathcal{F}(p)$ the class of test functions associated with $p$, and $T_p$ the Stein operator associated with $p$.

Theorem 2.2. Let $p, q \in \mathcal{G}$ and let $Q(b) = \int_a^b q(u)\,du$. Then $\int_{-\infty}^{+\infty} T_p f(y)q(y)\,dy = 0$ for all $f \in \mathcal{F}(p)$ if, and only if, $q(x) = p(x)Q(b)$ for all $x \in S_p$.

Proof. If $Q(b) = 0$ the statement holds trivially. We now take $Q(b) > 0$. To see the sufficiency, note that the hypotheses on $f$, $p$ and $q$ guarantee that

\[
\int_{-\infty}^{\infty} T_p f(y)q(y)\,dy = Q(b)\int_a^b \frac{d}{du}\big(f(u)p(u)\big)\Big|_{u=y}\,dy = Q(b)\big(f(b^-)p(b^-) - f(a^+)p(a^+)\big) = 0.
\]

To see the necessity, first note that the condition $\int_{\mathbb{R}} T_p f(y)q(y)\,dy = 0$ implies that the function $y \mapsto T_p f(y)q(y)$ be Lebesgue-integrable. Next define for $z \in \mathbb{R}$ the function

\[
l_z(u) := \big(I_{(a,z]}(u) - P(z)\big)I_{S_p}(u)
\]

with $P(z) := \int_a^z p(u)\,du$, which satisfies $\int_a^b l_z(u)p(u)\,du = 0$.


Then the function

\[
f_z^p(x) := \frac{1}{p(x)}\int_a^x l_z(u)p(u)\,du = -\frac{1}{p(x)}\int_x^b l_z(u)p(u)\,du
\]

belongs to $\mathcal{F}(p)$ for all $z$ and satisfies the equation $T_p f_z^p(x) = l_z(x)$ for all $x \in S_p$. For this choice of test function we then obtain

\[
\int_{-\infty}^{+\infty} T_p f_z^p(y)q(y)\,dy = \int_{-\infty}^{+\infty} l_z(y)q(y)\,dy = \big(Q(z) - P(z)Q(b)\big)I_{S_p}(z),
\]

with $Q(z) := \int_a^z q(u)\,du$. Since this integral equals zero by hypothesis, it follows that $Q(z) = P(z)Q(b)$ for all $z \in S_p$, hence the claim holds.

The above is, in a sense, nothing more than a peculiar statement of what is often referred to as a “Stein characterization”. Within the more conventional framework of real random variables having absolutely continuous densities, Theorem 2.2 reads as follows.

Corollary 2.3 (The density approach). Let $X$ be an absolutely continuous random variable with density $p \in \mathcal{G}$. Let $Y$ be another absolutely continuous random variable. Then $E[T_p f(Y)] = 0$ for all $f \in \mathcal{F}(p)$ if, and only if, either $P(Y \in S_p) = 0$ or $P(Y \in S_p) > 0$ and

\[
P(Y \le z \mid Y \in S_p) = P(X \le z)
\]

for all $z \in S_p$.

Corollary 2.3 extends the density approach from [39] or [11, 12] to a much wider class of distributions; it also contains the Stein characterizations for the Pearson family given in [34] and the more recent general characterizations studied in [15, 18]. There is, however, a significant shift operated between our "derivative of a product" operator (2.1) and the standard way of writing these operators in the literature. Indeed, while one can always distribute the derivative in (2.1) to obtain (at least formally) the expansion

\[
T_p f(x) = \left(f'(x) + \frac{p'(x)}{p(x)}f(x)\right)I_{S_p}(x), \tag{2.2}
\]

the latter requires $f$ to be differentiable on $S_p$ in order to make sense. We do not require this, nor do we require that each summand in (2.2) be well-defined on $S_p$, nor do we need to impose integrability conditions on $f$ for Theorem 2.2 (and thus Corollary 2.3) to hold! Rather, our definition of $\mathcal{F}(p)$ allows us to identify a collection of minimal conditions on the class of test functions $f$ for the resulting operator $T_p$ to be orthogonal to $p$ w.r.t. the Lebesgue measure, and thus characterize $p$.
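The "derivative of a product" form (2.1) can also be evaluated directly by numerical differentiation. The sketch below does so for the illustrative target $p = \mathrm{Gamma}(2,1)$ and the arbitrary test function $f(x) = \sin x$ (for which $f(0^+)p(0^+) = 0$ and $f(x)p(x) \to 0$ at infinity, so that $f \in \mathcal{F}(p)$), checks the orthogonality $E_p[T_p f(X)] = 0$ of Theorem 2.2, and compares (2.1) with the expansion (2.2) at an interior point.

```python
import numpy as np
from scipy import integrate
from scipy.stats import gamma

# Sketch of the "derivative of a product" Stein operator (2.1), evaluated
# by central finite differences, for the illustrative choices p = Gamma(2,1)
# and f(x) = sin(x).
p = lambda x: gamma.pdf(x, a=2.0)
f = np.sin

def T_p(x, h=1e-6):
    fp = lambda y: f(y) * p(y)
    return (fp(x + h) - fp(x - h)) / (2.0 * h) / p(x)

# Orthogonality E_p[T_p f(X)] = 0 (Theorem 2.2 with q = p):
val, _ = integrate.quad(lambda x: T_p(x) * p(x), 1e-6, 50.0)
print(val)  # approximately 0

# At interior points of S_p, (2.1) agrees with the expansion (2.2);
# here p'(x)/p(x) = 1/x - 1 for the Gamma(2,1) density.
x0 = 1.3
print(T_p(x0), np.cos(x0) + (1.0 / x0 - 1.0) * np.sin(x0))
```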

Example 2.4. Take $p = \phi$, the standard Gaussian. Then $\mathcal{F}(\phi)$ is composed of all real-valued functions $f$ such that (i) $x \mapsto f(x)e^{-x^2/2}$ is differentiable on $\mathbb{R}$ and (ii) $\lim_{x \to \pm\infty} f(x)e^{-x^2/2} = 0$. In particular $\mathcal{F}(\phi)$ contains the collection of all differentiable bounded functions and

\[
T_\phi f(x) = f'(x) - xf(x),
\]

which is Stein's well-known operator for characterizing the Gaussian (see, e.g., [37, 3, 12]). There are of course many other subclasses that can be of interest. For example


the class $\mathcal{F}(\phi)$ also contains the collection of functions $f(x) = -f_0'(x)$ with $f_0$ a twice differentiable bounded function; for these we get

\[
T_\phi f(x) = x f_0'(x) - f_0''(x),
\]

the generator of an Ornstein-Uhlenbeck process, see [2, 19, 28]. The class $\mathcal{F}(\phi)$ as well contains the collection of functions of the form $f(x) = H_n(x)f_0(x)$ for $H_n$ the $n$-th Hermite polynomial and $f_0$ any differentiable and bounded function. For these $f$ we get

\[
T_\phi f(x) = H_n(x)f_0'(x) - H_{n+1}(x)f_0(x),
\]

an operator already discussed in [17] (equation (38)).

Example 2.5. Take $p = \mathrm{Exp}$, the standard rate-one exponential distribution. Then $\mathcal{F}(\mathrm{Exp})$ is composed of all real-valued functions $f$ such that (i) $x \mapsto f(x)e^{-x}$ is differentiable on $(0, +\infty)$, (ii) $f(0) = 0$ and (iii) $\lim_{x \to +\infty} f(x)e^{-x} = 0$. In particular $\mathcal{F}(\mathrm{Exp})$ contains the collection of all differentiable bounded functions such that $f(0) = 0$ and

\[
T_{\mathrm{Exp}} f(x) = \big(f'(x) - f(x)\big)I_{[0,\infty)}(x),
\]

the operator usually associated with the exponential, see [25, 29, 39]. The class $\mathcal{F}(\mathrm{Exp})$ also contains the collection of functions of the form $f(x) = x f_0(x)$ for $f_0$ any differentiable bounded function. For these $f$ we get

\[
T_{\mathrm{Exp}} f(x) = \big(x f_0'(x) + (1 - x)f_0(x)\big)I_{[0,\infty)}(x),
\]

an operator put to use in [10].

Example 2.6. Finally take $p = \mathrm{Beta}(\alpha, \beta)$, the beta distribution with parameters $(\alpha, \beta) \in \mathbb{R}_0^+ \times \mathbb{R}_0^+$. Then $\mathcal{F}(\mathrm{Beta}(\alpha, \beta))$ is composed of all real-valued functions $f$ such that (i) $x \mapsto f(x)x^{\alpha-1}(1-x)^{\beta-1}$ is differentiable on $(0, 1)$, (ii) $\lim_{x \to 0} f(x)x^{\alpha-1}(1-x)^{\beta-1} = 0$ and (iii) $\lim_{x \to 1} f(x)x^{\alpha-1}(1-x)^{\beta-1} = 0$. In particular $\mathcal{F}(\mathrm{Beta}(\alpha, \beta))$ contains the collection of functions of the form $f(x) = x(1-x)f_0(x)$ with $f_0$ any differentiable bounded function. For these $f$ we get

\[
T_{\mathrm{Beta}(\alpha, \beta)} f(x) = \big((\alpha(1-x) - \beta x)f_0(x) + x(1-x)f_0'(x)\big)I_{[0,1]}(x),
\]

an operator recently put to use in, e.g., [18, 15].
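A quick numerical check that definition (2.1) reproduces the operators displayed in Examples 2.5 and 2.6; the choice $f_0(x) = \cos x$ and the beta parameters below are arbitrary.

```python
import numpy as np
from scipy.stats import expon, beta

# Check, at a few points, that (2.1) matches the displayed operators;
# f0(x) = cos(x) is an arbitrary bounded differentiable choice.
def T(p, f, x, h=1e-6):
    return ((f(x + h) * p(x + h) - f(x - h) * p(x - h)) / (2 * h)) / p(x)

f0, f0p = np.cos, lambda x: -np.sin(x)

# Exponential: f(x) = x f0(x), expect x f0'(x) + (1 - x) f0(x).
p_exp = lambda x: expon.pdf(x)
for x in [0.3, 1.0, 2.5]:
    lhs = T(p_exp, lambda t: t * f0(t), x)
    rhs = x * f0p(x) + (1 - x) * f0(x)
    print(np.isclose(lhs, rhs, atol=1e-4))

# Beta(a, b): f(x) = x(1-x) f0(x), expect (a(1-x) - b x) f0(x) + x(1-x) f0'(x).
a, b = 2.5, 3.0
p_beta = lambda x: beta.pdf(x, a, b)
for x in [0.2, 0.5, 0.8]:
    lhs = T(p_beta, lambda t: t * (1 - t) * f0(t), x)
    rhs = (a * (1 - x) - b * x) * f0(x) + x * (1 - x) * f0p(x)
    print(np.isclose(lhs, rhs, atol=1e-4))
```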

There are obviously many more distributions that can be tackled as in the previous examples (including the Pearson case from [34]), which we leave to the interested reader.

3 Stein-type identities and the generalized Fisher information distance

It has long been known that, in certain favorable circumstances, the properties of the Fisher information or of the Shannon entropy can be used quite effectively to prove information theoretic central limit theorems; the early references in this vein are [35, 7, 6, 24]. Convergence in information CLTs is generally studied in terms of information (pseudo-)distances such as the Kullback-Leibler divergence between two densities $p$ and $q$, defined as

\[
d_{KL}(p\,\|\,q) = E_q\left[\log\frac{q(X)}{p(X)}\right], \tag{3.1}
\]


or the Fisher information distance

\[
J(\phi, q) = E_q\left[\left(X + \frac{q'(X)}{q(X)}\right)^2\right] \tag{3.2}
\]

which measures the deviation between any density $q$ and the standard Gaussian $\phi$. Though they allow for extremely elegant proofs, convergence in the sense of (3.1) or (3.2) results in very strong statements. Indeed both (3.1) and (3.2) are known to dominate more "traditional" probability metrics. More precisely we have, on the one hand, Pinsker's inequality

\[
d_{TV}(p, q) \le \frac{1}{\sqrt{2}}\sqrt{d_{KL}(p\,\|\,q)}, \tag{3.3}
\]

for $d_{TV}(p, q)$ the total variation distance between the laws $p$ and $q$ (see, e.g., [16, p. 429]), and, on the other hand,

\[
d_{L^1}(\phi, q) \le 2\sqrt{J(\phi, q)} \tag{3.4}
\]

for $d_{L^1}(\phi, q)$ the $L^1$ distance between the laws $\phi$ and $q$ (see [21, Lemma 1.6]). These information inequalities show that convergence in the sense of (3.1) or (3.2) implies convergence in total variation or in $L^1$, for example. Note that one can further use de Bruijn's identity on (3.3) to deduce that convergence in Fisher information is itself stronger than convergence in relative entropy.
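For a concrete feel of (3.2) and (3.4), one may take $q$ a mean-shifted Gaussian, for which $J(\phi, q)$ has a closed form; the sketch below (the shift $\mu = 0.4$ is an arbitrary choice) compares both sides of (3.4) by quadrature.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Illustration of (3.2) and (3.4) for q = N(mu, 1): here J(phi, q) = mu^2,
# and the L1 distance must lie below 2*sqrt(J).
mu = 0.4
phi = norm(0, 1).pdf
q = norm(mu, 1).pdf
score_q = lambda x: -(x - mu)          # q'(x)/q(x) for the shifted Gaussian

J, _ = integrate.quad(lambda x: (x + score_q(x))**2 * q(x), -np.inf, np.inf)
L1, _ = integrate.quad(lambda x: abs(phi(x) - q(x)), -np.inf, np.inf)
print(J, mu**2)                 # J(phi, q) equals mu^2
print(L1, 2 * np.sqrt(J))       # L1 <= 2*sqrt(J), as in (3.4)
```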

While Pinsker's inequality (3.3) is valid irrespective of the choice of $p$ and $q$ (and enjoys an extension to discrete random variables), both (3.2) and (3.4) are reserved for Gaussian convergence. Now there exist extensions of the distance (3.2) to non-Gaussian distributions (see [5] for the discrete case) which, as could be expected, have also been shown to dominate the more traditional probability metrics. There is, however, no general counterpart of Pinsker's inequality for the Fisher information distance (3.2); at least there exists, to the best of our knowledge, no inequality in the literature which extends (3.4) to a general couple of densities $p$ and $q$.

In this section we use the density approach outlined in Section 2 to construct Stein-type identities which provide the required extension of (3.4). More precisely, we will show that a wide family of probability metrics (including the Kolmogorov, the Wasserstein and the $L^1$ distances) is dominated by the quantity

\[
J(p, q) := E_q\left[\left(\frac{p'(X)}{p(X)} - \frac{q'(X)}{q(X)}\right)^2\right]. \tag{3.5}
\]

Our bounds, moreover, contain an explicit constant which will be shown in Section 4 to be at worst as good as the best bounds in all known instances. In the spirit of [5] we call (3.5) the generalized Fisher information distance between the densities $p$ and $q$, although here we slightly abuse language since (3.5) defines a pseudo-distance rather than a bona fide metric between probability density functions.

We start with an elementary statement which relates, for $p \ne q$, the Stein operators $T_p$ and $T_q$ through the difference of their respective score functions $p'/p$ and $q'/q$.

Lemma 3.1. Let $p$ and $q$ be probability density functions in $\mathcal{G}$ with respective supports $S_p$ and $S_q$. Let $S_q \subseteq S_p$ and define

\[
r(p, q)(x) := \left(\frac{p'(x)}{p(x)} - \frac{q'(x)}{q(x)}\right)I_{S_p}(x).
\]

Suppose that $\mathcal{F}(p) \cap \mathcal{F}(q) \ne \emptyset$. Then, for all $f \in \mathcal{F}(p) \cap \mathcal{F}(q)$, we have

\[
T_p f(x) = T_q f(x) + f(x)\,r(p, q)(x) \qquad \text{for all } x \in S_q,
\]

and therefore

\[
E_q[T_p f(X)] = E_q[f(X)\,r(p, q)(X)]. \tag{3.6}
\]

Proof. Splitting $S_p$ into $S_q \cup \{S_p \setminus S_q\}$, we have

\[
f(y)p(y) = f(y)q(y)\,\frac{p(y)}{q(y)}\,I_{S_q}(y) + f(y)p(y)\,I_{S_p \setminus S_q}(y)
\]

for any real-valued function $f$. At any $x$ in the interior of $S_p$ we thus can write

\begin{align*}
T_p f(x) &= \frac{1}{p(x)}\,\frac{d}{dy}\!\left(f(y)q(y)\frac{p(y)}{q(y)}\right)\Big|_{y=x} I_{S_q}(x) + T_p f(x)\,I_{S_p \setminus S_q}(x)\\
&= \frac{1}{p(x)}\,\frac{d}{dy}\big(f(y)q(y)\big)\Big|_{y=x}\frac{p(x)}{q(x)} + f(x)\,\frac{q(x)}{p(x)}\,\frac{d}{dy}\!\left(\frac{p(y)}{q(y)}\right)\Big|_{y=x} + T_p f(x)\,I_{S_p \setminus S_q}(x)\\
&= T_q f(x) + f(x)\,\frac{q(x)}{p(x)}\,\frac{d}{dy}\!\left(\frac{p(y)}{q(y)}\right)\Big|_{y=x} + T_p f(x)\,I_{S_p \setminus S_q}(x).
\end{align*}

The first claim readily follows by simplification (on $S_q$ the last indicator vanishes), the second by taking expectations under $q$, which cancels the first term $T_q f(X)$ (by definition) as well as the third term $T_p f(X)\,I_{S_p \setminus S_q}(X)$ (since $q$ puts no mass on $S_p \setminus S_q$).

Remark 3.2. Our proof of Lemma 3.1 may seem convoluted; indeed a much easier proof is obtainable by writing $T_p$ in the form (2.2). We nevertheless stick to the "derivative of a product" structure of our operator because this spares us superfluous – and, in some cases, unwanted – differentiability conditions on the test functions.

From identity (3.6) we deduce the following immediate result, which requires no proof.

Lemma 3.3. Let $p$ and $q$ be probability density functions in $\mathcal{G}$ with respective supports $S_q \subseteq S_p$. Let $l$ be a real-valued function such that $E_p[l(X)]$ and $E_q[l(X)]$ exist; also suppose that there exists $f \in \mathcal{F}(p) \cap \mathcal{F}(q)$ such that

\[
T_p f(x) = \big(l(x) - E_p[l(X)]\big)I_{S_p}(x); \tag{3.7}
\]

we denote this function $f_l^p$. Then

\[
E_q[l(X)] - E_p[l(X)] = E_q[f_l^p(X)\,r(p, q)(X)]. \tag{3.8}
\]

The identity (3.8) belongs to the family of so-called "Stein-type identities" discussed for instance in [17, 8, 1]. In order to be of use, such identities need to be valid over a large class of test functions $l$. Now it is immediate to write out the solution $f_l^p$ of the so-called "Stein equation" (3.7) explicitly for any given $p$ and $l$; it is therefore relatively simple to identify under which conditions on $l$ and $q$ the requirement $f_l^p \in \mathcal{F}(q)$ is verified (since $f_l^p \in \mathcal{F}(p)$ is anyway true).
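The explicit solution of (3.7) is $f_l^p(x) = \frac{1}{p(x)}\int_a^x (l(u) - E_p[l(X)])p(u)\,du$. The sketch below computes it by quadrature for the illustrative pair $p = \mathcal{N}(0,1)$, $q = \mathcal{N}(0.3, 1.2^2)$ and the arbitrary test function $l(x) = \sin x$, and confirms identity (3.8) numerically.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Sketch of identity (3.8) for the illustrative pair p = N(0,1),
# q = N(0.3, 1.2^2) and the test function l(x) = sin(x).
p, q = norm(0, 1), norm(0.3, 1.2)
l = np.sin

E_p_l, _ = integrate.quad(lambda x: l(x) * p.pdf(x), -np.inf, np.inf)
E_q_l, _ = integrate.quad(lambda x: l(x) * q.pdf(x), -np.inf, np.inf)

def f_lp(x):
    """Solution of the Stein equation (3.7): (1/p(x)) * int_{-inf}^x (l - E_p[l]) p."""
    val, _ = integrate.quad(lambda u: (l(u) - E_p_l) * p.pdf(u), -np.inf, x)
    return val / p.pdf(x)

# Difference of score functions r(p,q)(x) = p'/p - q'/q.
r = lambda x: -x + (x - 0.3) / 1.2**2

rhs, _ = integrate.quad(lambda x: f_lp(x) * r(x) * q.pdf(x), -8, 8)
print(E_q_l - E_p_l, rhs)   # the two numbers agree, cf. (3.8)
```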

Remark 3.4. For instance, for $p = \phi$ the standard Gaussian, one easily sees that $\lim_{x \to \pm\infty} f_l^\phi(x) = 0$; hence, when $S_q = S_\phi = \mathbb{R}$, $q$ only has to be (differentiable and) bounded for $f_l^\phi$ to belong to $\mathcal{F}(q)$. However, when $S_q \subset \mathbb{R}$, then $q$ has to satisfy, moreover, the stronger condition of vanishing at the endpoints of its support $S_q$, since $f_l^\phi$ need not vanish at these finite endpoints.


We shall see in the next section that the required conditions for $f_l^p \in \mathcal{F}(q)$ are satisfied in many important cases by wide classes of functions $l$. The resulting flexibility makes (3.8) a surprisingly powerful identity, as can be seen from our next result.

Theorem 3.5. Let $p$ and $q$ be probability density functions in $\mathcal{G}$ with respective supports $S_q \subseteq S_p$ and such that $\mathcal{F}(p) \cap \mathcal{F}(q) \ne \emptyset$. Let

\[
d_{\mathcal{H}}(p, q) = \sup_{l \in \mathcal{H}} |E_q[l(X)] - E_p[l(X)]| \tag{3.9}
\]

for some class of functions $\mathcal{H}$. Suppose that for all $l \in \mathcal{H}$ the function $f_l^p$, as defined in (3.7), exists and satisfies $f_l^p \in \mathcal{F}(p) \cap \mathcal{F}(q)$. Then

\[
d_{\mathcal{H}}(p, q) \le \kappa_{\mathcal{H}}^p\sqrt{J(p, q)}, \tag{3.10}
\]

where

\[
\kappa_{\mathcal{H}}^p = \sup_{l \in \mathcal{H}}\sqrt{E_q\big[(f_l^p(X))^2\big]} \tag{3.11}
\]

and

\[
J(p, q) = E_q\big[(r(p, q)(X))^2\big], \tag{3.12}
\]

the generalized Fisher information distance between the densities $p$ and $q$.

This theorem implies that all probability metrics that can be written in the form (3.9) are bounded by the generalized Fisher information distance $J(p, q)$ (which, of course, can be infinite for certain choices of $p$ and $q$). Equation (3.10) thus represents the announced extension of (3.4) to any couple of densities $(p, q)$ and hence constitutes, in a sense, a counterpart to Pinsker's inequality (3.3) for the Fisher information distance. We will see in Section 5 how this inequality reads for specific choices of $\mathcal{H}$, $p$ and $q$.
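As an illustration of the structure of (3.10), the sketch below bounds the Kolmogorov distance between two Gaussians using the classical Stein factor $\sqrt{2\pi}/4$ for indicators of lower half-lines (recalled in Example 4.1 below); the densities $p = \mathcal{N}(0,1)$ and $q = \mathcal{N}(0.3, 1)$ are arbitrary choices.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Sanity check of (3.10) for the Kolmogorov distance, with the classical
# Stein factor kappa <= sqrt(2*pi)/4 for indicators of lower half-lines;
# p = N(0,1) and q = N(0.3,1) are illustrative choices.
mu = 0.3
p, q = norm(0, 1), norm(mu, 1)

d_kol = max(abs(p.cdf(z) - q.cdf(z)) for z in np.linspace(-6, 6, 2001))
r = lambda x: -x + (x - mu)            # score difference p'/p - q'/q
J, _ = integrate.quad(lambda x: r(x)**2 * q.pdf(x), -np.inf, np.inf)

print(d_kol, np.sqrt(2 * np.pi) / 4 * np.sqrt(J))   # the bound (3.10) holds
```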

4 Bounding the constants

The constants $\kappa_{\mathcal{H}}^p$ in (3.11) depend on both densities $p$ and $q$ and therefore, to be fair, should be denoted $\kappa_{\mathcal{H}}^{p,q}$. Our notation is nevertheless justified because we always have

\[
\kappa_{\mathcal{H}}^p \le \sup_{l \in \mathcal{H}} \|f_l^p\|_\infty, \tag{4.1}
\]

where the latter bounds (sometimes referred to as Stein factors or magic factors) do not depend on $q$ and have been computed for many choices of $\mathcal{H}$ and $p$. Consequently, $\kappa_{\mathcal{H}}^p$ is finite in many known cases – including, of course, that of a Gaussian target.

Example 4.1. Take $p = \phi$, the standard Gaussian. Then, from (4.1), we get the bounds (i) $\kappa_{\mathcal{H}}^p \le \sqrt{\pi/2}$ for $\mathcal{H}$ the collection of Borel functions in $[0, 1]$ (see [28, Theorem 3.3.1]); (ii) $\kappa_{\mathcal{H}}^p \le \sqrt{2\pi}/4$ for $\mathcal{H}$ the class of indicator functions of lower half-lines (see [28, Theorem 3.4.2]); and (iii) $\kappa_{\mathcal{H}}^p \le \sqrt{\pi/2}\,\sup_{l \in \mathcal{H}}\min\big(\|l - E_p[l(X)]\|_\infty, 2\|l'\|_\infty\big)$ for $\mathcal{H}$ the class of absolutely continuous functions on $\mathbb{R}$ (see [13, Lemma 2.3]). See also [30, 28, 3, 12] for more examples.

Bounds such as (4.1) are sometimes too rough to be satisfactory. We now provide an alternative bound for $\kappa_{\mathcal{H}}^p$ which, remarkably, improves upon the best known bounds even in well-trodden cases such as the Gaussian. We focus on target densities of the form

\[
p(x) = c\,e^{-d|x|^\alpha}I_S(x), \quad \alpha \ge 1, \tag{4.2}
\]

with $S$ a scale-invariant subset of $\mathbb{R}$ (that is, either $\mathbb{R}$ or the open/closed positive/negative real half-lines), $d > 0$ some constant and $c$ the appropriate normalizing constant. The exponential, the Gaussian or the limit distribution for the Ising model on the complete graph from [11] are all of the form (4.2). Of course, for $S = \mathbb{R}$, (4.2) represents the power exponential densities.
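Densities of the form (4.2) are easy to generate numerically; the sketch below obtains the normalizing constant $c$ by quadrature and recovers the standard Gaussian ($\alpha = 2$, $d = 1/2$, $S = \mathbb{R}$) and the rate-one exponential ($\alpha = 1$, $d = 1$, $S = [0, \infty)$) as special cases.

```python
import numpy as np
from scipy import integrate

# Densities of the form (4.2): p(x) = c * exp(-d*|x|^alpha) on S, with the
# normalizing constant c obtained by quadrature; S is encoded by its endpoints.
def make_density(alpha, d, lower, upper):
    mass, _ = integrate.quad(lambda x: np.exp(-d * np.abs(x)**alpha),
                             lower, upper)
    c = 1.0 / mass
    return lambda x: c * np.exp(-d * np.abs(x)**alpha)

gauss = make_density(alpha=2, d=0.5, lower=-np.inf, upper=np.inf)
expo = make_density(alpha=1, d=1.0, lower=0.0, upper=np.inf)
print(gauss(1.0), np.exp(-0.5) / np.sqrt(2 * np.pi))  # standard Gaussian density
print(expo(1.0), np.exp(-1.0))                        # rate-one exponential density
```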


Theorem 4.2. Take $p \in \mathcal{G}$ as in (4.2) and $q \in \mathcal{G}$ such that $S_q = S$. Consider $h : \mathbb{R} \to \mathbb{R}$ some Borel function with $p$-mean $E_p[h(X)] = 0$. Let $f_h^p$ be the unique bounded solution of the Stein equation

\[
T_p f(x) = h(x). \tag{4.3}
\]

Then

\[
\sqrt{E_q\big[(f_h^p(X))^2\big]} \le \frac{\|h\|_\infty}{2^{1/\alpha}}. \tag{4.4}
\]

Proof. Under the assumption that $E_p[h(X)] = 0$, the unique bounded solution of (4.3) is given by

\[
f_h^p(x) = \begin{cases} \dfrac{1}{p(x)}\displaystyle\int_{-\infty}^x h(y)p(y)\,dy & \text{if } x \le 0,\\[2ex] -\dfrac{1}{p(x)}\displaystyle\int_x^{\infty} h(y)p(y)\,dy & \text{if } x \ge 0, \end{cases}
\]

the function being, of course, put to $0$ if $x$ is outside the support of $p$. Then

\[
E_q\big[(f_h^p(X))^2\big] = \int_{-\infty}^0 q(x)\left(\frac{1}{p(x)}\int_{-\infty}^x h(y)p(y)\,dy\right)^2 dx + \int_0^{\infty} q(x)\left(\frac{1}{p(x)}\int_x^{\infty} h(y)p(y)\,dy\right)^2 dx =: I^- + I^+,
\]

where $I^- = 0$ (resp., $I^+ = 0$) if $\bar{S} = \mathbb{R}^+$ (resp., $\bar{S} = \mathbb{R}^-$).

We first tackle $I^-$. Setting $p(x) = c\,e^{-d|x|^\alpha}I_S(x)$ and using Jensen's inequality, we get

\begin{align*}
I^- &= \int_{-\infty}^0 q(x)\left(e^{d|x|^\alpha}\int_{-\infty}^x h(u)e^{-d|u|^\alpha}\,du\right)^2 dx\\
&\le \int_{-\infty}^0 q(x)\left(e^{d|x|^\alpha}\int_{-\infty}^x |h(u)|e^{-d|u|^\alpha}\,du\right)^2 dx\\
&\le \int_{-\infty}^0 q(x)\left(e^{2d|x|^\alpha}\int_{-\infty}^x h^2(u)e^{-2d|u|^\alpha}\,du\right) dx\\
&= \frac{1}{2^{1/\alpha}}\int_{-\infty}^0 q(x)\left(e^{2d|x|^\alpha}\int_{-\infty}^{2^{1/\alpha}x} h^2(u/2^{1/\alpha})e^{-d|u|^\alpha}\,du\right) dx,
\end{align*}

where the last equality follows from a simple change of variables. Applying Hölder's inequality we obtain

\[
I^- \le \frac{\gamma_q^{1/2}}{2^{1/\alpha}}\sqrt{\int_{-\infty}^0 q(x)\left(e^{2d|x|^\alpha}\int_{-\infty}^{2^{1/\alpha}x} h^2(u/2^{1/\alpha})e^{-d|u|^\alpha}\,du\right)^2 dx} =: I_1^-,
\]

where $\gamma_q = P_q(X < 0) := \int_{-\infty}^0 q(x)\,dx$. Repeating the Jensen's inequality–change of variables–Hölder's inequality scheme once more yields

\[
I^- \le I_1^- \le I_2^- \quad \text{with} \quad I_2^- = \frac{\gamma_q^{\frac{1}{2}+\frac{1}{4}}}{2^{\frac{1}{\alpha}(1+\frac{1}{2})}}\left(\int_{-\infty}^0 q(x)\left(e^{4d|x|^\alpha}\int_{-\infty}^{(2^{1/\alpha})^2 x} h^4\!\left(\frac{u}{(2^{1/\alpha})^2}\right)e^{-d|u|^\alpha}\,du\right)^2 dx\right)^{\frac{1}{4}}.
\]


Iterating this procedure $m \in \mathbb{N}$ times we deduce $I^- \le I_1^- \le \ldots \le I_m^-$ with $I_m^-$ given by

\[
\frac{\gamma_q^{N(m)-1}}{2^{\frac{N(m)}{\alpha}}}\left(\int_{-\infty}^0 q(x)\left(e^{2^m d|x|^\alpha}\int_{-\infty}^{(2^{1/\alpha})^m x} h^{2^m}\!\left(\frac{u}{(2^{1/\alpha})^m}\right)e^{-d|u|^\alpha}\,du\right)^2 dx\right)^{\frac{1}{2^m}},
\]

where $N(m) = 1 + \frac{1}{2} + \frac{1}{4} + \ldots + \frac{1}{2^m}$. Bounding $h^{2^m}\!\left(\frac{u}{(2^{1/\alpha})^m}\right)$ by $(\|h\|_\infty)^{2^m}$ simplifies the above into

\[
(\|h\|_\infty)^2\,\frac{\gamma_q^{N(m)-1}}{2^{\frac{N(m)}{\alpha}}}\left(\int_{-\infty}^0 q(x)\left(e^{2^m d|x|^\alpha}\int_{-\infty}^{(2^{1/\alpha})^m x} e^{-d|u|^\alpha}\,du\right)^2 dx\right)^{\frac{1}{2^m}}.
\]

Since the mapping $y \mapsto \eta(y) := e^{d|y|^\alpha}\int_{-\infty}^{y} e^{-d|u|^\alpha}\,du$ attains its maximal value at $0$ for $\alpha \ge 1$ (indeed,

\[
\eta'(y) = 1 - e^{d|y|^\alpha}\,d\alpha|y|^{\alpha-1}\int_{-\infty}^{y} e^{-d|u|^\alpha}\,du \ge 1 - e^{d|y|^\alpha}\int_{-\infty}^{y} d\alpha|u|^{\alpha-1}e^{-d|u|^\alpha}\,du = 0,
\]

hence $\eta$ is monotone increasing), the interior of the parenthesis becomes

\[
\int_{-\infty}^0 q(x)\left(e^{2^m d|x|^\alpha}\int_{-\infty}^{(2^{1/\alpha})^m x} e^{-d|u|^\alpha}\,du\right)^2 dx \le \int_{-\infty}^0 q(x)\,\frac{1}{c^2}\,dx = \frac{\gamma_q}{c^2}.
\]

Note that here we have used, for any support $S$, $\int_{-\infty}^0 c\,e^{-d|u|^\alpha}\,du \le 1$. Raised to the power $1/2^m$, this factor tends to $1$ as $m \to \infty$. Since we also have $\lim_{m \to \infty} N(m) = 2$, we finally obtain

\[
I^- \le \lim_{m \to \infty} I_m^- \le \frac{(\|h\|_\infty)^2}{2^{2/\alpha}}\,P_q(X < 0).
\]

Similar manipulations allow us to bound $I^+$ by $\frac{(\|h\|_\infty)^2}{2^{2/\alpha}}\,P_q(X > 0)$. Combining both bounds then allows us to conclude that

\[
\sqrt{E_q\big[(f_h^p(X))^2\big]} \le \frac{\|h\|_\infty}{2^{1/\alpha}},
\]

hence the claim holds.

This result of course holds true without worrying about $f_h^p \in \mathcal{F}(q)$. However, in order to make use of these bounds in the present context, the latter condition has to be taken care of. For densities of the form (4.2), one easily sees that $f_h^p \in \mathcal{F}(q)$ for all (differentiable and) bounded densities $q$ when $\alpha > 1$, with the additional assumption, for $\alpha = 1$, that $\lim_{x \to \pm\infty} q(x) = 0$.

Example 4.3. Take $p = \phi$, the standard Gaussian. Then, from (4.4),

\[
\kappa_{\mathcal{H}}^p \le \frac{1}{\sqrt{2}}\sup_{l \in \mathcal{H}}\|l - E_\phi[l(X)]\|_\infty. \tag{4.5}
\]

Comparing with the bounds from Example 4.1 we see that (4.5) significantly improves on the constants in cases (i) and (iii); it is slightly worse in case (ii).


5 Applications

A wide variety of probability distances can be written in the form (3.9). For instance the total variation distance is given by

\[
d_{TV}(p, q) = \sup_{A \subset \mathbb{R}}\int_A (p(x) - q(x))\,dx = \frac{1}{2}\sup_{h \in \mathcal{H}_{B[-1,1]}}|E_p[h(X)] - E_q[h(X)]|
\]

with $\mathcal{H}_{B[-1,1]}$ the class of Borel functions in $[-1, 1]$, the Wasserstein distance is given by

\[
d_W(p, q) = \sup_{h \in \mathcal{H}_{\mathrm{Lip}_1}}|E_p[h(X)] - E_q[h(X)]|
\]

with $\mathcal{H}_{\mathrm{Lip}_1}$ the class of Lipschitz-1 functions on $\mathbb{R}$, and the Kolmogorov distance is given by

\[
d_{\mathrm{Kol}}(p, q) = \sup_{z \in \mathbb{R}}\left|\int_{-\infty}^z (p(x) - q(x))\,dx\right| = \sup_{h \in \mathcal{H}_{HL}}|E_p[h(X)] - E_q[h(X)]|
\]

with $\mathcal{H}_{HL}$ the class of indicators of lower half-lines. We refer to [16] for more examples and for an interesting overview of the relationships between these probability metrics. Specifying the class $\mathcal{H}$ in Theorem 3.5 allows us to bound all such probability metrics in terms of the generalized Fisher information distance (3.12). It remains to compute the constant (3.11), which can be done for all $p$ of the form (4.2) through (4.4). The following result illustrates these computations in several important cases.

Corollary 5.1. Take $p \in \mathcal{G}$ as in (4.2) and $q \in \mathcal{G}$ such that $S_q = S$. For $\alpha > 1$, suppose that $q$ is (differentiable and) bounded over $S$; for $\alpha = 1$, assume moreover that $q$ vanishes at the infinite endpoint(s) of $S$. Then we have the following inequalities:

1. $d_{TV}(p, q) \le 2^{-\frac{1}{\alpha}}\sqrt{J(p, q)}$;
2. $d_{\mathrm{Kol}}(p, q) \le 2^{-\frac{1}{\alpha}}\sqrt{J(p, q)}$;
3. $d_W(p, q) \le \dfrac{\sup_{l \in \mathcal{H}_{\mathrm{Lip}_1}}\|l - E_p[l(X)]\|_\infty}{2^{\frac{1}{\alpha}}}\sqrt{J(p, q)}$;
4. $d_{L^1}(p, q) = \displaystyle\int_S |p(x) - q(x)|\,dx \le 2^{1-\frac{1}{\alpha}}\sqrt{J(p, q)}$.

If, for all $y \in S$, $q$ is such that the function $x \mapsto e^{d|x|^\alpha}\big(I_{[y,b)}(x) - P(x)\big)$, where $P$ denotes the cumulative distribution function associated with $p$, belongs to $\mathcal{F}(q)$, then

\[
d_{\sup}(p, q) = \sup_{x \in \mathbb{R}}|p(x) - q(x)| \le \sqrt{J(p, q)}.
\]

Proof. The first three points follow immediately from the definition of the distances and Theorems 3.5 and 4.2. To show the fourth, note that

\[
\int_S |p(x) - q(x)|\,dx = E_p[l(X)] - E_q[l(X)]
\]

for $l(u) = I_{[p(u) \ge q(u)]} - I_{[q(u) \ge p(u)]} = 2I_{[p(u) \ge q(u)]} - 1$. For the last case note that

\[
d_{\sup}(p, q) := \sup_{y \in S}|p(y) - q(y)| = \sup_{y \in S}|E_p[l_y(X)] - E_q[l_y(X)]|
\]

for $l_y(x) = \delta_{\{x=y\}}$ the Dirac delta function in $y \in S$. The computation of the constant $\kappa_{\mathcal{H}}^p$ in this case requires a different approach from our Theorem 4.2. We defer this to the Appendix.
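A concrete numerical illustration of the first inequality of Corollary 5.1, for the exponential target $p = \mathrm{Exp}(1)$ (so $\alpha = 1$, $d = 1$, $S = [0, \infty)$) and the illustrative choice $q = \mathrm{Exp}(2)$, for which the score difference is constant and $J(p, q) = 1$.

```python
import numpy as np
from scipy import integrate

# Illustration of Corollary 5.1 (item 1) for p = Exp(1) (alpha = 1) and q = Exp(2).
p = lambda x: np.exp(-x)
q = lambda x: 2.0 * np.exp(-2.0 * x)

# d_TV(p, q) = (1/2) * int |p - q|; here r(p,q)(x) = -1 - (-2) = 1, so J(p, q) = 1.
d_tv = 0.5 * integrate.quad(lambda x: abs(p(x) - q(x)), 0, np.inf)[0]
J = 1.0
print(d_tv, 2.0**(-1.0) * np.sqrt(J))   # 0.25 <= 0.5, as stated in item 1
```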


We conclude this section, and the paper, with explicit computations in the Gaussian case $p = \phi$, hence for the classical Fisher information distance. From here on we adopt the more standard notations and write $J(X)$ instead of $J(\phi, q)$, for $X$ a random variable with density $q$ (which has support $\mathbb{R}$). Immediate applications of the above yield

\[
\int_S |\phi(x) - q(x)|\,dx \le \sqrt{2}\sqrt{J(X)},
\]

which is the second inequality in [21, Lemma 1.6] (obtained by entirely different means). Similarly we readily deduce

\[
\sup_{x \in \mathbb{R}}|\phi(x) - q(x)| \le \sqrt{J(X)};
\]

this is a significant improvement on the constant in [21, 35].

Next further suppose that $X$ has density $q$ with mean $\mu$ and variance $\sigma^2$. Take $Z \sim p$ with $p = \phi_{\mu_0,\sigma_0^2}$, the Gaussian with mean $\mu_0$ and variance $\sigma_0^2$. Then

\[
J(X) = E_q\left[\left(\frac{q'(X)}{q(X)} + \frac{X - \mu_0}{\sigma_0^2}\right)^2\right] = I(X) + \frac{(\mu - \mu_0)^2}{\sigma_0^4} + \frac{1}{\sigma_0^2}\left(\frac{\sigma^2}{\sigma_0^2} - 2\right),
\]

where $I(X) = E_q\big[(q'(X)/q(X))^2\big]$ is the Fisher information of the random variable $X$. General bounds are thus also obtainable from (3.10) in terms of

\[
\Psi := \Psi(\mu, \mu_0, \sigma, \sigma_0) = \frac{(\mu - \mu_0)^2}{\sigma_0^4} + \frac{1}{\sigma_0^2}\left(\frac{\sigma^2}{\sigma_0^2} - 1\right)
\]

and the quantity

\[
\Gamma(X) = I(X) - \frac{1}{\sigma_0^2},
\]

referred to as the Cramér-Rao functional for $q$ in [26]. In particular, we deduce from Theorem 4.2 and the definition of the total variation distance that

\[
d_{TV}(\phi_{\mu_0,\sigma_0^2}, q) \le \frac{1}{\sqrt{2}}\sqrt{\Gamma(X) + \Psi}.
\]

This is an improvement (in the constant) on [26, Lemma 3.1], and is also related to [9, Corollary 1.1]. Similarly, taking $\mathcal{H}$ the collection of indicators of lower half-lines we can use (4.1) and the bounds from [13, Lemma 2.2] to deduce

\[
d_{\mathrm{Kol}}(\phi_{\mu_0,\sigma_0^2}, q) \le \frac{\sqrt{2\pi}}{4}\,\sigma_0\sqrt{\Gamma(X) + \Psi}.
\]

Further specifying $q = \phi_{\mu_1,\sigma_1^2}$ we see that

\[
\sigma_0\sqrt{\Gamma(X) + \Psi} \le \frac{|\sigma_1^2 - \sigma_0^2|}{\sigma_0\sigma_1} + \frac{|\mu_1 - \mu_0|}{\sigma_0},
\]

to be compared with [28, Proposition 3.6.1]. Lastly take $Z \sim \phi$ the standard Gaussian and $X \stackrel{d}{=} F(Z)$ for $F$ some monotone increasing function on $\mathbb{R}$ such that $f = F'$ is defined everywhere. Then straightforward computations yield

\[
I(X) = E\left[\left(\frac{\psi_f(Z) + Z}{f(Z)}\right)^2\right],
\]


with $\psi_f = (\log f)'$. In particular, if $F$ is a random function of the form $F(x) = Yx$ for $Y > 0$ some random variable independent of $Z$, then simple conditioning shows that the above becomes

\[
I(X) = E\left[\frac{Z^2}{Y^2}\right] = E\left[\frac{1}{Y^2}\right],
\]

so that

\[
d_{TV}(\phi, q_X) \le \frac{1}{\sqrt{2}}\sqrt{E\left[\frac{1}{Y^2}\right] - 1 + E\big[Y^2 - 1\big]},
\]

where $q_X$ refers to the density of $X \stackrel{d}{=} YZ$. This last inequality is to be compared with [9, Lemma 4.1] and also [36].
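The last bound is easily illustrated numerically. The sketch below takes a hypothetical two-point mixing variable $Y \in \{0.9, 1.1\}$ with equal probabilities and compares $d_{TV}(\phi, q_X)$, computed by quadrature, with the right-hand side.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Scale-mixture illustration: Y takes the values 0.9 and 1.1 with probability
# 1/2 each, X = Y*Z with Z ~ N(0,1); q_X is then a two-component normal mixture.
y_vals, probs = np.array([0.9, 1.1]), np.array([0.5, 0.5])
q_X = lambda x: sum(w * norm.pdf(x, scale=s) for w, s in zip(probs, y_vals))

d_tv = 0.5 * integrate.quad(lambda x: abs(norm.pdf(x) - q_X(x)),
                            -np.inf, np.inf)[0]
bound = np.sqrt((probs * (1 / y_vals**2 - 1 + y_vals**2 - 1)).sum()) / np.sqrt(2)
print(d_tv, bound)   # d_TV(phi, q_X) lies below the stated bound
```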

A Bounds for the supremum norm

First note that, for $l_y(x) = \delta_{\{x=y\}}$, the solution $f_{l_y}^p(x)$ of the Stein equation (3.7) is of the form

\[
\frac{1}{p(x)}\int_a^x \big(\delta_{\{z=y\}} - p(y)\big)p(z)\,dz = \frac{p(y)\big(I_{[y,b)}(x) - P(x)\big)}{p(x)}.
\]

For all densities $q$ such that $f_{l_y}^p \in \mathcal{F}(q)$, Theorem 3.5 applies and yields

\[
\sup_{y \in S}|p(y) - q(y)| \le \sup_{y \in S} p(y)\sqrt{E_q\big[(I_{[y,b)}(X) - P(X))^2/(p(X))^2\big]}\;\sqrt{J(p, q)},
\]

where $b$ is either $0$ or $+\infty$. We now prove that

\[
\sup_{y \in S} p(y)\sqrt{E_q\big[(I_{[y,b)}(X) - P(X))^2/(p(X))^2\big]} \le 1
\]

for $p(x) = c\,e^{-d|x|^\alpha}$ and any density $q$ satisfying the assumptions of the claim. To this end note that straightforward manipulations lead to

\begin{align*}
E_q\big[(I_{[y,b)}(X) - P(X))^2/(p(X))^2\big] &= \frac{1}{c^2}\int_a^b q(x)e^{2d|x|^\alpha}\big(I_{[y,b)}(x) - P(x)\big)^2\,dx\\
&= \frac{1}{c^2}\int_a^y q(x)e^{2d|x|^\alpha}(P(x))^2\,dx + \frac{1}{c^2}\int_y^b q(x)e^{2d|x|^\alpha}(1 - P(x))^2\,dx\\
&\le \frac{1}{c^2}e^{2d|y|^\alpha}(P(y))^2\int_a^y q(x)\,dx + \frac{1}{c^2}e^{2d|y|^\alpha}(1 - P(y))^2\int_y^b q(x)\,dx\\
&= \frac{1}{c^2}e^{2d|y|^\alpha}(P(y))^2 + \frac{1}{c^2}e^{2d|y|^\alpha}(1 - 2P(y))\,P_q(X \ge y),
\end{align*}

where the inequality is due to the fact that $e^{2d|x|^\alpha}(P(x))^2$ (resp., $e^{2d|x|^\alpha}(1 - P(x))^2$) is monotone increasing (resp., decreasing) on $(a, y)$ (resp., $(y, b)$); see the proof of Theorem 4.2. This directly leads to

\[
\sup_{y \in S} p(y)\sqrt{E_q\big[(I_{[y,b)}(X) - P(X))^2/(p(X))^2\big]} \le \sup_{y \in (a,b)} c\,e^{-d|y|^\alpha}\sqrt{\frac{1}{c^2}e^{2d|y|^\alpha}\big((P(y))^2 + (1 - 2P(y))P_q(X \ge y)\big)} = \sup_{y \in (a,b)}\sqrt{(P(y))^2 + (1 - 2P(y))P_q(X \ge y)}.
\]

Since $(P(y))^2 + (1 - 2P(y))P_q(X \ge y) \le \max\big((P(y))^2, (1 - P(y))^2\big) \le 1$ for all $y$, the claim follows.


References

[1] Afendras, G., Papadatos, N., and Papathanasiou, V.: An extended Stein-type covariance identity for the Pearson family with applications to lower variance bounds. Bernoulli, 17 (2011), 507–529. MR-2787602

[2] Barbour, A. D.: Stein’s method for diffusion approximations. Probability theory and related fields, 84 (1990), 297–322. MR-1035659

[3] Barbour, A. D., and Chen, L. H. Y.: An introduction to Stein’s method, vol. 4. World Scientific, 2005. MR-2205339

[4] Barbour, A. D., and Chen, L. H. Y.: Stein’s method and applications, vol. 5. World Scientific, 2005. MR-2205339

[5] Barbour, A. D., Johnson, O., Kontoyiannis, I., and Madiman, M.: Compound Poisson approximation via information functionals. Electron. J. Probab. 15 (2010), 1344–1368. MR-2721049

[6] Barron, A. R.: Entropy and the central limit theorem. Ann. Probab., 14 (1986), 336–342. MR-815975

[7] Brown, L. D.: A proof of the central limit theorem motivated by the Cramér-Rao inequality. In Statistics and probability: essays in honor of C. R. Rao. North-Holland, Amsterdam, 1982, 141–148. MR-659464

[8] Cacoullos, T., and Papathanasiou, V.: Characterizations of distributions by variance bounds. Statist. Probab. Lett., 7 (1989), 351–356. MR-1001133

[9] Cacoullos, T., Papathanasiou, V., and Utev, S. A.: Variational inequalities with examples and an application to the central limit theorem. Ann. Probab., 22 (1994), 1607–1618. MR-1303658

[10] Chatterjee, S., Fulman, J., and Roellin, A.: Exponential approximation by Stein's method and spectral graph theory. ALEA, 8 (2011), 197–223.

[11] Chatterjee, S., and Shao, Q.-M.: Nonnormal approximation by Stein's method of exchangeable pairs with application to the Curie-Weiss model. Ann. Appl. Probab., 21 (2011), 464–483. MR-2807964

[12] Chen, L. H. Y., Goldstein, L., and Shao, Q.-M.: Normal approximation by Stein's method. Probability and its Applications (New York). Springer, Heidelberg, 2011. MR-2732624

[13] Chen, L. H. Y., and Shao, Q.-M.: Stein's method for normal approximation. In An introduction to Stein's method, vol. 4 of Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap. Singapore Univ. Press, Singapore, 2005, 1–59. MR-2235448

[14] Cover, T., and Thomas, J.: Elements of Information Theory, vol. Second Edition. Wiley & Sons, New York, 2006.

[15] Döbler, C.: Stein's method of exchangeable pairs for absolutely continuous, univariate distributions with applications to the Pólya urn model. arXiv:1207.0533, July 2012.

[16] Gibbs, A. L., and Su, F. E.: On choosing and bounding probability metrics. International Statistical Review / Revue Internationale de Statistique, 70 (2002), 419–435.

[17] Goldstein, L., and Reinert, G.: Distributional transformations, orthogonal polynomials, and Stein characterizations. J. Theoret. Probab., 18 (2005), 237–260. MR-2132278

[18] Goldstein, L., and Reinert, G.: Stein’s method and the beta distribution. arXiv:1207.1460, July 2012.

[19] Götze, F.: On the rate of convergence in the multivariate CLT. Ann. Probab., 19 (1991), 724–739. MR-1106283

[20] Johnson, O.: Information theory and the central limit theorem. Imperial College Press, London, 2004. MR-2109042

[21] Johnson, O., and Barron, A. R.: Fisher information inequalities and the central limit theorem. Probab. Theory Related Fields, 129 (2004), 391–409. MR-2128239

[22] Kontoyiannis, I., Harremoës, P., and Johnson, O.: Entropy and the law of small numbers. IEEE Trans. Inform. Theory 51 (2005), 466–472. MR-2236061

[23] Ley, C., and Swan, Y.: Stein’s density approach for discrete distributions and information inequalities. arXiv:1211.3668v1, November 2012.


[24] Linnik, J. V.: An information-theoretic proof of the central limit theorem with Lindeberg conditions. Theor. Probability Appl. 4 (1959), 288–299. MR-0124081

[25] Luk, H. M.: Stein’s method for the gamma distribution and related statistical applications. PhD thesis, University of Southern California, 1994.

[26] Mayer-Wolf, E.: The Cramér-Rao functional and limiting laws. Ann. Probab., 18 (1990), 840–850. MR-1055436

[27] Nourdin, I., and Peccati, G.: Stein's method meets Malliavin calculus: a short survey with new estimates. In Recent development in stochastic dynamics and stochastic analysis, vol. 8 of Interdiscip. Math. Sci. World Sci. Publ., Hackensack, NJ, 2010, 207–236. MR-2807823

[28] Nourdin, I., and Peccati, G.: Normal approximations with Malliavin calculus: from Stein's method to universality. Cambridge Tracts in Mathematics. Cambridge University Press, 2011.

[29] Pickett, A.: Rates of convergence of χ² approximations via Stein's method. PhD thesis, Lincoln College, University of Oxford, 2004.

[30] Röllin, A.: On the optimality of Stein factors. Probability Approximations and Beyond (2012), 61–72.

[31] Ross, N.: Fundamentals of Stein's method. Probab. Surv. 8 (2011), 210–293. MR-2861132

[32] Sason, I.: An information-theoretic perspective of the Poisson approximation via the Chen-Stein method. arXiv:1206.6811, June 2012.

[33] Sason, I.: On the entropy of sums of Bernoulli random variables via the Chen-Stein method. arXiv:1207.0436, July 2012.

[34] Schoutens, W.: Orthogonal polynomials in Stein’s method. J. Math. Anal. Appl., 253 (2001), 515–531. MR-1808151

[35] Shimizu, R.: On Fisher's amount of information for location family. In Statistical Distributions in Scientific Work (1975), G. P. et al., Ed., vol. 3, 305–312.

[36] Shimizu, R.: Error bounds for asymptotic expansion of the scale mixtures of the normal distribution. Ann. Inst. Statist. Math. 39 (1987), 611–622.

[37] Stein, C.: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. II: Probability theory (Berkeley, Calif., 1972), Univ. California Press, 583–602. MR-0402873

[38] Stein, C.: Approximate computation of expectations. Institute of Mathematical Statistics Lecture Notes—Monograph Series, 7. Institute of Mathematical Statistics, Hayward, CA, 1986. MR-882007

[39] Stein, C., Diaconis, P., Holmes, S., and Reinert, G.: Use of exchangeable pairs in the analysis of simulations. In Stein’s method: expository lectures and applications (2004), P. Diaconis and S. Holmes, Eds., vol. 46 of IMS Lecture Notes Monogr. Ser, Beachwood, Ohio, USA: Institute of Mathematical Statistics, 1–26.

[40] Stein, C. M.: Estimation of the mean of a multivariate normal distribution. Ann. Statist., 9 (1981), 1135–1151. MR-630098

Acknowledgments. The authors thank the referees and editors for their remarks and suggestions which led to significant improvements of our work. Christophe Ley's research is supported by a Mandat de Chargé de Recherche from the Fonds National de la Recherche Scientifique, Communauté française de Belgique. Christophe Ley is also a member of E.C.A.R.E.S.
