On the Almost Sure Central Limit Theorem for Vector Martingales: Convergence of Moments and Statistical Applications


HAL Id: inria-00348138

https://hal.inria.fr/inria-00348138

Submitted on 17 Dec 2008


Bernard Bercu, Peggy Cénac, Guy Fayolle

To cite this version:

Bernard Bercu, Peggy Cénac, Guy Fayolle. On the Almost Sure Central Limit Theorem for Vector Martingales: Convergence of Moments and Statistical Applications. [Research Report] RR-6780, INRIA. 2008, pp.26. ⟨inria-00348138⟩


ISSN 0249-6399 · ISRN INRIA/RR--6780--FR+ENG

Thème NUM

On the Almost Sure Central Limit Theorem for Vector Martingales: Convergence of Moments and Statistical Applications

Bernard Bercu — Peggy Cénac — Guy Fayolle

N° 6780

December 2008


Centre de recherche INRIA Paris – Rocquencourt

Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex

On the Almost Sure Central Limit Theorem for Vector Martingales: Convergence of Moments and Statistical Applications

Bernard Bercu, Peggy Cénac, Guy Fayolle

Thème NUM — Systèmes numériques
Équipe-Projet Imara

Rapport de recherche n°6780 — December 2008 — 23 pages

Abstract: We investigate the almost sure asymptotic properties of vector martingale transforms. Assuming some appropriate regularity conditions both on the increasing process and on the moments of the martingale, we prove that normalized moments of any even order converge in the almost sure central limit theorem for martingales. A conjecture about almost sure upper bounds under wider hypotheses is formulated. The theoretical results are supported by examples borrowed from statistical applications, including linear autoregressive models and branching processes with immigration, for which new asymptotic properties are established on estimation and prediction errors.

Key-words: Almost sure central limit theorem, vector martingale, moment, stochastic regression.

Université Bordeaux 1, Institut de Mathématiques de Bordeaux, UMR 5251, and INRIA Bordeaux Sud-Ouest, 351 cours de la Libération, 33405 Talence Cedex, France. Bernard.Bercu@math.u-bordeaux1.fr

Université de Bourgogne, Institut de Mathématiques de Bourgogne, UMR 5584, 9 rue Alain Savary, BP 47870, 21078 Dijon Cedex, France. Peggy.Cenac@u-bourgogne.fr

INRIA CR Paris-Rocquencourt, Domaine de Voluceau, BP 105, 78153 Le Chesnay Cedex, France. Guy.Fayolle@inria.fr


Résumé: In this report we study almost sure convergence properties of vector martingale transforms. Under suitable conditions on the existence of moments and on the regularity of the increasing process, we prove in particular the convergence of the normalized moments of any even order in the almost sure central limit theorem for vector martingales. We formulate a conjecture giving an almost sure upper bound under less restrictive hypotheses, covering wider families of processes. Finally, these results are applied to examples arising from statistical applications, notably linear autoregressive models and certain branching processes with immigration, which allows us to establish new asymptotic properties of the estimation and prediction errors.

Mots-clés: Almost sure central limit theorem, vector martingale, moment, stochastic regression.


1 Introduction

Let $(X_n)$ be a sequence of real independent identically distributed random variables with $E[X_n] = 0$ and $E[X_n^2] = \sigma^2$, and denote

$$S_n = \sum_{k=1}^{n} X_k.$$

It follows from the ordinary central limit theorem (CLT) that

$$\frac{S_n}{\sqrt{n}} \xrightarrow{\ \mathcal{L}\ } \mathcal{N}(0, \sigma^2),$$

which implies, for any bounded continuous real function $h$,

$$\lim_{n\to\infty} E\Bigl[h\Bigl(\frac{S_n}{\sqrt{n}}\Bigr)\Bigr] = \int_{\mathbb{R}} h(x)\, dG(x)$$

where $G$ stands for the Gaussian measure $\mathcal{N}(0, \sigma^2)$. Moreover, by the celebrated almost sure central limit theorem (ASCLT), the empirical measure

$$G_n = \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, \delta_{S_k/\sqrt{k}}$$

satisfies

$$G_n \Longrightarrow G \quad \text{a.s.}$$

In other words, for any bounded continuous real function $h$,

$$\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, h\Bigl(\frac{S_k}{\sqrt{k}}\Bigr) = \int_{\mathbb{R}} h(x)\, dG(x) \quad \text{a.s.}$$
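As a quick numerical illustration of this logarithmic averaging (a minimal sketch; the sample size, random seed and test function $h(x) = x^2$ below are illustrative choices), one can compare a log-averaged moment of $S_k/\sqrt{k}$ with the corresponding Gaussian moment:

```python
# Minimal sketch of the ASCLT log-average for i.i.d. centered variables.
# The empirical measure G_n puts weight (1/k)/log(n) on the point S_k / sqrt(k).
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200_000, 1.0
X = rng.normal(0.0, sigma, size=n)
S = np.cumsum(X)
k = np.arange(1, n + 1)

points = S / np.sqrt(k)
weights = (1.0 / k) / np.log(n)

# Compare the log-averaged value of h(x) = x^2 with its Gaussian integral sigma^2.
print(np.sum(weights * points**2), "vs", sigma**2)
```

Because the normalization is only logarithmic, the agreement improves quite slowly with $n$.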

The ASCLT was proved simultaneously by Brosamler [3] and Schatte [15] and, in its present form, by Lacey and Philipp [10]. In contrast with the wide literature on the ASCLT for independent random variables, very few references are available on the ASCLT for martingales, apart from the recent work of Bercu and Fort [1, 2] and the important contributions of Chaabane and Maaouia [4, 5] and Lifshits [13, 14]. The ASCLT for martingales is as follows.

Let $(\varepsilon_n)$ be a martingale difference sequence adapted to a filtration $\mathbb{F} = (\mathcal{F}_n)$ with $E[\varepsilon_{n+1}^2 \mid \mathcal{F}_n] = \sigma^2$ a.s. Let $(\Phi_n)$ be a sequence of random variables adapted to $\mathbb{F}$ and denote by $(M_n)$ the real martingale transform

$$M_n = \sum_{k=1}^{n} \Phi_{k-1}\, \varepsilon_k.$$

We also need to introduce the explosion coefficient $f_n$ associated with $(\Phi_n)$,

$$f_n = \frac{\Phi_n^2}{s_n} \qquad \text{where} \qquad s_n = \sum_{k=0}^{n} \Phi_k^2.$$

As soon as $(f_n)$ goes to zero a.s. and under reasonable assumptions on the conditional moments of $(\varepsilon_n)$, the ASCLT for martingales asserts that the empirical measure

$$G_n = \frac{1}{\log s_n} \sum_{k=1}^{n} f_k\, \delta_{M_k/\sqrt{s_{k-1}}} \Longrightarrow G \quad \text{a.s.} \tag{1.1}$$

In other words, for any bounded continuous real function $h$,

$$\lim_{n\to\infty} \frac{1}{\log s_n} \sum_{k=1}^{n} f_k\, h\Bigl(\frac{M_k}{\sqrt{s_{k-1}}}\Bigr) = \int_{\mathbb{R}} h(x)\, dG(x) \quad \text{a.s.} \tag{1.2}$$

It is quite natural to try to overcome the restriction to bounded functions $h$. To be more precise, one might wonder whether convergence (1.2) remains true for unbounded functions $h$. It has recently been shown in [1, 2] that, whenever $(\varepsilon_n)$ has a finite conditional moment of order $> 2p$, the convergence (1.2) still holds for any continuous real function $h$ such that $|h(x)| \leq x^{2p}$.

Theorem 1.1 (Convergence of moments in the scalar case). Assume that $(\varepsilon_n)$ is a martingale difference sequence such that $E[\varepsilon_{n+1}^2 \mid \mathcal{F}_n] = \sigma^2$ a.s. and satisfying, for some integer $p \geq 1$ and some real number $a > 2p$,

$$\sup_{n \geq 0} E\bigl[\,|\varepsilon_{n+1}|^a \mid \mathcal{F}_n\,\bigr] < \infty \quad \text{a.s.}$$

If the explosion coefficient $(f_n)$ tends to zero a.s., then

$$\lim_{n\to\infty} \frac{1}{\log s_n} \sum_{k=1}^{n} f_k \Bigl(\frac{M_k^2}{s_{k-1}}\Bigr)^{p} = \frac{\sigma^{2p}\,(2p)!}{2^{p}\, p!} \quad \text{a.s.} \tag{1.3}$$

The limit (1.3) is exactly the moment of order $2p$ of the Gaussian distribution $\mathcal{N}(0, \sigma^2)$. The purpose of the present paper is to extend the results of [1] to vector martingale transforms, which is strongly needed in various applications arising in statistics and signal processing.
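For instance, one can check directly that the right-hand side of (1.3) coincides with the even moments of a centered Gaussian random variable: if $X \sim \mathcal{N}(0, \sigma^2)$, then

$$E[X^{2p}] = \sigma^{2p}\,(2p-1)!! = \frac{\sigma^{2p}\,(2p)!}{2^{p}\, p!}, \qquad \text{e.g.} \quad E[X^2] = \sigma^2, \qquad E[X^4] = 3\sigma^4.$$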

Let $(M_n)$ be a square integrable vector martingale with values in $\mathbb{R}^d$, adapted to a filtration $\mathbb{F}$. Its increasing process consists of the sequence $(\langle M\rangle_n)$ of symmetric, positive semi-definite square matrices of order $d$ given by

$$\langle M\rangle_n = \sum_{k=1}^{n} E\bigl[(M_k - M_{k-1})(M_k - M_{k-1})^t \mid \mathcal{F}_{k-1}\bigr].$$

A first version of the ASCLT for discrete vector martingales was proposed in [5, 6], under fairly restrictive assumptions on the increasing process $(\langle M\rangle_n)$.

Hereafter, our goal is to establish the convergence of moments of even order in the ASCLT for $(M_n)$ under suitable assumptions on the behaviour of $(\langle M\rangle_n)$. We shall work in the general setting of vector martingale transforms $(M_n)$, which can be written as

$$M_n = M_0 + \sum_{k=1}^{n} \Phi_{k-1}\, \varepsilon_k$$

where $M_0$ can be chosen arbitrarily and $(\Phi_n)$ denotes a sequence of random vectors of dimension $d$, adapted to $\mathbb{F}$. We also introduce

$$S_n = \sum_{k=0}^{n} \Phi_k \Phi_k^t + S \tag{1.4}$$

where $S$ is a fixed deterministic matrix, symmetric and positive definite. One can obviously see that if $E[\varepsilon_{n+1}^2 \mid \mathcal{F}_n] = \sigma^2$ a.s., then the increasing process of $(M_n)$ takes the form $\langle M\rangle_n = \sigma^2 S_{n-1}$. The explosion coefficient associated with $(\Phi_n)$ is now given by

$$f_n = \Phi_n^t S_n^{-1} \Phi_n = \frac{d_n - d_{n-1}}{d_n} \tag{1.5}$$

where $d_n = \det(S_n)$.
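The second equality in (1.5) is a consequence of the rank-one update $S_n = S_{n-1} + \Phi_n \Phi_n^t$; a short verification, using the matrix determinant lemma and the Sherman-Morrison formula, reads

$$d_n = \det\bigl(S_{n-1} + \Phi_n\Phi_n^t\bigr) = d_{n-1}\bigl(1 + \Phi_n^t S_{n-1}^{-1}\Phi_n\bigr), \qquad \Phi_n^t S_n^{-1}\Phi_n = \frac{\Phi_n^t S_{n-1}^{-1}\Phi_n}{1 + \Phi_n^t S_{n-1}^{-1}\Phi_n} = 1 - \frac{d_{n-1}}{d_n}.$$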

After this short survey, the paper is organized as follows. The main theoretical result for vector martingale transforms is given in Section 2, at the end of which a plausible conjecture involving minimal assumptions is formulated. The final Section 3 proposes some statistical applications to estimation and prediction errors in linear autoregressive models and in branching processes with immigration.

2 On the convergence of moments

As mentioned above, our main result is given in Theorem 2.1 and extends Theorem 1.1 to the vector case. Incidentally, in mathematics the difficulty of a problem is almost always a strictly increasing function of the dimension of the underlying space: this is also the case here!

Theorem 2.1. Let $(\varepsilon_n)$ be a martingale difference sequence satisfying the homogeneity condition $E[\varepsilon_{n+1}^2 \mid \mathcal{F}_n] = \sigma^2$ a.s. and such that, for some integer $p \geq 1$ and some real number $a > 2p$,

$$(\mathcal{H}_p) \qquad \sup_{n \geq 0} E\bigl[\,|\varepsilon_{n+1}|^a \mid \mathcal{F}_n\,\bigr] < \infty \quad \text{a.s.}$$

In addition, assume that the explosion coefficient $f_n$ tends to zero a.s. and that there exist a positive random sequence $(\alpha_n)$ increasing to infinity and an invertible symmetric matrix $L$ such that

$$\lim_{n\to\infty} \frac{1}{\alpha_n}\, S_n = L \quad \text{a.s.} \tag{2.1}$$

Then, the following limits hold almost surely:

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} f_k \bigl(M_k^t S_{k-1}^{-1} M_k\bigr)^{p} = \ell(p) = \sigma^{2p}\, d \prod_{j=1}^{p-1} (d + 2j), \tag{2.2}$$

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} \Bigl[\bigl(M_k^t S_{k-1}^{-1} M_k\bigr)^{p} - \bigl(M_k^t S_k^{-1} M_k\bigr)^{p}\Bigr] = \lambda(p) = \frac{p}{d}\,\ell(p). \tag{2.3}$$

Remark 1. The limit $\ell(p)$ corresponds exactly to the moment of order $2p$ of the norm of a Gaussian vector $\mathcal{N}(0, \sigma^2 I_d)$, so that Theorem 2.1 indeed shows the convergence of the moments of order $2p$ in the ASCLT for vector martingales.

Here the deterministic normalization given in [5] has been replaced by the natural random normalization given by the increasing process.
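Indeed, writing $Z \sim \mathcal{N}(0, \sigma^2 I_d)$ and using the fact that $\|Z\|^2/\sigma^2$ follows a chi-square distribution with $d$ degrees of freedom, one recovers

$$E\bigl[\|Z\|^{2p}\bigr] = \sigma^{2p}\, E\bigl[(\chi^2_d)^{p}\bigr] = \sigma^{2p} \prod_{j=0}^{p-1} (d + 2j) = \sigma^{2p}\, d \prod_{j=1}^{p-1} (d + 2j) = \ell(p),$$

so that, for instance, $\ell(1) = \sigma^2 d$ and $\ell(2) = \sigma^4 d(d+2)$.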

Remark 2. The convergence hypothesis (2.1) clearly implies $f_n \to 0$ a.s. if and only if $\alpha_n \sim \alpha_{n-1}$ a.s. As a matter of fact, we deduce from (2.1) that

$$\lim_{n\to\infty} \frac{d_n}{\alpha_n^{d}} = \det L > 0 \quad \text{a.s.}$$
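As a minimal simulation sketch of the first convergence (2.2) with $p = 1$ (the dimension, sample size, distributions of $\Phi_n$ and $\varepsilon_n$, and the choice $S = I_d$ below are illustrative assumptions under which (2.1) holds with $\alpha_n = n$ and $L = I_d$), the log-averaged quantity should approach $\ell(1) = \sigma^2 d$:

```python
# Minimal sketch of (2.2) for p = 1 with i.i.d. standard Gaussian regressors
# Phi_n and noise eps_n: the normalized sum should approach l(1) = sigma^2 * d.
import numpy as np

rng = np.random.default_rng(1)
d, sigma, n = 2, 1.0, 20_000

phi = rng.normal(size=(n + 1, d))             # Phi_0, ..., Phi_n
eps = rng.normal(0.0, sigma, size=n + 1)      # eps_1, ..., eps_n (index 0 unused)

S_prev = np.eye(d) + np.outer(phi[0], phi[0])  # S_0 = S + Phi_0 Phi_0^t with S = I_d
M = np.zeros(d)                                # M_0 = 0
total = 0.0

for k in range(1, n + 1):
    M = M + phi[k - 1] * eps[k]                  # M_k = M_{k-1} + Phi_{k-1} eps_k
    V_k = M @ np.linalg.solve(S_prev, M)         # M_k^t S_{k-1}^{-1} M_k
    S_k = S_prev + np.outer(phi[k], phi[k])      # S_k = S_{k-1} + Phi_k Phi_k^t
    f_k = phi[k] @ np.linalg.solve(S_k, phi[k])  # explosion coefficient f_k
    total += f_k * V_k
    S_prev = S_k

log_dn = np.linalg.slogdet(S_prev)[1]            # log d_n = log det(S_n)
print(total / log_dn, "vs l(1) =", sigma**2 * d)
```

Because of the logarithmic normalization in (2.2), the agreement with $\sigma^2 d$ is typically only rough for accessible sample sizes.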

Proof. For the sake of brevity, we define the following variables:

$$V_n = M_n^t S_{n-1}^{-1} M_n, \qquad \varphi_n = \alpha_n^{-1}\, \Phi_n^t L^{-1} \Phi_n, \qquad v_n = \alpha_{n-1}^{-1}\, M_n^t L^{-1} M_n. \tag{2.4}$$

First of all, by using the symmetry of $L$, the convergence (2.1) ensures that

$$f_n = \varphi_n + o(\varphi_n) \quad \text{a.s.} \tag{2.5}$$

$$V_n = v_n + o(v_n) \quad \text{a.s.} \tag{2.6}$$

Indeed, we have

$$f_n = \varphi_n + \alpha_n^{-1}\, \Phi_n^t L^{-1/2} \bigl(\alpha_n L^{1/2} S_n^{-1} L^{1/2} - I\bigr) L^{-1/2} \Phi_n.$$

The matrix $R_n = \alpha_n L^{1/2} S_n^{-1} L^{1/2} - I$ is symmetric and, denoting by $\rho_n$ its spectral radius, we can write

$$\bigl|\alpha_n^{-1}\, \Phi_n^t L^{-1/2} R_n L^{-1/2} \Phi_n\bigr| \leq \rho_n\, \varphi_n,$$

and $\rho_n$ converges to $0$ almost surely, which leads to (2.5). Equation (2.6) is proved in the same way from the decomposition

$$V_n = v_n + M_n^t \bigl(S_{n-1}^{-1} - \alpha_{n-1}^{-1} L^{-1}\bigr) M_n.$$

Hence, by (2.6), $V_n^p = v_n^p + o(v_n^p)$ a.s. In order to find the limit (2.2), it suffices, by Toeplitz's lemma, to study the convergence

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} f_k V_k^p = \lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} \varphi_k v_k^p. \tag{2.7}$$

Theorem 2.1 will be proved by induction with respect to $p \geq 1$, as in [1] in the scalar case. The first step consists in writing a recursive relation for $M_n^t L^{-1} M_n$. Let

$$\beta_n = \operatorname{tr}\bigl(L^{-1/2} S_n L^{-1/2}\bigr), \qquad \gamma_n = \frac{\beta_n - \beta_{n-1}}{\beta_n}, \qquad \delta_n = \frac{M_n^t L^{-1} \Phi_n}{\beta_n}, \qquad m_n = \beta_{n-1}^{-1}\, M_n^t L^{-1} M_n. \tag{2.8}$$

According to the definition of $(M_n)$, the following decomposition holds:

$$M_{n+1}^t L^{-1} M_{n+1} = M_n^t L^{-1} M_n + 2\varepsilon_{n+1}\, \Phi_n^t L^{-1} M_n + \varepsilon_{n+1}^2\, \Phi_n^t L^{-1} \Phi_n,$$

so that

$$m_{n+1} = (1 - \gamma_n)\, m_n + 2\delta_n\, \varepsilon_{n+1} + \gamma_n\, \varepsilon_{n+1}^2. \tag{2.9}$$

The theorem relies essentially on the following lemma.

Lemma 2.2. Under the assumptions of Theorem 2.1, we have

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} \gamma_k\, m_k^{p} = \frac{\ell(p)}{d^{p+1}} \quad \text{a.s.} \tag{2.10}$$

In addition, if $g_n = M_n^t S_{n-1}^{-1} \Phi_n$, we also have

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} (1 - f_k)\, g_k^2\, m_k^{p-1} = \frac{\lambda(p)}{p\, d^{p-1}} \quad \text{a.s.} \tag{2.11}$$

Proof. Raising equality (2.9) to the power $p$ implies

$$m_{n+1}^{p} = \sum_{k=0}^{p} \sum_{\ell=0}^{k} 2^{k-\ell}\, C_p^{k}\, C_k^{\ell}\, \gamma_n^{\ell}\, \delta_n^{k-\ell}\, \bigl((1 - \gamma_n)\, m_n\bigr)^{p-k}\, \varepsilon_{n+1}^{k+\ell}. \tag{2.12}$$

After some straightforward simplifications, we obtain the relation

$$m_{n+1}^{p} + A_n(p) = m_1^{p} + B_{n+1}(p) + W_{n+1}(p), \tag{2.13}$$

where

$$A_n(p) = \sum_{k=1}^{n} \beta_k^{-p}\bigl(\beta_k^{p} - \beta_{k-1}^{p}\bigr)\, m_k^{p}, \qquad B_{n+1}(p) = \sum_{\ell=1}^{2p-1} \sum_{k=1}^{n} b_k(\ell)\, \varepsilon_{k+1}^{\ell}, \qquad W_{n+1}(p) = \sum_{k=1}^{n} \gamma_k^{p}\, \varepsilon_{k+1}^{2p}.$$

For $1 \leq \ell \leq p-1$, we have

$$b_k(\ell) = \sum_{j=0}^{\lfloor \ell/2 \rfloor} 2^{\ell-2j}\, C_p^{\ell-j}\, C_{\ell-j}^{j}\, \gamma_k^{j}\, \delta_k^{\ell-2j}\, \bigl((1 - \gamma_k)\, m_k\bigr)^{p-\ell+j},$$

while, for $p \leq \ell \leq 2p-1$,

$$b_k(\ell) = \sum_{j=\ell-(p-1)}^{\lfloor \ell/2 \rfloor} 2^{\ell-2j}\, C_p^{\ell-j}\, C_{\ell-j}^{j}\, \gamma_k^{j}\, \delta_k^{\ell-2j}\, \bigl((1 - \gamma_k)\, m_k\bigr)^{p-\ell+j} + C_p^{\ell-p}\, 2^{2p-\ell}\, \gamma_k^{\ell-p}\, \delta_k^{2p-\ell}.$$

In order to extract useful information about $A_n(p)$, it is necessary to study the asymptotic behaviour of $W_{n+1}(p)$, $B_{n+1}(p)$ and $m_n^{p}$.
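For orientation, at $p = 1$ the decomposition (2.13) is nothing but the telescoped form of (2.9), namely

$$m_{n+1} + \underbrace{\sum_{k=1}^{n} \gamma_k\, m_k}_{A_n(1)} = m_1 + \underbrace{2 \sum_{k=1}^{n} \delta_k\, \varepsilon_{k+1}}_{B_{n+1}(1)} + \underbrace{\sum_{k=1}^{n} \gamma_k\, \varepsilon_{k+1}^{2}}_{W_{n+1}(1)}.$$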

The case $p = 1$. Remarking that $\log \beta_n \sim \sum_{k=1}^{n} \gamma_k$, Chow's lemma [7, page 22] implies

$$\lim_{n\to\infty} \frac{1}{\log \beta_n}\, W_{n+1}(1) = \sigma^2 \quad \text{a.s.}$$

Applying the strong law of large numbers for martingales and the inequality $\delta_n^2 \leq \gamma_n m_n$, we get $B_{n+1}(1) = o(A_n(1))$ a.s. In addition, from relation (2.30) in [17], it follows that $m_{n+1} = o(\log \beta_n)$ a.s. Consequently, by (2.13),

$$\lim_{n\to\infty} \frac{1}{\log \beta_n}\, A_n(1) = \sigma^2 \quad \text{a.s.}$$

But the basic convergence assumption (2.1) immediately implies

$$\lim_{n\to\infty} \frac{\beta_n}{\alpha_n} = d \quad \text{a.s.},$$

so that $\log d_n \sim d \log \beta_n$ a.s. Hence

$$\lim_{n\to\infty} \frac{1}{\log d_n}\, A_n(1) = \frac{\sigma^2}{d} \quad \text{a.s.},$$

which establishes (2.10).

As for the proof of (2.11), one can proceed in the same way, starting from the decomposition

$$V_{n+1} = M_n^t S_n^{-1} M_n + 2\varepsilon_{n+1}\, \Phi_n^t S_n^{-1} M_n + \varepsilon_{n+1}^2\, f_n = h_n + 2\varepsilon_{n+1}\, g_n + \varepsilon_{n+1}^2\, f_n,$$

which, for all $n \geq 1$, leads to

$$V_{n+1} + A_n = V_1 + B_{n+1} + W_{n+1}, \tag{2.14}$$

where

$$A_n \stackrel{\text{def}}{=} \sum_{k=1}^{n} a_k(1), \qquad B_{n+1} \stackrel{\text{def}}{=} 2 \sum_{k=1}^{n} \varepsilon_{k+1}\, g_k, \qquad W_{n+1} \stackrel{\text{def}}{=} \sum_{k=1}^{n} \varepsilon_{k+1}^2\, f_k.$$

By Riccati's formula,

$$g_n^2 = (1 - f_n)\, a_n(1), \tag{2.15}$$

and hence, coupling (2.15) with the strong law of large numbers for martingales,

$$B_{n+1} = o(A_n) \quad \text{a.s.}$$

On the other hand, $m_n = o(\log \beta_n)$, which, together with (2.6), gives the almost sure estimate $V_n = o(\log d_n)$. Now, since $\sum_{k=1}^{n} f_k \sim \log d_n$, Chow's lemma implies

$$\lim_{n\to\infty} \frac{1}{\log d_n}\, W_{n+1} = \sigma^2 \quad \text{a.s.},$$

and to conclude the proof of Lemma 2.2 for $p = 1$, it suffices to divide (2.14) by $\log d_n$, letting $n \to \infty$.

The case $p \geq 2$. First, using again Chow's lemma, we can write $W_{n+1}(p) = o(\log d_n)$ a.s. Also, we shall show that

$$B_{n+1}(p) = \frac{p}{d^{p+1}}\,\ell(p)\, \log d_n + o(\log d_n) + o(A_n(p)) \quad \text{a.s.} \tag{2.16}$$

Setting, for $1 \leq \ell \leq 2p-1$,

$$\varepsilon_{n+1}^{\ell} \stackrel{\text{def}}{=} e_{n+1}(\ell) + E[\varepsilon_{n+1}^{\ell} \mid \mathcal{F}_n] = e_{n+1}(\ell) + \sigma_n(\ell), \tag{2.17}$$

we write $\sum_{k=1}^{n} b_k(\ell)\, \varepsilon_{k+1}^{\ell} = C_{n+1}(\ell) + D_n(\ell)$, with

$$C_{n+1}(\ell) \stackrel{\text{def}}{=} \sum_{k=1}^{n} b_k(\ell)\, e_{k+1}(\ell), \qquad D_n(\ell) \stackrel{\text{def}}{=} \sum_{k=1}^{n} b_k(\ell)\, \sigma_k(\ell).$$

First, for any $\ell$ such that $1 \leq \ell \leq p-1$, using again the strong law of large numbers and equation (2.30) of [17], we have

$$C_{n+1}(\ell) = o(\log d_n) \quad \text{a.s.}$$

Suppose $3 \leq \ell \leq p-1$. From Hölder's inequality and the assumptions on the moments of $\varepsilon_n$, it follows that, for all $1 \leq j \leq 2p-1$, $|\sigma_n(j)|$ is bounded, which implies

$$|D_n(\ell)| = O\Bigl(\sum_{k=1}^{n} \gamma_k^{\ell/2}\, m_k^{p-\ell/2}\Bigr).$$

For even $\ell$, the induction assumption leads to $D_n(\ell) = o(\log d_n)$ a.s. When $\ell$ is odd, the Cauchy-Schwarz inequality and the induction assumption yield

$$|D_n(\ell)| = O\biggl(\Bigl(\sum_{k=1}^{n} \gamma_k\, m_k^{p-1}\Bigr)^{1/2} \Bigl(\sum_{k=1}^{n} \gamma_k\, m_k^{p}\Bigr)^{1/2}\biggr) = o(\log d_n) \quad \text{a.s.}$$

Suppose now $p \leq \ell \leq 2p-1$. It is easy to obtain

$$|D_n(\ell)| = O\Bigl(\sum_{k=1}^{n} \gamma_k^{\ell/2}\, m_k^{p-\ell/2}\Bigr) \quad \text{a.s.}$$

Now, from the induction assumption, we get, for any integer $\ell \neq 2$,

$$D_n(\ell) = o(\log d_n) \quad \text{a.s.}$$

It remains to study $C_{n+1}(\ell)$. By Chow's lemma, we have the almost sure equality

$$C_{n+1}(\ell) = o(\nu_n(\ell)), \qquad \text{with} \qquad \nu_n(\ell) = \sum_{k=1}^{n} |b_k(\ell)|^{2p/\ell} = O\Bigl(\sum_{k=1}^{n} \gamma_k^{p}\, m_k^{(2p-\ell)p/\ell}\Bigr).$$

For $\ell > p$, we apply Hölder's inequality with exponents $\ell/p$ and $\ell/(\ell-p)$. Then $\nu_n(\ell) = o(\log d_n)$ a.s. In the particular case $p = \ell$, we get by the strong law of large numbers

$$|C_{n+1}(\ell)|^2 = O\bigl(\tau_n(p)\, \log \tau_n(p)\bigr) \qquad \text{with} \qquad \tau_n(p) = \sum_{k=1}^{n} b_k(p)^2 = O\Bigl(\sum_{k=1}^{n} \gamma_k^{p}\, m_k^{p}\Bigr),$$

which leads to $|C_{n+1}(\ell)| = o(\log d_n)$ a.s., since, from equation (2.30) of [17], $m_k^{p} = o\bigl((\log d_n)^{\delta}\bigr)$ for $0 < \delta < 1$. As for the last term $D_n(2)$, one needs to study, for $p \geq 3$, the quantity

$$\frac{D_n(2)}{\log d_n} = \frac{\sigma^2}{\log d_n} \sum_{k=1}^{n} \sum_{j=0}^{1} 2^{2-2j}\, C_p^{2-j}\, C_{2-j}^{j}\, \gamma_k^{j}\, \delta_k^{2-2j}\, m_k^{p-2+j} = \frac{2p(p-1)\,\sigma^2}{\log d_n} \sum_{k=1}^{n} \delta_k^2\, m_k^{p-2} + \frac{p\,\sigma^2}{\log d_n} \sum_{k=1}^{n} \gamma_k\, m_k^{p-1}.$$

It is easy to establish that

$$g_n^2 = \bigl(\Phi_n^t S_{n-1}^{-1} M_n\bigr)^2 = d^2\, \delta_n^2 + o(\gamma_n m_n). \tag{2.18}$$

Then, the induction assumption and Toeplitz's lemma imply

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} \delta_k^2\, m_k^{p-2} = \lim_{n\to\infty} \frac{1}{d^2 \log d_n} \sum_{k=1}^{n} a_k(1)\, m_k^{p-2} = \frac{\lambda(p-1)}{(p-1)\, d^{p}} \quad \text{a.s.}$$

Thus, we obtain

$$\lim_{n\to\infty} \frac{D_n(2)}{\log d_n} = \ell(p-1)\Bigl(\frac{2p(p-1)\,\sigma^2}{d^{p+1}} + \frac{p\,\sigma^2}{d^{p}}\Bigr) = \frac{p}{d^{p+1}}\,\ell(p) \quad \text{a.s.}$$

(this formula being also valid for $p = 2$), which proves (2.16). On the other hand, still applying equation (2.30) of [17], we derive the estimate $m_n^{p} = o(\log d_n)$. Thus,

$$\lim_{n\to\infty} \frac{1}{\log d_n}\, A_n(p) = \frac{p}{d^{p+1}}\,\ell(p) \quad \text{a.s.}$$

Since

$$\frac{\beta_n^{p} - \beta_{n-1}^{p}}{\beta_n^{p}} = \gamma_n \sum_{q=0}^{p-1} \Bigl(\frac{\beta_{n-1}}{\beta_n}\Bigr)^{p-1-q} \sim p\,\gamma_n \quad \text{a.s.}, \tag{2.19}$$

we finally get

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} \gamma_k\, m_k^{p} = \lim_{n\to\infty} \frac{1}{p \log d_n}\, A_n(p) = \frac{\ell(p)}{d^{p+1}},$$

and the proof of (2.10) is complete.

As for the second part of Lemma 2.2, i.e. equation (2.11), we could proceed along similar lines, via the equality

$$V_{n+1}^{p} = \sum_{k=0}^{p} \sum_{\ell=0}^{k} 2^{k-\ell}\, C_p^{k}\, C_k^{\ell}\, f_n^{\ell}\, g_n^{k-\ell}\, h_n^{p-k}\, \varepsilon_{n+1}^{k+\ell},$$

with $g_n = \Phi_n^t S_{n-1}^{-1} M_n$ and $h_n = M_n^t S_n^{-1} M_n$.

The proof of Theorem 2.1 is then completed by observing that relations (2.2) and (2.3) are direct consequences of (2.10) and (2.11), respectively. Indeed, since $\beta_n \sim d\,\alpha_n$, we have almost surely

$$V_n \sim d\, m_n \qquad \text{and} \qquad f_n \sim d\, \gamma_n,$$

hence convergence (2.10) immediately yields (2.2). Moreover, in the particular case $p = 1$, the second convergence (2.11) is exactly (2.3). Now, for $p \geq 2$, the elementary expansion

$$x^{p} - y^{p} = (x - y)\, x^{p-1} \sum_{q=0}^{p-1} \Bigl(\frac{y}{x}\Bigr)^{p-1-q} \tag{2.20}$$

leads to

$$a_n(p) = \bigl(M_n^t S_{n-1}^{-1} M_n\bigr)^{p} - \bigl(M_n^t S_n^{-1} M_n\bigr)^{p} = a_n(1)\, V_n^{p-1} \sum_{q=0}^{p-1} \Bigl(\frac{V_n - a_n(1)}{V_n}\Bigr)^{p-1-q}. \tag{2.21}$$

Riccati's formula yields

$$a_n(1) = (1 - f_n)\, M_n^t S_{n-1}^{-1} \Phi_n\, \Phi_n^t S_{n-1}^{-1} M_n \leq (1 - f_n)\, \operatorname{tr}\bigl(S_{n-1}^{-1/2} \Phi_n \Phi_n^t S_{n-1}^{-1/2}\bigr)\, V_n \leq f_n\, V_n,$$

hence $a_n(1) = o(V_n)$ and, by (2.21),

$$a_n(p) \sim p\, a_n(1)\, V_n^{p-1} \quad \text{a.s.},$$

so that finally

$$\lim_{n\to\infty} \frac{1}{\log d_n} \sum_{k=1}^{n} a_k(p) = \lim_{n\to\infty} \frac{p\, d^{p-1}}{\log d_n} \sum_{k=1}^{n} a_k(1)\, m_k^{p-1} = \lambda(p) \quad \text{a.s.} \tag{2.22}$$

In most of the statistical applications encountered so far, the convergence assumption (2.1) is satisfied. However, this technical hypothesis somehow circumvents the vector nature of the problem, which in its full generality is not yet solved. Indeed, (2.1) entails that all the eigenvalues of the matrix $S_n$ grow to infinity at the same speed $\alpha_n$, so that our method of proof has some features in common with the scalar case. We hope to be able to establish the following result, stated for the moment as a conjecture, without assuming (2.1).

Conjecture 2.3. Let $(\varepsilon_n)$ be a martingale difference sequence satisfying the homogeneity condition $E[\varepsilon_{n+1}^2 \mid \mathcal{F}_n] = \sigma^2$ a.s. and assumption $(\mathcal{H}_p)$ introduced in Theorem 2.1, for some integer $p \geq 1$. Then, we have

$$\sum_{k=1}^{n} f_k \bigl(M_k^t S_{k-1}^{-1} M_k\bigr)^{p} = O(\log d_n) \quad \text{a.s.},$$

$$\sum_{k=1}^{n} \Bigl[\bigl(M_k^t S_{k-1}^{-1} M_k\bigr)^{p} - \bigl(M_k^t S_k^{-1} M_k\bigr)^{p}\Bigr] = O(\log d_n) \quad \text{a.s.}$$
