DOI:10.1051/ps:2008038 www.esaim-ps.org
ALMOST SURE FUNCTIONAL LIMIT THEOREM FOR THE PRODUCT OF PARTIAL SUMS
Khurelbaatar Gonchigdanzan
Abstract. We prove an almost sure functional limit theorem for the product of partial sums of i.i.d. positive random variables with finite second moment.
Mathematics Subject Classification. 60F05, 60F15.
Received March 25, 2008. Revised August 10, 2008.
1. Introduction and main result
Limiting distributions of the product of partial sums of positive random variables have been widely studied in recent years. Arnold and Villaseñor [1] proved the limit theorem for the partial sums of a sequence of exponential random variables. Rempala and Wesolowski [4] proved it for any independent and identically distributed (i.i.d.) random variables with finite variance. Later, Qi [5] considered a sequence of random variables with an α-stable distribution and established the limit distribution of the product of the partial sums when 1 < α ≤ 2.
Recently, Zhang and Huang [6] proved the following invariance principle for the product of partial sums of i.i.d. positive random variables with mean $\mu > 0$ and variance $\sigma^2$:
$$\left(\prod_{k=1}^{[nt]} \frac{S_k}{\mu k}\right)^{\mu/(\sigma\sqrt{n})} \xrightarrow{\;D\;} \exp\left(\int_0^t \frac{W(s)}{s}\,\mathrm{d}s\right) \quad \text{as } n \to \infty, \tag{1.1}$$
where $W$ is a standard Wiener process.
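The $t = 1$ marginal of (1.1) is easy to probe numerically: the logarithm of the left-hand side is $(\mu/(\sigma\sqrt{n}))\sum_{k\le n}\log(S_k/(\mu k))$, and the logarithm of the limit, $\int_0^1 W(s)/s\,\mathrm{d}s$, is centered Gaussian with variance $2$. The following Monte Carlo sketch (not part of the paper; the sample sizes are arbitrary choices) checks the first two moments for exponential $X_k$, where $\mu = \sigma = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 2000
# X_k ~ Exp(1), so mu = sigma = 1
X = rng.exponential(1.0, size=(reps, n))
S = np.cumsum(X, axis=1)                 # partial sums S_1, ..., S_n along each path
k = np.arange(1, n + 1)
# log of the normalized product at t = 1: (1/sqrt(n)) * sum_{k<=n} log(S_k / k)
Z = np.sum(np.log(S / k), axis=1) / np.sqrt(n)
# the log of the limit exp(int_0^1 W(s)/s ds) is N(0, 2)
print(np.mean(Z), np.var(Z))             # roughly 0 and roughly 2
```

The empirical mean sits slightly below zero at finite $n$ (the Taylor correction of Lemma 2.2 below), while the empirical variance is close to $2$.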
The goal of this paper is to obtain an almost sure version of the above invariance principle, which also yields a functional version of the almost sure limit theorem obtained by Gonchigdanzan and Rempala [3]. Here is the result:
Theorem 1.1. Let $(X_k)_{k\ge 1}$ be a sequence of i.i.d. positive random variables with mean $\mu > 0$ and variance $\sigma^2$, and let $S_n = X_1 + \cdots + X_n$. Define a process $\{\xi_n(t) : 0 \le t \le 1\}$ by
$$\xi_n(t) := \left(\prod_{k=1}^{[nt]} \frac{S_k}{\mu k}\right)^{\mu/(\sigma\sqrt{n})}.$$
Let $F_t$ be the distribution function of the random variable on the right-hand side of (1.1). Then
$$\frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I(\xi_k(t) \le x) \xrightarrow{\text{a.s.}} F_t(x) \quad \text{as } n \to \infty \tag{1.2}$$
if and only if
$$\frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, P(\xi_k(t) \le x) \longrightarrow F_t(x) \quad \text{as } n \to \infty. \tag{1.3}$$

Corollary 1.1. Let $(X_k)_{k\ge 1}$ be a sequence of i.i.d. positive random variables with mean $\mu > 0$ and variance $\sigma^2$. Then we have
$$\frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\!\left(\left(\prod_{j=1}^{[kt]} \frac{S_j}{\mu j}\right)^{\mu/(\sigma\sqrt{k})} \le x\right) \xrightarrow{\text{a.s.}} F_t(x) \quad \text{as } n \to \infty.$$

Keywords and phrases. Almost sure limit theorem, functional theorem, invariance principle, product of partial sums.

1 Department of Mathematical Sciences, University of Wisconsin, Stevens Point, WI 54481, USA; hurlee@uwsp.edu

Article published by EDP Sciences © EDP Sciences, SMAI 2010
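Corollary 1.1 can be illustrated by simulation along a single path. At $t = 1$, (1.1) identifies $F_1$ with the lognormal distribution function $F_1(x) = \Phi(\log x/\sqrt{2})$. The sketch below (an illustration only; $n$, the evaluation point $x$, and the exponential choice of $X_j$ are arbitrary) computes the log-averaged indicator; convergence in log density is notoriously slow, so agreement is only rough at moderate $n$:

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(1)
n = 5000
X = rng.exponential(1.0, size=n)                  # mu = sigma = 1
S = np.cumsum(X)
k = np.arange(1, n + 1)
log_xi = np.cumsum(np.log(S / k)) / np.sqrt(k)    # log xi_k(1) for k = 1..n
x = 1.0                                           # evaluation point
# (1/log n) * sum_{k<=n} (1/k) I(xi_k(1) <= x)
avg = np.sum((log_xi <= log(x)) / k) / log(n)
# F_1(x) = Phi(log x / sqrt(2)); Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
F1 = 0.5 * (1.0 + erf(log(x) / (sqrt(2.0) * sqrt(2.0))))
print(avg, F1)
```

For $x = 1$ the target value is $F_1(1) = 1/2$; the one-path log-average fluctuates around it with an error of order $1/\sqrt{\log n}$.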
2. Auxiliary results and proofs
Throughout the paper, $\log x$ and $\log\log x$ stand for $\ln(e \vee x)$ and $\ln\ln(x \vee e^e)$ respectively, and "$\ll$" is meant for the big "O" notation.
2.1. Auxiliary results
Lemma 2.1. Let $(Y_n)_{n\ge 1}$ be a sequence of random variables and set $S_n = Y_1 + \cdots + Y_n$. Then we have
$$E\max_{1\le k\le n}\left|\sum_{j=1}^{k} \log\frac{n+1}{j}\, Y_j\right| \le 3\log(n+1)\, E\max_{1\le k\le n} |S_k|.$$
Proof. Observe that
$$\left|\sum_{j=1}^{k} \log\frac{n+1}{j}\, Y_j\right| \le \left|\sum_{j=1}^{k} \log(n+1)\, Y_j\right| + \left|\sum_{j=2}^{k} \log j\, Y_j\right| = T_1 + T_2.$$
Obviously $T_1 \le \log(n+1)\,|S_k|$, which implies
$$\max_{1\le k\le n} T_1 \le \log(n+1) \max_{1\le k\le n} |S_k|.$$
For the second term $T_2$, summation by parts gives
$$\max_{2\le k\le n} T_2 = \max_{2\le k\le n}\left|\sum_{j=2}^{k} \log j\, Y_j\right| = \max_{2\le k\le n}\left|\sum_{j=2}^{k} (Y_j + Y_{j+1} + \cdots + Y_k)(\log j - \log(j-1))\right|$$
$$\le \max_{2\le k\le n} \sum_{j=2}^{k} |Y_j + Y_{j+1} + \cdots + Y_k|\,(\log j - \log(j-1)) \le 2\max_{1\le k\le n} |Y_1 + Y_2 + \cdots + Y_k| \sum_{j=2}^{n} (\log j - \log(j-1)) \le 2\log(n+1) \max_{1\le k\le n} |S_k|.$$
Combining the two bounds completes the proof. □
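The argument above is in fact pathwise: for every realization, $\max_k\left|\sum_{j\le k}\log\frac{n+1}{j}\,Y_j\right| \le 3\log(n+1)\max_k|S_k|$ before taking expectations. A quick numerical check of this (an illustration only, with standard normal $Y_j$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 1000
Y = rng.standard_normal((reps, n))
j = np.arange(1, n + 1)
coeff = np.log((n + 1) / j)                              # log((n+1)/j)
# max_k |sum_{j<=k} log((n+1)/j) Y_j| on each path
lhs = np.abs(np.cumsum(coeff * Y, axis=1)).max(axis=1)
# 3 log(n+1) max_k |S_k| on each path
rhs = 3 * np.log(n + 1) * np.abs(np.cumsum(Y, axis=1)).max(axis=1)
print(np.all(lhs <= rhs))  # True: the bound holds on every path
```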
Lemma 2.2. Let $(X_k)_{k\ge 1}$ be a sequence of i.i.d. positive random variables with mean $\mu$ and variance $\sigma^2$. Then, setting $S_n = X_1 + \cdots + X_n$, we have
$$\max_{0\le t\le 1}\left|\frac{\mu}{\sigma\sqrt{n}} \sum_{k=1}^{[nt]} \log\frac{S_k}{\mu k} - \frac{\mu}{\sigma\sqrt{n}} \sum_{k=1}^{[nt]} \left(\frac{S_k}{\mu k} - 1\right)\right| \xrightarrow{\text{a.s.}} 0 \quad \text{as } n \to \infty.$$
Proof. Note that $\log(x+1) = x - r(x)$ where $r(x)/x^2 \to \frac{1}{2}$ as $x \to 0$. By the strong law of large numbers we have $S_k/k - \mu \xrightarrow{\text{a.s.}} 0$ as $k \to \infty$. Thus, by the law of the iterated logarithm, we get
$$\max_{0\le t\le 1}\left|\frac{\mu}{\sigma\sqrt{n}} \sum_{k=1}^{[nt]} \log\frac{S_k}{\mu k} - \frac{\mu}{\sigma\sqrt{n}} \sum_{k=1}^{[nt]} \left(\frac{S_k}{\mu k} - 1\right)\right| \overset{\text{a.s.}}{\ll} \frac{1}{\sigma\sqrt{n}} \sum_{k=1}^{n} \left(\frac{S_k}{k} - \mu\right)^2 \overset{\text{a.s.}}{\ll} \frac{1}{\sigma\sqrt{n}} \sum_{k=1}^{n} \frac{\log\log k}{k} \ll \frac{1}{\sigma\sqrt{n}}\, \log n \log\log n \to 0.$$
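The Taylor remainder used at the start of the proof, $r(x) = x - \log(x+1)$, is easy to check numerically; $r(x)/x^2 \to 1/2$ as $x \to 0$:

```python
import numpy as np

x = np.array([1e-1, 1e-2, 1e-3, 1e-4])
r = x - np.log1p(x)        # remainder r(x) = x - log(x + 1)
print(r / x**2)            # tends to 0.5 as x -> 0
```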
2.2. Proof of Theorem 1.1
We use the technique of Berkes and Dehling [2] to prove our theorem. Observe that
$$\sum_{k=1}^{n} \left(\frac{S_k}{k} - \mu\right) = \sum_{k=1}^{n} b_{k,n}(X_k - \mu),$$
where $b_{k,n} = \sum_{j=k}^{n} 1/j$. Hence, by Lemma 2.2, it suffices to show that for any Borel subset $A$ of $D[0,1]$,
$$\frac{1}{\log n} \sum_{k=2}^{n} \frac{1}{k}\left[I\!\left(\frac{\hat{s}_k}{\sigma\sqrt{k}} \in A\right) - P\!\left(\frac{\hat{s}_k}{\sigma\sqrt{k}} \in A\right)\right] \xrightarrow{\text{a.s.}} 0 \quad \text{as } n \to \infty, \tag{2.1}$$
where $\hat{s}_n = \sum_{i=1}^{[nt]} b_{i,[nt]}(X_i - \mu)$. By Berkes and Dehling [2] (p. 1647), to prove (2.1) it suffices to show that for any bounded Lipschitz function $f$ on $D[0,1]$ we have
$$\frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\,\zeta_k \xrightarrow{\text{a.s.}} 0 \quad \text{as } n \to \infty, \tag{2.2}$$
where $\zeta_k = f(\hat{s}_k/(\sigma\sqrt{k})) - Ef(\hat{s}_k/(\sigma\sqrt{k}))$. In fact, the following implies (2.2) (see Berkes and Dehling [2], p. 1648, for the proof):
$$E\left(\sum_{k=1}^{n} \frac{1}{k}\,\zeta_k\right)^2 \ll \log^2 n\, (\log\log n)^{-\varepsilon} \quad \text{for some } \varepsilon > 0. \tag{2.3}$$
Therefore, showing (2.3) is sufficient for the proof of Theorem 1.1. Observing that, for $l \ge k$,
$$\hat{s}_l - \hat{s}_k = b_{[kt]+1,[lt]}(S_{[kt]} - [kt]\mu) + b_{[kt]+1,[lt]}(X_{[kt]+1} - \mu) + \cdots + b_{[lt],[lt]}(X_{[lt]} - \mu),$$
we find that $\hat{s}_l - \hat{s}_k - b_{[kt]+1,[lt]}(S_{[kt]} - \mu[kt])$ is independent of $\hat{s}_k$, which implies
$$\mathrm{Cov}\left(f\!\left(\frac{\hat{s}_k}{\sigma\sqrt{k}}\right),\, f\!\left(\frac{\hat{s}_l - \hat{s}_k - b_{[kt]+1,[lt]}(S_{[kt]} - \mu[kt])}{\sigma\sqrt{l}}\right)\right) = 0 \quad \text{for } l \ge k.$$
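As an aside, the summation-by-parts identity $\sum_{k=1}^{n}(S_k/k - \mu) = \sum_{k=1}^{n} b_{k,n}(X_k - \mu)$ opening this proof holds exactly for every sample, which is easy to verify directly (illustration only; the distribution and parameters below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu = 100, 2.0
X = rng.exponential(mu, size=n)            # positive i.i.d. with mean mu
S = np.cumsum(X)
k = np.arange(1, n + 1)
b = np.cumsum((1.0 / k)[::-1])[::-1]       # b_{k,n} = sum_{j=k}^{n} 1/j
lhs = np.sum(S / k - mu)
rhs = np.sum(b * (X - mu))
print(np.isclose(lhs, rhs))  # True: the identity is exact
```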
By the Lipschitz property of $f$,
$$E(\zeta_k\zeta_l) \ll \left|\mathrm{Cov}\left(f\!\left(\frac{\hat{s}_k}{\sigma\sqrt{k}}\right),\, f\!\left(\frac{\hat{s}_l}{\sigma\sqrt{l}}\right) - f\!\left(\frac{\hat{s}_l - \hat{s}_k - b_{[kt]+1,[lt]}(S_{[kt]} - \mu[kt])}{\sigma\sqrt{l}}\right)\right)\right|$$
$$\ll E\max_{0\le t\le 1} \frac{|\hat{s}_k + b_{[kt]+1,[lt]}(S_{[kt]} - \mu[kt])|}{\sigma\sqrt{l}} \le E\max_{0\le t\le 1} \frac{|\hat{s}_k|}{\sigma\sqrt{l}} + E\max_{0\le t\le 1} \frac{|b_{[kt]+1,[lt]}(S_{[kt]} - \mu[kt])|}{\sigma\sqrt{l}}$$
$$\ll \left(\frac{k}{l}\right)^{1/2} E\max_{0\le t\le 1} \frac{|\hat{s}_k|}{\sigma\sqrt{k}} + \left(\frac{k}{l}\right)^{1/2} E\max_{0\le t\le 1} \frac{b_{[kt]+1,[lt]}\,|S_{[kt]} - \mu[kt]|}{\sigma\sqrt{k}}\,.$$
Since $\max_{0\le t\le 1} b_{[kt]+1,[lt]} \ll \log(l/k) \ll (l/k)^{\gamma}$ (we choose $0 < \gamma < 1/2$),
$$E(\zeta_k\zeta_l) \ll \left(\frac{k}{l}\right)^{1/2} E\max_{0\le t\le 1}\left|\frac{1}{\sigma\sqrt{k}} \sum_{i=1}^{[kt]} b_{i,k}(X_i - \mu)\right| + \left(\frac{k}{l}\right)^{1/2-\gamma} E\max_{0\le t\le 1} \frac{|S_{[kt]} - \mu[kt]|}{\sigma\sqrt{k}}$$
$$= \left(\frac{k}{l}\right)^{1/2} E\max_{0\le j\le k}\left|\frac{1}{\sigma\sqrt{k}} \sum_{i=1}^{j} b_{i,k}(X_i - \mu)\right| + \left(\frac{k}{l}\right)^{1/2-\gamma} E\max_{0\le j\le k} \frac{|S_j - \mu j|}{\sigma\sqrt{k}} = M_1 + M_2.$$
Now, applying Lemma 2.1 to $M_1$, we obtain
$$E(\zeta_k\zeta_l) \ll \left(\frac{k}{l}\right)^{1/2} \log k\; E\max_{1\le j\le k}\left|\frac{1}{\sigma\sqrt{k}} \sum_{i=1}^{j} (X_i - \mu)\right| + \left(\frac{k}{l}\right)^{1/2-\gamma} E\max_{1\le j\le k} \frac{|S_j - \mu j|}{\sigma\sqrt{k}},$$
whence
$$E(\zeta_k\zeta_l) \ll \log k \left(\frac{k}{l}\right)^{1/2-\gamma} E\max_{1\le j\le k} \frac{|S_j - \mu j|}{\sigma\sqrt{k}} \ll \log k \left(\frac{k}{l}\right)^{\gamma'},$$
where $0 < \gamma' < 1/2 - \gamma$.
On the other hand, $E(\zeta_k\zeta_l) \ll 1$ because $\zeta_k$ is bounded. Hence we have the following estimate for $E(\zeta_k\zeta_l)$:
$$E(\zeta_k\zeta_l) \ll \begin{cases} 1, & \text{if } l/k \le \exp\!\big((\log n)^{1-\varepsilon}\big),\\ (k/l)^{\gamma'} \log k, & \text{if } l/k \ge \exp\!\big((\log n)^{1-\varepsilon}\big), \end{cases}$$
where $\varepsilon$ is any positive number. Hence,
$$\sum_{\substack{1\le k\le l\le n\\ l/k \le \exp((\log n)^{1-\varepsilon})}} \frac{E(\zeta_k\zeta_l)}{kl} \le \sum_{1\le k\le n} \frac{1}{k} \sum_{k\le l\le k\exp((\log n)^{1-\varepsilon})} \frac{1}{l} \ll \sum_{k=1}^{n} \frac{1}{k}\,(\log n)^{1-\varepsilon} \ll (\log n)^{2-\varepsilon} \tag{2.4}$$
and
and
1≤l≤k≤n l/k≥exp (logn)1−ε
E(ζkζl)
kl ≤e−γ(logn)1−εlogn
1≤k≤l≤n
1
kl e−γ(logn)1−εlog3nlog2−εn. (2.5)
Thus (2.4) and (2.5) immediately imply (2.3) which completes the proof of Theorem 1.1.
Acknowledgements. I am very thankful to the referees for their insightful comments. I would also like to express my gratitude to Prof. C. Endre for his encouragement.
References
[1] B.C. Arnold and J.A. Villaseñor, The asymptotic distribution of sums of records. Extremes 1 (1998) 351–363.
[2] I. Berkes and H. Dehling, Some limit theorems in log density. Ann. Probab. 21 (1993) 1640–1670.
[3] K. Gonchigdanzan and G. Rempala, A note on the almost sure limit theorem for the product of partial sums. Appl. Math. Lett. 19 (2006) 191–196.
[4] G. Rempala and J. Wesolowski, Asymptotics for products of sums and U-statistics. Electron. Commun. Probab. 7 (2002) 47–54.
[5] Y. Qi, Limit distributions for products of sums. Statist. Probab. Lett. 62 (2003) 93–100.
[6] L. Zhang and W. Huang, A note on the invariance principle of the product of sums of random variables. Electron. Commun. Probab. 12 (2007) 51–56.