On Piterbarg theorem for maxima of stationary Gaussian sequences

Lithuanian Mathematical Journal, Vol. 53, No. 3, July 2013, pp. 280–292.

Enkelejd Hashorva a,1, Zuoxiang Peng b,2, and Zhichao Weng a,3

a Department of Actuarial Science, Faculty of Business and Economics, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
b School of Mathematics and Statistics, Southwest University, 400715 Chongqing, PR China
(e-mail: enkelejd.hashorva@unil.ch; pzx@swu.edu.cn; zhichao.weng@unil.ch)

Received April 12, 2013; revised May 16, 2013.

Abstract. Limit distributions of maxima of dependent Gaussian sequences differ according to the convergence rate of their correlations. For three different conditions on the convergence rate of the correlations, we establish in this paper the Piterbarg theorem for maxima of stationary Gaussian sequences.

MSC: primary 60G70; secondary 60G10

Keywords: incomplete sample, joint limit distribution, maximum, stationary Gaussian sequence, weak and strong dependence, Piterbarg theorem.

1. Introduction

For a strictly stationary sequence $\{X_n, n \geq 1\}$, the seminal paper [10] derived joint limiting distributions of maxima of complete and incomplete samples. The sample is often incomplete since observations are missing, which is formalized by introducing a sequence of indicator random variables $\{\varepsilon_n, n \geq 1\}$, where $\{\varepsilon_n = 1\}$ means that $X_n$ is observed, whereas $\{\varepsilon_n = 0\}$ corresponds to the case where $X_n$ is missing. Throughout this paper, $\{\varepsilon_n, n \geq 1\}$ is independent of the stationary process $\{X_n, n \geq 1\}$. If $F$ denotes the common distribution function of all the $X_n$'s, then the maximum of the incomplete sample, $M_n(\varepsilon)$, $n \geq 1$, is defined by
\[
M_n(\varepsilon) =
\begin{cases}
\max\{X_j\colon \varepsilon_j = 1,\ j \leq n\} & \text{if } \sum_{j=1}^{n} \varepsilon_j \geq 1,\\[2pt]
\inf\{t\colon F(t) > 0\} & \text{otherwise.}
\end{cases}
\]

1 The author was partially supported by the Swiss National Science Foundation project 200021-1401633/1.
2 The author was supported by the National Natural Science Foundation of China under grant 11171275 and by the Natural Science Foundation Project of CQ under cstc2012jjA00029.
3 The author was partially supported by the Swiss National Science Foundation project 200021-134785 and by the project RARE-318984 (a Marie Curie International Research Staff Exchange Scheme Fellowship within the 7th European Community Framework Programme).

© 2013 Springer Science+Business Media New York
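The case distinction in the definition of $M_n(\varepsilon)$ can be transcribed directly. The following Python sketch is our illustration (names and defaults are ours, not code from the paper): it returns the maximum over the observed indices and falls back to the lower endpoint of the support, which is $-\infty$ for a Gaussian sample, when nothing is observed.

```python
import numpy as np

def incomplete_max(x, eps, lower=-np.inf):
    """Maximum of the observed part of the sample.

    x    : observations X_1, ..., X_n
    eps  : 0/1 indicators; eps[j] = 1 means X_j was observed
    lower: value used when no observation is available, playing the
           role of inf{t : F(t) > 0} (equal to -inf for Gaussian F)
    """
    x = np.asarray(x, dtype=float)
    eps = np.asarray(eps, dtype=int)
    if eps.sum() >= 1:
        return x[eps == 1].max()
    return lower

rng = np.random.default_rng(0)
x = rng.standard_normal(10)
eps = rng.integers(0, 2, size=10)
# M_n(eps) can never exceed the complete-sample maximum M_n
assert incomplete_max(x, eps) <= x.max()
assert incomplete_max(x, np.zeros(10, dtype=int)) == -np.inf
```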

It is well known (see, e.g., [3]) that if, for some norming constants $a_n > 0$ and $b_n \in \mathbb{R}$,
\[
\lim_{n\to\infty} F^n(a_n x + b_n) = G(x) \quad \forall x \in \mathbb{R} \quad (1.1)
\]
with $G$ being an extreme value distribution of some random variable $\mathcal{M}$, then
\[
\lim_{n\to\infty} \mathbb{P}\bigl\{a_n^{-1}(M_n - b_n) \leq x\bigr\} = G(x) \quad \forall x \in \mathbb{R} \quad (1.2)
\]
with $M_n = \max_{1\leq i\leq n} X_i$, under some additional weak dependence conditions. Assume that, for some constant $\mathcal{P} \in [0,1]$, the indicator random sequence $\{\varepsilon_n, n\geq1\}$ satisfies
\[
\frac{S_n}{n} := \frac{\sum_{i=1}^{n}\varepsilon_i}{n} \to \mathcal{P} \quad \text{in probability} \quad (1.3)
\]
as $n \to \infty$. The contribution [10] proved that, under the conditions $D(u_n, v_n)$ and $D'(u_n)$,
\[
\lim_{n\to\infty} \mathbb{P}\bigl\{M_n(\varepsilon) \leq u_n,\ M_n \leq v_n\bigr\} = H(\mathcal{P}, x, y) := G^{\mathcal{P}}(x)\, G^{1-\mathcal{P}}(y) \quad (1.4)
\]
for $x < y$, where $u_n := a_n x + b_n$ and $v_n := a_n y + b_n$; definitions of $D(u_n, v_n)$ and $D'(u_n)$ can be found in [5, 7] and [10]. For the case where (1.3) holds with $\mathcal{P}$ a random variable, [5] recently showed that (1.4) still holds with $H(\mathcal{P}, x, y) = \mathbb{E}(G^{\mathcal{P}}(x)G^{1-\mathcal{P}}(y))$.

A work closely related to [10] is the contribution [6], which considers a strongly dependent stationary Gaussian random sequence $\{X_n, n\geq1\}$ with correlation $r_n = \mathbb{E}(X_1 X_{n+1})$ such that
\[
\lim_{n\to\infty} r_n \ln n = \gamma \in [0, \infty). \quad (1.5)
\]
In this case, $F = \Phi$, the distribution function of an $N(0,1)$ random variable, and therefore (1.1) holds with norming constants $a_n$ and $b_n$ given by
\[
a_n = \frac{1}{\sqrt{2\ln n}}, \qquad b_n = \sqrt{2\ln n} - \frac{\ln\ln n + \ln 4\pi}{2\sqrt{2\ln n}}. \quad (1.6)
\]
Assuming that (1.5) holds, we have (see [6, 9, 14])
\[
\lim_{n\to\infty} \mathbb{P}\bigl\{a_n^{-1}(M_n - b_n) \leq x\bigr\} = \mathbb{E}\Bigl(\exp\bigl(-\exp(-x-\gamma+\sqrt{2\gamma}\,W)\bigr)\Bigr), \quad (1.7)
\]
where $W$ is an $N(0,1)$ random variable. Clearly, when $\gamma = 0$, the limit distribution on the right-hand side of (1.7) is the Gumbel distribution $\Lambda(x) = \exp(-\exp(-x))$, $x \in \mathbb{R}$, as shown in the seminal paper [1]. Commonly, the case $\gamma = 0$ is referred to as the case of weak dependence, since the limit distribution of the maxima of the stationary process is the same as that of an iid sequence with underlying distribution function $\Phi$. We expect that for this case (1.4) again holds under the general settings of [5]; this is confirmed in the next section. If $\gamma > 0$, the limiting distribution of the maxima is a mixture distribution different from the Gumbel distribution, and thus, for that case, we cannot use the result of [5]. To overcome this difficulty, we borrow some ideas from [6], which cover the strong dependence case.

In the seminal paper [13], Piterbarg considered the joint approximation of the maximum of a stationary Gaussian process over a discrete and a continuous grid of points. The results in [6] and [10] are motivated by the ideas and techniques developed in that paper. Therefore, in this contribution we refer to joint limit distributions of maxima of complete and incomplete samples such as (1.4) as the Piterbarg theorem.
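The norming constants (1.6) and the Gumbel limit in (1.2) for the weakly dependent case can be checked by simulation. The sketch below is our code, not the authors'; the sample sizes and tolerance are arbitrary choices, and convergence in (1.2) is known to be slow for Gaussian maxima, hence the loose bound.

```python
import numpy as np

def norming_constants(n):
    # a_n and b_n from (1.6) for the standard Gaussian df Phi
    ln = np.log(n)
    a = 1.0 / np.sqrt(2.0 * ln)
    b = np.sqrt(2.0 * ln) - (np.log(ln) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(2.0 * ln))
    return a, b

rng = np.random.default_rng(1)
n, reps = 2000, 4000
a, b = norming_constants(n)
m = rng.standard_normal((reps, n)).max(axis=1)  # iid maxima M_n
z = (m - b) / a                                 # normalized maxima
x = 1.0
empirical = (z <= x).mean()
gumbel = np.exp(-np.exp(-x))                    # Lambda(x)
# the approach to the Gumbel law has rate only O(1/ln n)
assert abs(empirical - gumbel) < 0.1
```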

In this paper, we are concerned only with stationary Gaussian sequences, assuming that (1.5) holds with $\gamma \in [0,\infty]$. For a weakly dependent stationary Gaussian sequence $\{X_n, n\geq1\}$, i.e., when condition (1.5) (the so-called Berman condition) holds with $\gamma = 0$, both conditions $D(u_n,v_n)$ and $D'(u_n)$ hold with the choice of constants $a_n$ and $b_n$ given by (1.6). Therefore, in view of [5], the Piterbarg theorem is valid (see Section 2). Our main results show that the Piterbarg theorem also holds for the general case $\gamma \in [0,\infty]$. Furthermore, we generalize the recent findings of [16], which are motivated by [6]. For some related work on the asymptotic behavior of extremes of Gaussian sequences, see [2, 11]. Brief organization of the rest of the paper: Section 2 presents the main results; their proofs are relegated to Section 3.

2. Main results

In the sequel, let $\{X_n, n\geq1\}$ be a standard stationary Gaussian sequence with underlying distribution function $\Phi$ and correlations $\{r_n, n\geq1\}$ satisfying the dependence condition (1.5) with $\gamma\in[0,\infty]$. As in the Introduction, in order to derive the Piterbarg theorem, we further assume that the indicator random variables $\{\varepsilon_n, n\geq1\}$ are independent of the Gaussian sequence and that (1.3) holds. Our first result is closely related to the result of [5].

Theorem 1. Suppose that the stationary Gaussian sequence $\{X_n, n\geq1\}$ is independent of the indicator sequence $\{\varepsilon_n, n\geq1\}$. If, further, (1.3) holds with some random variable $\mathcal P$, then, under the Berman condition,
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{M_n(\varepsilon)\leq a_n x + b_n,\ M_n \leq a_n y + b_n\bigr\} = \mathbb{E}\bigl(\Lambda^{\mathcal P}(x)\Lambda^{1-\mathcal P}(y)\bigr)
\]
for all real $x, y$ with $x<y$ and constants $a_n$ and $b_n$ given by (1.6).

Note, in passing, that the Berman condition implies the convergence of the sample maxima $\{M_n, n\geq1\}$ (after normalization) to a unit Gumbel random variable, whereas the above result implies that
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{M_n(\varepsilon)\leq a_n x + b_n\bigr\} = \mathbb{E}\bigl(\Lambda^{\mathcal P}(x)\bigr) \quad \forall x\in\mathbb{R},
\]
and thus, we have the joint convergence in distribution
\[
\Bigl(\frac{M_n(\varepsilon)-b_n}{a_n},\ \frac{M_n-b_n}{a_n}\Bigr) \xrightarrow{d} (\widetilde{\mathcal M}, \mathcal M), \quad n\to\infty, \quad (2.1)
\]
where $(\widetilde{\mathcal M},\mathcal M)$ has joint distribution function $H(\mathcal P, x, y)$ defined by
\[
H(\mathcal P, x, y) = \begin{cases} \mathbb{E}\bigl(\Lambda^{\mathcal P}(x)\Lambda^{1-\mathcal P}(y)\bigr) & \text{if } x < y,\\ \Lambda(y) & \text{otherwise.}\end{cases}
\]
In [6], and later in [16], the limit of $\mathbb{P}\{M_n - M_n(\varepsilon)\leq a_n x,\ M_n(\varepsilon)-b_n\leq a_n y\}$ as $n\to\infty$ is derived for any $x>0$ and $y\in\mathbb{R}$. By the continuous mapping theorem, the joint convergence in (2.1) thus implies the following corollary, which generalizes Theorem 1, Corollary 1, and Theorem 2 in [16].

Corollary 1. Under the assumptions of Theorem 1, we have the joint convergence in distribution
\[
\Bigl(\frac{M_n - M_n(\varepsilon)}{a_n},\ \frac{M_n(\varepsilon)-b_n}{a_n}\Bigr) \xrightarrow{d} (\mathcal M - \widetilde{\mathcal M},\ \widetilde{\mathcal M}), \quad n\to\infty. \quad (2.2)
\]

Next, we consider strongly dependent Gaussian sequences.
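Before turning to the strongly dependent case, Theorem 1 can be illustrated in the simplest setting it covers: iid standard Gaussians (so the Berman condition holds trivially) and $\varepsilon_j$ that are iid Bernoulli($\mathcal P$) given a uniformly distributed $\mathcal P$, so that (1.3) holds. The Monte Carlo sketch below is our code, with arbitrary parameters; it compares the joint empirical probability with $\mathbb{E}(\Lambda^{\mathcal P}(x)\Lambda^{1-\mathcal P}(y))$ for $\mathcal P\sim U(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 2000, 3000
ln = np.log(n)
a = 1.0 / np.sqrt(2.0 * ln)
b = np.sqrt(2.0 * ln) - (np.log(ln) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(2.0 * ln))
x, y = 0.0, 1.0                      # x < y as in Theorem 1

hits = 0
ps = rng.uniform(size=reps)          # random P ~ U(0,1), one per replication
for p in ps:
    sample = rng.standard_normal(n)
    eps = rng.random(n) < p          # S_n / n -> P in probability
    mn = sample.max()
    mn_eps = sample[eps].max() if eps.any() else -np.inf
    if mn_eps <= a * x + b and mn <= a * y + b:
        hits += 1
empirical = hits / reps

# limit E(Lambda^P(x) Lambda^(1-P)(y)) for P ~ U(0,1), via a grid average
lam = lambda t: np.exp(-np.exp(-t))
pgrid = np.linspace(0.0, 1.0, 1001)
limit = (lam(x) ** pgrid * lam(y) ** (1 - pgrid)).mean()
# loose tolerance: both finite-n bias and Monte Carlo noise are present
assert abs(empirical - limit) < 0.1
```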

Theorem 2. Suppose that the stationary Gaussian sequence $\{X_n, n\geq1\}$ is independent of the indicator sequence $\{\varepsilon_n, n\geq1\}$ and (1.3) holds with some random variable $\mathcal P$. If, further, (1.5) holds with $\gamma\in(0,\infty)$, then, for all real $x,y$ with $x<y$, we have
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{M_n(\varepsilon)\leq a_nx+b_n,\ M_n\leq a_ny+b_n\bigr\} = H(\mathcal P,x,y),
\]
where $a_n$ and $b_n$ are given by (1.6), and
\[
H(\mathcal P,x,y) = \mathbb{E}\biggl(\int_{-\infty}^{+\infty}\exp\bigl(-\mathcal P\exp(-x-\gamma+\sqrt{2\gamma}\,z) - (1-\mathcal P)\exp(-y-\gamma+\sqrt{2\gamma}\,z)\bigr)\,d\Phi(z)\biggr). \quad (2.3)
\]

Clearly, the above result can be stated as a joint convergence in distribution: (2.1) again holds with $(\widetilde{\mathcal M},\mathcal M)$ having joint distribution function $H(\mathcal P,x,y)$ given by (2.3) for all $x<y$, while $H(\mathcal P,x,y)$ equals the right-hand side of (1.7) for $x\geq y$. Consequently, we obtain the following result, which extends Theorem 3 in [16], where $\mathcal P$ is considered to be a constant.

Corollary 2. Under the assumptions of Theorem 2, the joint convergence in distribution in (2.2) holds, where $(\widetilde{\mathcal M},\mathcal M)$ has the joint distribution function $H(\mathcal P,x,y)$ given by (2.3).

Remark 1. In view of [6] (see also [12]), condition (1.5) can be slightly relaxed in the case $\gamma\in(0,\infty)$: the result of Theorem 2 is still valid under the weaker condition stated in Eq. (5) of [6]. It is possible to have joint convergence of the complete- and incomplete-sample maxima even if condition $D'(u_n)$ is not satisfied; see [10]. As shown therein for a stationary non-Gaussian process related to the storage process, both maxima are completely dependent, a result expected in view of the findings of [4].

In Theorem 3 below, we investigate strongly dependent Gaussian sequences satisfying (1.5) with $\gamma=\infty$. We first recall a result known in the literature for the convergence of the sample maxima. Namely, under the conditions

(i) $r_n$ is convex with $r_n = o(1)$, \quad (2.4)

(ii) $(r_n\ln n)^{-1}$ is monotone with $(r_n\ln n)^{-1} = o(1)$ \quad (2.5)

as $n\to\infty$, [8] and [9] showed that, with $\tilde b_n := (1-r_n)^{1/2} b_n$,
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{M_n - \tilde b_n \leq \sqrt{r_n}\,x\bigr\} = \Phi(x)\quad\forall x\in\mathbb{R}. \quad (2.6)
\]
An extension of (2.6) to the Piterbarg max-discretization theorem is given in Corollary 2.2 of [15]. Our last result extends the above convergence as follows.

Theorem 3. Suppose that the stationary Gaussian sequence $\{X_n, n\geq1\}$ is independent of the indicator sequence $\{\varepsilon_n, n\geq1\}$ and (1.3) is valid with some random variable $\mathcal P\in(0,1]$. If (2.4) and (2.5) hold, then, for all real $x,y$, we have
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{M_n(\varepsilon) - \tilde b_n \leq \sqrt{r_n}\,x,\ M_n - \tilde b_n \leq \sqrt{r_n}\,y\bigr\} = \Phi\bigl(\min(x,y)\bigr),
\]
where $\tilde b_n := (1-r_n)^{1/2}b_n$ with $b_n$ given by (1.6).
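The limit (2.3) is an integral against $d\Phi(z)$ and can be evaluated numerically, for instance by Gauss–Hermite quadrature. The sketch below is our code, not the authors'; it treats a constant $\mathcal P = p$ (an expectation over a random $\mathcal P$ would be an outer average over draws of $p$). As a sanity check, letting $\gamma\to0$ should recover the weakly dependent limit $\Lambda^{p}(x)\Lambda^{1-p}(y)$ of Theorem 1.

```python
import numpy as np

def H(p, x, y, gamma, deg=80):
    """Numerical value of (2.3) for a constant P = p (a sketch)."""
    # Gauss-Hermite: int f(z) dPhi(z) = sum_i (w_i / sqrt(pi)) f(sqrt(2) t_i)
    t, w = np.polynomial.hermite.hermgauss(deg)
    z = np.sqrt(2.0) * t
    inner = np.exp(-p * np.exp(-x - gamma + np.sqrt(2.0 * gamma) * z)
                   - (1.0 - p) * np.exp(-y - gamma + np.sqrt(2.0 * gamma) * z))
    return float(np.sum(w * inner) / np.sqrt(np.pi))

lam = lambda t: np.exp(-np.exp(-t))
# gamma -> 0 recovers the weakly dependent limit Lambda^p(x) Lambda^(1-p)(y)
assert abs(H(0.5, 0.0, 1.0, 1e-12) - lam(0.0) ** 0.5 * lam(1.0) ** 0.5) < 1e-6
# H is a distribution function in each argument, so it lies in [0, 1]
# and increases when both arguments increase
assert 0.0 <= H(0.3, -1.0, 0.0, 0.5) <= H(0.3, 0.0, 0.5, 0.5) <= 1.0
```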

Again, we can cast the above result in the framework of joint convergence in distribution stated in (2.1), namely,
\[
\Bigl(\frac{M_n(\varepsilon)-\tilde b_n}{\sqrt{r_n}},\ \frac{M_n-\tilde b_n}{\sqrt{r_n}}\Bigr)\xrightarrow{d}(W,W),\quad n\to\infty, \quad (2.7)
\]
where $W$ has the $N(0,1)$ distribution. Consequently, in the language of [6], we have
\[
\Bigl(\frac{M_n(\varepsilon)-\tilde b_n}{\sqrt{r_n}},\ \frac{M_n-M_n(\varepsilon)}{\sqrt{r_n}}\Bigr)\xrightarrow{d}(W,0),\quad n\to\infty. \quad (2.8)
\]

3. Further results and proofs

In order to prove the main theorems, we need some auxiliary results. We borrow the following notation from [5]. Let $u_n(x) = a_nx+b_n$, $x\in\mathbb{R}$, and let $\alpha=\{\alpha_n, n\geq1\}$ be a nonrandom sequence taking values in $\{0,1\}$. For fixed $k$, let
\[
K_s = \bigl\{j\colon (s-1)m+1\leq j\leq sm\bigr\},\quad 1\leq s\leq k,
\]
where $m=[n/k]$. Further, for a random variable $\mathcal P$ such that $0\leq\mathcal P\leq1$ a.s., set
\[
B_{t,k} = \begin{cases}\bigl\{\omega\colon \mathcal P(\omega)\in[0,\,2^{-k}]\bigr\}, & t=0,\\[2pt] \bigl\{\omega\colon \mathcal P(\omega)\in\bigl(\tfrac{t}{2^k},\,\tfrac{t+1}{2^k}\bigr]\bigr\}, & 0<t\leq 2^k-1,\end{cases}
\]
and define
\[
B_{t,k,\alpha,n} = \bigl\{\omega\colon \varepsilon_j(\omega)=\alpha_j,\ 1\leq j\leq n\bigr\}\cap B_{t,k}.
\]
In the following, $C$ is a positive constant whose value may change from line to line; we thus use $C$ to omit the $O(1)$ notation.

Lemma 1. Let $\{X_n^*, n\geq1\}$ be a sequence of independent standard Gaussian random variables. Suppose that, for large $n$, the positive integers $l$ are such that $k<l<m=[n/k]$ and $l=o(n)$. Then, for $x<y$,
\[
\Bigl|\mathbb{P}\bigl\{M_n^*(\alpha)\leq u_n(x),\ M_n^*\leq u_n(y)\bigr\} - \prod_{s=1}^{k}\mathbb{P}\bigl\{M^*(K_s,\alpha)\leq u_n(x),\ M^*(K_s)\leq u_n(y)\bigr\}\Bigr| \leq (4k+2)\,l\,\bigl(1-\Phi(u_n(x))\bigr)
\]
uniformly for all $\alpha\in\{0,1\}^n$, where $M_n^* = \max\{X_j^*, 1\leq j\leq n\}$, $M^*(K_s)=\max\{X_j^*, j\in K_s\}$,
\[
M^*(K_s,\alpha)=\begin{cases}\max\{X_j^*\colon \alpha_j=1,\ j\in K_s\} & \text{if } \sum_{j\in K_s}\alpha_j\geq1,\\ -\infty & \text{otherwise,}\end{cases}
\]
and
\[
M_n^*(\alpha)=\begin{cases}\max\{X_j^*\colon \alpha_j=1,\ 1\leq j\leq n\} & \text{if } \sum_{j=1}^{n}\alpha_j\geq1,\\ -\infty & \text{otherwise.}\end{cases}
\]

Proof. First, we classify $km$ integers into $2k$ consecutive intervals as follows. For large $n$, let $l$ be integers such that $k<l<m$ and $l=o(n)$. Write
\[
I_s = \bigl\{(s-1)m+1,\ldots,sm-l\bigr\},\qquad J_s=\{sm-l+1,\ldots,sm\}
\]

for $1\leq s\leq k$, and set
\[
I_{k+1} = \bigl\{(k-1)m+l+1,\ldots,km\bigr\},\qquad J_{k+1}=\{km+1,\ldots,km+l\}.
\]
Since $\{X_n^*, n\geq1\}$ are independent, using arguments similar to the proof of Lemma 4.3 in [10], we obtain the desired result. $\square$

Lemma 2. Under the conditions of Theorem 2, for $x<y$, we have
\[
\Bigl|\mathbb{P}\bigl\{M_n(\alpha)\leq u_n(x),\ M_n\leq u_n(y)\bigr\} - \int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^*(\alpha)\leq v_n(x,z),\ M_n^*\leq v_n(y,z)\bigr\}\,d\Phi(z)\Bigr| \leq Cn\sum_{k=1}^{n}|r_k-\rho_n|\exp\Bigl(-\frac{u_n^2(x)}{1+w_k}\Bigr)
\]
uniformly for all $\alpha\in\{0,1\}^n$, where $v_n(x,z)=(1-\rho_n)^{-1/2}(u_n(x)-\rho_n^{1/2} z)$, $\rho_n=\gamma/\ln n$, $w_k=\max\{|r_k|,\rho_n\}$, and $C$ is some positive constant.

Proof. Let $\{\xi_{n,k}, 1\leq k\leq n, n\geq1\}$ be a triangular array of standard Gaussian random variables with correlation $\rho_n=\gamma/\ln n$, and define
\[
M_n^{\xi}(\alpha)=\begin{cases}\max\{\xi_{n,j}\colon \alpha_j=1,\ 1\leq j\leq n\} & \text{if }\sum_{j=1}^n\alpha_j\geq1,\\ -\infty & \text{otherwise,}\end{cases}
\]
and $M_n^{\xi}=\max\{\xi_{n,k}, 1\leq k\leq n\}$. By Berman's inequality (see, e.g., [7] or [12]),
\[
\bigl|\mathbb{P}\bigl\{M_n(\alpha)\leq u_n(x),\ M_n\leq u_n(y)\bigr\} - \mathbb{P}\bigl\{M_n^{\xi}(\alpha)\leq u_n(x),\ M_n^{\xi}\leq u_n(y)\bigr\}\bigr| \leq Cn\sum_{k=1}^n|r_k-\rho_n|\exp\Bigl(-\frac{u_n^2(x)}{1+w_k}\Bigr)
\]
uniformly for all $\alpha\in\{0,1\}^n$, where $w_k=\max\{|r_k|,\rho_n\}$. According to the proof of Theorem 6.5.1 in [7],
\[
\mathbb{P}\bigl\{M_n^{\xi}(\alpha)\leq u_n(x),\ M_n^{\xi}\leq u_n(y)\bigr\} = \int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^*(\alpha)\leq v_n(x,z),\ M_n^*\leq v_n(y,z)\bigr\}\,d\Phi(z),
\]
where $v_n(x,z)=(1-\rho_n)^{-1/2}(u_n(x)-\rho_n^{1/2}z)$, and thus the claim follows. $\square$

Lemma 3. Let $\{Y_{n,k}, 1\leq k\leq n, n\geq1\}$ be a triangular array of standard Gaussian sequences with correlations $\rho_k=(r_k-r_n)/(1-r_n)$, where $r_k$ satisfies (2.4) and (2.5) for $k=1,2,\ldots,n$. Suppose that $\{Y_{n,k}, 1\leq k\leq n, n\geq1\}$ is independent of the indicator sequence $\{\varepsilon_n, n\geq1\}$. If, further, (1.3) holds with some random variable $\mathcal P\in(0,1]$, then, for all $\delta>0$, we have
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{M_n^Y(\varepsilon)\leq b_n-\delta r_n^{1/2}\bigr\}=0,
\]

where
\[
M_n^Y(\varepsilon)=\begin{cases}\max\{Y_{n,j}\colon\varepsilon_j=1,\ 1\leq j\leq n\} & \text{if }\sum_{j=1}^n\varepsilon_j\geq1,\\ -\infty & \text{otherwise.}\end{cases}
\]

Proof. Let $\{Z_{n,k}, 1\leq k\leq n, n\geq1\}$ and $\{W_{n,k}, 1\leq k\leq n, n\geq1\}$ be two triangular arrays of standard Gaussian sequences with correlations defined by
\[
\mathbb{E}(Z_{n,1}Z_{n,i+1})=\begin{cases}\rho_i, & 1\leq i\leq t(n),\\ \rho_{t(n)}, & i>t(n),\end{cases}
\qquad
\mathbb{E}(W_{n,1}W_{n,i+1})=\sigma_i=\begin{cases}\dfrac{\rho_i-\rho_{t(n)}}{1-\rho_{t(n)}}, & 1\leq i\leq t(n),\\ 0, & i>t(n),\end{cases}
\]
respectively, where $t(n)=[n\exp(-(\ln n)^{1/2})]$. Suppose that the two Gaussian sequences are independent of the indicator sequence $\{\varepsilon_n, n\geq1\}$. Define $M_n^Z(\varepsilon)$ and $M_n^W(\varepsilon)$ similarly to above, and let $\eta$ be a standard Gaussian random variable independent of $\{W_{n,k}, 1\leq k\leq n, n\geq1\}$. Using Slepian's inequality (see, e.g., [12]), we have
\[
\mathbb{P}\bigl\{M_n^Y(\alpha)\leq b_n-\delta r_n^{1/2}\bigr\}
\leq \mathbb{P}\bigl\{M_n^Z(\alpha)\leq b_n-\delta r_n^{1/2}\bigr\}
= \mathbb{P}\bigl\{(1-\rho_{t(n)})^{1/2}M_n^W(\alpha)+\rho_{t(n)}^{1/2}\eta\leq b_n-\delta r_n^{1/2}\bigr\}
\]
\[
= \int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^W(\alpha)\leq \bigl(b_n-\delta r_n^{1/2}-\rho_{t(n)}^{1/2}z\bigr)(1-\rho_{t(n)})^{-1/2}\bigr\}\,d\Phi(z)
\leq \Phi\Bigl(-\frac{\delta r_n^{1/2}}{2\rho_{t(n)}^{1/2}}\Bigr) + \mathbb{P}\Bigl\{M_n^W(\alpha)\leq \Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\}.
\]
Further, using Berman's inequality, we obtain
\[
\Bigl|\mathbb{P}\Bigl\{M_n^W(\alpha)\leq\Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\} - \mathbb{P}\Bigl\{M_n^*(\alpha)\leq\Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\}\Bigr|
\leq Cn\sum_{i=1}^{t(n)}\sigma_i\exp\Bigl(-\frac{(b_n-\delta r_n^{1/2}/2)^2}{(1+\sigma_i)(1-\rho_{t(n)})}\Bigr) =: c_n.
\]
Hence, by the total probability formula,
\[
\mathbb{P}\bigl\{M_n^Y(\varepsilon)\leq b_n-\delta r_n^{1/2}\bigr\} = \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{P}\bigl\{M_n^Y(\alpha)\leq b_n-\delta r_n^{1/2}\bigr\}\,\mathbb{P}(B_{t,k,\alpha,n})
\leq \Phi\Bigl(-\frac{\delta r_n^{1/2}}{2\rho_{t(n)}^{1/2}}\Bigr) + \mathbb{P}\Bigl\{M_n^*(\varepsilon)\leq\Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\} + c_n.
\]
By properties (2.4) and (2.5) of $\{r_n, n\geq1\}$, the useful facts (see [9, p. 9])
\[
\lim_{n\to\infty}\frac{r_n}{\rho_{t(n)}}=\infty,\qquad \lim_{n\to\infty}\frac{b_n\rho_{t(n)}}{r_n^{1/2}}=0
\]

imply that
\[
\lim_{n\to\infty}\Phi\Bigl(-\frac{\delta r_n^{1/2}}{2\rho_{t(n)}^{1/2}}\Bigr)=0
\]
and
\[
\mathbb{P}\Bigl\{M_n^*(\varepsilon)\leq\Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\}\leq\mathbb{P}\bigl\{M_n^*(\varepsilon)\leq -a_nA+b_n\bigr\}
\]
for an arbitrary positive number $A$ and large $n$. By Corollary 1 in [5],
\[
\limsup_{n\to\infty}\mathbb{P}\Bigl\{M_n^*(\varepsilon)\leq\Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\}\leq\mathbb{E}\bigl(\Lambda^{\mathcal P}(-A)\bigr).
\]
Letting $A\to\infty$, we have
\[
\lim_{n\to\infty}\mathbb{P}\Bigl\{M_n^*(\varepsilon)\leq\Bigl(b_n-\frac{\delta r_n^{1/2}}{2}\Bigr)(1-\rho_{t(n)})^{-1/2}\Bigr\}=0.
\]
By the arguments of [8, pp. 187–188], $\lim_{n\to\infty}c_n=0$, and hence the claim follows. $\square$

Proof of Theorem 1. For a stationary Gaussian sequence, the conditions $D(u_n,v_n)$ and $D'(u_n)$ hold when the correlations satisfy (1.5) with $\gamma=0$; see Lemma 4.4.1 in [7] for details. Hence, according to Theorem 1.1 in [5], we obtain the desired result. $\square$

Proof of Theorem 2. Let $\Psi(n,x,z)=n\bigl(1-\Phi(v_n(x,z))\bigr)$ with $v_n(x,z)=(1-\rho_n)^{-1/2}(u_n(x)-\rho_n^{1/2}z)$. Note that
\[
\Bigl|\mathbb{P}\bigl\{M_n(\varepsilon)\leq u_n(x),\ M_n\leq u_n(y)\bigr\} - \mathbb{E}\biggl(\int_{-\infty}^{+\infty}\prod_{s=1}^{k}\Bigl(1-\frac{\mathcal P\Psi(n,x,z)+(1-\mathcal P)\Psi(n,y,z)}{k}\Bigr)d\Phi(z)\biggr)\Bigr|
\]
\[
\leq \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\Bigl|\mathbb{P}\bigl\{M_n(\alpha)\leq u_n(x),\ M_n\leq u_n(y)\bigr\} - \int_{-\infty}^{+\infty}\prod_{s=1}^{k}\Bigl(1-\frac{\mathcal P\Psi(n,x,z)+(1-\mathcal P)\Psi(n,y,z)}{k}\Bigr)d\Phi(z)\Bigr|\,\mathbb{I}(B_{t,k,\alpha,n})\biggr)
\leq E_1+E_2+E_3+E_4,
\]
where
\[
E_1 = \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\Bigl|\mathbb{P}\bigl\{M_n(\alpha)\leq u_n(x),\ M_n\leq u_n(y)\bigr\} - \int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^*(\alpha)\leq v_n(x,z),\ M_n^*\leq v_n(y,z)\bigr\}\,d\Phi(z)\Bigr|\,\mathbb{I}(B_{t,k,\alpha,n})\biggr),
\]
\[
E_2 = \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\Bigl|\mathbb{P}\bigl\{M_n^*(\alpha)\leq v_n(x,z),\ M_n^*\leq v_n(y,z)\bigr\} - \prod_{s=1}^{k}\mathbb{P}\bigl\{M^*(K_s,\alpha)\leq v_n(x,z),\ M^*(K_s)\leq v_n(y,z)\bigr\}\Bigr|\,d\Phi(z)\,\mathbb{I}(B_{t,k,\alpha,n})\biggr),
\]
\[
E_3 = \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\Bigl|\prod_{s=1}^{k}\mathbb{P}\bigl\{M^*(K_s,\alpha)\leq v_n(x,z),\ M^*(K_s)\leq v_n(y,z)\bigr\} - \prod_{s=1}^{k}\Bigl(1-\frac{(t/2^k)\Psi(n,x,z)+(1-t/2^k)\Psi(n,y,z)}{k}\Bigr)\Bigr|\,d\Phi(z)\,\mathbb{I}(B_{t,k,\alpha,n})\biggr),
\]
and
\[
E_4 = \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\Bigl|\prod_{s=1}^{k}\Bigl(1-\frac{\mathcal P\Psi(n,x,z)+(1-\mathcal P)\Psi(n,y,z)}{k}\Bigr) - \prod_{s=1}^{k}\Bigl(1-\frac{(t/2^k)\Psi(n,x,z)+(1-t/2^k)\Psi(n,y,z)}{k}\Bigr)\Bigr|\,d\Phi(z)\,\mathbb{I}(B_{t,k,\alpha,n})\biggr).
\]
Using Lemma 2 and Lemma 6.4.1 in [7], we have
\[
E_1\leq Cn\sum_{i=1}^{n}|r_i-\rho_n|\exp\Bigl(-\frac{u_n^2(x)}{1+w_i}\Bigr)\to0 \quad (3.1)
\]
as $n\to\infty$. For $E_2$, according to Lemma 1, we have
\[
E_2\leq (4k+2)\frac{l}{n}\int_{-\infty}^{+\infty}\Psi(n,x,z)\,d\Phi(z).
\]
According to the proof of Theorem 6.5.1 in [7], we have $v_n(x,z)=u_n(x+\gamma-\sqrt{2\gamma}\,z)+o(a_n)$, and thus
\[
\lim_{n\to\infty}\Psi(n,x,z)=\exp(-x-\gamma+\sqrt{2\gamma}\,z)=:h(x,z,\gamma).
\]
Combined with $l=o(n)$ as $n\to\infty$, the dominated convergence theorem yields
\[
\lim_{n\to\infty}E_2=0. \quad (3.2)
\]
Next, using Lemma 3 in [5], we have
\[
E_3 \leq \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\sum_{s=1}^{k}\Bigl|\mathbb{P}\bigl\{M^*(K_s,\alpha)\leq v_n(x,z),\ M^*(K_s)\leq v_n(y,z)\bigr\}-\Bigl(1-\frac{(t/2^k)\Psi(n,x,z)+(1-t/2^k)\Psi(n,y,z)}{k}\Bigr)\Bigr|\,d\Phi(z)\,\mathbb{I}(B_{t,k,\alpha,n})\biggr)
\]
\[
\leq \sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\sum_{s=1}^{k}\Bigl|\frac{\sum_{j\in K_s}\alpha_j}{m}-\frac{t}{2^k}\Bigr|\,\frac{n\bigl(\Phi(v_n(y,z))-\Phi(v_n(x,z))\bigr)}{k}\,d\Phi(z)\,\mathbb{I}(B_{t,k,\alpha,n})\biggr)+\frac{1}{k}\int_{-\infty}^{+\infty}\Psi^2(n,x,z)\,d\Phi(z)
\]
\[
=\sum_{t=0}^{2^k-1}\sum_{s=1}^{k}\frac{1}{k}\int_{-\infty}^{+\infty}\mathbb{E}\Bigl(\Bigl|\frac{\sum_{j\in K_s}\varepsilon_j}{m}-\frac{t}{2^k}\Bigr|\,\mathbb{I}(B_{t,k})\Bigr)\,n\bigl(\Phi(v_n(y,z))-\Phi(v_n(x,z))\bigr)\,d\Phi(z)+\frac{1}{k}\int_{-\infty}^{+\infty}\Psi^2(n,x,z)\,d\Phi(z)
\]
\[
\leq\frac{1}{k}\sum_{s=1}^{k}\int_{-\infty}^{+\infty}\Bigl(\mathbb{E}\Bigl|\frac{\sum_{j\in K_s}\varepsilon_j}{m}-\mathcal P\Bigr|+\frac{1}{2^k}\Bigr)\,n\bigl(\Phi(v_n(y,z))-\Phi(v_n(x,z))\bigr)\,d\Phi(z)+\frac{1}{k}\int_{-\infty}^{+\infty}\Psi^2(n,x,z)\,d\Phi(z)
\]
\[
\leq\frac{1}{k}\sum_{s=1}^{k}\Bigl(d\Bigl(\frac{S_{sm}}{sm},\mathcal P\Bigr)+d\Bigl(\frac{S_{(s-1)m}}{(s-1)m},\mathcal P\Bigr)+\frac{2(2s-1)}{2^k}\Bigr)\int_{-\infty}^{+\infty}\bigl(\Psi(n,x,z)-\Psi(n,y,z)\bigr)\,d\Phi(z)+\frac{1}{k}\int_{-\infty}^{+\infty}\Psi^2(n,x,z)\,d\Phi(z),
\]
where $d(X,Y)$ stands for the Ky Fan metric, i.e., $d(X,Y)=\inf\{\epsilon\colon \mathbb{P}\{|X-Y|>\epsilon\}<\epsilon\}$. Since
\[
\lim_{m\to\infty}d\Bigl(\frac{S_{sm}}{sm},\mathcal P\Bigr)=0,
\]
taking the limit as $n\to\infty$ and then as $m\to\infty$, we obtain
\[
\limsup_{n\to\infty}E_3\leq\frac{1}{2^k}\int_{-\infty}^{+\infty}\bigl(h(x,z,\gamma)-h(y,z,\gamma)\bigr)\,d\Phi(z)+\frac{1}{k}\int_{-\infty}^{+\infty}h^2(x,z,\gamma)\,d\Phi(z). \quad (3.3)
\]
For $E_4$, we have
\[
E_4\leq\sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\sum_{s=1}^{k}\Bigl|\mathcal P-\frac{t}{2^k}\Bigr|\,\frac{\Psi(n,y,z)+\Psi(n,x,z)}{k}\,d\Phi(z)\,\mathbb{I}(B_{t,k,\alpha,n})\biggr)
=\int_{-\infty}^{+\infty}\bigl(\Psi(n,y,z)+\Psi(n,x,z)\bigr)\,d\Phi(z)\,\sum_{t=0}^{2^k-1}\mathbb{E}\Bigl(\Bigl|\mathcal P-\frac{t}{2^k}\Bigr|\,\mathbb{I}(B_{t,k})\Bigr)
\]
\[
\leq\int_{-\infty}^{+\infty}\frac{\Psi(n,y,z)+\Psi(n,x,z)}{2^k}\,d\Phi(z)
\to\frac{1}{2^k}\int_{-\infty}^{+\infty}\bigl(h(x,z,\gamma)+h(y,z,\gamma)\bigr)\,d\Phi(z) \quad (3.4)
\]
as $n\to\infty$. Hence, combining (3.1)–(3.4), we have
\[
\limsup_{n\to\infty}\biggl|\mathbb{P}\bigl\{M_n(\varepsilon)\leq u_n(x),\ M_n\leq u_n(y)\bigr\}-\mathbb{E}\biggl(\int_{-\infty}^{+\infty}\Bigl(1-\frac{\mathcal P\exp(-x-\gamma+\sqrt{2\gamma}\,z)+(1-\mathcal P)\exp(-y-\gamma+\sqrt{2\gamma}\,z)}{k}\Bigr)^{k}\,d\Phi(z)\biggr)\biggr|
\]
\[
\leq\frac{1}{2^{k-1}}\int_{-\infty}^{+\infty}h(x,z,\gamma)\,d\Phi(z)+\frac{1}{k}\int_{-\infty}^{+\infty}h^{2}(x,z,\gamma)\,d\Phi(z).
\]
The claimed result follows by letting $k\to\infty$. $\square$

Proof of Theorem 3. We first show that
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}=\Phi(x)\quad\forall x\in\mathbb{R}. \quad (3.5)
\]
Let the events $B_{t,k,\alpha,n}$ be defined as before. Since, by assumption, $\mathcal P>0$ and the indicator sequence $\{\varepsilon_n, n\geq1\}$ is independent of $\{X_n, n\geq1\}$, for any $x\in\mathbb{R}$ we have
\[
\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}=\sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}P(n,\alpha)\,\mathbb{P}(B_{t,k,\alpha,n}),
\]
where $P(n,\alpha)=\mathbb{P}\{r_n^{-1/2}(M_n(\alpha)-(1-r_n)^{1/2}b_n)\leq x\}$. Applying Slepian's inequality, we further have
\[
P(n,\alpha)=\int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^Y(\alpha)\leq b_n+r_n^{1/2}(1-r_n)^{-1/2}(x-z)\bigr\}\,d\Phi(z)
\geq\int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^*(\alpha)\leq b_n+r_n^{1/2}(1-r_n)^{-1/2}(x-z)\bigr\}\,d\Phi(z)
\]
\[
\geq\mathbb{P}\bigl\{M_n^*(\alpha)\leq b_n+r_n^{1/2}(1-r_n)^{-1/2}\delta\bigr\}\,\Phi(x-\delta)
\]
for any $\delta>0$. Since $\lim_{n\to\infty}a_n^{-1}r_n^{1/2}=\infty$, there exists a sufficiently large $A$ such that, for all large $n$,
\[
\sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}P(n,\alpha)\,\mathbb{P}(B_{t,k,\alpha,n})
\geq\Phi(x-\delta)\sum_{t=0}^{2^k-1}\sum_{\alpha\in\{0,1\}^n}\mathbb{P}\bigl\{M_n^*(\alpha)\leq b_n+r_n^{1/2}(1-r_n)^{-1/2}\delta\bigr\}\,\mathbb{P}(B_{t,k,\alpha,n})
\]
\[
=\Phi(x-\delta)\,\mathbb{P}\bigl\{M_n^*(\varepsilon)\leq b_n+r_n^{1/2}(1-r_n)^{-1/2}\delta\bigr\}
\geq\Phi(x-\delta)\,\mathbb{P}\bigl\{M_n^*(\varepsilon)\leq b_n+a_nA\delta\bigr\}
\geq\Phi(x-\delta)\,\mathbb{P}\bigl\{M_n^*\leq b_n+a_nA\delta\bigr\}.
\]
Clearly, since
\[
\lim_{A\to\infty}\lim_{n\to\infty}\mathbb{P}\bigl\{M_n^*\leq b_n+a_nA\delta\bigr\}=\lim_{A\to\infty}\exp\bigl(-\exp(-A\delta)\bigr)=1,
\]
we have
\[
\liminf_{n\to\infty}\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}\geq\Phi(x-\delta).
\]
Next, we derive the upper bound. Note that
\[
P(n,\alpha)=\int_{-\infty}^{+\infty}\mathbb{P}\bigl\{M_n^Y(\alpha)\leq b_n+r_n^{1/2}(1-r_n)^{-1/2}(x-z)\bigr\}\,d\Phi(z)
\leq\Phi(x+\delta)+\mathbb{P}\bigl\{M_n^Y(\alpha)\leq b_n-r_n^{1/2}(1-r_n)^{-1/2}\delta\bigr\},
\]
implying
\[
\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}\leq\Phi(x+\delta)+\mathbb{P}\bigl\{M_n^Y(\varepsilon)\leq b_n-r_n^{1/2}(1-r_n)^{-1/2}\delta\bigr\}.
\]
Using Lemma 3, we obtain
\[
\limsup_{n\to\infty}\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}\leq\Phi(x+\delta),
\]
and hence (3.5) follows by letting $\delta\downarrow0$.

Next, note that, for any $x,y$,
\[
p_n(x,y):=\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x,\ r_n^{-1/2}\bigl(M_n-(1-r_n)^{1/2}b_n\bigr)\leq y\bigr\}\leq\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}
\]
and, further, for $x<y$,
\[
\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}
\leq p_n(x,y)+\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n(\varepsilon)-(1-r_n)^{1/2}b_n\bigr)\leq y\bigr\}-\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n-(1-r_n)^{1/2}b_n\bigr)\leq y\bigr\}
=p_n(x,y)+o(1),
\]

where the last claim above follows directly from the fact that (see (2.6))
\[
\lim_{n\to\infty}\mathbb{P}\bigl\{r_n^{-1/2}\bigl(M_n-(1-r_n)^{1/2}b_n\bigr)\leq x\bigr\}=\Phi(x)\quad\forall x\in\mathbb{R}.
\]
Consequently, for $x<y$, we have $p_n(x,y)=\mathbb{P}\{r_n^{-1/2}(M_n(\varepsilon)-(1-r_n)^{1/2}b_n)\leq x\}+o(1)$, and thus the proof is complete. $\square$

References

1. S.M. Berman, Limit theorems for the maximum term in stationary sequences, Ann. Math. Stat., 35:502–516, 1964.
2. L. Cao and Z. Peng, Asymptotic distributions of maxima of complete and incomplete samples from strongly dependent stationary Gaussian sequences, Appl. Math. Lett., 24(2):243–247, 2011.
3. M. Falk, J. Hüsler, and R.-D. Reiss, Laws of Small Numbers: Extremes and Rare Events, DMV Seminar, Vol. 23, 3rd ed., Birkhäuser, Basel, 2010.
4. J. Hüsler and V.I. Piterbarg, Limit theorem for maximum of the storage process with fractional Brownian motion as input, Stoch. Process. Appl., 114(2):231–250, 2004.
5. T. Krajka, The asymptotic behaviour of maxima of complete and incomplete samples from stationary sequences, Stoch. Process. Appl., 121(8):1705–1719, 2011.
6. A.V. Kudrov and V.I. Piterbarg, On maxima of partial samples in Gaussian sequences with pseudo-stationary trends, Lith. Math. J., 47(1):48–56, 2007.
7. M.R. Leadbetter, G. Lindgren, and H. Rootzén, Extremes and Related Properties of Random Sequences and Processes, Springer Ser. Stat., Vol. 11, Springer-Verlag, New York, Heidelberg, Berlin, 1983.
8. Y. Mittal, Comparison technique for highly dependent stationary Gaussian processes, in J. Tiago de Oliveira (Ed.), Statistical Extremes and Applications, Reidel, Dordrecht, 1984, pp. 181–195.
9. Y. Mittal and D. Ylvisaker, Limit distributions for the maxima of stationary Gaussian processes, Stoch. Process. Appl., 3(1):1–18, 1975.
10. P. Mladenović and V.I. Piterbarg, On asymptotic distribution of maxima of complete and incomplete samples from stationary sequences, Stoch. Process. Appl., 116(12):1977–1991, 2006.
11. Z. Peng, L. Cao, and S. Nadarajah, Asymptotic distributions of maxima of complete and incomplete samples from multivariate stationary Gaussian sequences, J. Multivariate Anal., 101(10):2641–2647, 2010.
12. V.I. Piterbarg, Asymptotic Methods in the Theory of Gaussian Processes and Fields, Transl. Math. Monogr., Vol. 148, Amer. Math. Soc., Providence, RI, 1996.
13. V.I. Piterbarg, Discrete and continuous time extremes of Gaussian processes, Extremes, 7(2):161–177, 2004.
14. Z. Tan and E. Hashorva, Exact tail asymptotics of the supremum of strongly dependent Gaussian processes over a random interval, Lith. Math. J., 53(1):91–102, 2013.
15. Z. Tan and E. Hashorva, On Piterbarg max-discretisation theorem for standardised maximum of stationary Gaussian processes, Methodol. Comput. Appl. Probab., 2013 (in press), doi:10.1007/s11009-012-9305-8.
16. Z. Tan and Y. Wang, Some asymptotic results on extremes of incomplete samples, Extremes, 15(3):319–332, 2012.
