HAL Id: hal-00956351, version 2 (https://hal.inria.fr/hal-00956351v2). Submitted on 26 Aug 2014.


To cite this version: Stéphane Girard, Sana Louhichi. On the strong consistency of the kernel estimator of extreme conditional quantiles. In: Elias Ould Saïd (Ed.), Functional Statistics and Applications, Contributions to Statistics, Springer, pp. 59-77, 2015. ISBN 978-3-319-22475-6. doi:10.1007/978-3-319-22476-3_4. hal-00956351v2.


On the strong consistency of the kernel estimator of extreme conditional quantiles

Stéphane Girard(1,2) and Sana Louhichi(2)

(1) Mistis, Inria Grenoble Rhône-Alpes, France.
(2) Laboratoire Jean Kuntzmann, France.

Abstract

Nonparametric regression quantiles can be obtained by inverting a kernel estimator of the conditional distribution. The asymptotic properties of this estimator are well known in the case of ordinary quantiles of fixed order. The goal of this paper is to establish the strong consistency of the estimator in the case of extreme conditional quantiles. In such a case, the probability of exceeding the quantile tends to zero as the sample size increases, and the extreme conditional quantile is thus located in the distribution tails.

1 Introduction

Quantile regression plays a central role in various statistical studies. In particular, nonparametric regression quantiles obtained by inverting a kernel estimator of the conditional distribution function are extensively investigated in the sample case [3, 27, 29, 30]. Extensions to random fields [1], time series [14], functional data [11] and truncated data [25] are also available. However, all these papers are restricted to conditional quantiles having a fixed order $\alpha\in(0,1)$. In the following, $\alpha$ denotes the conditional probability of being larger than the conditional quantile. Consequently, the above-mentioned asymptotic theories do not apply in the distribution tails, i.e. when $\alpha=\alpha_n\to 0$ or $\alpha_n\to 1$ as the sample size $n$ goes to infinity. Motivating applications include, for instance, environmental studies [16, 28], finance [31], insurance [4] and image analysis [26].

The asymptotics of extreme conditional quantile estimators have been established in a number of regression models. Chernozhukov [6] and Jurečková [22] considered the extreme quantiles in the linear regression model and derived their asymptotic distributions under various error distributions. Other parametric models are considered in [10, 28]. A semi-parametric approach to modeling trends in extremes has been introduced in [9], based on local polynomial fitting of the generalized extreme-value distribution. Hall and Tajvidi [21] suggested a nonparametric estimation of the temporal trend when fitting parametric models to extreme values. Another semi-parametric method has been developed in [2], using a conditional Pareto-type distribution for the response. Fully nonparametric estimators of extreme conditional quantiles have been discussed in [2, 5], including local polynomial maximum likelihood estimation and spline fitting via maximum penalized likelihood. Recently, [15, 18] proposed, respectively, a moving-window based estimator of the tail index and extreme quantiles of heavy-tailed conditional distributions, and established their asymptotic properties.

In the kernel-smoothing case, the asymptotic theory for quantile regression in the tails is still in full development. The case $\alpha_n=1/n$ has been analyzed in [19, 20] in the particular situation where the response $Y$ given $X=x$ is uniformly distributed.

The asymptotic distribution of the kernel estimator of extreme conditional quantiles is established in [7, 17] for heavy-tailed conditional distributions. This result is extended to all types of tails in [8].

Here, we focus on the strong consistency of the kernel estimator for extreme conditional quantiles. Our main result is established in Section 2. Some illustrative examples are provided in Section 3. The proofs of the main results are given in Section 4 and the proofs of the auxiliary results are postponed to the Appendix.

2 Main results

Let $(X_i,Y_i)_{1\le i\le n}$ be independent copies of a random pair $(X,Y)\in\mathbb{R}^d\times\mathbb{R}_+$ with density $f_{(X,Y)}$. Let $g$ be the density of $X$, which we suppose strictly positive. The conditional survival function of $Y$ given $X=x$ is denoted by
$$\bar F(y|x)=P(Y>y\mid X=x)=\frac{1}{g(x)}\int_y^{+\infty}f_{(X,Y)}(x,z)\,dz.$$
The kernel estimator of $\bar F(y|x)$ is, for $x$ such that $\sum_{i=1}^n K_h(x-X_i)\neq 0$,
$$\bar F_n(y|x)=\frac{\sum_{i=1}^n K_h(x-X_i)\,\mathbb{1}_{\{Y_i>y\}}}{\sum_{i=1}^n K_h(x-X_i)},$$
where $h=h_n\to 0$ as $n\to\infty$ and $K_h(u)=h^{-d}K(u/h)$. The kernel $K$ is a measurable function which satisfies the conditions:

(K1) $K$ is a continuous probability density.

(K2) $K$ has compact support: there exists $R>0$ such that $K(u)=0$ for any $\|u\|>R$.

Define $\kappa:=\|K\|_\infty=\sup_{x\in\mathbb{R}^d}K(x)<\infty$.
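For instance, the multivariate Epanechnikov kernel, normalized on the unit ball of $\mathbb{R}^d$,
$$K(u)=\frac{d+2}{2v_d}\,(1-\|u\|^2)\,\mathbb{1}_{\{\|u\|\le 1\}},$$
where $v_d$ denotes the volume of the unit ball, satisfies (K1) and (K2) with $R=1$ and $\kappa=K(0)=(d+2)/(2v_d)$; this concrete example is only given here for illustration.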

Recall that, for a class of functions $\mathcal{G}$, $\mathcal{N}(\varepsilon,\mathcal{G},d_Q)$ denotes the minimal number of balls $\{g',\,d_Q(g,g')<\varepsilon\}$ of $d_Q$-radius $\varepsilon$ needed to cover $\mathcal{G}$, where $d_Q$ is the $L_2(Q)$-metric. Let $\mathcal{K}$ be the set of functions $\mathcal{K}=\{K((x-\cdot)/h),\,h>0,\,x\in\mathbb{R}^d\}$ and $\mathcal{N}(\varepsilon,\mathcal{K})=\sup_Q\mathcal{N}(\varepsilon\|K\|_\infty,\mathcal{K},d_Q)$, where the supremum is taken over all probability measures $Q$ on $\mathbb{R}^d\times\mathbb{R}$. Suppose that:

(K3) for some $C,\nu>1$, $\mathcal{N}(\varepsilon,\mathcal{K})\le C\varepsilon^{-\nu}$ for any $\varepsilon\in(0,1)$.

A number of sufficient conditions under which (K3) holds are discussed in [13] and the references therein. Finally, suppose that

(A1) for all $\alpha\in(0,1)$ there exists a unique $q(\alpha|x)\in\mathbb{R}$ such that $\bar F(q(\alpha|x)|x)=\alpha$.

The conditional quantile $q(\alpha|x)$ is then the inverse of the function $\bar F(\cdot|x)$ at the point $\alpha$. Let $(\alpha_n)$ be a fixed sequence of levels with values in $[0,1]$. For any $x\in\mathbb{R}^d$, define $q(\alpha_n|x)$ as the unique solution of the equation
$$\alpha_n=\bar F(q(\alpha_n|x)|x), \tag{1}$$

whose existence is guaranteed by Assumption (A1). The kernel estimator of the conditional quantile $q(\alpha|x)$ is
$$\hat q_n(\alpha|x)=\inf\{y\in\mathbb{R},\ \bar F_n(y|x)\le\alpha\}. \tag{2}$$
Finally, denote by $\hat g_n(x)$ the kernel density estimator of the probability density $g$, i.e.
$$\hat g_n(x)=\frac{1}{n}\sum_{i=1}^n K_h(x-X_i).$$
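As a purely illustrative sketch (the function names and the choice of the Epanechnikov kernel below are our own, not those of the paper), the estimators $\bar F_n(y|x)$, $\hat q_n(\alpha|x)$ and $\hat g_n(x)$ can be computed from a sample $(X_i,Y_i)_{1\le i\le n}$ as follows:

```python
import math
import numpy as np

def epanechnikov(u):
    # Multivariate Epanechnikov kernel on the unit ball of R^d: satisfies (K1)-(K2) with R = 1.
    u = np.atleast_2d(u)
    d = u.shape[1]
    sq = np.sum(u ** 2, axis=1)
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # volume of the unit ball
    return np.where(sq <= 1.0, (d + 2) / (2 * v_d) * (1.0 - sq), 0.0)

def cond_survival(y, x, X, Y, h):
    # Kernel estimator \bar F_n(y|x); the factor h^{-d} of K_h cancels in the ratio.
    w = epanechnikov((x - X) / h)
    if w.sum() == 0.0:
        raise ValueError("no observation in the kernel window around x")
    return np.sum(w * (Y > y)) / np.sum(w)

def cond_quantile(alpha, x, X, Y, h):
    # \hat q_n(alpha|x) = inf{y : \bar F_n(y|x) <= alpha}; \bar F_n(.|x) only jumps at the Y_i's.
    for y in np.sort(Y):
        if cond_survival(y, x, X, Y, h) <= alpha:
            return y
    return np.max(Y)

def density_estimate(x, X, h):
    # Kernel density estimator \hat g_n(x) = n^{-1} sum_i K_h(x - X_i).
    n, d = X.shape
    return np.sum(epanechnikov((x - X) / h)) / (n * h ** d)
```

The brute-force scan in cond_quantile is quadratic in $n$ and is only meant to mirror definition (2) literally.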

Our main result is the following.

Theorem 1 Let $(X_i,Y_i)_{1\le i\le n}$ be independent copies of a random pair $(X,Y)\in\mathbb{R}^d\times\mathbb{R}_+$ with density $f_{(X,Y)}$. Let $g$ be the density of $X$, which we suppose bounded and strictly positive. Suppose that Assumption (A1) is satisfied. Let $(\alpha_n)$ be a sequence of levels in $[0,1]$ for which
$$\limsup_{n\to\infty}\ \sup_{x\in\mathbb{R}^d}\ \frac{\hat q_n(\alpha_n|x)}{q(\alpha_n|x)}\le\mathrm{Cst}, \tag{3}$$
almost surely. Suppose that the kernel $K$ satisfies Conditions (K1), (K2), (K3). Define
$$A(y,z,x,h_n)=\sup_{u:\,d(u,x)\le Rh_n}\left|\frac{\bar F(y|u)}{\bar F(z|x)}-1\right|.$$
Suppose that, for some fixed positive $\varepsilon_0$ and for $z\in\{q(\alpha_n|x),(1+\varepsilon)q(\alpha_n|x)\}$,
$$\limsup_{n\to\infty}\ \sup_{x\in\mathbb{R}^d,\,|\varepsilon|\le\varepsilon_0}A((1+\varepsilon)q(\alpha_n|x),z,x,h_n)\le C<\infty. \tag{4}$$
If, moreover,
$$\lim_{n\to\infty}nh_n^d\alpha_n=\infty \qquad\text{and}\qquad \lim_{n\to\infty}\frac{\ln(\alpha_nh_n^d\wedge\alpha_n^2)}{nh_n^d\alpha_n}=0, \tag{5}$$
then there exists a positive constant $C$, not depending on $x$, such that one has, for $n$ sufficiently large,
$$\left|1-\frac{\bar F(\hat q_n(\alpha_n|x)|x)}{\bar F(q(\alpha_n|x)|x)}\right| \le C\sup_{|\varepsilon|\le\varepsilon_0}A((1+\varepsilon)q(\alpha_n|x),(1+\varepsilon)q(\alpha_n|x),x,h_n) + \frac{C}{\hat g_n(x)}\sqrt{\frac{\ln(\alpha_n^{-1}h_n^{-d})\vee\ln\ln n}{nh_n^d\alpha_n}}, \quad\text{almost surely.}$$

The first term of the bound can be interpreted as a bias term due to the kernel smoothing. The second term can be seen as a variance term, $n\alpha_nh_n^d$ being the effective number of points used in the estimation. The following proposition gives conditions under which (3) is satisfied.

Proposition 1 Suppose that $g$ is Lipschitzian and bounded above by $g_{\max}$. Let $v_d=\int_{\|v\|\le 1}dv$ be the volume of the unit sphere. If $A(q(\alpha_n|x),q(\alpha_n|x),x,0,h)\to 0$ and there exists $\varepsilon>0$ such that
$$\sum_{n=1}^{\infty}nh^d\alpha_n\exp\{-v_d\,g_{\max}\,nh^d\alpha_n(1+\varepsilon)\}=\infty, \tag{6}$$
then
$$\limsup_{n\to\infty}\ \sup_{x\in\mathbb{R}^d}\ \frac{\hat q_n(\alpha_n|x)}{q(\alpha_n|x)}\le 1, \quad\text{almost surely.}$$

Some examples of distributions satisfying condition (4) are provided in the next section.


3 Examples

Let us first focus on a conditional Pareto distribution defined as
$$\bar F(y|x)=y^{-\theta(x)},\quad\text{for all } y>1. \tag{7}$$
Here, $\theta(x)>0$ can be read as the inverse of the conditional extreme-value index. The above distribution belongs to the so-called Fréchet maximum domain of attraction, which encompasses all distributions with heavy tails. As a consequence of Theorem 1, we have:

Corollary 1 Let us consider a conditional Pareto distribution (7) such that $0<\theta_{\min}\le\theta(x)\le\theta_{\max}$ for all $x\in\mathbb{R}^d$. Assume that $\theta$ is Lipschitzian. If the sequences $(\alpha_n)$ and $(h_n)$ are such that $h_n\log\alpha_n\to 0$ as $n\to\infty$ and (5), (6) hold, then $\hat q_n(\alpha_n|x)/q(\alpha_n|x)\to 1$ almost surely as $n\to\infty$.
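As a numerical sanity check of Corollary 1 (an illustration only, under hypothetical choices: $d=1$, $X$ uniform on $[0,1]$, $\theta(x)=2+x$, and illustrative sequences $\alpha_n$, $h_n$ that are not claimed to satisfy (6)), one may simulate from model (7) and compare the kernel estimate with the true quantile $q(\alpha_n|x)=\alpha_n^{-1/\theta(x)}$, reusing the functions of the sketch given in Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def theta(x):
    # Hypothetical Lipschitz tail parameter with theta_min = 2, theta_max = 3.
    return 2.0 + x

n = 5000
X = rng.uniform(0.0, 1.0, size=(n, 1))         # covariate with uniform density g on [0, 1]
U = rng.uniform(size=n)
Y = U ** (-1.0 / theta(X[:, 0]))               # P(Y > y | X = x) = y^{-theta(x)} for y > 1

alpha_n = 1.0 / np.sqrt(n)                     # extreme level, alpha_n -> 0
h_n = n ** (-1.0 / 5.0)                        # illustrative bandwidth (n * h_n * alpha_n -> infinity)

x0 = np.array([0.5])
q_hat = cond_quantile(alpha_n, x0, X, Y, h_n)  # kernel estimator from the sketch of Section 2
q_true = alpha_n ** (-1.0 / theta(x0[0]))      # exact quantile of the conditional Pareto model
print(q_hat / q_true)                          # the ratio should get closer to 1 as n grows
```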

Let us now consider a conditional exponential distribution defined as
$$\bar F(y|x)=\exp(-\theta(x)y),\quad\text{for all } y>0, \tag{8}$$
where $\theta(x)>0$ is the inverse of the conditional expectation of $Y$ given $X=x$. This distribution belongs to the Gumbel maximum domain of attraction, which collects all distributions with a null conditional extreme-value index. These distributions are often referred to as light-tailed distributions. In such a case, Theorem 1 yields a stronger convergence result than in the heavy-tail framework:

Corollary 2 Let us consider a conditional exponential distribution (8) with $0<\theta_{\min}\le\theta(x)\le\theta_{\max}$ for all $x\in\mathbb{R}^d$. Assume that $\theta$ is Lipschitzian. If the sequences $(\alpha_n)$ and $(h_n)$ are such that $h_n\log\alpha_n\to 0$ as $n\to\infty$ and (5), (6) hold, then $(\hat q_n(\alpha_n|x)-q(\alpha_n|x))\to 0$ almost surely as $n\to\infty$.
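The contrast between the two corollaries (consistency of the ratio versus of the difference) can be read off the closed-form conditional quantiles obtained by inverting (7) and (8) through (1),
$$q(\alpha|x)=\alpha^{-1/\theta(x)}\quad\text{(Pareto case)},\qquad\qquad q(\alpha|x)=-\frac{\log\alpha}{\theta(x)}\quad\text{(exponential case)},$$
combined with the identities used in the proofs of Section 4.3: in the heavy-tailed model the conclusion of Theorem 1 reads $|1-(\hat q_n(\alpha_n|x)/q(\alpha_n|x))^{-\theta(x)}|\to 0$, which only controls the ratio, while in the light-tailed model it reads $|1-\exp((q(\alpha_n|x)-\hat q_n(\alpha_n|x))\theta(x))|\to 0$, which controls the difference.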

4 Proofs

4.1 Proof of Theorem 1

Clearly, by (1),
$$\frac{|\bar F(q(\alpha_n|x)|x)-\bar F(\hat q_n(\alpha_n|x)|x)|}{\bar F(q(\alpha_n|x)|x)} \le \frac{|\alpha_n-\bar F_n(\hat q_n(\alpha_n|x)|x)|}{\bar F(q(\alpha_n|x)|x)} + \frac{|\bar F_n(\hat q_n(\alpha_n|x)|x)-\bar F(\hat q_n(\alpha_n|x)|x)|}{\bar F(q(\alpha_n|x)|x)}.$$
First, from (2), $|\alpha_n-\bar F_n(\hat q_n(\alpha_n|x)|x)|$ is bounded above by the maximal jump of $\bar F_n(y|x)$ at some observation point $(X_j,Y_j)$:
$$|\alpha_n-\bar F_n(\hat q_n(\alpha_n|x)|x)|\le\frac{\max_{j=1,\dots,n}K_h(x-X_j)}{\sum_{i=1}^nK_h(x-X_i)}.$$
It follows from (K2) that
$$\frac{|\alpha_n-\bar F_n(\hat q_n(\alpha_n|x)|x)|}{\bar F(q(\alpha_n|x)|x)}\le\frac{\kappa}{nh^d\alpha_n\,\hat g_n(x)}.$$

Let us then focus on the second term:
$$\frac{|\bar F_n(\hat q_n(\alpha_n|x)|x)-\bar F(\hat q_n(\alpha_n|x)|x)|}{\bar F(q(\alpha_n|x)|x)} = \frac{\bar F(\hat q_n(\alpha_n|x)|x)}{\bar F(q(\alpha_n|x)|x)}\left|\frac{\bar F_n(\hat q_n(\alpha_n|x)|x)}{\bar F(\hat q_n(\alpha_n|x)|x)}-1\right|. \tag{9}$$
We write $\hat q_n(\alpha_n|x)=(1+\varepsilon)q(\alpha_n|x)$, with $\varepsilon=\frac{\hat q_n(\alpha_n|x)}{q(\alpha_n|x)}-1$. Condition (3) allows us to deduce that there exists, for $n$ sufficiently large, $\varepsilon_0$ not depending on $x$ such that $|\varepsilon|\le\varepsilon_0$. Consequently, and taking into account (9), there exists a positive constant $\varepsilon_0$ not depending on $x$ and $n$ such that (for the sake of simplicity, we write $q=q(\alpha_n|x)$)
$$\left|\frac{\bar F(\hat q_n(\alpha_n|x)|x)}{\bar F(q(\alpha_n|x)|x)}-1\right| \le \frac{\kappa}{nh^d\alpha_n\,\hat g_n(x)} + \sup_{|\varepsilon|\le\varepsilon_0}\left(\frac{\bar F(q(1+\varepsilon)|x)}{\bar F(q|x)}\left|\frac{\bar F_n(q(1+\varepsilon)|x)}{\bar F(q(1+\varepsilon)|x)}-1\right|\right). \tag{10}$$

Our purpose now is to control the term $\left|\frac{\bar F_n(q(1+\varepsilon)|x)}{\bar F(q(1+\varepsilon)|x)}-1\right|$ of (10). For this, write
$$\bar F_n(y|x)=\frac{\sum_{i=1}^nK_h(x-X_i)\,\mathbb{1}_{\{Y_i>y\}}}{\sum_{i=1}^nK_h(x-X_i)}=:\frac{\hat\psi_n(y,x)}{\hat g_n(x)},$$
with
$$\hat\psi_n(y,x)=\frac{1}{n}\sum_{i=1}^nK_h(x-X_i)\,\mathbb{1}_{\{Y_i>y\}}, \qquad \hat g_n(x)=\frac{1}{n}\sum_{i=1}^nK_h(x-X_i).$$

We need the following lemma.

Lemma 1 Suppose that Condition (K2) holds. Then, for each $n$, one has for any $y\in\mathbb{R}$ and $x\in\mathbb{R}^d$ for which $\bar F(y|x)\hat g_n(x)\neq 0$,
$$\left|\frac{\bar F_n(y|x)}{\bar F(y|x)}-1\right| \le A(y,y,x,h_n)+(1+A(y,y,x,h_n))\frac{|\hat g_n(x)-\mathbb{E}\hat g_n(x)|}{\hat g_n(x)} + \frac{|\hat\psi_n(y,x)-\mathbb{E}(\hat\psi_n(y,x))|}{\bar F(y|x)\,\hat g_n(x)}.$$
The proof is postponed to the Appendix. According to Lemma 1, we have to control the two quantities $|\hat g_n(x)-\mathbb{E}\hat g_n(x)|$ and $\frac{|\hat\psi_n(y,x)-\mathbb{E}(\hat\psi_n(y,x))|}{\bar F(y|x)}$. This is the purpose of Propositions 2 and 3 below.

Proposition 2 (Einmahl and Mason, 2005) Suppose that $g$ is a bounded density on $\mathbb{R}^d$, and that the assumptions (K.i),...,(K.iv) of [13] are all satisfied. Then, for any $c>0$,
$$\limsup_{n\to\infty}\ \sup_{\{c\ln n/n\le h_n^d\le 1\}}\ \sqrt{\frac{nh_n^d}{\ln(h_n^{-d})\vee\ln\ln n}}\ \sup_{x\in\mathbb{R}^d}|\hat g_n(x)-\mathbb{E}\hat g_n(x)|=:K(c)<\infty, \quad\text{almost surely.}$$

Our task now is to control $\frac{|\hat\psi_n(y,x)-\mathbb{E}(\hat\psi_n(y,x))|}{\bar F(y|x)}$. Let $\varepsilon$ be a fixed real in $[-\varepsilon_0,\varepsilon_0]$ for some arbitrary positive $\varepsilon_0$. The following proposition evaluates the almost sure asymptotic behaviour of
$$\frac{|\hat\psi_n(q(1+\varepsilon),x)-\mathbb{E}(\hat\psi_n(q(1+\varepsilon),x))|}{\bar F(q|x)}.$$

Proposition 3 Let $(\alpha_n)$ be a sequence in $[0,1]$ and, for $x\in\mathbb{R}^d$, let $q=q(\alpha_n|x)$ be the conditional quantile as defined by (1). Define the set of functions $\mathcal{F}$ by
$$\mathcal{F}=\left\{(u,v)\longmapsto K\!\left(\frac{x-u}{h}\right)\mathbb{1}_{\{v>q(1+\varepsilon)\}},\ n\in\mathbb{N},\ h>0,\ x\in\mathbb{R}^d,\ |\varepsilon|\le\varepsilon_0\right\} \tag{11}$$
and suppose that $\mathcal{N}(\varepsilon,\mathcal{F})\le C\varepsilon^{-\nu}$, for some $C,\nu>1$ and all $\varepsilon\in(0,1)$. Suppose also that Condition (4) is satisfied. If $nh_n^d\alpha_n\to\infty$ and $\frac{\ln(\alpha_nh_n^d\wedge\alpha_n^2)}{nh_n^d\alpha_n}\to 0$ as $n\to\infty$, then there exists a positive constant $C_1$ such that
$$\limsup_{n\to\infty}\ \sqrt{\frac{n\alpha_nh_n^d}{\ln(\alpha_n^{-1}h_n^{-d})\vee\ln\ln n}}\ \sup_{x\in\mathbb{R}^d,\,|\varepsilon|\le\varepsilon_0}\frac{|\hat\psi_n(q(1+\varepsilon),x)-\mathbb{E}(\hat\psi_n(q(1+\varepsilon),x))|}{\bar F(q|x)}\le C_1, \quad\text{almost surely.}$$

Proof of Proposition 3. We have
$$\frac{1}{\bar F(q|x)}\bigl(\hat\psi_n(q(1+\varepsilon),x)-\mathbb{E}(\hat\psi_n(q(1+\varepsilon),x))\bigr) = \frac{1}{n\bar F(q|x)}\sum_{i=1}^n\bigl[K_h(x-X_i)\mathbb{1}_{\{Y_i>q(1+\varepsilon)\}}-\mathbb{E}(K_h(x-X_i)\mathbb{1}_{\{Y_i>q(1+\varepsilon)\}})\bigr]$$
$$=\frac{1}{nh_n^d}\sum_{i=1}^n\bigl(v_{h,x,\varepsilon}(X_i,Y_i)-\mathbb{E}(v_{h,x,\varepsilon}(X_i,Y_i))\bigr) = \frac{1}{h_n^d\sqrt{n}}\,\beta_n(v_{h,x,\varepsilon}),$$
where
$$v_{h,x,\varepsilon}(u,v)=\frac{K\!\left(\frac{x-u}{h}\right)\mathbb{1}_{\{v>q(1+\varepsilon)\}}}{\bar F(q|x)}, \qquad \beta_n(g)=\frac{1}{\sqrt n}\sum_{i=1}^n\bigl(g(X_i,Y_i)-\mathbb{E}(g(X_i,Y_i))\bigr).$$
Define the class of functions
$$\mathcal{G}:=\mathcal{G}_{n,h}=\{v_{h,x,\varepsilon},\ x\in\mathbb{R}^d,\ \varepsilon\in[-\varepsilon_0,\varepsilon_0]\} \tag{12}$$
and let $\|\beta_n\|_{\mathcal{G}}=\sup_{g\in\mathcal{G}}|\beta_n(g)|$ and
$$\Theta_n=\sup_{x\in\mathbb{R}^d,\,\varepsilon\in[-\varepsilon_0,\varepsilon_0]}\frac{|\hat\psi_n(q(1+\varepsilon),x)-\mathbb{E}(\hat\psi_n(q(1+\varepsilon),x))|}{\bar F(q|x)}.$$

Consequently, for any $\gamma>0$,
$$P(\Theta_n>\gamma)\le P\bigl(\sqrt n\,\|\beta_n\|_{\mathcal{G}}\ge\gamma nh^d\bigr) \le P\Bigl(\max_{1\le m\le n}\sqrt m\,\|\beta_m\|_{\mathcal{G}}\ge\gamma nh^d\Bigr). \tag{13}$$
We have then to evaluate $\max_{1\le m\le n}\sqrt m\,\|\beta_m\|_{\mathcal{G}}$. By Talagrand's inequality (see A.1 in [12]), we have for any $t>0$ and suitable finite constants $A_1,A_2>0$,
$$P\left(\max_{1\le m\le n}\sqrt m\,\|\beta_m\|_{\mathcal{G}}>A_1\Bigl(\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}+t\Bigr)\right) \le 2\exp(-A_2t^2/n\sigma^2)+2\exp(-A_2t/M),$$
where $(\varepsilon_i)_i$ is a sequence of independent Rademacher random variables, independent of the random vectors $(X_i,Y_i)_{1\le i\le n}$, and
$$\sup_{g\in\mathcal{G}}\|g\|_\infty\le M, \qquad \sup_{g\in\mathcal{G}}\mathrm{Var}(g(X,Y))\le\sigma^2.$$

Here $\|g\|_\infty\le\frac{\|K\|_\infty}{\alpha_n}=:M$, and
$$\mathrm{Var}(v_{h,x,\varepsilon}(X,Y))\le\mathbb{E}(v^2_{h,x,\varepsilon}(X,Y)) = \frac{1}{\bar F^2(q|x)}\,\mathbb{E}\Bigl(K^2\Bigl(\frac{x-X}{h}\Bigr)\mathbb{1}_{\{Y>q(1+\varepsilon)\}}\Bigr) = \frac{1}{\bar F(q|x)}\int K^2\Bigl(\frac{x-u}{h}\Bigr)\frac{\bar F(q(1+\varepsilon)|u)}{\bar F(q|x)}\,g(u)\,du$$
$$\le \frac{h_n^d}{\alpha_n}\sup_{x\in\mathbb{R}^d,\,\varepsilon\in[-\varepsilon_0,\varepsilon_0]}(1+A(q(1+\varepsilon),q,x,h))\,\|K\|_2^2\,\|g\|_\infty = \frac{h_n^d}{\alpha_n}L=:\sigma^2, \tag{14}$$
for some positive constant $L$, since for $n$ sufficiently large
$$\sup_{x\in\mathbb{R}^d,\,\varepsilon\in[-\varepsilon_0,\varepsilon_0]}A(q(1+\varepsilon),q,x,h)\le\mathrm{cst}.$$

Combining this with the above Talagrand inequality, we obtain
$$P\left(\max_{1\le m\le n}\sqrt m\,\|\beta_m\|_{\mathcal{G}}>A_1\Bigl(\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}+t\Bigr)\right) \le 2\exp\Bigl(-\frac{A_2t^2\alpha_n}{nh_n^dL}\Bigr)+2\exp\Bigl(-\frac{A_2t\alpha_n}{\|K\|_\infty}\Bigr). \tag{15}$$
The last bound together with (13) gives
$$P\left(\Theta_n>A_1n^{-1}h_n^{-d}\Bigl(\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}+t\Bigr)\right) \le 2\exp\Bigl(-\frac{A_2t^2\alpha_n}{nh_n^dL}\Bigr)+2\exp\Bigl(-\frac{A_2t\alpha_n}{\|K\|_\infty}\Bigr). \tag{16}$$
We have now to evaluate $\mathbb{E}\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\|_{\mathcal{G}}$. We will argue as in the proof of Proposition A.1 in [12]. We have, by (6.9) of Proposition 6.8 in [24],
$$\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}} \le 6t_n+6\,\mathbb{E}\Bigl(\max_{i\le n}\|\varepsilon_ig(X_i,Y_i)\|_{\mathcal{G}}\Bigr) \le 6t_n+\frac{6\|K\|_\infty}{\alpha_n}, \tag{17}$$
where we have defined
$$t_n=\inf\left\{t>0,\ P\Bigl(\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}>t\Bigr)\le\frac{1}{24}\right\}.$$

Our purpose is then to control $t_n$. Define the event
$$F_n=\left\{n^{-1}\sup_{g\in\mathcal{G}}\sum_{j=1}^ng^2(X_j,Y_j)\le 64\sigma^2\right\},$$
where $\sigma^2$ is as in (14). Let $g_0$ be a fixed element of $\mathcal{G}$. We have
$$\mathbb{E}\Bigl|\sum_{i=1}^n\varepsilon_ig_0(X_i,Y_i)\mathbb{1}_{F_n}\Bigr|\le 8\sigma\sqrt n.$$
By (A8) of [12], we now have to control $\mathcal{N}(\varepsilon,\mathcal{G},d_{n,2})$. Recall that $\mathcal{N}(\varepsilon,\mathcal{G},d_Q)$ is the minimal number of balls $\{g',\,d_Q(g,g')<\varepsilon\}$ of $d_Q$-radius $\varepsilon$ needed to cover $\mathcal{G}$, $d_Q$ is the $L_2(Q)$-metric and
$$d_{n,2}^2(f,g)=d_{Q_n}^2(f,g)=\int(f(x)-g(x))^2\,dQ_n(x),$$
with $Q_n=\frac1n\sum_{i=1}^n\delta_{(X_i,Y_i)}$. In other words,
$$d_{n,2}^2(f,g)=\frac1n\sum_{i=1}^n(f(X_i,Y_i)-g(X_i,Y_i))^2.$$
We note first that, on the event $F_n$,
$$\mathcal{N}(\varepsilon,\mathcal{G},d_{n,2})=1, \quad\text{whenever }\varepsilon>16\sigma.$$
We will thus suppose that $\varepsilon\le 16\sigma$. We have
$$\mathcal{N}(\varepsilon,\mathcal{G},d_{n,2})=\mathcal{N}(\varepsilon,\mathcal{G},d_{Q_n})\le\mathcal{N}(\varepsilon\alpha_n,\mathcal{F},d_{Q_n}),$$
where $\mathcal{F}$ is as defined by (11). Recall that $\mathcal{N}(\varepsilon,\mathcal{F})=\sup_Q\mathcal{N}(\varepsilon\|K\|_\infty,\mathcal{F},d_Q)$, where the supremum is taken over all probability measures $Q$ on $\mathbb{R}^d\times\mathbb{R}$. We have supposed that
$$\mathcal{N}(\varepsilon,\mathcal{F})\le C\varepsilon^{-\nu}, \quad\text{for some }C,\nu>1\text{ and all }\varepsilon\in(0,1).$$
Consequently,
$$\mathcal{N}(\varepsilon,\mathcal{G},d_{n,2})\le C\Bigl(\frac{\alpha_n\varepsilon}{\|K\|_\infty}\Bigr)^{-\nu}, \tag{18}$$
as soon as $\alpha_n\varepsilon<\|K\|_\infty$. Hence, we have almost surely on the event $F_n$,

$$\int_0^\infty\sqrt{\ln(\mathcal{N}(\varepsilon,\mathcal{G},d_{n,2}))}\,d\varepsilon = \int_0^{16\sigma}\sqrt{\ln(\mathcal{N}(\varepsilon,\mathcal{G},d_{n,2}))}\,d\varepsilon \le \sum_{i=0}^{\infty}\int_{2^{-i-1}16\sigma}^{2^{-i}16\sigma}\sqrt{\ln(C)+\nu\ln\Bigl(\frac{\|K\|_\infty}{\alpha_n\varepsilon}\Bigr)}\,d\varepsilon$$
$$\le \sum_{i=0}^{\infty}2^{-i-1}16\sigma\sqrt{\ln(C)+\nu\ln\Bigl(\frac{2^{i+1}\|K\|_\infty}{16\,\alpha_n\sigma}\Bigr)} \le C_2\sum_{i=0}^{\infty}\frac{\sqrt{i+1}}{2^{i+1}}\sqrt{\sigma^2\max\Bigl(\ln(C_1),\ln\Bigl(\frac{1}{\alpha_n^2\sigma^2}\Bigr)\Bigr)},$$
for some positive constants $C_1,C_2$ that depend only on $C$, $\nu$ and $\|K\|_\infty$. We conclude, using (A8) of [12],
$$\mathbb{E}\left(\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}\mathbb{1}_{F_n}\right) \le C_2\sqrt{n\sigma^2\max\Bigl(\ln(C_1),\ln\Bigl(\frac{1}{\alpha_n^2\sigma^2}\Bigr)\Bigr)}. \tag{19}$$
We now use Inequality A2 in [12] (which is due to Giné and Zinn), with $t=64\sqrt 2$. We obtain, for $m\ge 1$, since $\|g\|_\infty\le\frac{\|K\|_\infty}{\alpha_n}$ for any $g\in\mathcal{G}$,
$$P(F_n^c)=P\left(n^{-1}\sup_{g\in\mathcal{G}}\sum_{j=1}^ng^2(X_j,Y_j)>64\sigma^2\right) \le 4P\bigl(\mathcal{N}(\rho n^{-1/4},\mathcal{G},d_{n,2})\ge m\bigr)+8m\exp(-n\sigma^2\alpha_n^2/\|K\|_\infty^2),$$
where $\rho n^{-1/4}=n^{-1/4}\min(\sigma n^{1/4},n^{1/4})=\min(\sigma,1)$. Hence, by (18),
$$\mathcal{N}(\rho n^{-1/4},\mathcal{G},d_{n,2})\le C\Bigl(\frac{\alpha_n\min(\sigma,1)}{\|K\|_\infty}\Bigr)^{-\nu}.$$
Consequently, we have for $m=\bigl[2C\bigl(\frac{\alpha_n\min(\sigma,1)}{\|K\|_\infty}\bigr)^{-\nu}\bigr]$,
$$P(F_n^c)\le 16C\Bigl(\frac{\alpha_n\min(\sigma,1)}{\|K\|_\infty}\Bigr)^{-\nu}\exp(-n\sigma^2\alpha_n^2/\|K\|_\infty^2).$$
The last bound together with (19) gives

$$P\left(\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}>t\right) \le P(F_n^c)+\frac1t\,\mathbb{E}\left(\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}\mathbb{1}_{F_n}\right)$$
$$\le 16C\Bigl(\frac{\|K\|_\infty}{\alpha_n\min(\sigma,1)}\Bigr)^{\nu}\exp(-n\sigma^2\alpha_n^2/\|K\|_\infty^2) + \frac{C_2}{t}\sqrt{n\sigma^2\max\Bigl(\ln(C_1),\ln\Bigl(\frac{1}{\alpha_n^2\sigma^2}\Bigr)\Bigr)}. \tag{20}$$
Let us control the second term in (20). Recall that, by (14), $\sigma^2=L\frac{h_n^d}{\alpha_n}$ and thus $\alpha_n^2\sigma^2=Lh_n^d\alpha_n$ for some positive constant $L$. This fact, together with $h_n^d\alpha_n\to 0$ as $n\to\infty$, allows us to deduce that for $n$ sufficiently large,
$$\frac{C_2}{t}\sqrt{n\sigma^2\max\Bigl(\ln(C_1),\ln\Bigl(\frac{1}{\alpha_n^2\sigma^2}\Bigr)\Bigr)} \le \frac{\mathrm{Cst}}{t}\sqrt{\frac{nh_n^d}{\alpha_n}\ln\Bigl(\frac{1}{\alpha_nh_n^d}\Bigr)}. \tag{21}$$
Our task now is to control the first term in (20). We have
$$16C\Bigl(\frac{\|K\|_\infty}{\alpha_n\min(\sigma,1)}\Bigr)^{\nu}\exp(-n\sigma^2\alpha_n^2/\|K\|_\infty^2) \le L_0^{\nu/2}\exp\left(-nh_n^d\alpha_n\Bigl(L_1+\frac{\nu}{2nh_n^d\alpha_n}\ln(\alpha_nh_n^d\wedge\alpha_n^2)\Bigr)\right),$$
which tends to $0$ as $n\to\infty$ as soon as $nh_n^d\alpha_n\to\infty$ and $\frac{\ln(\alpha_nh_n^d\wedge\alpha_n^2)}{nh_n^d\alpha_n}\to 0$. Hence, for
$$t\ge 48\,\mathrm{Cst}\sqrt{\frac{nh_n^d}{\alpha_n}\ln\Bigl(\frac{1}{\alpha_nh_n^d}\Bigr)}=:t_n,$$
Inequalities (20) and (21) give, for $n$ sufficiently large and for $t\ge t_n$,
$$P\left(\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}>t\right)\le\frac{1}{24}.$$
We conclude, using this fact together with Inequality (17), that
$$\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}} \le c\left(\frac{1}{\alpha_n}+\sqrt{\frac{nh_n^d}{\alpha_n}\ln\Bigl(\frac{1}{\alpha_nh_n^d}\Bigr)}\right) = O\left(\sqrt{\frac{nh_n^d}{\alpha_n}\ln\Bigl(\frac{1}{\alpha_nh_n^d}\Bigr)}\right). \tag{22}$$

Recalling that
$$\Theta_n=\sup_{x\in\mathbb{R}^d,\,\varepsilon\in[-\varepsilon_0,\varepsilon_0]}\frac{|\hat\psi_n(q(1+\varepsilon),x)-\mathbb{E}(\hat\psi_n(q(1+\varepsilon),x))|}{\bar F(q|x)},$$
and collecting (22) and (16) yields
$$P\left(\Theta_n>A_1n^{-1}h_n^{-d}\Bigl(\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}+t\Bigr)\right) \le 2\left[\exp\Bigl(-\frac{A_2\tilde L^2\ln(\ln n)}{L}\Bigr)+\exp\Bigl(-\frac{A_2\tilde L}{\|K\|_\infty}\sqrt{nh_n^d\alpha_n\ln(\ln n)}\Bigr)\right], \tag{23}$$
for any $t\ge\Bigl(\mathrm{Cst}\sqrt{\frac{nh_n^d}{\alpha_n}\ln\bigl(\frac{1}{\alpha_nh_n^d}\bigr)}\Bigr)\vee\tilde L\sqrt{\frac{nh_n^d}{\alpha_n}\ln(\ln n)}$ and some $\tilde L>0$. Now,
$$\frac{\ln n}{n\alpha_nh_n^d}+\frac{\ln(\alpha_nh_n^d\wedge\alpha_n^2)}{n\alpha_nh_n^d}\le\frac{\ln(n\alpha_nh_n^d)}{n\alpha_nh_n^d},$$
which tends to $0$ as $n$ tends to infinity, since $\lim_{n\to\infty}n\alpha_nh_n^d=\infty$. Hence
$$\lim_{n\to\infty}\frac{\ln n}{nh_n^d\alpha_n}=0,$$
which proves that, for $n$ sufficiently large, $nh_n^d\alpha_n\ge\ln n$. We then conclude from (23) that
$$P\left(\Theta_n>A_1n^{-1}h_n^{-d}\Bigl(\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ig(X_i,Y_i)\Bigr\|_{\mathcal{G}}+t\Bigr)\right)\le 4(\ln n)^{-\rho},$$
since $nh_n^d\alpha_n\ge\ln n$. We choose $\tilde L$ in such a way that $\rho>1$. Proposition 3 is then proved thanks to the Borel-Cantelli lemma.

We continue the proof of Theorem 1. Inequality (10), together with Lemma 1 and Condition (4), gives, for some universal positive constant $C$,
$$\left|\frac{\bar F(\hat q_n(\alpha_n|x)|x)}{\bar F(q(\alpha_n|x)|x)}-1\right| \le \frac{\kappa}{nh^d\alpha_n\,\hat g_n(x)} + C\sup_{|\varepsilon|\le\varepsilon_0}A(q(1+\varepsilon),q(1+\varepsilon),x,h_n)$$
$$+\,C\sup_{|\varepsilon|\le\varepsilon_0}(1+A(q(1+\varepsilon),q(1+\varepsilon),x,h_n))\frac{|\hat g_n(x)-\mathbb{E}\hat g_n(x)|}{\hat g_n(x)} + C\sup_{|\varepsilon|\le\varepsilon_0}\frac{|\hat\psi_n(q(1+\varepsilon),x)-\mathbb{E}(\hat\psi_n(q(1+\varepsilon),x))|}{\bar F(q|x)\,\hat g_n(x)}. \tag{24}$$
We first use Einmahl and Mason's result (cf. Proposition 2 above). All the requirements of this result are satisfied under those of Theorem 1. This gives that, for $c\ln n/(nh_n^d)\le 1$,
$$\sqrt{\frac{nh_n^d}{\ln(h_n^{-d})\vee\ln\ln n}}\ \sup_{x\in\mathbb{R}^d}|\hat g_n(x)-\mathbb{E}\hat g_n(x)|<C, \tag{25}$$
almost surely. Our task now is to apply Proposition 3. We first claim that:

Lemma 2 Under Condition (K3), the class of functions $\mathcal{F}$ defined by (11) satisfies $\mathcal{N}(\varepsilon,\mathcal{F})\le C\varepsilon^{-\nu}$, for $C>0$, $\varepsilon>0$, $\nu>1$.

Proof of Lemma 2. Define the set of functions $\mathcal{F}=\mathcal{K}\cdot\mathcal{I}$, where
$$\mathcal{K}=\Bigl\{u\longmapsto K\Bigl(\frac{x-u}{h}\Bigr),\ x\in\mathbb{R}^d,\ h>0\Bigr\}, \qquad \mathcal{I}=\{v\longmapsto\mathbb{1}_{\{v>q(1+\varepsilon)\}},\ x\in\mathbb{R}^d,\ n\in\mathbb{N},\ |\varepsilon|\le\varepsilon_0\}.$$
The proof of Lemma 2 follows from Lemma A.1 of [12], since $\mathcal{N}(\varepsilon,\{v\longmapsto\mathbb{1}_{\{v>y\}},\ y\in\mathbb{R}\})\le C\varepsilon^{-\tilde\nu}$ with $\tilde\nu>0$ and $C>0$.

Consequently, all the requirements of Proposition 3 are satisfied under those of Theorem 1. The conclusion of Proposition 3, together with (25), (24) and the facts that
$$\sqrt{\frac{\ln(h_n^{-d})\vee\ln\ln n}{nh_n^d}}\le\sqrt{\frac{\ln(\alpha_n^{-1}h_n^{-d})\vee\ln\ln n}{nh_n^d\alpha_n}}, \qquad \frac{1}{nh_n^d\alpha_n}\le\sqrt{\frac{\ln(\alpha_n^{-1}h_n^{-d})\vee\ln\ln n}{n\alpha_nh_n^d}},$$
completes the proof of Theorem 1.

4.2 Proof of Proposition 1

Let us introduce, for $i=1,\dots,n$, the triangular array of i.i.d. random variables $Z_i^{(n)}(x)$ defined by $Z_i^{(n)}(x)=Y_i\,\mathbb{1}_{\{\|x-X_i\|\le h\}}$. Their common survival function can be expanded as
$$\bar\Psi_n(t,x)=P(Z_1^{(n)}(x)>t)=\int_{\|x-u\|\le h}\bar F(t|u)g(u)\,du = h^d\int_{\|v\|\le 1}\bar F(t|x-hv)g(x-hv)\,dv,$$
or, equivalently,
$$\frac{\bar\Psi_n(t,x)}{h^d\bar F(t|x)g(x)} = \int_{\|v\|\le 1}dv + \int_{\|v\|\le 1}\Bigl(\frac{\bar F(t|x-hv)}{\bar F(t|x)}-1\Bigr)\frac{g(x-hv)}{g(x)}\,dv + \int_{\|v\|\le 1}\Bigl(\frac{g(x-hv)}{g(x)}-1\Bigr)dv.$$
Letting $v_d=\int_{\|v\|\le 1}dv$ be the volume of the unit sphere, and assuming that $g$ is Lipschitzian, it follows that
$$\frac{\bar\Psi_n(t,x)}{h^d\bar F(t|x)g(x)}=v_d+o(1)+O(A(t,t,x,0,h)),$$
and, introducing $\beta_n(x)=n\bar\Psi_n(q(\alpha_n|x),x)$, we obtain
$$\beta_n(x)=v_d\,g(x)\,nh^d\alpha_n(1+o(1))$$
under the condition $A(q(\alpha_n|x),q(\alpha_n|x),x,0,h)\to 0$ as $n\to\infty$. We now need the following lemma (also available for triangular arrays).

Lemma 3 (Klass, 1985) [23] Let $Z,Z_1,Z_2,\dots$ be a sequence of i.i.d. random vectors and define $M_n=\max\{Z_1,\dots,Z_n\}$. Suppose that $(b_n)$ is nondecreasing, $P(Z>b_n)\to 0$ and $nP(Z>b_n)\to\infty$ as $n\to\infty$. If, moreover,
$$\sum_{n=1}^{\infty}P(Z>b_n)\exp\{-nP(Z>b_n)\}=\infty,$$
then $\limsup_{n\to\infty}M_n/b_n<1$ a.s.

From Lemma 3, a sufficient condition for
$$\limsup_{n\to\infty}\frac{\max_{1\le i\le n}Z_i^{(n)}(x)}{q(\alpha_n|x)}<1 \quad\text{a.s.} \tag{26}$$
is
$$\sum_{n=1}^{\infty}\frac{\beta_n(x)}{n}\exp\{-\beta_n(x)\}=\infty,$$
which is fulfilled under (6). Finally,
$$\frac{\hat q_n(\alpha_n|x)}{q(\alpha_n|x)}\le\frac{\max_{1\le i\le n}Z_i^{(n)}(x)}{q(\alpha_n|x)}$$
and the conclusion follows from (26).

4.3 Proofs of corollaries

Proof of Corollary 1. For all $\tau\in[0,1]$ and $(u,x)$ such that $d(u,x)\le Rh_n$, we have
$$\frac{\bar F(q(\alpha_n|x)(1+\varepsilon)|u)}{\bar F(q(\alpha_n|x)(1+\varepsilon)^\tau|x)} = q(\alpha_n|x)^{\theta(x)-\theta(u)}(1+\varepsilon)^{\tau\theta(x)-\theta(u)}$$
$$=\exp\left\{\frac{\theta(u)-\theta(x)}{\theta(x)}\log\alpha_n+(\tau\theta(x)-\theta(u))\log(1+\varepsilon)\right\} = \exp\{O(h_n\log\alpha_n)+O(\tau\theta(x)-\theta(u))\}$$
$$=(1+O(h_n\log\alpha_n))\exp\{O(\tau\theta(x)-\theta(u))\}. \tag{27}$$
Assuming that $h_n\log\alpha_n\to 0$ as $n\to\infty$, it follows that (27) is bounded above for all $\tau\in[0,1]$ and $(u,x)$ such that $d(u,x)\le Rh_n$, and therefore condition (4) is fulfilled. If, moreover, $\tau=1$ then $O(\theta(x)-\theta(u))=O(h_n)$ and thus (27) tends to one as $n$ goes to infinity. Proposition 1 then entails that assumption (3) holds. Theorem 1 implies that
$$\left|1-\frac{\bar F(\hat q_n(\alpha_n|x)|x)}{\bar F(q(\alpha_n|x)|x)}\right| = \left|1-\Bigl(\frac{\hat q_n(\alpha_n|x)}{q(\alpha_n|x)}\Bigr)^{-\theta(x)}\right|\to 0$$
almost surely as $n\to\infty$. The conclusion follows.

Proof of Corollary 2. For all $\tau\in[0,1]$ and $(u,x)$ such that $d(u,x)\le Rh_n$, we have
$$\frac{\bar F(q(\alpha_n|x)(1+\varepsilon)|u)}{\bar F(q(\alpha_n|x)(1+\varepsilon)^\tau|x)} = \exp\left\{(1+\varepsilon)\log(\alpha_n)\Bigl(\frac{\theta(u)-\theta(x)}{\theta(x)}+1-(1+\varepsilon)^{\tau-1}\Bigr)\right\}$$
$$=\exp\{O(h_n\log\alpha_n)\}\exp\Bigl\{(1+\varepsilon)\log(\alpha_n)\bigl(1-(1+\varepsilon)^{\tau-1}\bigr)\Bigr\} = (1+O(h_n\log\alpha_n))\,o(1)=o(1). \tag{28}$$
Assuming that $h_n\log\alpha_n\to 0$ as $n\to\infty$, it follows that (28) tends to zero as $n$ goes to infinity. Assumptions (3) and (4) thus both hold. Theorem 1 implies that
$$\left|1-\frac{\bar F(\hat q_n(\alpha_n|x)|x)}{\bar F(q(\alpha_n|x)|x)}\right| = \bigl|1-\exp\bigl((q(\alpha_n|x)-\hat q_n(\alpha_n|x))\theta(x)\bigr)\bigr|\to 0$$
almost surely as $n\to\infty$. The conclusion follows.

Appendix: proof of auxiliary results

Proof of Lemma 1. Clearly,
$$\left|\frac{\bar F_n(y|x)}{\bar F(y|x)}-1\right| \le \left|\frac{\bar F_n(y|x)}{\bar F(y|x)}-\frac{\mathbb{E}(\hat\psi_n(y,x))}{\bar F(y|x)\mathbb{E}(\hat g_n(x))}\right| + \left|\frac{\mathbb{E}(\hat\psi_n(y,x))}{\bar F(y|x)\mathbb{E}(\hat g_n(x))}-1\right|. \tag{29}$$
We have
$$\mathbb{E}(\hat\psi_n(y,x))=\frac1n\sum_{i=1}^n\mathbb{E}(K_h(x-X_i)\mathbb{1}_{\{Y_i>y\}})=\mathbb{E}(K_h(x-X_1)\mathbb{1}_{\{Y_1>y\}})$$
$$=\mathbb{E}(K_h(x-X_1)P(Y_1>y|X_1))=\int K_h(x-z)P(Y_1>y|X_1=z)g(z)\,dz = \int K_h(x-z)\bar F(y|z)g(z)\,dz,$$
and $\mathbb{E}(\hat g_n(x))=\frac1n\sum_{i=1}^n\mathbb{E}(K_h(x-X_i))=\mathbb{E}(K_h(x-X_1))$. Consequently,
$$\frac{\mathbb{E}(\hat\psi_n(y,x))}{\bar F(y|x)\mathbb{E}(\hat g_n(x))}-1 = \frac{1}{\mathbb{E}(K_h(x-X_1))}\left(\int K_h(x-z)\Bigl[\frac{\bar F(y|z)}{\bar F(y|x)}-1\Bigr]g(z)\,dz\right) = \frac{1}{\mathbb{E}(K_h(x-X_1))}\left(\int K_h(u)\Bigl[\frac{\bar F(y|x-u)}{\bar F(y|x)}-1\Bigr]g(x-u)\,du\right).$$
We conclude, since the kernel $K$ is compactly supported, that for some $R>0$,
$$\left|\frac{\mathbb{E}(\hat\psi_n(y,x))}{\bar F(y|x)\mathbb{E}(\hat g_n(x))}-1\right| \le \sup_{\{x',\,d(x,x')\le hR\}}\left|\frac{\bar F(y|x')}{\bar F(y|x)}-1\right| = A(y,y,x,h). \tag{30}$$
Now,
$$\left|\frac{\bar F_n(y|x)}{\bar F(y|x)}-\frac{\mathbb{E}(\hat\psi_n(y,x))}{\bar F(y|x)\mathbb{E}(\hat g_n(x))}\right| \le \frac{|\hat\psi_n(y,x)-\mathbb{E}(\hat\psi_n(y,x))|}{\bar F(y|x)\,\hat g_n(x)} + \frac{\mathbb{E}(\hat\psi_n(y,x))\,|\hat g_n(x)-\mathbb{E}\hat g_n(x)|}{\bar F(y|x)\,\hat g_n(x)\,\mathbb{E}\hat g_n(x)}$$
$$\le \frac{|\hat\psi_n(y,x)-\mathbb{E}(\hat\psi_n(y,x))|}{\bar F(y|x)\,\hat g_n(x)} + (1+A(y,y,x,h))\frac{|\hat g_n(x)-\mathbb{E}\hat g_n(x)|}{\hat g_n(x)},$$
by (30). The last bound together with (30) and (29) proves Lemma 1.

References

[1] Abdi, S., Abdi, A., Dabo-Niang, S. and Diop, A. (2010). Consistency of a nonparametric conditional quantile estimator for random fields. Mathematical Methods of Statistics, 19(1), 1-21.

[2] Beirlant, J. and Goegebeur, Y. (2004). Local polynomial maximum likelihood estimation for Pareto-type distributions. Journal of Multivariate Analysis, 89, 97-118.

[3] Berlinet, A., Gannoun, A. and Matzner-Lober, E. (2001). Asymptotic normality of convergent estimates of conditional quantiles. Statistics, 35, 139-169.

[4] Beirlant, J., Goegebeur, Y., Segers, J. and Teugels, J. (2004). Statistics of Extremes: Theory and Applications. Wiley.

[5] Chavez-Demoulin, V. and Davison, A.C. (2005). Generalized additive modelling of sample extremes. Journal of the Royal Statistical Society, Series C, 54, 207-222.

[6] Chernozhukov, V. (2005). Extremal quantile regression. The Annals of Statistics, 33(2), 806-839.

[7] Daouia, A., Gardes, L., Girard, S. and Lekina, A. (2011). Kernel estimators of extreme level curves. Test, 20, 311-333.

[8] Daouia, A., Gardes, L. and Girard, S. (2013). On kernel smoothing for extremal quantile regression. Bernoulli, 19, 2557-2589.

[9] Davison, A.C. and Ramesh, N.I. (2000). Local likelihood smoothing of sample extremes. Journal of the Royal Statistical Society, Series B, 62, 191-208.

[10] Davison, A.C. and Smith, R.L. (1990). Models for exceedances over high thresholds. Journal of the Royal Statistical Society, Series B, 52, 393-442.

[11] Dabo-Niang, S. and Laksaci, A. (2012). Nonparametric quantile regression estimation for functional dependent data. Communications in Statistics - Theory and Methods, 41(7), 1254-1268.

[12] Einmahl, U. and Mason, D.M. (2000). An empirical process approach to the uniform consistency of kernel-type function estimators. Journal of Theoretical Probability, 13, 1-37.

[13] Einmahl, U. and Mason, D.M. (2005). Uniform in bandwidth consistency of kernel-type function estimators. The Annals of Statistics, 33(3), 1380-1403.

[14] Ezzahrioui, M. and Ould-Saïd, E. (2008). Asymptotic results of a nonparametric conditional quantile estimator for functional time series. Communications in Statistics - Theory and Methods, 37(16-17), 2735-2759.

[15] Gardes, L. and Girard, S. (2008). A moving window approach for nonparametric estimation of the conditional tail index. Journal of Multivariate Analysis, 99, 2368-2388.

[16] Gardes, L. and Girard, S. (2010). Conditional extremes from heavy-tailed distributions: An application to the estimation of extreme rainfall return levels. Extremes, 13, 177-204.

[17] Gardes, L. and Girard, S. (2012). Functional kernel estimators of large conditional quantiles. Electronic Journal of Statistics, 6, 1715-1744.

[18] Gardes, L., Girard, S. and Lekina, A. (2010). Functional nonparametric estimation of conditional extreme quantiles. Journal of Multivariate Analysis, 101, 419-433.

[19] Girard, S. and Jacob, P. (2008). Frontier estimation via kernel regression on high power-transformed data. Journal of Multivariate Analysis, 99, 403-420.

[20] Girard, S. and Menneteau, L. (2005). Central limit theorems for smoothed extreme value estimates of Poisson point processes boundaries. Journal of Statistical Planning and Inference, 135, 433-460.

[21] Hall, P. and Tajvidi, N. (2000). Nonparametric analysis of temporal trend when fitting parametric models to extreme-value data. Statistical Science, 15, 153-167.

[22] Jurečková, J. (2007). Remark on extreme regression quantile. Sankhya, 69, Part 1, 87-100.

[23] Klass, M. (1985). The Robbins-Siegmund series criterion for partial maxima. The Annals of Probability, 13(4), 1369-1370.

[24] Ledoux, M. and Talagrand, M. (1991). Probability in Banach Spaces. Ergebnisse der Mathematik, Springer.

[25] Ould-Saïd, E., Yahia, D. and Necir, A. (2009). A strong uniform convergence rate of a kernel conditional quantile estimator under random left-truncation and dependent data. Electronic Journal of Statistics, 3, 426-445.

[26] Park, B.U. (2001). On nonparametric estimation of data edges. Journal of the Korean Statistical Society, 30, 265-280.

[27] Samanta, T. (1989). Non-parametric estimation of conditional quantiles. Statistics and Probability Letters, 7, 407-412.

[28] Smith, R.L. (1989). Extreme value analysis of environmental time series: An application to trend detection in ground-level ozone (with discussion). Statistical Science, 4, 367-393.

[29] Stone, C.J. (1977). Consistent nonparametric regression (with discussion). The Annals of Statistics, 5, 595-645.

[30] Stute, W. (1986). Conditional empirical processes. The Annals of Statistics, 14, 638-647.

[31] Tsay, R.S. (2002). Analysis of Financial Time Series. Wiley, New York.
