
HAL Id: inria-00596979

https://hal.inria.fr/inria-00596979

Submitted on 30 May 2011


Estimating an endpoint with high order moments

Stéphane Girard, Armelle Guillou, Gilles Stupfler

To cite this version:

Stéphane Girard, Armelle Guillou, Gilles Stupfler. Estimating an endpoint with high order moments. Test, Spanish Society of Statistics and Operations Research/Springer, 2012, 21 (4), pp. 697-729. 10.1007/s11749-011-0277-8. inria-00596979


Stéphane Girard(1), Armelle Guillou(2) & Gilles Stupfler(2)

(1) Team Mistis, INRIA Rhône-Alpes & LJK, Inovallée, 655, av. de l'Europe, Montbonnot, 38334 Saint-Ismier cedex, France

(2) Université de Strasbourg & CNRS, IRMA, UMR 7501, 7 rue René Descartes, 67084 Strasbourg cedex, France

Abstract. We present a new method for estimating the endpoint of a unidimensional sample when the distribution function decreases at a polynomial rate to zero in the neighborhood of the endpoint. The estimator is based on the use of high order moments of the variable of interest. It is assumed that the order of the moments goes to infinity, and we give conditions on its rate of divergence to get the asymptotic normality of the estimator. The good performance of the estimator is illustrated on some finite sample situations.

AMS Subject Classifications: 62G32, 62G05.

Keywords: Endpoint estimation, high order moments, consistency, asymptotic normality.

1 Introduction

Let $(X_1, \ldots, X_n)$ be $n$ independent copies of a random variable $X$, with bounded support $[0, \theta]$, where $\theta > 0$ is unknown. In this paper, we address the problem of estimating the (right) endpoint $\theta$ of the survival function $\overline{F}$ of $X$. Pioneering work on endpoint estimation includes Quenouille (1949), who introduced a jackknife estimate of the endpoint based on the naive maximum estimator. This approach was further studied by Miller (1964), Robson and Whitlock (1964), Cooke (1979) and de Haan (1981), to name a few. A well-known reference on endpoint estimation is Hall (1982), recently improved by Li and Peng (2009), in which a maximum likelihood method is used when $\overline{F}$ belongs to the Hall model, see for instance Section 5. Hall's work gave a start to the study of general threshold-based methods, together with the use of the approximation of excesses by Generalized Pareto Distributions, see for instance Smith and Weissman (1985) and Smith (1987). A general construction of estimators of the endpoint using a threshold is given in de Haan and Ferreira (2006, p. 146). Some popular estimators in this framework, called the Peaks Over Threshold (POT) approach, are probability weighted moments

estimators (Hosking and Wallis, 1987) and maximum likelihood estimators (Drees et al., 2003).

Other studies include Loh (1984) and Athreya and Fukuchi (1997) with a bootstrap method, Hall and Wang (1999) for a minimal-distance method, Goldenshluger and Tsybakov (2004) for endpoint estimation in presence of random errors, and Hall and Wang (2005) for a Bayesian likelihood approach. As far as detecting the presence of a finite endpoint is concerned, see Neves and Pereira (2010).

In this paper, we introduce an estimator using high moments of the variable of interest $X$. More precisely, the estimator is given by
$$\frac{1}{\hat{\theta}_n} = \frac{1}{a p_n}\left( ((a+1)p_n + 1)\,\frac{\hat{\mu}_{(a+1)p_n}}{\hat{\mu}_{(a+1)p_n+1}} - (p_n + 1)\,\frac{\hat{\mu}_{p_n}}{\hat{\mu}_{p_n+1}} \right) \qquad (1)$$
where $(p_n)$ is a nonrandom sequence such that $p_n \to \infty$, $a > 0$ and
$$\hat{\mu}_{p_n} = \frac{1}{n}\sum_{i=1}^{n} X_i^{p_n}$$
is the classical moment estimator of $\mu_{p_n} := \mathbb{E}(X^{p_n})$. From a practical point of view, taking high order moments gives more weight to observations close to $\theta$. From a theoretical point of view, the estimator $\hat{\theta}_n$ converges in probability to $\theta$ without any parametric assumption on the distribution of $X$, see Section 3. The asymptotic normality of the estimator is established in Section 4 under a semi-parametric assumption. Some examples of distributions satisfying this assumption are provided in Section 5. Some simulations are proposed in Section 6 to illustrate the efficiency of our estimator, and to compare it with estimators of the endpoint estimation literature. Auxiliary results are postponed to Appendix A and proved in Appendix B.

2 Construction of the estimator

To justify the introduction of our estimator (1), let first $Y$ be a random variable with survival function $\overline{G}$ defined by $\overline{G}(y) = (1 - y/\theta)^{\alpha}$ for all $y \in [0, \theta]$. We get, for all $p \geq 1$,
$$M_p := \mathbb{E}(Y^p) = p\int_0^{+\infty} y^{p-1}\, \overline{G}(y)\, dy = \alpha\, \theta^p\, B(p+1, \alpha) \qquad (2)$$
where $B(x, y) = \int_0^1 t^{x-1}(1-t)^{y-1}\, dt$ is the Beta function. This yields, for all $p \geq 1$,
$$\frac{M_p}{M_{p+1}} = \frac{1}{\theta}\left(1 + \frac{\alpha}{p+1}\right) \qquad (3)$$
leading to, for all arbitrary sequences $(p_n)$ and all $a > 0$,
$$\frac{1}{\theta} = \frac{1}{a p_n}\left( ((a+1)p_n + 1)\,\frac{M_{(a+1)p_n}}{M_{(a+1)p_n+1}} - (p_n + 1)\,\frac{M_{p_n}}{M_{p_n+1}} \right).$$
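Both identities above are exact for the model $\overline{G}(y) = (1 - y/\theta)^{\alpha}$ and can be checked numerically. The short Python sketch below is our own illustration (not part of the paper); the parameter values $\theta = 2$, $\alpha = 1.5$, $a = 1$ and $p = 10$ are arbitrary, and the moments are evaluated with formula (2) through the Beta function.

```python
import numpy as np
from scipy.special import beta

# Arbitrary illustrative values: endpoint theta, exponent alpha, tuning a, order p.
theta, alpha, a, p = 2.0, 1.5, 1.0, 10.0

def M(q):
    # Moment of order q of Y with survival function (1 - y/theta)**alpha on [0, theta],
    # i.e. formula (2): M_q = alpha * theta**q * B(q + 1, alpha).
    return alpha * theta ** q * beta(q + 1.0, alpha)

# Identity (3): M_p / M_{p+1} = (1/theta) * (1 + alpha / (p + 1))
print(M(p) / M(p + 1.0), (1.0 / theta) * (1.0 + alpha / (p + 1.0)))

# Endpoint identity: 1/theta recovered from the two moment ratios
rhs = (((a + 1.0) * p + 1.0) * M((a + 1.0) * p) / M((a + 1.0) * p + 1.0)
       - (p + 1.0) * M(p) / M(p + 1.0)) / (a * p)
print(1.0 / theta, rhs)
```

Since the identities are exact, each pair of printed values agrees up to floating-point error.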

Using the above ideas, we shall then define our estimator in two steps. First, the moment $M_{p}$ is replaced by the true moment $\mu_{p}$, and we set
$$\frac{1}{\Theta_n} := \frac{1}{a p_n}\left( ((a+1)p_n + 1)\,\frac{\mu_{(a+1)p_n}}{\mu_{(a+1)p_n+1}} - (p_n + 1)\,\frac{\mu_{p_n}}{\mu_{p_n+1}} \right).$$
Second, $\mu_{p_n}$ is estimated by the corresponding empirical moment $\hat{\mu}_{p_n}$; plugging $\hat{\mu}_{p_n}$ in $1/\Theta_n$ yields the estimator (1) of $1/\theta$.

3 Consistency

In this section, we state and prove the consistency of our estimator in a non-parametric context. The only hypothesis is

$(A_0)$: $X$ is positive and the endpoint $\theta = \sup\{x \geq 0 \mid F(x) < 1\}$ of $X$ is finite.

To this end, the first step is to prove a result similar to (3) for $\mu_{p_n}$.

Proposition 1. Under $(A_0)$, $\mu_{p_n}/\mu_{p_n+1} \to 1/\theta$ as $n \to \infty$.

This result is a straightforward consequence of Lemma 1. The second step consists in showing that $\mu_{p_n}$ can be replaced by its empirical counterpart $\hat{\mu}_{p_n}$. Defining $\mu_{1, p_n} = \mu_{p_n}/\theta^{p_n}$ as in Appendix A, we have the following result:

Proposition 2. Assume that $(A_0)$ holds. If $n\, \mu_{1, p_n} \to \infty$, then $\hat{\mu}_{p_n}/\mu_{p_n} \stackrel{P}{\to} 1$ as $n \to \infty$.

Proof. Let $Y_{nj} := [X_j/\theta]^{p_n}$ and $Z_{nj} := Y_{nj}/(n\, \mu_{1, p_n})$ for $1 \leq j \leq n$. The desired result is then tantamount to $\sum_{j=1}^n Z_{nj} \to 1$ in probability. Notice next that for all $n$, the $(Z_{nj})_{1 \leq j \leq n}$ are positive independent random variables, and $\sum_{j=1}^n \mathbb{E}(Z_{nj}) = 1$. According to Chow and Teicher (1997, Corollary 2, p. 358), it is enough to show that
$$\forall\, \varepsilon > 0, \quad \sum_{j=1}^n \mathbb{E}\big(Z_{nj}\, 1\!\!1_{\{Z_{nj} \geq \varepsilon\}}\big) \to 0$$
as $n \to \infty$. The $(Z_{nj})_{1 \leq j \leq n}$ being identically distributed, it is equivalent to prove that
$$\forall\, \varepsilon > 0, \quad \frac{1}{\mu_{1, p_n}}\, \mathbb{E}\big(Y_{n1}\, 1\!\!1_{\{Y_{n1} \geq \varepsilon n \mu_{1, p_n}\}}\big) \to 0.$$
Since $Y_{n1} \in [0, 1]$ almost surely and $n\, \mu_{1, p_n} \to \infty$, we get, for sufficiently large $n$,
$$\frac{1}{\mu_{1, p_n}}\, \mathbb{E}\big(Y_{n1}\, 1\!\!1_{\{Y_{n1} \geq \varepsilon n \mu_{1, p_n}\}}\big) = 0$$
and the result is proved.

Theorem 1. Suppose $(A_0)$ holds. If $n\, \mu_{1, (a+1)p_n} \to \infty$ then $\hat{\theta}_n \stackrel{P}{\to} \theta$ as $n \to \infty$.

Proof. Remark first that $\mu_{1, (a+1)p_n} \leq (a+1)\,\mu_{1, p_n}$, so that $n\, \mu_{1, p_n} \to \infty$. Second, Lemma 1 entails $n\, \mu_{1, p_n+1} \to \infty$ and $n\, \mu_{1, (a+1)p_n+1} \to \infty$ as $n \to \infty$. We can then apply Proposition 2 to rewrite the frontier estimator as
$$\frac{1}{\hat{\theta}_n} = \frac{1}{a p_n}\left( ((a+1)p_n + 1)\,\frac{\mu_{(a+1)p_n}}{\mu_{(a+1)p_n+1}}\,(1 + o_P(1)) - (p_n + 1)\,\frac{\mu_{p_n}}{\mu_{p_n+1}}\,(1 + o_P(1)) \right).$$
Using once again Lemma 1 yields $\mu_{p_n}/\mu_{p_n+1} \to 1/\theta$ and $\mu_{(a+1)p_n}/\mu_{(a+1)p_n+1} \to 1/\theta$ as $n \to \infty$. Replacing in the above equality, the conclusion follows.

4 Asymptotic normality

We now examine the asymptotic normality of our estimator. To this end, additional assumptions are introduced:

$(A_1)$: $\forall\, x \in [0, \theta]$, $\overline{F}(x) = (1 - x/\theta)^{\alpha}\, L((1 - x/\theta)^{-1})$ where $\theta, \alpha > 0$ and $L$ is a slowly varying function at infinity, i.e. such that $L(ty)/L(y) \to 1$ as $y \to \infty$ for all $t > 0$.

$(A_2)$: $\forall\, x \geq 1$, $L(x) = \exp\left(\int_1^x \eta(t)\, t^{-1}\, dt\right)$, where $\eta$ is a Borel bounded function tending to $0$ at infinity, continuously differentiable on $(1, +\infty)$, ultimately monotonic and not identically $0$. Besides, there exists $\nu \leq 0$ such that $x\, \eta'(x)/\eta(x) \to \nu$ as $x \to +\infty$.

In the general context of extreme-value theory, $(A_1)$ entails that the distribution belongs to the Weibull max-domain of attraction with extreme-value index $-1/\alpha$; we refer the reader to de Haan and Ferreira (2006). Regarding $(A_2)$, $L(x) = \exp\left(\int_1^x \eta(t)\, t^{-1}\, dt\right)$ is the Karamata representation for normalized slowly varying functions, see Bingham et al. (1987), p. 15. Under $(A_2)$, the function $|\eta|$ is ultimately non-increasing and regularly varying at infinity with index $\nu$, see Bingham et al. (1987), paragraph 1.4.2. In the extreme-value framework, $\nu$ is referred to as the second order parameter and $(A_2)$ is a second order condition. Finally, let us note that $(A_2)$ implies that $x\, \eta'(x) = O(\eta(x))$, so that $x\, \eta'(x) \to 0$ as $x \to +\infty$.

We first show that (3) still holds, up to an error term, when $M_p$ is replaced by $\mu_p$.

Proposition 3. Assume that $(A_1)$ and $(A_2)$ hold. Then,
$$\frac{\mu_p}{\mu_{p+1}} = \frac{M_p}{M_{p+1}} + O\!\left(\frac{|\eta(p)|}{p}\right).$$

Proof. Considering the change of variables $y = (1 - x/\theta)^{-1}$ in (2) yields $M_p = p^{-\alpha}\, \theta^p\, \Gamma(\alpha+1)\, R_M(p)$ with $\Gamma(x) = \int_0^{+\infty} t^{x-1} e^{-t}\, dt$ the Gamma function and
$$R_M(p) = 1 + \frac{I_1 E_1(p) + I_2 E_2(p)}{\Gamma(\alpha + 1)},$$
where $I_1$, $I_2$, $E_1(p)$ and $E_2(p)$ are defined in Lemma 7 by
$$E_1(p) = \frac{1}{I_1}\int_0^1 f_p(x)\, x^{-\alpha-2}\, dx - 1, \quad I_1 = \int_1^{+\infty} y^{\alpha} e^{-y}\, dy, \quad E_2(p) = \frac{1}{I_2}\int_1^{+\infty} g_p(x)\, x^{-\alpha-2}\, dx - 1, \quad I_2 = \int_0^1 y^{\alpha} e^{-y}\, dy,$$
and where $f_p$, $g_p$ are the functions introduced in Lemma 6:
$$\forall\, x \in (0, 1], \quad f_p(x) = \left(1 - \frac{1}{p}\right)^{-\alpha-1}\left(1 + \frac{1}{(p-1)x}\right)^{-\alpha-2}\left(1 - \frac{1}{(p-1)x + 1}\right)^{p-1}, \qquad \forall\, x \in [1, +\infty), \quad g_p(x) = \left(1 - \frac{1}{px}\right)^{p-1}.$$
Similarly, the same change of variables yields
$$\mu_p = p^{-\alpha}\, \theta^p\, L(p)\, \Gamma(\alpha+1)\, [R_M(p) + R_\delta(p)] \qquad (4)$$
with
$$R_\delta(p) = \frac{I_1 \delta_1(p) + I_2 \delta_2(p)}{\Gamma(\alpha + 1)},$$
where $\delta_1(p)$ and $\delta_2(p)$ are defined in Lemma 7 by
$$\delta_1(p) = \frac{1}{I_1}\int_0^1 f_p(x)\left(\frac{L_1((p-1)x)}{L_1(p-1)} - x\right) x^{-\alpha-3}\, dx, \quad L_1(x) = x L(x+1), \qquad \delta_2(p) = \frac{1}{I_2}\int_1^{+\infty} g_p(x)\left(\frac{L_2(px)}{L_2(p)} - \frac{1}{x}\right) x^{-\alpha-1}\, dx, \quad L_2(x) = L(x)/x.$$
Since $\int_p^{p+1} \eta(t)\, t^{-1}\, dt = O\!\left(\frac{|\eta(p)|}{p}\right)$, one clearly has
$$\frac{\mu_p}{\mu_{p+1}} - \frac{M_p}{M_{p+1}} = \frac{1}{\theta}\left(1 - \frac{1}{p+1}\right)^{-\alpha}\left[\frac{R_M(p) + R_\delta(p)}{R_M(p+1) + R_\delta(p+1)} - \frac{R_M(p)}{R_M(p+1)}\right] + O\!\left(\frac{|\eta(p)|}{p}\right), \qquad (5)$$
and it is straightforward that
$$\frac{R_M(p) + R_\delta(p)}{R_M(p+1) + R_\delta(p+1)} - \frac{R_M(p)}{R_M(p+1)} = \frac{R_\delta(p)\, R_M(p+1) - R_\delta(p+1)\, R_M(p)}{[R_M(p+1) + R_\delta(p+1)]\, R_M(p+1)} = \frac{[R_\delta(p) - R_\delta(p+1)]\, R_M(p+1) - R_\delta(p+1)\,[R_M(p) - R_M(p+1)]}{[R_M(p+1) + R_\delta(p+1)]\, R_M(p+1)}.$$
Lemma 7 entails that $R_M \to 1$ and $R_\delta \to 0$ as $p \to \infty$ and
$$R_\delta(p+1) = O\big(|\eta(p)|\,(1 + \widetilde{L}(p))\big), \quad R_M(p) - R_M(p+1) = O\big(1/p^2\big), \quad R_\delta(p) - R_\delta(p+1) = O\big(|\eta(p)|/p\big),$$
where $\widetilde{L}$ is slowly varying at infinity. Consequently,
$$\frac{R_M(p) + R_\delta(p)}{R_M(p+1) + R_\delta(p+1)} - \frac{R_M(p)}{R_M(p+1)} = O\!\left(\frac{|\eta(p)|}{p} + \frac{|\eta(p)|\,(1 + \widetilde{L}(p))}{p^2}\right) = O\!\left(\frac{|\eta(p)|}{p}\right),$$
and replacing in (5) yields the desired result.

Applying Proposition 3 enables us to control the bias term introduced when $M_{p_n}$ is replaced by $\mu_{p_n}$:
$$\frac{1}{\Theta_n} = \frac{1}{\theta} + O\!\left(\frac{|\eta(p_n)|}{p_n}\right). \qquad (6)$$

We now turn to the random term:

Theorem 2. Assume that $(A_1)$ and $(A_2)$ hold. If $n\, p_n^{-\alpha} L(p_n) \to \infty$ then
$$v_n\left(\frac{\hat{\theta}_n}{\Theta_n} - 1\right) \stackrel{d}{\to} \mathcal{N}(0, V(\alpha, a)) \quad \text{as } n \to \infty,$$
with $v_n = \sqrt{n L(p_n)}\, p_n^{1-\alpha/2}$ and
$$V(\alpha, a) = \frac{\alpha + 1}{a^2\, \Gamma(\alpha)}\left[2^{-\alpha-2} - \frac{2\,(a+1)^{\alpha+1}}{(a+2)^{\alpha+2}} + 2^{-\alpha-2}\,(a+1)^{\alpha}\right].$$

Proof. Our goal is to prove that the sequence of random variables $(\xi_n)$ defined by
$$\xi_n = \frac{\theta}{\sqrt{V(\alpha, a)}}\, v_n\left(\frac{1}{\hat{\theta}_n} - \frac{1}{\Theta_n}\right)$$
converges in distribution to a standard Gaussian random variable, Theorem 2 then being a simple consequence of this result.

The first step consists in using Lemma 9 in order to linearize $\xi_n$:
$$\xi_n = u_{n, a}\left[\zeta_n^{(1)} + \left(\frac{\mu_{p_n+1}}{\hat{\mu}_{p_n+1}} - 1\right)\zeta_n^{(2)} + \frac{(1+a)p_n}{p_n + 1}\left(\frac{\mu_{(a+1)p_n+1}}{\hat{\mu}_{(a+1)p_n+1}} - 1\right)\zeta_n^{(3)}\right](1 + o(1)) = u_{n, a}\left[\zeta_n^{(1)} + o_P\big(\zeta_n^{(2)}\big) + o_P\big(\zeta_n^{(3)}\big)\right](1 + o(1)),$$
in view of Proposition 2. Thus, to conclude the proof, it is enough to show that
$$u_{n, a}\, \zeta_n^{(1)} \stackrel{d}{\to} \mathcal{N}(0, 1), \qquad (7a)$$
$$u_{n, a}\, \zeta_n^{(2)} \stackrel{d}{\to} \mathcal{N}(0, C_2), \qquad (7b)$$
$$u_{n, a}\, \zeta_n^{(3)} \stackrel{d}{\to} \mathcal{N}(0, C_3), \qquad (7c)$$
where $C_2$ and $C_3$ are suitable constants. Let us then write $\zeta_n^{(1)} = \sum_{k=1}^n S_{n, k}^{(1)}$, where
$$S_{n, k}^{(1)} = \frac{1}{n}\, A_n^t\left[X_k^{p_n},\ X_k^{p_n+1},\ X_k^{(a+1)p_n},\ X_k^{(a+1)p_n+1}\right]^t, \qquad A_n = \left[a_{n, 0}^{(1)},\ a_{n, 1}^{(1)},\ a_{n, 2}^{(1)},\ a_{n, 3}^{(1)}\right]^t,$$
$$a_{n, 0}^{(1)} = -1, \quad a_{n, 1}^{(1)} = \frac{\mu_{p_n}}{\mu_{p_n+1}}, \quad a_{n, 2}^{(1)} = \frac{(1+a)p_n}{p_n + 1}\,\frac{\mu_{p_n+1}}{\mu_{(a+1)p_n+1}}, \quad a_{n, 3}^{(1)} = -\frac{(1+a)p_n}{p_n + 1}\,\frac{\mu_{p_n+1}\,\mu_{(a+1)p_n}}{\mu^2_{(a+1)p_n+1}},$$

with $A^t$ standing for the transpose of $A$. In order to use Lyapounov's central limit theorem (see e.g. Billingsley, 1979, p. 312), it remains to prove that
$$\frac{1}{[\mathrm{Var}(\zeta_n^{(1)})]^{3/2}}\sum_{k=1}^n \mathbb{E}\,|S_{n, k}^{(1)}|^3 \to 0 \qquad (8)$$
as $n \to \infty$, which requires to control $\mathrm{Var}(\zeta_n^{(1)})$ and $\mathbb{E}\,|S_{n, 1}^{(1)}|^3$.

To compute an equivalent for $\mathrm{Var}(\zeta_n^{(1)})$, remark that $\mathrm{Var}(\zeta_n^{(1)}) = \frac{1}{n}\, A_n^t\, M_n\, A_n$ with
$$M_n = \begin{pmatrix} \mu_{2p_n} & \mu_{2p_n+1} & \mu_{(a+2)p_n} & \mu_{(a+2)p_n+1} \\ \mu_{2p_n+1} & \mu_{2p_n+2} & \mu_{(a+2)p_n+1} & \mu_{(a+2)p_n+2} \\ \mu_{(a+2)p_n} & \mu_{(a+2)p_n+1} & \mu_{(2a+2)p_n} & \mu_{(2a+2)p_n+1} \\ \mu_{(a+2)p_n+1} & \mu_{(a+2)p_n+2} & \mu_{(2a+2)p_n+1} & \mu_{(2a+2)p_n+2} \end{pmatrix}.$$
Let us now rewrite that as
$$\mathrm{Var}(\zeta_n^{(1)}) = \frac{1}{n}\left[w(p_n, p_n) - 2\,\frac{(1+a)p_n}{p_n + 1}\,\frac{\mu_{p_n+1}}{\mu_{(a+1)p_n+1}}\, w(p_n, (a+1)p_n) + \left(\frac{(1+a)p_n}{p_n + 1}\right)^2 \frac{\mu^2_{p_n+1}}{\mu^2_{(a+1)p_n+1}}\, w((a+1)p_n, (a+1)p_n)\right]$$
where
$$w(u p_n, v p_n) = \left(-1,\ \frac{\mu_{u p_n}}{\mu_{u p_n+1}}\right)\begin{pmatrix} \mu_{(u+v)p_n} & \mu_{(u+v)p_n+1} \\ \mu_{(u+v)p_n+1} & \mu_{(u+v)p_n+2} \end{pmatrix}\left(-1,\ \frac{\mu_{v p_n}}{\mu_{v p_n+1}}\right)^t.$$
We now apply Proposition 3, and use (4) together with Lemma 7 to obtain, after some cumbersome but elementary computations,
$$w(u p_n, v p_n) = \frac{\Gamma(\alpha+1)\, \alpha(\alpha+1)}{(u+v)^{\alpha+2}}\, \theta^{(u+v)p_n}\, p_n^{-\alpha-2}\, L(p_n)\,(1 + o(1)).$$
Taking into account that
$$\frac{(1+a)p_n}{p_n + 1}\,\frac{\mu_{p_n+1}}{\mu_{(a+1)p_n+1}} = \frac{(a+1)^{\alpha+1}}{\theta^{a p_n}}\,(1 + o(1)) \qquad (9)$$
we get
$$\mathrm{Var}(\zeta_n^{(1)}) = a^2\, \Gamma^2(\alpha+1)\, V(\alpha, a)\, \frac{1}{n}\, \theta^{2p_n}\, p_n^{-\alpha-2}\, L(p_n)\,(1 + o(1)). \qquad (10)$$

To show (8), it then suffices to prove that $\mathbb{E}\,|S_{n, 1}^{(1)}|^3 = O\big(n^{-3}\, \theta^{3p_n}\, p_n^{-\alpha-3}\, L(p_n)\big)$. To this aim, let us introduce $Y_1 = X_1/\theta$ and the associated survival function $\overline{F}_1(x) = (1-x)^{\alpha}\, L((1-x)^{-1})$, $\forall\, x \in [0, 1]$. Hölder's inequality leads to
$$\frac{\mathbb{E}\,|S_{n, 1}^{(1)}|^3}{n^{-3}\, \theta^{3p_n}} \leq 4\, \mathbb{E}\,\Big|Y_1^{p_n}\big(a_{n, 0}^{(1)} + a_{n, 1}^{(1)}\, \theta\, Y_1\big)\Big|^3 + 4\, \mathbb{E}\,\Big|Y_1^{(a+1)p_n}\big(a_{n, 2}^{(1)}\, \theta^{a p_n} + a_{n, 3}^{(1)}\, \theta^{a p_n + 1}\, Y_1\big)\Big|^3.$$
Introducing the functions
$$H_{n, 0}^{(1)}(u) = -1, \quad H_{n, 1}^{(1)}(u) = \alpha u, \quad H_{n, 2}^{(1)}(u) = \frac{(1+a)p_n}{p_n + 1}\,\frac{\theta^{a p_n}\, \mu_{p_n+1}}{\mu_{(a+1)p_n+1}}, \quad H_{n, 3}^{(1)}(u) = -\frac{(1+a)p_n}{p_n + 1}\,\frac{\theta^{a p_n}\, \mu_{p_n+1}}{\mu_{(a+1)p_n+1}}\cdot\frac{\alpha u}{a+1},$$

some more straightforward albeit burdensome computations show that there exist two sequences of Borel functions $(\chi_n^{(1,1)})$ and $(\chi_n^{(1,2)})$ uniformly converging to $0$ on $[0, 1]$ such that for all $u \in [0, 1]$,
$$a_{n, 0}^{(1)} + a_{n, 1}^{(1)}\, \theta u = H_{n, 0}^{(1)}(u)\,(1 - u) + \frac{H_{n, 1}^{(1)}(u) + \chi_n^{(1,1)}(u)}{p_n},$$
$$a_{n, 2}^{(1)}\, \theta^{a p_n} + a_{n, 3}^{(1)}\, \theta^{a p_n + 1}\, u = H_{n, 2}^{(1)}(u)\,(1 - u) + \frac{H_{n, 3}^{(1)}(u) + \chi_n^{(1,2)}(u)}{p_n}.$$
Recalling (9), we obtain that the $H_{n, j}^{(1)}$ are Borel uniformly bounded functions on $[0, 1]$, so that we can use Lemma 10 twice to obtain the desired bound for $\mathbb{E}\,|S_{n, 1}^{(1)}|^3$. Finally, applying Lyapounov's central limit theorem and using the condition $n\, p_n^{-\alpha} L(p_n) \to \infty$ concludes the proof of (7a).

Proofs of (7b) and (7c) are completely similar since $\zeta_n^{(2)}$ and $\zeta_n^{(3)}$ can be rewritten as
$$\zeta_n^{(2)} = \sum_{k=1}^n S_{n, k}^{(2)} \quad \text{with} \quad S_{n, k}^{(2)} = \frac{1}{n}\left[a_{n, 0}^{(2)},\ a_{n, 1}^{(2)}\right]\left[X_k^{p_n},\ X_k^{p_n+1}\right]^t,$$
$$\zeta_n^{(3)} = \sum_{k=1}^n S_{n, k}^{(3)} \quad \text{with} \quad S_{n, k}^{(3)} = \frac{1}{n}\left[a_{n, 0}^{(3)},\ a_{n, 1}^{(3)}\right]\left[X_k^{(a+1)p_n},\ X_k^{(a+1)p_n+1}\right]^t$$
with clear definitions of the sequences $a_{n, i}^{(j)}$, $i = 0, 1$, $j = 2, 3$. Applying Lemma 10 with
$$H_{n, 0}^{(2)}(u) = -1, \quad H_{n, 1}^{(2)}(u) = \alpha u, \quad H_{n, 0}^{(3)}(u) = \frac{\theta^{a p_n}\, \mu_{p_n+1}}{\mu_{(a+1)p_n+1}}, \quad H_{n, 1}^{(3)}(u) = -\frac{\theta^{a p_n}\, \mu_{p_n+1}}{\mu_{(a+1)p_n+1}}\cdot\frac{\alpha u}{a+1}$$
yields $\mathbb{E}\,|S_{n, 1}^{(j)}|^3 = O\big(n^{-3}\, \theta^{3p_n}\, p_n^{-\alpha-3}\, L(p_n)\big)$, $j = 2, 3$. Using Lyapounov's central limit theorem then allows us to complete the proof of Theorem 2.

Noticing that
$$\hat{\theta}_n - \theta = \Theta_n\left[\frac{\hat{\theta}_n}{\Theta_n} - 1\right] + [\Theta_n - \theta],$$
the asymptotic normality of $\hat{\theta}_n$ centered on the true endpoint $\theta$ is a consequence of (6) and Theorem 2.

Theorem 3. Assume that $(A_1)$ and $(A_2)$ hold. If $n\, p_n^{-\alpha} L(p_n) \to \infty$ and $n\, p_n^{-\alpha} L(p_n)\, \eta^2(p_n) \to 0$, then
$$v_n\left(\frac{\hat{\theta}_n}{\theta} - 1\right) \stackrel{d}{\to} \mathcal{N}(0, V(\alpha, a)) \quad \text{as } n \to \infty.$$

In view of Theorem 3, it may be interesting to estimate the unknown parameter $\alpha$. From (3), the following estimator is considered:
$$\hat{\alpha}_n = (p_n + 1)\left(\frac{\hat{\theta}_n\, \hat{\mu}_{p_n}}{\hat{\mu}_{p_n+1}} - 1\right).$$
Proposition 4. Under the assumptions of Theorem 3, $\hat{\alpha}_n = \alpha + O_P(p_n/v_n)$.

Proof. Let us introduce
$$\alpha_n = (p_n + 1)\left(\Theta_n\, \frac{\mu_{p_n}}{\mu_{p_n+1}} - 1\right)$$
and focus first on the random term
$$\frac{v_n}{p_n}\,(\hat{\alpha}_n - \alpha_n) = v_n\left[\big(\hat{\theta}_n - \Theta_n\big)\frac{\hat{\mu}_{p_n}}{\hat{\mu}_{p_n+1}} - \Theta_n\, \frac{\mu_{p_n+1}}{\hat{\mu}_{p_n+1}}\cdot\frac{\zeta_n^{(2)}}{\mu_{p_n+1}}\right](1 + o(1))$$
with the notations of Lemma 9. Recall that, from Proposition 1, $\mu_{p_n}/\mu_{p_n+1} \to 1/\theta$, from Proposition 2, $\mu_{p_n}/\hat{\mu}_{p_n} \stackrel{P}{\to} 1$ and from (6), $\Theta_n \to \theta$ as $n \to \infty$, so that
$$\frac{v_n}{p_n}\,(\hat{\alpha}_n - \alpha_n) = v_n\big(\hat{\theta}_n - \Theta_n\big)\left(\frac{1}{\theta} + o_P(1)\right) - \theta\, v_n\, \frac{\zeta_n^{(2)}}{\mu_{p_n+1}}\,(1 + o_P(1)).$$
Besides, applying Theorem 2 yields $v_n\big(\hat{\theta}_n - \Theta_n\big) = O_P(1)$. Now,
$$v_n\, \frac{\zeta_n^{(2)}}{\mu_{p_n+1}} = \frac{v_n}{\mu_{p_n+1}\, u_{n, a}}\; u_{n, a}\, \zeta_n^{(2)} = O_P(1),$$
from Lemma 8 and since $u_{n, a}\, \zeta_n^{(2)}$ is asymptotically Gaussian (see (7b)). As a preliminary conclusion, we have
$$\frac{v_n}{p_n}\,(\hat{\alpha}_n - \alpha_n) = O_P(1).$$
Turning to the bias term, (6) and Proposition 3 yield
$$\alpha_n = \alpha + (p_n + 1)\, O\!\left(\frac{|\eta(p_n)|}{p_n}\right) = \alpha + o\!\left(\frac{p_n}{v_n}\right),$$
which completes the proof.

By plugging $\hat{\alpha}_n$ in the asymptotic variance of Theorem 3, classical arguments thus yield:

Corollary 1. Under the assumptions of Theorem 3,
$$v_n\, \sqrt{\frac{1}{V(\hat{\alpha}_n, a)}}\left(\frac{\hat{\theta}_n}{\theta} - 1\right) \stackrel{d}{\to} \mathcal{N}(0, 1) \quad \text{as } n \to \infty.$$
Confidence intervals for $\theta$ may then be built using this result.
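As an illustration, the sketch below builds such an interval by plugging $\hat{\alpha}_n$ into $V(\alpha, a)$. It is our own addition and makes three explicit assumptions: it reuses the hypothetical helper `endpoint_estimate` from the Section 2 sketch, it replaces $L(p_n)$ by 1 in $v_n$ (exact when $\overline{F}(x) = (1 - x/\theta)^{\alpha}$), and it takes the interval in its first-order symmetric form $\hat{\theta}_n \pm z\sqrt{V}\,\hat{\theta}_n/v_n$.

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.stats import norm

def V(alpha, a):
    # Asymptotic variance V(alpha, a) of Theorem 2.
    bracket = (2.0 ** (-alpha - 2.0)
               - 2.0 * (a + 1.0) ** (alpha + 1.0) / (a + 2.0) ** (alpha + 2.0)
               + 2.0 ** (-alpha - 2.0) * (a + 1.0) ** alpha)
    return (alpha + 1.0) / (a ** 2 * gamma_fn(alpha)) * bracket

def endpoint_confidence_interval(x, p_n, a=1.0, level=0.95):
    """Sketch of a plug-in confidence interval for theta based on Corollary 1."""
    x = np.asarray(x, dtype=float)
    n, m = x.size, x.max()
    theta_hat = endpoint_estimate(x, p_n, a)        # helper from the Section 2 sketch
    mu = lambda q: np.mean((x / m) ** q)            # empirical moments of the rescaled sample
    # alpha_hat = (p_n + 1) * (theta_hat * mu_{p_n} / mu_{p_n+1} - 1); dividing theta_hat
    # by m compensates the rescaling of the empirical moment ratio.
    alpha_hat = (p_n + 1.0) * (theta_hat / m * mu(p_n) / mu(p_n + 1.0) - 1.0)
    v_n = np.sqrt(n) * p_n ** (1.0 - alpha_hat / 2.0)   # v_n with L(p_n) replaced by 1
    z = norm.ppf(0.5 + level / 2.0)
    half_width = z * np.sqrt(V(alpha_hat, a)) * theta_hat / v_n
    return theta_hat - half_width, theta_hat + half_width
```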

5 Examples

In this section, we highlight some cases where our hypotheses hold. Since $\eta(x) = x L'(x)/L(x)$, one can see that $(A_1)$ and $(A_2)$ are satisfied in the general context of:

1. The Hall model (see Hall, 1982), namely $L(x) = C + D x^{-\beta}(1 + \delta(x))$ for all sufficiently large $x$, where $C, \beta > 0$, $D \in \mathbb{R}\setminus\{0\}$ and $\delta$ is a Borel bounded twice continuously differentiable function on $(1, +\infty)$ such that $\delta(x) \to 0$, $x\,\delta'(x) \to 0$ and $x^2\,\delta''(x) \to 0$ as $x \to +\infty$. Here, $\nu = -\beta < 0$.

2. The case where $L(x) = f(\ln x)$, where $f$ is a rational function. Here, $\nu = 0$.

Let us now focus on two particular distributions that are also used for the numerical experiments of Section 6. Both of them have an endpoint $\theta = 1$. The first distribution has survival function
$$\overline{F}(x) = \left[1 + \left(\frac{1}{x} - 1\right)^{-\tau_1}\right]^{-\tau_2}, \quad x \in (0, 1), \qquad (11)$$
with $\tau_1, \tau_2 > 0$. Remark that, if $X$ is distributed from (11), then it can be rewritten as $X = 1 - 1/(1 + Y)$ where $Y$ is Burr$(1, \tau_1, \tau_2)$ distributed, namely, $Y$ has survival function $\overline{G}(y) = (1 + y^{\tau_1})^{-\tau_2}$ for $y \geq 0$. It can be shown that $(A_1)$ is verified with $\alpha = \tau_1\tau_2$ and
$$\forall\, y \geq 1, \quad L(y) = \left(\frac{y^{\tau_1}}{1 + (y-1)^{\tau_1}}\right)^{\tau_2}.$$
$L$ is clearly $C^{\infty}$ on $(1, +\infty)$ and one readily obtains
$$\forall\, y > 1, \quad \eta(y) := \frac{y\, L'(y)}{L(y)} = \tau_1\tau_2\, \frac{1 - (y-1)^{\tau_1 - 1}}{1 + (y-1)^{\tau_1}}.$$
As a result, $\eta$ is continuously differentiable on $(1, +\infty)$, ultimately monotonic and not identically $0$. Besides,
$$\frac{y\, \eta'(y)}{\eta(y)} = -y\left[\frac{(\tau_1 - 1)\,(y-1)^{\tau_1 - 2}}{1 - (y-1)^{\tau_1 - 1}} + \frac{\tau_1\,(y-1)^{\tau_1 - 1}}{1 + (y-1)^{\tau_1}}\right] \to -\min(\tau_1, 1) < 0,$$
as $y \to +\infty$ and thus $(A_2)$ holds with $\nu = -\min(\tau_1, 1)$. Note that one can also show that $L$ belongs to the Hall class. The second considered distribution has survival function

$$\overline{F}(x) = \frac{1}{\Gamma(b)}\int_{-\ln(1-x)}^{\infty} (\lambda t)^{b-1}\, \lambda\, e^{-\lambda t}\, dt, \quad x \in (0, 1), \qquad (12)$$
with $b \geq 1$ and $\lambda > 0$. Here, when $X$ is distributed from (12), it can be rewritten as $X = 1 - e^{-Y}$ where $Y$ is Gamma$(b, \lambda)$ distributed. Note that, if $b = 1$, then $X$ has survival function $\overline{F}(x) = (1-x)^{\lambda}$, namely $L \equiv 1$, and $(A_1)$, $(A_2)$ straightforwardly hold. If $b > 1$, then $(A_1)$ holds with $\alpha = \lambda$,
$$L(x) = \frac{\lambda^{b-1}}{\Gamma(b)}\, \ln^{b-1}(x)\,[1 + \delta(x)]$$
and
$$\delta(x) = \frac{1}{x^{-\lambda}\, \lambda^{b-1}\, \ln^{b-1}(x)}\int_{\ln x}^{\infty} (\lambda t)^{b-1}\, \lambda\, e^{-\lambda t}\, dt - 1 = (b-1)\int_1^{\infty} u^{b-2}\, e^{-\lambda(u-1)\ln x}\, du.$$
Note that $\delta$ is $C^{\infty}$ on $(1, +\infty)$ and goes to $0$ at infinity. Therefore, $L$ is slowly varying and $C^{\infty}$ on $(1, +\infty)$. Now
$$\eta(x) := \frac{x\, L'(x)}{L(x)} = \frac{b-1}{\ln x} + x\,\delta'(x)\,(1 + o(1)) = \frac{b-1}{\ln x} - \lambda(b-1)\int_1^{\infty} (u-1)\, u^{b-2}\, e^{-\lambda(u-1)\ln x}\, du\,(1 + o(1)) = \frac{b-1}{\ln x} + o(1/\ln x),$$
so that $\eta$ is slowly varying and positive in a neighborhood of $+\infty$. Finally, noting that
$$\frac{d}{dx}\left[x\,\delta'(x)\right] = \frac{\lambda^2(b-1)}{x}\int_1^{\infty} (u-1)^2\, u^{b-2}\, e^{-\lambda(u-1)\ln x}\, du = o\!\left(\frac{1}{x\ln^2 x}\right)$$
it follows that
$$\eta'(x) = \frac{1-b}{x\ln^2 x}\,(1 + o(1)),$$
entailing that $\eta$ is ultimately non-increasing and that $x\,\eta'(x)/\eta(x) \to 0$ as $x \to +\infty$. As a conclusion, $(A_2)$ holds with $\nu = 0$.

6 Numerical experiments

In this section, we shall examine the performances of our estimator on samples of size $n = 500$ in eight situations obtained by considering the models (11) and (12) with four different sets of parameters, see the first column of Table 1. We choose $p_n = n^{1/\alpha}/\ln\ln n$ in order to satisfy the assumptions of Theorem 3, and a set $A = \{0.2, 0.6, 1.0, \ldots, 21\}$ of different values of $a$ is tested. In each of the eight situations, $N = 1000$ replications of the sample are generated and the average $L^1$ error is computed:
$$E(a) = \frac{1}{N}\sum_{j=1}^N |\varepsilon(j, a)|,$$
where $\varepsilon(j, a) = \hat{\theta}(j, a) - \theta$, with $\hat{\theta}(j, a)$ being the estimator computed on the $j$-th replication with $a \in A$ and $\theta = 1$. Then, the optimal value of $a$ is retained:
$$a^{\star} = \mathrm{argmin}\{E(a),\ a \in A\}.$$
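A reduced version of this Monte Carlo selection of $a^{\star}$ can be sketched as follows. It relies on the hypothetical helpers `endpoint_estimate` and `sample_model_11` from the earlier sketches, uses fewer replications than the $N = 1000$ of the paper to stay cheap, and approximates the grid $A$ with a numpy range.

```python
import numpy as np

def mean_L1_error(sampler, alpha, n=500, n_rep=200, grid=np.arange(0.2, 21.2, 0.4)):
    """Sketch of the selection of a* by minimizing the average L1 error E(a).

    sampler(n) must return a sample whose true endpoint is 1, e.g. a call to
    sample_model_11 or sample_model_12 defined above.
    """
    p_n = n ** (1.0 / alpha) / np.log(np.log(n))   # p_n = n^(1/alpha) / ln ln n
    errors = np.zeros(grid.size)
    for _ in range(n_rep):
        x = sampler(n)
        for i, a in enumerate(grid):
            errors[i] += abs(endpoint_estimate(x, p_n, a) - 1.0)
    errors /= n_rep
    a_star = grid[np.argmin(errors)]
    return a_star, errors.min()

# Example: model (11) with tau1 = 2, tau2 = 1, so that alpha = tau1 * tau2 = 2
print(mean_L1_error(lambda n: sample_model_11(n, tau1=2.0, tau2=1.0), alpha=2.0))
```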

For the sake of comparison, the same procedure has been applied to the extreme-value moment estimator, see for instance de Haan and Ferreira (2006, Remark 4.5.5), which depends on a parameter $k \in \{2, 3, \ldots, n-1\}$. The naive maximum estimator has also been considered. Note that, since the maximum estimator does not depend on any parameter, the associated function $E$ is constant. Numerical results are summarized in Table 1, where $E(a^{\star})$ is displayed. In the upper part of the table, it appears that, for the distribution (11), the performance of all these estimators decreases as $|\nu|$ decreases. This phenomenon can be explained since $\nu$ drives the bias of most extreme-value estimators. For instance, when $|\nu|$ is small, $\eta$ converges slowly to 0 and Proposition 3 shows that the approximation error of $\mu_p/\mu_{p+1}$ by $M_p/M_{p+1}$ is large. Besides, the lower part of Table 1 shows that, for the distribution (12), when $\alpha$ increases, the performance of all these estimators decreases as well, since the simulated points are getting more and more distant from the endpoint. Let us finally point out that, in all the situations considered, our estimator yields better results than the maximum estimator and the extreme-value moment estimator.

To further compare the behavior of the estimators in the optimal case, boxplots of the associated errors $\varepsilon(j, a^{\star})$ are displayed on Figure 1 and Figure 2. Clearly, the maximum as well as our estimator underestimate the endpoint. However, the error associated to our estimator is smaller than the error of the maximum. Besides, our estimator has a smaller variance than both the maximum estimator and the extreme-value moment estimator. A graphical comparison on both models of the functions $E$ associated to the three estimators is proposed on Figures 3-6. On model (12), the shapes of the curves associated to our estimator and to the extreme-value moment estimator are similar, see Figure 5 and Figure 6. On the contrary, it appears on Figure 3 and Figure 4 that, on model (11), the functions $E$ associated to the extreme-value moment estimator and our estimator have very different shapes, even though they have similar minima. The error associated to the extreme-value moment estimator is very sensitive to the choice of the parameter $k$, whereas the error associated to our estimator is stable for a large panel of $a$ values.

References

Athreya, K.B., Fukuchi, J. (1997). Confidence intervals for endpoints of a c.d.f. via bootstrap. J. Statist. Plann. Inference 58:299-320.

Billingsley, P. (1979). Probability and Measure, John Wiley and Sons.

Bingham, N.H., Goldie, C.M., Teugels, J.L. (1987). Regular Variation, Cambridge, U.K.: Cambridge University Press.

Chow, Y.S., Teicher, H. (1997). Probability Theory, Springer.

Cooke, P. (1979). Statistical inference for bounds of random variables. Biometrika 66:367-374.

Dekkers, A.L.M., Einmahl, J.H.J., de Haan, L. (1989). A moment estimator for the index of an extreme-value distribution. Ann. Statist. 17:1833-1855.

Drees, H., Ferreira, A., de Haan, L. (2003). On maximum likelihood estimation of the extreme value index. Ann. Appl. Probab. 14:1179-1201.

Goldenshluger, A., Tsybakov, A. (2004). Estimating the endpoint of a distribution in the presence of additive observation errors. Statist. Probab. Lett. 68:39-49.

de Haan, L. (1981). Estimation of the minimum of a function using order statistics. J. Amer. Statist. Assoc. 76:467-469.

de Haan, L., Ferreira, A. (2006). Extreme Value Theory, Springer.

Hall, P. (1982). On estimating the endpoint of a distribution. Ann. Statist. 10(2):556-568.

Hall, P., Wang, J.Z. (1999). Estimating the end-point of a probability distribution using minimum-distance methods. Bernoulli 5(1):177-189.

Hall, P., Wang, J.Z. (2005). Bayesian likelihood methods for estimating the end point of a distribution. J. Roy. Statist. Soc. Ser. B 67(5):717-729.

Hosking, J.R.M., Wallis, J.R. (1987). Parameter and quantile estimation for the generalized Pareto distribution. Technometrics 29:339-349.

Li, D., Peng, L. (2009). Does bias reduction with external estimator of second order parameter work for endpoint? J. Statist. Plann. Inference 139:1937-1952.

Loh, W.Y. (1984). Estimating an endpoint of a distribution with resampling methods. Ann. Statist. 12(4):1543-1550.

Miller, R.G. (1964). A trustworthy jackknife. Ann. Math. Statist. 35:1594-1605.

Neves, C., Pereira, A. (2010). Detecting finiteness in the right endpoint of light-tailed distributions. Statist. Probab. Lett. 80:437-444.

Quenouille, M.H. (1949). Approximate tests of correlation in time-series. J. Roy. Statist. Soc. Ser. B 11:68-84.

Robson, D.S., Whitlock, J.H. (1964). Estimation of a truncation point. Biometrika 51:33-39.

Smith, R.L. (1987). Estimating tails of probability distributions. Ann. Statist. 15(3):1174-1207.

Smith, R.L., Weissman, I. (1985). Maximum likelihood estimation of the lower tail of a probability distribution. J. Roy. Statist. Soc. Ser. B 47:285-298.

Appendix A: Auxiliary results

Let us set $\overline{F}_1(y) := \overline{F}(\theta y)$ and $\mu_{1, p_n} := \mu_{p_n}/\theta^{p_n}$. The first result deals with the behavior of the moment $\mu_{1, p_n}$.

Lemma 1. If $(A_0)$ holds, then $\mu_{1, p_n}/\mu_{1, p_n+1} \to 1$ as $n \to \infty$.

As it has been mentioned before, $(A_2)$ implies that $x\,\eta'(x) \to 0$ as $x \to \infty$. The next lemma establishes some consequences of this property.

Lemma 2. Let $\varphi$ be a continuously differentiable function on $(1, +\infty)$ such that $x\,\varphi'(x) \to 0$ as $x \to +\infty$. Then,

(i) $t\, \sup_{x \geq 1} |\varphi(tx) - \varphi((t+1)x)| \to 0$ as $t \to \infty$.

(ii) For all $q > 0$, $t\, \sup_{x \in (0, 1]} x^q\, |\varphi(tx) - \varphi((t+1)x)| \to 0$ as $t \to \infty$.

Before proceeding, let us introduce some more notations. For all $k \in \mathbb{R}$, let $\mathcal{P}_k$ be the set of collections of Borel functions $(f_p)_{p \geq 1}$ on $(0, 1]$ such that

1. $\exists\, p_k \geq 1$, $\exists\, C_k \geq 0$, $\forall\, p \geq p_k$, $\forall\, x \in (0, 1]$, $|f_p(x)| \leq C_k\, x^k$,

2. $\exists\, p_k \geq 1$, $\exists\, C_k \geq 0$, $\forall\, p \geq p_k$, $\forall\, x \in (0, 1]$, $p^2\,|f_{p+1} - f_p|(x) \leq C_k\, x^k$,

3. $\forall\, x \in (0, 1]$, $p^2\,|f_{p+2} - 2f_{p+1} + f_p|(x) \to 0$ as $p \to \infty$.

Let $\mathcal{P} = \bigcap_{k \geq 0} \mathcal{P}_k$. Besides, let $\mathcal{U}$ be the set of collections of Borel functions $(f_p)_{p \geq 1}$ on $[1, +\infty)$ such that

1. $\sup_{x \geq 1} |f_p(x)| = O(1)$ as $p \to \infty$,

2. $p^2\, \sup_{x \geq 1} |f_{p+1} - f_p|(x) = O(1)$ as $p \to \infty$,

3. $p^2\, \sup_{x \geq 1} |f_{p+2} - 2f_{p+1} + f_p|(x) \to 0$ as $p \to \infty$.

These sets will reveal useful to study the asymptotic properties of $\hat{\theta}_n$ since this estimator is based on increments of sequences of functions. A stability property of the set $\mathcal{P}$ is given in the next lemma.

Lemma 3. Let $(f_p)$, $(g_p)$ be two collections of Borel functions. If for some $k \in \mathbb{R}$, $(f_p) \in \mathcal{P}_k$ and $(g_p) \in \mathcal{P}$, then $(f_p\, g_p) \in \mathcal{P}$.

We now give a continuity property of some integral transforms defined on $\mathcal{P}$ and $\mathcal{U}$.

Lemma 4. Let $(f_p) \in \mathcal{P}$, $(g_p) \in \mathcal{U}$ and $(u_p)$, $(v_p)$ be two collections of Borel functions such that $f_p(x) \to f(x)$ for all $x \in (0, 1]$,
$$\sup_{x \geq 1} |g_p(x) - g(x)| \to 0, \quad \sup_{0 < x \leq 1} |u_p(x) - u(x)| \to 0 \quad \text{and} \quad \sup_{x \geq 1} |v_p(x) - v(x)| \to 0 \quad \text{as } p \to \infty,$$
where $f, g, u, v$ are four Borel functions such that $f$ and $u$ (resp. $g$ and $v$) are defined on $(0, 1]$ (resp. $[1, +\infty)$). Assume further that $u$ and $v$ are bounded. Then, for all $k > 1$,
$$\int_0^1 x^{-k} f_p(x)\, u_p(x)\, dx \to \int_0^1 x^{-k} f(x)\, u(x)\, dx, \qquad \int_1^{+\infty} x^{-k} g_p(x)\, v_p(x)\, dx \to \int_1^{+\infty} x^{-k} g(x)\, v(x)\, dx$$
as $p \to \infty$.

The following lemma provides sufficient conditions on collections of functions to belong to the previous sets.

Lemma 5. Let $(f_p)$, $(g_p)$ be two collections of Borel functions. Assume that there exist Borel functions $F_i$ and Borel bounded functions $G_i$, $0 \leq i \leq 2$, such that
$$\forall\, x \in (0, 1], \quad p^2\left|f_p(x) - \sum_{k=0}^{2} p^{-k} F_k(x)\right| \to 0 \quad \text{as } p \to \infty, \qquad p^2\, \sup_{x \geq 1}\left|g_p(x) - \sum_{k=0}^{2} p^{-k} G_k(x)\right| \to 0 \quad \text{as } p \to \infty.$$
Then, for all $x \in (0, 1]$, $p^2\,|f_{p+2} - 2f_{p+1} + f_p|(x) \to 0$ as $p \to \infty$, and $(g_p) \in \mathcal{U}$.

We are now in position to exhibit two particular elements of $\mathcal{P}$ and $\mathcal{U}$:

Lemma 6. Let $(f_p)$ and $(g_p)$, $p \geq 1$, be two collections of Borel functions defined by
$$\forall\, x \in (0, 1], \quad f_p(x) = \left(1 - \frac{1}{p}\right)^{-\alpha-1}\left(1 + \frac{1}{(p-1)x}\right)^{-\alpha-2}\left(1 - \frac{1}{(p-1)x + 1}\right)^{p-1}, \qquad \forall\, x \in [1, +\infty), \quad g_p(x) = \left(1 - \frac{1}{px}\right)^{p-1}.$$
Then $(f_p) \in \mathcal{P}$, $(g_p) \in \mathcal{U}$ and
$$\forall\, x \in (0, 1], \quad f_p(x) \to e^{-1/x} \quad \text{and} \quad \sup_{x \geq 1}\left|g_p(x) - e^{-1/x}\right| \to 0 \quad \text{as } p \to \infty. \qquad (13)$$

Lemma 7 is the key tool for establishing precise expansions of the moments $\mu_p$ and $M_p$.

Lemma 7. Let $(f_p) \in \mathcal{P}$ and $(g_p) \in \mathcal{U}$ such that (13) holds and define
$$E_1(p) = \frac{1}{I_1}\int_0^1 f_p(x)\, x^{-\alpha-2}\, dx - 1, \quad I_1 = \int_1^{+\infty} y^{\alpha} e^{-y}\, dy, \quad E_2(p) = \frac{1}{I_2}\int_1^{+\infty} g_p(x)\, x^{-\alpha-2}\, dx - 1, \quad I_2 = \int_0^1 y^{\alpha} e^{-y}\, dy,$$
$$\delta_1(p) = \frac{1}{I_1}\int_0^1 f_p(x)\left(\frac{L_1((p-1)x)}{L_1(p-1)} - x\right) x^{-\alpha-3}\, dx, \quad L_1(x) = x L(x+1), \qquad \delta_2(p) = \frac{1}{I_2}\int_1^{+\infty} g_p(x)\left(\frac{L_2(px)}{L_2(p)} - \frac{1}{x}\right) x^{-\alpha-1}\, dx, \quad L_2(x) = L(x)/x,$$
where $L$ is a slowly varying function at infinity. Then, for all $i = 1, 2$,

(i) $E_i(p) \to 0$ as $p \to \infty$,

(ii) $p^2\,(E_i(p+1) - E_i(p)) = O(1)$,

(iii) $p^2\,(E_i(p+2) - 2E_i(p+1) + E_i(p)) \to 0$ as $p \to \infty$,

(iv) $\delta_i(p) \to 0$ as $p \to \infty$.

Moreover, if $L$ satisfies $(A_2)$, then

(v) there exists a slowly varying function $\widetilde{L}$ such that $\delta_1(p) = O\big(|\eta(p)|\, \widetilde{L}(p)\big)$,

(vi) $\delta_2(p) = O(|\eta(p)|)$,

(vii) for all $i = 1, 2$, $\delta_i(p+1) - \delta_i(p) = O(|\eta(p)|/p)$,

(viii) for all $i = 1, 2$, $p^2\,(\delta_i(p+2) - 2\delta_i(p+1) + \delta_i(p)) \to 0$ as $p \to \infty$.

Sometimes, a first order expansion of the moment $\mu_p$ is sufficient:

Lemma 8. If $(A_1)$ holds then, as $p \to \infty$,
$$\mu_p = p^{-\alpha}\, \theta^p\, L(p)\, \Gamma(\alpha + 1)\,(1 + o(1)).$$

The next result consists in linearizing the quantity $\xi_n$ appearing in the proof of Theorem 2:

Lemma 9. Let $p_n \to \infty$ and $\nu_p = \hat{\mu}_p - \mu_p$. If $(A_1)$ is satisfied, then
$$\xi_n = u_{n, a}\left[\zeta_n^{(1)} + \left(\frac{\mu_{p_n+1}}{\hat{\mu}_{p_n+1}} - 1\right)\zeta_n^{(2)} + \frac{(1+a)p_n}{p_n + 1}\left(\frac{\mu_{(a+1)p_n+1}}{\hat{\mu}_{(a+1)p_n+1}} - 1\right)\zeta_n^{(3)}\right](1 + o(1)),$$
where
$$\zeta_n^{(1)} = \zeta_n^{(2)} + \frac{(1+a)p_n}{p_n + 1}\, \zeta_n^{(3)}, \quad \text{with} \quad \zeta_n^{(2)} = -\nu_{p_n} + \frac{\mu_{p_n}}{\mu_{p_n+1}}\,\nu_{p_n+1}, \quad \zeta_n^{(3)} = \frac{\mu_{p_n+1}}{\mu_{(a+1)p_n+1}}\left(\nu_{(a+1)p_n} - \frac{\mu_{(a+1)p_n}}{\mu_{(a+1)p_n+1}}\,\nu_{(a+1)p_n+1}\right)$$
and
$$u_{n, a} = \frac{1}{a\, \Gamma(\alpha + 1)}\, \sqrt{\frac{1}{V(\alpha, a)}}\; \frac{p_n^{\alpha}\, v_n}{\theta^{p_n}\, L(p_n)}.$$

The final lemma of this section provides an asymptotic bound on the third-order moments appearing in the proof of Theorem 2.

Lemma 10. Let $k \in \mathbb{N}$ and $p_n \to \infty$. Let $(H_{n, j})_{0 \leq j \leq k}$ be sequences of Borel uniformly bounded functions on $[0, 1]$ and
$$\forall\, u \in [0, 1], \quad h_n(u) = \sum_{j=0}^{k} \frac{H_{n, j}(u)}{p_n^{j}}\,(1 - u)^{k-j}.$$
If $Y$ is a random variable with survival function $\overline{G}(x) = (1-x)^{\alpha}\, L((1-x)^{-1})$ where $\alpha > 0$ and $L$ is a Borel slowly varying function at infinity, then
$$\mathbb{E}\,|Y^{p_n}\, h_n(Y)|^3 = O\big(p_n^{-\alpha - 3k}\, L(p_n)\big).$$

Appendix B: Proofs of the auxiliary results

Proof of Lemma 1. Let $I_{p_n} := \mu_{1, p_n}/p_n$ and $\varepsilon > 0$. The integral $I_{p_n}$ is expanded as
$$I_{p_n} = \int_{1-\varepsilon}^1 y^{p_n - 1}\, \overline{F}_1(y)\, dy\left(1 + \frac{\int_0^{1-\varepsilon} y^{p_n - 1}\, \overline{F}_1(y)\, dy}{\int_{1-\varepsilon}^1 y^{p_n - 1}\, \overline{F}_1(y)\, dy}\right)$$
where
$$0 \leq \frac{\int_0^{1-\varepsilon} y^{p_n - 1}\, \overline{F}_1(y)\, dy}{\int_{1-\varepsilon}^1 y^{p_n - 1}\, \overline{F}_1(y)\, dy} \leq \frac{1 - \varepsilon}{\displaystyle\int_{1-\varepsilon}^1 \left(\frac{y}{1 - \varepsilon}\right)^{p_n - 1} \overline{F}_1(y)\, dy} \leq \frac{1 - \varepsilon}{\left(\dfrac{1 - \varepsilon/2}{1 - \varepsilon}\right)^{p_n - 1}\displaystyle\int_{1-\varepsilon/2}^1 \overline{F}_1(y)\, dy}.$$
Since $\left(\dfrac{1 - \varepsilon/2}{1 - \varepsilon}\right)^{p_n - 1} \to \infty$ as $n \to \infty$, it follows that
$$I_{p_n} = \int_{1-\varepsilon}^1 y^{p_n - 1}\, \overline{F}_1(y)\, dy\,(1 + o(1)). \qquad (14)$$
In view of
$$1 \leq \frac{\int_{1-\varepsilon}^1 y^{p_n - 1}\, \overline{F}_1(y)\, dy}{\int_{1-\varepsilon}^1 y^{p_n}\, \overline{F}_1(y)\, dy} \leq \frac{1}{1 - \varepsilon}$$
and (14), one thus has $I_{p_n}/I_{p_n+1} \to 1$ as $n \to \infty$, and Lemma 1 is proved.

Proof of Lemma 2. If $\varphi'$ is identically $0$, then $\varphi$ is constant on $[1, +\infty)$ and the results are straightforward. Otherwise, let us consider (i) and (ii) separately.

(i) Let $t, x \geq 1$. The mean value theorem shows that there exists $h_1(t, x) \in (0, 1)$ such that
$$t\,|\varphi(tx) - \varphi((t+1)x)| = \frac{t}{t + h_1(t, x)}\,\Big|\big[(t + h_1(t, x))x\big]\, \varphi'\big[(t + h_1(t, x))x\big]\Big| \leq \sup_{y \geq t} |y\, \varphi'(y)| \to 0$$
uniformly in $x \geq 1$, as $t \to +\infty$.

(ii) Let $t \geq 1$ and $x \in (0, 1]$, $q > 0$, $\varepsilon > 0$ and
$$c(\varepsilon) := \left(\frac{\varepsilon}{2}\cdot\frac{1}{\sup_{y > 1} |y\, \varphi'(y)|}\right)^{1/q}.$$
Applying the mean value theorem again shows that there exists $h_2(t, x) \in (0, 1)$ such that
$$t\, x^q\,|\varphi(tx) - \varphi((t+1)x)| = x^q\, \frac{t}{t + h_2(t, x)}\,\Big|\big[(t + h_2(t, x))x\big]\, \varphi'\big[(t + h_2(t, x))x\big]\Big| \leq x^q\, \sup_{y > 1} |y\, \varphi'(y)|\, 1\!\!1_{\{0 < x < c(\varepsilon)\}} + \sup_{y \geq t\, c(\varepsilon)} |y\, \varphi'(y)|\, 1\!\!1_{\{c(\varepsilon) \leq x \leq 1\}} \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon$$
for all $t$ large enough, uniformly in $x \in (0, 1]$, which concludes the proof of Lemma 2.

Proof of Lemma 3. The result follows from the identities

$$(fg)_{p+1} - (fg)_p = f_{p+1}\,(g_{p+1} - g_p) + g_p\,(f_{p+1} - f_p),$$
$$(fg)_{p+2} - 2(fg)_{p+1} + (fg)_p = (f_{p+2} - 2f_{p+1} + f_p)\, g_{p+2} + (f_{p+1} - f_p)\,(g_{p+2} - g_p) + f_{p+1}\,(g_{p+2} - 2g_{p+1} + g_p),$$
and from the properties of $(f_p)$ and $(g_p)$.

Proof of Lemma 4. Remark that, for $p$ large enough,
$$\forall\, x \in (0, 1], \quad x^{-k}\,|f_p(x)|\,|u_p(x)| \leq C_k\,\big(|u(x)| + r(x)\big)$$
where $r$ is a bounded Borel function on $(0, 1]$. The upper bound is an integrable function on $(0, 1]$, so that the dominated convergence theorem yields
$$\int_0^1 x^{-k} f_p(x)\, u_p(x)\, dx \to \int_0^1 x^{-k} f(x)\, u(x)\, dx$$
as $p \to \infty$, which proves the first part of the lemma. Since $v$ is bounded on $[1, +\infty)$, $(g_p\, v_p)$ converges uniformly to $gv$ on $[1, +\infty)$. The function $x \mapsto x^{-k}$ being integrable on $[1, +\infty)$, the dominated convergence theorem yields
$$\int_1^{+\infty} x^{-k} g_p(x)\, v_p(x)\, dx \to \int_1^{+\infty} x^{-k} g(x)\, v(x)\, dx$$
as $p \to \infty$, which concludes the proof of Lemma 4.

Proof of Lemma 5. Remark that
$$\frac{1}{p+1} - \frac{1}{p} = O\!\left(\frac{1}{p^2}\right) \quad \text{and} \quad \frac{1}{p+2} - \frac{2}{p+1} + \frac{1}{p} = O\!\left(\frac{1}{p^3}\right)$$
to obtain the result.

Proof of Lemma 6. It is clear that for all $x \in (0, 1]$, $f_p(x) \to e^{-1/x}$ as $p \to \infty$. In order to prove that $(f_p) \in \mathcal{P}$, let us rewrite $f_p(x)$ as $f_p(x) = \sigma_p\, \varphi_p(x)\, \psi_p(x)$ where
$$\sigma_p = \left(1 - \frac{1}{p}\right)^{-\alpha-1}, \quad \varphi_p(x) = \left(1 + \frac{1}{(p-1)x}\right)^{-\alpha-2}, \quad \psi_p(x) = \left(1 - \frac{1}{(p-1)x + 1}\right)^{p-1},$$
for all $x \in (0, 1]$, and prove that $(\sigma_p) \in \mathcal{P}_0$, $(\varphi_p) \in \mathcal{P}_{-1}$ and $(\psi_p) \in \mathcal{P}$. First, note that
$$\sigma_p = 1 + \frac{\alpha + 1}{p} + \frac{(\alpha+1)(\alpha+2)}{2}\,\frac{1}{p^2} + o\!\left(\frac{1}{p^2}\right)$$
so that the collection of constant functions $(\sigma_p)$ lies in $\mathcal{P}_0$. Second, we have
$$\forall\, p > 1,\ \forall\, x \in (0, 1], \quad |\varphi_p(x)| \leq 1 \leq x^{-1}. \qquad (15)$$
Moreover,
$$[\varphi_{p+1} - \varphi_p](x) = \varphi_p(x)\left[\left(1 - \frac{1}{p}\right)^{-\alpha-2}\left(1 - \frac{x}{px + 1}\right)^{\alpha+2} - 1\right],$$
and since $\forall\, x \in (0, 1]$, $x/(px+1) \leq 1/p$, Taylor expansions yield, uniformly in $x \in (0, 1]$,
$$[\varphi_{p+1} - \varphi_p](x) = \varphi_p(x)\left[\frac{\alpha + 2}{p(px + 1)} + O\!\left(\frac{1}{p^2}\right)\right].$$
It follows that there exists a positive constant $C^{(1)}$ such that for $p$ large enough,
$$p^2\,|\varphi_{p+1} - \varphi_p|(x) \leq C^{(1)}\, x^{-1}. \qquad (16)$$
Third, let $x \in (0, 1]$, and consider a pointwise Taylor expansion of $\varphi_p$ to get
$$\varphi_p(x) = 1 - \frac{\alpha + 2}{px} + \frac{\alpha + 2}{p^2 x}\left(-1 + \frac{\alpha + 3}{2x}\right) + o\!\left(\frac{1}{p^2}\right).$$
Using (15), (16) and applying Lemma 5 therefore shows that $(\varphi_p) \in \mathcal{P}_{-1}$.

Let $x \in (0, 1]$, $k \geq 0$, $\Psi_x(p) = (1 - 1/(px + 1))^p$, so that $\psi_p(x) = \Psi_x(p-1)$. Routine calculations show that $\Psi_x(p)$ is a positive non-increasing function of $p$. Consequently, for all sufficiently large $p$ and for all $x \in (0, 1]$, $\psi_p(x) \leq \psi_{k+1}(x)$. Remarking that $\psi_{k+1}(x) \leq k^k\, x^k$ for all $x \in (0, 1]$, it follows that
$$\forall\, k \geq 0,\ \exists\, p_k \geq 1,\ \exists\, C_k \geq 0,\ \forall\, p \geq p_k,\ \forall\, x \in (0, 1], \quad |\psi_p(x)| \leq C_k\, x^k. \qquad (17)$$
Recall that $\Psi_x$ is non-increasing and write
$$|\psi_{p+1} - \psi_p|(x) = \psi_p(x)\left[1 - \left(1 - \frac{1}{px + 1}\right)\left(1 + \frac{1}{p-1}\right)^{p-1}\left(1 - \frac{x}{px + 1}\right)^{p-1}\right].$$
Taylor expansions of the logarithm function at $1$ and of the exponential function at $0$ imply that, uniformly in $x \in (0, 1]$,
$$e\left(1 - \frac{x}{px + 1}\right)^{p-1} = \exp\!\left(\frac{1}{px + 1}\right)\left[1 + \frac{x}{px + 1} - \frac{p}{2}\left(\frac{x}{px + 1}\right)^2 + O\!\left(\frac{1}{p^2}\right)\right].$$
Since for all $x \in (0, 1]$, $0 \leq 1/(px+1) \leq 1$, applying the mean value theorem to the function $h \mapsto (1-h)e^h$ gives
$$\left|\left(1 - \frac{1}{px + 1}\right)\exp\!\left(\frac{1}{px + 1}\right) - 1\right| \leq \frac{e}{(px + 1)^2}.$$
A Taylor expansion of $\left(1 + \frac{1}{p-1}\right)^{p-1}$ then yields, uniformly in $x \in (0, 1]$,
$$|\psi_{p+1} - \psi_p|(x) \leq \psi_p(x)\left[\left(e + \frac{1}{2p}\right)\frac{1}{(px + 1)^2} + O\!\left(\frac{1}{p^2}\right)\right].$$
Therefore, there exists $C^{(3)} \geq 0$ such that, for all $p$ large enough,
$$p^2\,|\psi_{p+1} - \psi_p|(x) \leq \psi_p(x)\, C^{(3)}\, x^{-2}.$$
