
HAL Id: hal-01108160

https://hal.archives-ouvertes.fr/hal-01108160

Preprint submitted on 22 Jan 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Boundary of a Gaussian process

Andrei Novikov

Institut Sotsialnykh Sistem i Tekhnology Upravleniya, Novosibirsk, Russia.

Abstract: We study the boundary behavior of a Gaussian process. To leading order, there is essentially no difference in maximal displacement between a Gaussian process and its reflected counterpart. We provide two proofs of this fact: a soft argument based on the asymptotic independence of the two-sided extremal particles of a Gaussian process, and a proof by direct computation. The asymptotics of the minimal displacement are also given.

1 Main results

Boundary estimation and, more generally, conditional extreme-value analysis are active areas of statistical research; the main contributions to this domain are listed in the references at the end of this paper. The study of the boundary behavior of a Gaussian process was initiated some years ago, when a renewal argument showed that the cumulative distribution function of its maximal displacement $M_t$ at time $t$, $u(t, x) := \mathbb{P}(M_t \le x)$, solves the semilinear heat equation

$$\partial_t u = \frac{1}{2}\partial_{xx} u + u^2 - u, \qquad (1)$$
with initial condition
$$u(0, x) = \begin{cases} 1 & \text{if } x \ge 0, \\ 0 & \text{if } x < 0. \end{cases}$$
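The renewal argument behind (1) can be sketched as follows; this is a hedged reconstruction in the style of McKean's classical derivation, conditioning on the first splitting time (an exponential variable with parameter 1), and is not spelled out in the text:

```latex
% Either no split occurs before time t (probability e^{-t}), or the first
% split happens at time s and the two subtrees are i.i.d. copies:
u(t,x) \;=\; e^{-t}\,\mathbb{P}_0\!\left(B_t \le x\right)
        \;+\; \int_0^t e^{-s}\,\mathbb{E}_0\!\left[\,u\!\left(t-s,\; x - B_s\right)^{2}\right] ds .
% Differentiating in t, and using that the heat semigroup has generator
% \tfrac12\partial_{xx}, yields \partial_t u = \tfrac12\partial_{xx}u + u^2 - u.
```

The square inside the expectation is exactly the quadratic mechanism: both subtrees, shifted to their common birth place $B_s$, must stay below $x$.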

From equation (1), one can deduce analytically that the extremal particle of a Gaussian process sits around $\sqrt{2}\,t$ as $t \to \infty$.

Theorem 1.1 Let $(M_t;\, t \ge 0)$ be the maximal displacement of a Gaussian process and let $q_\delta(t) := \sup\{x \ge 0;\ \mathbb{P}(M_t \le x) \le \delta\}$ be its $\delta$-quantile for $0 < \delta < 1$. Then
$$q_\delta(t) = \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + O(1) \quad \text{as } t \to \infty, \qquad (2)$$
$$\liminf_{t\to\infty} \frac{M_t - \sqrt{2}\,t}{\log t} = -\frac{3}{2\sqrt{2}} \quad \text{a.s.} \qquad (3)$$
and
$$\limsup_{t\to\infty} \frac{M_t - \sqrt{2}\,t}{\log t} = -\frac{1}{2\sqrt{2}} \quad \text{a.s.} \qquad (4)$$

We consider a continuous-time Gaussian process with quadratic branching mechanism: the system starts with a single particle at the origin, which follows a reflected Gaussian process with zero drift and unit variance. After an exponential time with parameter 1 it splits into two new particles, each of which, relative to their common birth place, moves as an independent copy of the Gaussian process and splits at rate 1 into two copies of itself, and so on. The expected number of particles alive at time $t$ is $\mathbb{E}\,\#N^R(t) = e^t$ for $t \ge 0$.
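The branching construction can be sketched in a short simulation of the unreflected system (the reflected one is obtained by taking absolute values). Everything below, step size, seed, and the Euler discretization of rate-1 splitting, is an illustrative assumption, not part of the paper:

```python
import math
import random

def branching_walk(t, dt=0.01, rng=random):
    """Euler sketch of the branching system: every particle moves by an
    independent Gaussian increment sqrt(dt)*N(0,1) per step and splits
    into two with probability dt per step (rate-1 exponential lifetimes)."""
    positions = [0.0]
    for _ in range(int(t / dt)):
        nxt = []
        for x in positions:
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)
            nxt.append(x)
            if rng.random() < dt:   # branching event: add an independent copy
                nxt.append(x)
        positions = nxt
    return positions

rng = random.Random(42)
t = 3.0
counts = [len(branching_walk(t, rng=rng)) for _ in range(300)]
mean_count = sum(counts) / len(counts)
print(mean_count)   # should hover around e^3 ≈ 20.1
```

Averaging the population size over independent runs illustrates $\mathbb{E}\,\#N(t) = e^t$; a single run fluctuates wildly, because $e^{-t}\#N(t)$ converges to a nondegenerate random limit.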

Denote by $Y^R_v(t)$ the position of a particle $v \in N^R(t)$ alive at time $t$ in the reflected system. For $s < t$, $Y^R_v(s)$ is the position of the ancestor of $v$ that was alive at time $s$. Define
$$M^R_t := \max_{v \in N^R(t)} Y^R_v(t),$$
its maximal displacement. As in Theorem 1.1, we are interested in the $\delta$-quantile of the maximal displacement, i.e.

$$q^R_\delta(t) := \sup\{x \ge 0;\ \mathbb{P}(M^R_t \le x) \le \delta\}. \qquad (5)$$
Observe that the reflected Gaussian process has non-stationary increments, so it is not a simple task to identify $u^R(t, x) := \mathbb{P}(M^R_t \le x)$ as the solution of a partial differential equation. Nevertheless, it is still possible to associate $u^R(t, x, y) := \mathbb{P}_y(M^R_t \le x)$ to some partial differential equation, where $\mathbb{P}_y$ is the law of the Gaussian process starting at $y \ge 0$.

Let us mention some immediate, at least morally immediate, results concerning $(q^R_\delta(t);\, t \ge 0)$ and $(M^R_t;\, t \ge 0)$. Note that the reflected system can be constructed by taking the absolute values of all the particles in a Gaussian process. Therefore, the rightmost particle in the reflected system is the maximum of two identically distributed (though not independent) copies of the one-sided extremum. By Theorem 1.1, we see that
$$q^R_\delta(t) \ge \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + O(1) \quad \text{as } t \to \infty,$$
$$\liminf_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} \ge -\frac{3}{2\sqrt{2}} \quad \text{a.s.} \quad \text{and} \quad \limsup_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} = -\frac{1}{2\sqrt{2}} \quad \text{a.s.}$$
Now an interesting question is whether $q^R_\delta(t)$ and $q_\delta(t)$ have exactly the same order as in (2) as $t \to \infty$. The main result of this work provides an affirmative answer to this question.

Theorem 1.2 Let $(q^R_\delta(t);\, t \ge 0)$ be defined as in (5). Then
$$q^R_\delta(t) = \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + O(1) \quad \text{as } t \to \infty. \qquad (6)$$

The above result is not that surprising, since in a Gaussian process the rightmost particle is asymptotically independent of the leftmost one. We develop this circle of ideas in Section 2, which, together with Theorem 1.1, gives a proof of Theorem 1.2. Remark that this fact is implicitly suggested by the Poissonian structure of the extremal process of a Gaussian process. But even without resorting to Theorem 1.1, Theorem 1.2 can be established using the first/second moment method. The key idea consists in finding the derivative martingale that forces one particle (the so-called spine) to stay below certain curves; under the corresponding change of measure, the spine behaves as a three-dimensional Bessel process. This observation simplifies the computation enormously. However, this method does not work well in the reflected case, because of the source of the reflection: the local times. Thus extra computation is required for the boundary crossing probability of a reflected Gaussian process. Furthermore, with the ingredients of the proof, we are able to derive an almost sure fluctuation result for $(M^R_t;\, t \ge 0)$.

Corollary 1.3 The maximal displacement $M^R_t$ satisfies
$$\liminf_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} = -\frac{3}{2\sqrt{2}} \quad \text{a.s.} \qquad (7)$$
and
$$\limsup_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} = -\frac{1}{2\sqrt{2}} \quad \text{a.s.} \qquad (8)$$

In a Gaussian process, the minimal displacement is the opposite of the maximal one, so it suffices to study the latter. In the reflected case, however, the minimal displacement must be treated separately. The following result on the minimal displacement of the reflected process is a direct consequence of a generalized law of large numbers. For the sake of completeness, we include it in Section 4.

Proposition 1.4 The minimal displacement $m^R_t$ satisfies
$$\lim_{t\to\infty} m^R_t = 0 \quad \text{a.s.} \qquad (9)$$

2 Dependence of two-sided extremes

In this section, we provide a proof of Theorem 1.2 by exploring the dependence of the two-sided extremal particles of a Gaussian process. We show that at large times, the rightmost extremal particles are asymptotically independent of the leftmost ones. To formulate our result, we need the following notation.

Let $Y_v(t)$ be the position of $v \in N(t)$ alive at time $t$ in the Gaussian process. For $s < t$, $Y_v(s)$ is the position of the ancestor of $v$ that was alive at time $s$. The correlations among particles at a fixed time $t \ge 0$ can be expressed in terms of the genealogical distance:
$$\mathbb{E}[Y_u(t)Y_v(t)] = Q_t(u, v) \quad \text{for } u, v \in N(t), \qquad (10)$$
where $Q_t(u, v) := \sup\{s \le t;\ Y_u(s) = Y_v(s)\} \in [0, t]$ is the splitting time of the most recent common ancestor of $u$ and $v$. Moreover, for $a \in \mathbb{R}$ and $f : \mathbb{R}_+ \to \mathbb{R}_+$ such that $f(t) = o(t)$ as $t \to \infty$, define
$$N^a_f(t) := \{u \in N(t);\ Y_u(t) \in [at - f(t), at + f(t)]\}, \qquad (11)$$
the set of particles falling into the cluster ranging from $at - f(t)$ to $at + f(t)$. According to Theorem 1.1, the set $N^a_f(t)$ is non-empty if $a \in (-\sqrt{2}, \sqrt{2})$, or if $a = \pm\sqrt{2}$ and $f(t) \gtrsim \frac{1}{2\sqrt{2}}\log t$. In the sequel, we suppose that this condition is always satisfied. To abbreviate the notation, we write $Y_u(t) \in at + o(t)$ instead of $Y_u(t) \in [at - f(t), at + f(t)]$ for some valid function $f$. Similarly, we write $N^a$ for $N^a_f$. The following result describes the genealogy of particles lying in clusters with different indices $a$.

Proposition 2.1 For $a < b$ and $r > 0$ such that $r + \frac{(b-a)^2}{4(1-r)} > 2$, we have
$$\mathbb{P}\big(\exists (u, v) \in N^a(t) \times N^b(t) \text{ such that } Q_t(u, v) \in [rt, t]\big) \to 0 \quad \text{as } t \to \infty,$$
where $Q_t$ is defined as in (10) and $N^a(t)$, $N^b(t)$ are defined as in (11).

Proof: By the first moment method,
$$\mathbb{P}\big(\exists (u, v) \in N^a(t) \times N^b(t) \text{ such that } Q_t(u, v) \in [rt, t]\big) \le \mathbb{E}\,\#\big\{(u, v) \in N^a(t) \times N^b(t) \text{ such that } Q_t(u, v) \in [rt, t]\big\}.$$
According to the many-to-two principle,
$$\mathbb{E}\,\#\big\{(u, v) \in N^a(t) \times N^b(t) \text{ such that } Q_t(u, v) \in [rt, t]\big\} = \int_{rt}^{t} e^{2t - s} \int_{\mathbb{R}} \mathbb{P}\big(Y_u(t) \in N^a(t) \,\big|\, Y_u(s) = x\big)\, \mathbb{P}\big(Y_v(t) \in N^b(t) \,\big|\, Y_v(s) = x\big)\, \mu_s(dx)\, ds, \qquad (12)$$
where $\mu_s(dx)$ is the Gaussian measure with variance $s$. Note that from the splitting point $(s, x)$ of $u$ and $v$, the two particles perform independent Gaussian processes starting at $x$. Fix $\epsilon > 0$ small enough. Then
$$\mathbb{P}\big(Y_u(t) \in N^a(t) \,\big|\, Y_u(s) = x\big) \times \mathbb{P}\big(Y_v(t) \in N^b(t) \,\big|\, Y_v(s) = x\big) \le \mathbb{P}\big(B_0 \text{ hits } at + o(t) - x \text{ in time } t - s\big) \times \mathbb{P}\big(B_0 \text{ hits } bt + o(t) - x \text{ in time } t - s\big)$$
$$\le \begin{cases} \mathbb{P}\big(|B_0(t-s)| \ge (b - a - \epsilon)t + o(t)\big) & \text{if } x < (a + \epsilon)t \text{ or } x > (b - \epsilon)t, \\ \mathbb{P}\big(|B_0(t-s)| \ge x - at + o(t)\big) \times \mathbb{P}\big(|B_0(t-s)| \ge bt - x + o(t)\big) & \text{otherwise}, \end{cases}$$
where $B_0$ denotes a standard Gaussian process starting at $0$. It is classical that for $z > 0$, $\mathbb{P}(\mathcal{N}(0,1) \ge z) \le \frac{1}{\sqrt{2\pi}\,z}\exp(-\frac{z^2}{2})$, where $\mathcal{N}(0,1)$ is a standard normal variable. Therefore, for $rt \le s \le t$,
$$\mathbb{P}\big(|B_0(t-s)| \ge (b - a - \epsilon)t + o(t)\big) = \mathbb{P}\Big(\mathcal{N}(0,1) \ge \frac{(b - a - \epsilon)t + o(t)}{\sqrt{t - s}}\Big) \le \frac{1}{\sqrt{2\pi}(b - a - \epsilon)}\, t^{-1/4} \exp\Big(-\frac{(b - a - \epsilon)^2 t}{2(1 - r)} + o(t)\Big) \qquad (13)$$
and for $(a + \epsilon)t \le x \le (b - \epsilon)t$,
$$\mathbb{P}\big(|B_0(t-s)| \ge x - at + o(t)\big) \times \mathbb{P}\big(|B_0(t-s)| \ge bt - x + o(t)\big) = \mathbb{P}\Big(\mathcal{N}(0,1) \ge \frac{x - at + o(t)}{\sqrt{t - s}}\Big) \times \mathbb{P}\Big(\mathcal{N}(0,1) \ge \frac{bt - x + o(t)}{\sqrt{t - s}}\Big)$$
$$\le \frac{1}{2\pi\epsilon}\, t^{-1/2} \exp\Big(-\frac{(x - at + o(t))^2 + (bt - x + o(t))^2}{2(1 - r)t}\Big) \le \frac{1}{2\pi\epsilon}\, t^{-1/2} \exp\Big(-\frac{(b - a)^2 t}{4(1 - r)} + o(t)\Big). \qquad (14)$$
Injecting (13) and (14) into (12), we obtain
$$\mathbb{E}\,\#\big\{(u, v) \in N^a(t) \times N^b(t) \text{ such that } Q_t(u, v) \in [rt, t]\big\} \le K t^{-1/4} \exp\Big(\Big(2 - r - \frac{(b - a)^2}{4(1 - r)}\Big)t + o(t)\Big) \to 0 \quad \text{as } t \to \infty. \qquad \square$$
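The proof rests on the standard Gaussian tail bound $\mathbb{P}(\mathcal{N}(0,1) \ge z) \le \exp(-z^2/2)/(\sqrt{2\pi}\,z)$. A quick numerical sanity check of that inequality (the function names are ours):

```python
import math

def gaussian_tail(z):
    """Exact tail P(N(0,1) >= z) via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def mills_bound(z):
    """The bound exp(-z^2/2) / (sqrt(2*pi) * z), valid for every z > 0."""
    return math.exp(-z * z / 2) / (math.sqrt(2 * math.pi) * z)

for z in (0.5, 1.0, 2.0, 4.0, 8.0):
    assert gaussian_tail(z) <= mills_bound(z)
    print(z, gaussian_tail(z), mills_bound(z))
```

The bound is also sharp as $z \to \infty$ (the ratio tends to 1), which is what makes the exponents in (13) and (14) exact to first order.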

According to Theorem 1.1, the maximal displacement $M(t)$ and the minimal displacement $m(t)$ of a Gaussian process satisfy
$$M(t) = \sqrt{2}\,t + o(t) \quad \text{and} \quad m(t) = -\sqrt{2}\,t + o(t).$$
Thus the particles realizing $M(t)$ and $m(t)$ belong to $N^{\sqrt{2}}(t)$ and $N^{-\sqrt{2}}(t)$, respectively. With $a = -\sqrt{2}$ and $b = \sqrt{2}$, one checks that $r + \frac{2}{1-r} > 2$ for all $r \in (0, 1)$. Proposition 2.1 then yields the asymptotic independence of these two extremal particles.

Corollary 2.2 Let $M(t)$ (resp. $m(t)$) be the maximal (resp. minimal) displacement of a Gaussian process, realized by particles $u_M$ and $u_m$ respectively. Then, for every $r > 0$,
$$\mathbb{P}\big(Q_t(u_M, u_m) \in [rt, t]\big) \to 0 \quad \text{as } t \to \infty,$$
where $Q_t$ is defined as in (10). In other words, for $x > 0$,
$$\mathbb{P}(M_t \le x \text{ and } m_t \ge -x) - \mathbb{P}(M_t \le x)^2 \to 0 \quad \text{as } t \to \infty.$$

First proof of Theorem 1.2: It is already clear from the construction that $q^R_\delta(t) \ge q_\delta(t)$. Suppose now that $\mathbb{P}(M^R_t \le x) \le \delta$. According to Corollary 2.2, for arbitrarily small $\epsilon > 0$,
$$\mathbb{P}(M_t \le x)^2 - \mathbb{P}(M^R_t \le x) \le \epsilon \quad \text{for } t \text{ large enough}.$$
Consequently, $\mathbb{P}(M_t \le x) \le \sqrt{\delta + \epsilon}$, and thus $q^R_\delta(t) \le q_{\sqrt{\delta + \epsilon}}(t)$ when $t$ is large. Theorem 1.1 then permits us to conclude. $\square$

3 Maximal displacement

In this part, we provide an alternative approach to Theorem 1.2 as well as Corollary 1.3 without appealing to Theorem 1.1.

3.1 Background and basic tools

We recall some basic properties of the reflected Gaussian process. A reflected Gaussian process $(Y_t;\, t \ge 0)$ is defined as the unique strong solution of the Skorokhod equation
$$Y_t = B_t + L_t,$$
where $(B_t;\, t \ge 0)$ is a standard Gaussian process and $(L_t;\, t \ge 0)$ is the local time process, which is increasing and whose associated measure is supported on the zero set of $(Y_t;\, t \ge 0)$. In particular, a reflected Gaussian process has the same law as the absolute value of a linear Gaussian process. It is well known that $(|B_t|;\, t \ge 0)$ is a strong Markov process with transition density
$$p^R(s, x; t, y) := \frac{1}{\sqrt{2\pi(t - s)}}\Big[\exp\Big(-\frac{(y - x)^2}{2(t - s)}\Big) + \exp\Big(-\frac{(y + x)^2}{2(t - s)}\Big)\Big], \qquad (15)$$
for $0 \le s \le t$ and $x, y \ge 0$.
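As a sanity check on the transition density (15), the heat kernel plus its mirror image, one can verify numerically that it integrates to 1 over $y \ge 0$. A crude Riemann sum suffices; the step size and cutoff below are arbitrary choices of ours:

```python
import math

def p_reflected(s, x, t, y):
    """Transition density (15) of the reflected process: Gaussian kernel
    plus its mirror image across the origin."""
    var = t - s
    c = 1.0 / math.sqrt(2 * math.pi * var)
    return c * (math.exp(-(y - x) ** 2 / (2 * var))
                + math.exp(-(y + x) ** 2 / (2 * var)))

x, t, dy = 0.7, 2.0, 0.001
total = sum(p_reflected(0.0, x, t, k * dy) * dy for k in range(int(20.0 / dy)))
print(total)   # ≈ 1: the mass reflected below 0 reappears above 0
```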

As mentioned in the introduction, it is indispensable to compute the probability that a reflected Gaussian process stays below certain curves, and this does not seem to be easily derived by a change of measure. More generally, few closed-form expressions are available for general boundary crossing probabilities of a linear Gaussian process.

Theorem 3.1 Let $a, b > 0$ and $|x| < a + bt$ for some $t \ge 0$. Define $\tau_{a,b} := \inf\{s \ge 0;\ |B_s| \ge a + bs\}$, the first hitting time of the affine boundary by the reflected Gaussian process. Then
$$\mathbb{P}(\tau_{a,b} \ge t \text{ and } |B_t| \in dx)/dx = \sqrt{\frac{2}{\pi t}} \exp\Big(-\frac{x^2}{2t}\Big) \sum_{n \in \mathbb{Z}} (-1)^n \exp\Big[-2a\Big(b + \frac{a}{t}\Big)n^2\Big] \cosh\Big(\frac{2axn}{t}\Big). \qquad (16)$$
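Formula (16) above is reconstructed from a damaged display, so a consistency check is in order: for $b = 0$ the boundary is the constant level $a$, and (16) should agree with the classical image-charge (reflection-principle) series for two-sided exit of Brownian motion. A numerical comparison under that reconstruction (the truncation at $|n| \le 50$ is ours):

```python
import math

def density_formula(a, b, t, x, nmax=50):
    """The reconstructed right-hand side of (16)."""
    s = sum((-1) ** n * math.exp(-2 * a * (b + a / t) * n * n)
            * math.cosh(2 * a * x * n / t) for n in range(-nmax, nmax + 1))
    return math.sqrt(2 / (math.pi * t)) * math.exp(-x * x / (2 * t)) * s

def density_images(a, t, x, nmax=50):
    """Classical series for P(|B_s| < a for all s <= t, |B_t| in dx)/dx with
    0 <= x < a: twice the alternating sum of image heat kernels."""
    phi = lambda z: math.exp(-z * z / (2 * t)) / math.sqrt(2 * math.pi * t)
    return 2 * sum((-1) ** n * phi(x - 2 * n * a) for n in range(-nmax, nmax + 1))

a, t, x = 1.0, 1.0, 0.4
print(density_formula(a, 0.0, t, x), density_images(a, t, x))  # should agree
```

Completing the square term by term shows the two series are identical at $b = 0$, which is what the numerical check confirms.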

We summarize some previous results, in particular some key estimates on the boundary of a Gaussian process. Fix $t > 0$ and $y \in \mathbb{R}$; the following two sets are of special importance in the proof:
$$H(y, t) := \#\{u \in N(t);\ Y_u(s) \le \beta s + 1\ \forall s \le t \text{ and } \beta t - 1 \le Y_u(t) \le \beta t\}, \qquad (17)$$
and
$$\Gamma(y, t) := \#\{u \in N(t);\ Y_u(s) \le \beta s + L(s) + y + 1\ \forall s \le t \text{ and } \beta t - 1 \le Y_u(t) \le \beta t + y\}, \qquad (18)$$
where
$$\beta := \sqrt{2} - \frac{3}{2\sqrt{2}}\frac{\log t}{t} + \frac{y}{t}, \qquad (19)$$
and $L \in C(\mathbb{R})$ satisfies
$$L(s) := \begin{cases} \frac{3}{2\sqrt{2}}\log(s + 1) & \text{for } s \in [0, \frac{t}{2} - 1], \\[2pt] \frac{3}{2\sqrt{2}}\log(t - s + 1) & \text{for } s \in [\frac{t}{2} + 1, t], \end{cases} \qquad (20)$$
with $L''(s) \in [-\frac{10}{t}, 0]$ for $s \in [\frac{t}{2} - 1, \frac{t}{2} + 1]$.

On the one hand, using the second moment method for $H(y, t)$, one can prove that there exists $C_H > 0$ such that $\mathbb{P}(H(y, t) > 0) \ge C_H e^{-\sqrt{2}y}$. Hence,
$$\mathbb{P}\Big(M_t \ge \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + y\Big) \ge C_H e^{-\sqrt{2}y}. \qquad (21)$$
On the other hand, the first moment method for $\Gamma(y, t)$, together with some coupling arguments, guarantees the existence of $C_\Gamma > 0$ such that
$$\mathbb{P}\Big(M_t \ge \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + y\Big) \le C_\Gamma (y + 2)^2 e^{-\sqrt{2}y}. \qquad (22)$$
The quantile asymptotics (2) of Theorem 1.1 then follow directly from (21) and (22). With some extra effort, the almost sure fluctuation results (3) and (4) of Theorem 1.1 are also derived from these estimates.

3.2 Asymptotics of qδ(t) – Proof of Theorem 1.2

The current part is devoted to the proof of our main result, Theorem 1.2. As in the Gaussian process case, we subdivide the proof into two propositions.

Proposition 3.2 There exists $C^R_H > 0$ such that for $t \ge 1$ and $y \in [0, \sqrt{t}]$,
$$\mathbb{P}\Big(M^R_t \ge \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + y\Big) \ge C^R_H e^{-\sqrt{2}y}. \qquad (23)$$

Proposition 3.3 There exists $C^R_\Gamma > 0$ such that for $t \ge 1$ and $y \in [0, \sqrt{t}]$,
$$\mathbb{P}\Big(M^R_t \ge \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + y\Big) \le C^R_\Gamma (y + 2)^2 e^{-\sqrt{2}y}. \qquad (24)$$

Before proving these two results, let us indicate how we use them to prove Theorem 1.2.

Second proof of Theorem 1.2: From (23) and (24) it follows that for $t \ge 1$ and $y \in [0, \sqrt{t}]$,
$$1 - C^R_\Gamma (y + 2)^2 e^{-\sqrt{2}y} \le \mathbb{P}\Big(M^R_t \le \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + y\Big) \le 1 - C^R_H e^{-\sqrt{2}y}. \qquad (25)$$
Taking $y \sim \sqrt{t}$ and $t \to \infty$, both sides of (25) converge to 1. Thus there exists $\delta_0 > 0$ such that for $\delta_0 \le \delta < 1$,
$$q^R_\delta(t) = \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + O(1).$$
In addition, $q^R_\delta(t) \le q^R_{\delta_0}(t) = \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + O(1)$ for $0 < \delta \le \delta_0$.

Now fix $\delta > 0$ and $\epsilon > 0$. Choose $L > 0$ such that $\mathbb{E}\big(\delta^{\#N^R(L)}\big) \le \frac{\epsilon}{2}$, and $a > 0$ such that $\mathbb{P}(M^R_L > a) \le \frac{\epsilon}{2}$. We have, for $t \ge L$,
$$\mathbb{P}\big(M^R_t \ge q^R_\delta(t - L) + a\big) = \mathbb{P}\Big(\max_{u \in N^R(L)} \max_{v \leftarrow u} Y^R_v(t) \ge q^R_\delta(t - L) + a\Big)$$
$$= \mathbb{P}(M^R_L > a) + \mathbb{P}\Big(M^R_L \le a \text{ and } \max_{u \in N^R(L)} \max_{v \leftarrow u} Y^R_v(t) \ge q^R_\delta(t - L) + a\Big) \le \frac{\epsilon}{2} + \mathbb{E}\Big[\mathbb{P}\big(M^R_{t - L} > q^R_\delta(t - L)\big)^{\#N^R(L)}\Big] \le \epsilon, \qquad (26)$$
where $v \leftarrow u$ means that $v$ is a descendant of $u$. Similarly, we can choose $\tilde{L}$ such that
$$\mathbb{P}\big(M^R_t \le q^R_\delta(t - \tilde{L})\big) \le \epsilon. \qquad (27)$$
By (26) and (27), $(M^R_t - q^R_\delta(t);\, t \ge 0)$ is tight for every $\delta \in (0, 1)$. As a consequence, $q^R_{\delta_0}(t) - q^R_\delta(t) = O(1)$ for $0 < \delta < \delta_0$, which proves the desired result. $\square$

The rest of the section is devoted to the proofs of Proposition 3.2 (Section 3.2.1), Proposition 3.3 (Section 3.2.2) and Corollary 1.3 (Section 3.3).

3.2.1 Lower bound – Proof of Proposition 3.2

The proof of Proposition 3.2 is quite simple. In fact, it suffices to show the following.

Lemma 3.4 For $t \ge 0$, $M^R_t$ is stochastically larger than $M_t$, i.e. for all $x \ge 0$,
$$\mathbb{P}(M^R_t \ge x) \ge \mathbb{P}(M_t \ge x).$$

Then (23) follows immediately from (21), with $C^R_H = C_H$, by taking $x = \sqrt{2}\,t - \frac{3}{2\sqrt{2}}\log t + y$.

Proof: Note that $M^R_t \stackrel{d}{=} \max\{M_t, -m_t\}$, where $M_t$ (resp. $m_t$) is the maximal (resp. minimal) displacement of a Gaussian process. Since $\max\{M_t, -m_t\} \ge M_t$, the stochastic domination follows. $\square$

3.2.2 Upper bound – Proof of Proposition 3.3

We derive the upper bound for the maximal displacement of a reflected Gaussian process, i.e. Proposition 3.3.

Recall that the sets $H^R(y, t)$ and $\Gamma^R(y, t)$ are defined as in (17) and (18), except that the particles are governed by the reflected Gaussian process $Y^R$ instead of the standard Gaussian process $Y$. We need some estimates for $H^R$ and $\Gamma^R$.

Lemma 3.5 There exist $c_1, C_1 > 0$ such that for $t \ge 1$ and $y \in [0, \sqrt{t}]$,
$$c_1 e^{-\sqrt{2}y} \le \mathbb{E}H^R(y, t) \le C_1 e^{-\sqrt{2}y}. \qquad (28)$$

Lemma 3.6 There exists $C_2 > 0$ such that for $t \ge 1$ and $y \in [0, \sqrt{t}]$,
$$\mathbb{E}\Gamma^R(y, t) \le C_2 (y + 2)^2 e^{-\sqrt{2}y}. \qquad (29)$$

Proof of Proposition 3.3: (24) follows from (28) and (29). $\square$

We now turn to the proofs of Lemma 3.5 and Lemma 3.6.

Proof of Lemma 3.5: According to the many-to-one principle,
$$\mathbb{E}H^R(y, t) = e^t\, \mathbb{P}\big(\tau_{1,\beta} \ge t \text{ and } |B_t| \in [\beta t - 1, \beta t]\big), \qquad (30)$$
where $\tau_{1,\beta}$ is defined as in Theorem 3.1. By (16),
$$\mathbb{P}\big(\tau_{1,\beta} \ge t \text{ and } |B_t| \in [\beta t - 1, \beta t]\big) = \sqrt{\frac{2}{\pi t}} \sum_{n \in \mathbb{Z}} (-1)^n \exp\Big(-2\Big(\beta + \frac{1}{t}\Big)n^2\Big) \int_{\beta t - 1}^{\beta t} \cosh\Big(\frac{2nx}{t}\Big) \exp\Big(-\frac{x^2}{2t}\Big) dx. \qquad (31)$$
Now the key issue is to evaluate the asymptotic behavior of the integral in (31). A direct computation gives
$$\int_{\beta t - 1}^{\beta t} \cosh\Big(\frac{2nx}{t}\Big) \exp\Big(-\frac{x^2}{2t}\Big) dx = \sqrt{\frac{\pi t}{8}}\, e^{2n^2/t} \Bigg[ \mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} + n\sqrt{\tfrac{2}{t}}\Big) + \mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} - n\sqrt{\tfrac{2}{t}}\Big) - \mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} + n\sqrt{\tfrac{2}{t}} - \sqrt{\tfrac{1}{2t}}\Big) - \mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} - n\sqrt{\tfrac{2}{t}} - \sqrt{\tfrac{1}{2t}}\Big) \Bigg], \qquad (32)$$
whereas
$$\mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} + n\sqrt{\tfrac{2}{t}}\Big) + \mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} - n\sqrt{\tfrac{2}{t}}\Big) = 2 - \sqrt{\tfrac{2}{\pi}}\, e^{-t}(t + n^2)\cosh(\sqrt{8}\,n) + o(e^{-t}), \qquad (33)$$
and
$$\mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} + n\sqrt{\tfrac{2}{t}} - \sqrt{\tfrac{1}{2t}}\Big) + \mathrm{erf}\Big(\beta\sqrt{\tfrac{t}{2}} - n\sqrt{\tfrac{2}{t}} - \sqrt{\tfrac{1}{2t}}\Big) = 2 - \sqrt{\tfrac{2}{\pi}}\, e^{\sqrt{2} - t}(t + n^2)\cosh(\sqrt{8}\,n) + o(e^{-t}). \qquad (34)$$
Injecting (32), (33) and (34) into (31), we obtain
$$\mathbb{P}\big(\tau_{1,\beta} \ge t \text{ and } |B_t| \in [\beta t - 1, \beta t]\big) = \frac{e^{\sqrt{2}} - 1}{\sqrt{\pi}}\, e^{-t} \Bigg[ t \sum_{n \in \mathbb{Z}} (-1)^n e^{-\sqrt{8}\,n^2} \cosh(\sqrt{8}\,n) + \sum_{n \in \mathbb{Z}} (-1)^n n^2 e^{-\sqrt{8}\,n^2} \cosh(\sqrt{8}\,n) + o(1) \Bigg] \asymp e^{-t}, \qquad (35)$$
since $\sum_{n \in \mathbb{Z}} (-1)^n e^{-\sqrt{8}\,n^2} \cosh(\sqrt{8}\,n) = 0$ and $\sum_{n \in \mathbb{Z}} (-1)^n n^2 e^{-\sqrt{8}\,n^2} \cosh(\sqrt{8}\,n) \in\ ]0, \infty[$. Then (28) follows immediately from (30) and (35). $\square$
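Since (32) was also reconstructed from a damaged display, the closed form can be cross-checked against a direct quadrature of the integral in (31). The trapezoidal rule and the parameter values below are arbitrary test choices of ours:

```python
import math

def integral_quadrature(beta, t, n, steps=20000):
    """Trapezoidal quadrature of the integral of cosh(2nx/t)*exp(-x^2/(2t))
    over [beta*t - 1, beta*t]."""
    a, b = beta * t - 1.0, beta * t
    h = (b - a) / steps
    f = lambda x: math.cosh(2 * n * x / t) * math.exp(-x * x / (2 * t))
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, steps)))

def integral_erf(beta, t, n):
    """The reconstructed closed form (32) in terms of the error function."""
    A = beta * math.sqrt(t / 2)
    B = n * math.sqrt(2 / t)
    d = math.sqrt(1 / (2 * t))
    bracket = (math.erf(A + B) + math.erf(A - B)
               - math.erf(A + B - d) - math.erf(A - B - d))
    return math.sqrt(math.pi * t / 8) * math.exp(2 * n * n / t) * bracket

for n in (0, 1, 2):
    print(n, integral_quadrature(1.3, 5.0, n), integral_erf(1.3, 5.0, n))
```

Completing the square in the two exponentials $e^{\pm 2nx/t - x^2/(2t)}$ reduces the integral to shifted Gaussian integrals, which is exactly what the erf bracket expresses.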

Proof of Lemma 3.6: Again by the many-to-one principle, we have
$$\mathbb{E}\Gamma^R(y, t) = e^t\, \mathbb{P}\big(|B_s| \le \beta s + L(s) + y + 1\ \forall s \le t \text{ and } |B_t| \in [\beta t - 1, \beta t + y]\big)$$
$$\le e^t \Big[ \mathbb{P}\big(B_s \le \beta s + L(s) + y + 1\ \forall s \le t \text{ and } B_t \in [\beta t - 1, \beta t + y]\big) + \mathbb{P}\big(B_s \ge -\beta s - L(s) - y - 1\ \forall s \le t \text{ and } B_t \in [-\beta t - y, -\beta t + 1]\big) \Big]$$
$$= 2\,\mathbb{E}\Gamma(y, t) \le 2C_\Gamma (y + 2)^2 e^{-\sqrt{2}y},$$
which yields (29). $\square$

3.3 Almost sure fluctuation – Proof of Corollary 1.3

Proof of Corollary 1.3: From the stochastic comparison in Lemma 3.4, together with (3) and (4) in Theorem 1.1, it is straightforward that:

Lemma 3.7
$$\liminf_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} \ge -\frac{3}{2\sqrt{2}} \quad \text{a.s.} \quad \text{and} \quad \limsup_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} \ge -\frac{1}{2\sqrt{2}} \quad \text{a.s.}$$

In fact, (22) is the only ingredient used to obtain the matching upper bounds in the non-reflected case, and it is replaced here by (24) in Proposition 3.3. Thus:

Lemma 3.8
$$\liminf_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} \le -\frac{3}{2\sqrt{2}} \quad \text{a.s.} \quad \text{and} \quad \limsup_{t\to\infty} \frac{M^R_t - \sqrt{2}\,t}{\log t} \le -\frac{1}{2\sqrt{2}} \quad \text{a.s.}$$

Corollary 1.3 follows from Lemma 3.7 and Lemma 3.8. $\square$

4 Minimal displacement

We include in this section a generalized law of large numbers, which leads naturally to Proposition 1.4.

Recall that $N(t)$ is the set of particles alive at time $t$ in a quadratic Gaussian process, and write $\#N(t)$ for its cardinality. It is classical that $(e^{-t}\#N(t);\, t \ge 0)$ is a positive martingale. By the martingale convergence theorem, there exists a random variable $W_\infty \in [0, \infty)$ such that
$$e^{-t}\#N(t) \to W_\infty \quad \text{a.s. as } t \to \infty. \qquad (36)$$
In addition, $\mathbb{P}(W_\infty = 0)$ is the smallest root of $\Phi(s) = s$ in $[0, 1]$, where $\Phi(s) = s^2$. Thus $W_\infty > 0$ a.s.

The following theorem describes the asymptotic behavior of the number of particles confined to a given region at large times.

Theorem 4.1 For $D \subset \mathbb{R}$, let $N_D(t)$ denote the number of particles lying in the domain $D$ at time $t$ in a quadratic Gaussian process. Then
$$\frac{N_D(t)}{e^t\, t^{-1/2}} \to \frac{|D|}{\sqrt{2\pi}}\, W_\infty \quad \text{a.s. as } t \to \infty,$$
where $|D|$ is the Lebesgue measure of $D$ and $W_\infty > 0$ a.s. is defined as in (36).

Proof of Proposition 1.4: It suffices to take $D_\epsilon := [-\epsilon, \epsilon]$. By Theorem 4.1,
$$\frac{N_{D_\epsilon}(t)}{e^t\, t^{-1/2}} \to \frac{|D_\epsilon|}{\sqrt{2\pi}}\, W_\infty > 0 \quad \text{a.s. as } t \to \infty.$$
Thus, for arbitrarily small $\epsilon > 0$, $N_{D_\epsilon}(t) \ne 0$ for $t$ large enough, and hence
$$\limsup_{t\to\infty} m^R_t \le \epsilon \quad \text{a.s.}$$
Since $\epsilon$ is arbitrary and $m^R_t \ge 0$, (9) follows. $\square$

References

[1] B. Abdous, A.L. Fougères, and K. Ghoudi. Extreme behaviour for bivariate elliptical distributions. Canadian Journal of Statistics, 33(3):317–334, 2005.

[2] Y. Aragon, A. Daouia, and C. Thomas-Agnan. Nonparametric frontier estimation: A conditional quantile-based approach. Econometric Theory, 21(2):358–389, 2005.

[3] T.G. Bali and D. Weinbaum. A conditional extreme value volatility estimator based on high-frequency returns. Journal of Economic Dynamics and Control, 31(2):361–397, 2007.

[4] J. Beirlant, T. De Wet, and Y. Goegebeur. Nonparametric estimation of extreme conditional quantiles. Journal of Statistical Computation and Simulation, 74(8):567–580, 2004.

[5] J. Beirlant and Y. Goegebeur. Local polynomial maximum likelihood estimation for Pareto-type distributions. Journal of Multivariate Analysis, 89(1):97–118, 2004.

[6] V.E. Berezkin, G.K. Kamenev, and A.V. Lotov. Hybrid adaptive methods for approximating a nonconvex multidimensional Pareto frontier. Computational Mathematics and Mathematical Physics, 46(11):1918–1931, 2006.

[7] H. Bi, G. Wan, and N.D. Turvey. Estimating the self-thinning boundary line as a density-dependent stochastic biomass frontier. Ecology, 81(6):1477–1483, 2000.

[8] G. Bouchard, S. Girard, A. Iouditski, and A. Nazin. Some linear programming methods for frontier estimation. Applied Stochastic Models in Business and Industry, 21(2):175–185, 2005.

[9] G. Bouchard, S. Girard, A. Iouditski, and A. Nazin. Nonparametric frontier estimation by linear programming. Automation and Remote Control, 65(1):58–64, 2004.

[10] H.N.E. Byström. Managing extreme risks in tranquil and volatile markets using conditional extreme value theory. International Review of Financial Analysis, 13(2):133–152, 2004.

[11] H.N.E. Byström. Extreme value theory and extremely large electricity price changes. International Review of Economics & Finance, 14(1):41–55, 2005.

[12] C. Cazals, J.P. Florens, and L. Simar. Nonparametric frontier estimation: a robust approach. Journal of Econometrics, 106(1):1–25, 2002.

[13] V. Chavez-Demoulin and A.C. Davison. Generalized additive modelling of sample extremes. Journal of the Royal Statistical Society: Series C (Applied Statistics), 54(1):207–222, 2005.

[14] A. Daouia. Asymptotic representation theory for nonstandard conditional quantiles. Journal of Nonparametric Statistics, 17(2):253–268, 2005.

[15] A. Daouia, J.P. Florens, and L. Simar. Functional convergence of quantile-type frontiers with application to parametric approximations. Journal of Statistical Planning and Inference, 138(3):708–725, 2008.

[16] A. Daouia, J.P. Florens, and L. Simar. Frontier estimation and extreme value theory. Bernoulli, 16(4):1039–1063, 2010.


[18] A. Daouia, L. Gardes, and S. Girard. Nadaraya's estimates for large quantiles and free disposal support curves. Exploring Research Frontiers in Contemporary Statistics and Econometrics, pages 1–22, 2012.

[19] A. Daouia, L. Gardes, and S. Girard. On kernel smoothing for extremal quantile regression. Bernoulli, 19:2557–2589, 2013.

[20] A. Daouia, L. Gardes, S. Girard, and A. Lekina. Kernel estimators of extreme level curves. Test, 20(2):311–333, 2011.

[21] A. Daouia and I. Gijbels. Robustness and inference in nonparametric partial frontier modeling. Journal of Econometrics, 161(2):147–165, 2011.

[22] A. Daouia and I. Gijbels. Estimating frontier cost models using extremiles. Exploring Research Frontiers in Contemporary Statistics and Econometrics, pages 65–81, 2012.

[23] A. Daouia, S. Girard, and A. Guillou. A γ-moment approach to monotonic boundaries estimation: with applications in econometric and nuclear fields. Journal of Econometrics, 178:727–740, 2014.

[24] A. Daouia and A. Ruiz-Gazen. Robust nonparametric frontier estimators: qualitative robustness and influence function. Statistica Sinica, 16(4):1233, 2006.

[25] A. Daouia and L. Simar. Nonparametric efficiency analysis: A multivariate conditional quantile approach. Journal of Econometrics, 140(2):375–400, 2007.

[26] B. Das and S.I. Resnick. Conditioning on an extreme component: Model consistency with regular variation on cones. Bernoulli, 17(1):226–252, 2011.

[27] B. Das and S.I. Resnick. Detecting a conditional extreme value model. Extremes, 14(1):29–61, 2011.

[28] A.C. Davison and N.I. Ramesh. Local likelihood smoothing of sample extremes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(1):191–208, 2000.

[29] A.L.M. Dekkers and L. de Haan. On the estimation of the extreme-value index and large quantile estimation. Ann. Statist., 17:1795–1832, 1989.

[30] A. Delaigle and I. Gijbels. Estimation of boundary and discontinuity points in deconvolution problems. Statistica Sinica, 16(3):773, 2006.

[31] G. Dierckx, Y. Goegebeur, and A. Guillou. Local robust and asymptotically unbiased estimation of conditional Pareto-type tails. Test, 23(2):330–355, 2014.

[32] J.P. Florens and L. Simar. Parametric approximations of nonparametric frontiers. Journal of Econometrics, 124(1):91–116, 2005.

[33] L. Gardes and S. Girard. A moving window approach for nonparametric estimation of the conditional tail index. Journal of Multivariate Analysis, 99(10):2368–2388, 2008.

[34] L. Gardes and S. Girard. Conditional extremes from heavy-tailed distributions: An application to the estimation of extreme rainfall return levels. Extremes, 13(2):177–204, 2010.

[35] L. Gardes and S. Girard. Functional kernel estimators of large conditional quantiles. Electronic Journal of Statistics, 6:1715–1744, 2012.


[37] J. Geffroy, S. Girard, and P. Jacob. Asymptotic normality of the L1-error of a boundary estimator. Journal of Nonparametric Statistics, 18(1):21–31, 2006.

[38] A. Ghorbel and A. Trabelsi. Predictive performance of conditional extreme value theory in value-at-risk estimation. International Journal of Monetary Economics and Finance, 1(2):121–148, 2008.

[39] I. Gijbels, E. Mammen, B.U. Park, and L. Simar. On estimation of monotone and concave frontier functions. Journal of the American Statistical Association, 94(445):220–228, 1999.

[40] S. Girard. On the asymptotic normality of the L1-error for Haar series estimates of Poisson point processes boundaries. Statistics & Probability Letters, 66(1):81–90, 2004.

[41] S. Girard, A. Guillou, and G. Stupfler. Frontier estimation with kernel regression on high order moments. Journal of Multivariate Analysis, 116:172–189, 2013.

[42] S. Girard, A. Guillou, and G. Stupfler. Uniform strong consistency of a frontier estimator using kernel regression on high order moments. ESAIM: Probability and Statistics, 18:642–666, 2014.

[43] S. Girard, A. Iouditski, and A.V. Nazin. L1-optimal nonparametric frontier estimation via linear programming. Automation and Remote Control, 66(12):2000–2018, 2005.

[44] S. Girard and P. Jacob. Extreme values and Haar series estimates of point process boundaries. Scandinavian Journal of Statistics, 30(2):369–384, 2003.

[45] S. Girard and P. Jacob. Projection estimates of point processes boundaries. Journal of Statistical Planning and Inference, 116(1):1–15, 2003.

[46] S. Girard and P. Jacob. Extreme values and kernel estimates of point processes boundaries. ESAIM: Probability and Statistics, 8(1):150–168, 2004.

[47] S. Girard and P. Jacob. Frontier estimation via kernel regression on high power-transformed data. Journal of Multivariate Analysis, 99(3):403–420, 2008.

[48] S. Girard and P. Jacob. A note on extreme values and kernel estimators of sample bound-aries. Statistics & Probability Letters, 78(12):1634–1638, 2008.

[49] S. Girard and P. Jacob. Frontier estimation with local polynomials and high power-transformed data. Journal of Multivariate Analysis, 100(8):1691–1705, 2009.

[50] S. Girard and L. Menneteau. Central limit theorems for smoothed extreme value estimates of Poisson point processes boundaries. Journal of Statistical Planning and Inference, 135(2):433–460, 2005.

[51] S. Girard and L. Menneteau. Smoothed extreme value estimators of non-uniform point processes boundaries with application to star-shaped supports estimation. Communications in Statistics—Theory and Methods, 37(6):881–897, 2008.

[52] Y. Goegebeur, A. Guillou, and A. Schorgen. Nonparametric regression estimation of conditional tails: the random covariate case. Statistics, 23:1–24, 2013.

[53] W.H. Greene. Maximum likelihood estimation of econometric frontier functions. Journal of Econometrics, 13(1):27–56, 1980.


[55] P. Hall and L. Simar. Estimating a changepoint, boundary, or frontier in the presence of observation error. Journal of the American Statistical Association, 97(458):523–534, 2002.

[56] P. Hall and N. Tajvidi. Nonparametric analysis of temporal trend when fitting parametric models to extreme-value data. Statistical Science, pages 153–167, 2000.

[57] P. Jacob and C. Suquet. Estimating the edge of a Poisson process by orthogonal series. Journal of Statistical Planning and Inference, 46(2):215–234, 1995.

[58] P. Jacob and C. Suquet. Regression and edge estimation. Statistics & Probability Letters, 27(1):11–15, 1996.

[59] S.O. Jeong and L. Simar. Linearly interpolated FDH efficiency score for nonconvex frontiers. Journal of Multivariate Analysis, 97(10):2141–2161, 2006.

[60] A. Kneip, L. Simar, and P.W. Wilson. Asymptotics for DEA estimators in nonparametric frontier models. Technical report, Discussion paper, 2003.

[61] R. Koenker and K. Hallock. Quantile regression: An introduction. Journal of Economic Perspectives, 15(4):43–56, 2001.

[62] S.C. Kumbhakar and C.A.K. Lovell. Stochastic frontier analysis. Cambridge University Press, 2003.

[63] V. Marimoutou, B. Raggad, and A. Trabelsi. Extreme value theory and value at risk: application to oil market. Energy Economics, 31(4):519–530, 2009.

[64] C. Martins-Filho and F. Yao. A smooth nonparametric conditional quantile frontier estimator. Journal of Econometrics, 143(2):317–333, 2008.

[65] J. El Methni, L. Gardes, and S. Girard. Nonparametric estimation of extreme risks from conditional heavy-tailed distributions. Scandinavian Journal of Statistics, 2014. To appear.

[66] A. Novikov. Linear aggregation methods applied to boundary estimators. http://hal.archives-ouvertes.fr/hal-01074697, 2014.

[67] A. Novikov. Nonlinear aggregation methods for boundary estimators. http://hal.archives-ouvertes.fr/hal-01075368, 2014.

[68] B.U. Park and L. Simar. Efficient semiparametric estimation in a stochastic frontier model. Journal of the American Statistical Association, 89(427):929–936, 1994.

[69] L. Peng. Bias-corrected estimators for monotone and concave frontier functions. Journal of Statistical Planning and Inference, 119(2):263–275, 2004.

[70] S. Rao. Frontier estimation as a particular case of conditional extreme value analysis. http://hal.archives-ouvertes.fr/hal-00776107, 2013.

[71] S. Rao. Linear aggregation of conditional extreme-value index estimators. http://hal.archives-ouvertes.fr/hal-00771647, 2013.

[72] S. Rao. Linear aggregation of frontier estimators. http://hal.archives-ouvertes.fr/hal-00798394, 2013.

[73] S. Rao. Nonlinear aggregation of conditional extreme-value index estimators. http://hal.archives-ouvertes.fr/hal-00776108, 2013.


[75] S. Rao. A review on conditional extreme value analysis. http://hal.archives-ouvertes.fr/hal-00770546, 2013.

[76] O. Rosen and A. Cohen. Extreme percentile regression. In Statistical Theory and Computational Aspects of Smoothing: Proceedings of the COMPSTAT'94 satellite meeting held in Semmering, Austria, pages 27–28, 1994.

[77] P. Schmidt and T.F. Lin. Simple tests of alternative specifications in stochastic frontier models. Journal of Econometrics, 24(3):349–361, 1984.

[78] J.K. Sengupta. Stochastic data envelopment analysis: a new approach. Applied Economics Letters, 5(5):287–290, 1998.

[79] L. Simar. Aspects of statistical analysis in DEA-type frontier models. Journal of Productivity Analysis, 7(2):177–185, 1996.

[80] L. Simar. Detecting outliers in frontier models: A simple approach. Journal of Productivity Analysis, 20(3):391–424, 2003.

[81] L. Simar and P.W. Wilson. Sensitivity analysis of efficiency scores: How to bootstrap in nonparametric frontier models. Management Science, 44(1):49–61, 1998.

[82] L. Simar and P.W. Wilson. Of course we can bootstrap DEA scores! But does it mean anything? Logic trumps wishful thinking. Journal of Productivity Analysis, 11(1):93–97, 1999.

[83] L. Simar and P.W. Wilson. A general methodology for bootstrapping in non-parametric frontier models. Journal of Applied Statistics, 27(6):779–802, 2000.

[84] L. Simar and P.W. Wilson. Statistical inference in nonparametric frontier models: The state of the art. Journal of Productivity Analysis, 13(1):49–78, 2000.

[85] L. Simar and P.W. Wilson. Testing restrictions in nonparametric efficiency models. Communications in Statistics-Simulation and Computation, 30(1):159–184, 2001.

[86] L. Simar and P.W. Wilson. Statistical inference in nonparametric frontier models: recent developments and perspectives. The measurement of productive efficiency and productivity growth, pages 421–521, 2008.

[87] L. Simar and P.W. Wilson. Inferences from cross-sectional, stochastic frontier models. Econometric Reviews, 29(1):62–98, 2009.

[88] L. Simar and P.W. Wilson. Performance of the bootstrap for dea estimators and iterating the principle. Handbook on data envelopment analysis, pages 241–271, 2011.

[89] L. Simar and V. Zelenyuk. Stochastic FDH/DEA estimators for frontier analysis. Journal of Productivity Analysis, 36(1):1–20, 2011.

[90] C. Wang, H. Zhuang, Z. Fang, and T. Lu. A model of conditional VaR of high frequency extreme value based on generalized extreme value distribution. Journal of Systems & Management, 3:003, 2008.

[91] H. Wang and C.L. Tsai. Tail index regression. Journal of the American Statistical Association, 104(487):1233–1240, 2009.
