
HAL Id: hal-00853587

https://hal.archives-ouvertes.fr/hal-00853587

Preprint submitted on 22 Aug 2013

On the adaptive estimation of a multiplicative separable regression function

Christophe Chesneau

To cite this version:

Christophe Chesneau. On the adaptive estimation of a multiplicative separable regression function.

2013. ⟨hal-00853587⟩


On the adaptive estimation of a multiplicative separable regression function

Christophe Chesneau


Abstract We investigate the estimation of a multiplicative separable regression function from a bi-dimensional nonparametric regression model with random design. We present a general estimator for this problem and study its mean integrated squared error (MISE) properties. A wavelet version of this estimator is developed. In some situations, we prove that it attains the standard unidimensional rate of convergence under the MISE over Besov balls.

Keywords Nonparametric regression · Multiplicative separable regression function · Wavelet methods.

2000 Mathematics Subject Classification 62G08, 62G20.

1 Motivations

We consider the bi-dimensional nonparametric regression model with random design described as follows. Let (Y_i, U_i, V_i)_{i∈Z} be a stochastic process defined on a probability space (Ω, A, P), where
$$Y_i = h(U_i, V_i) + \xi_i, \qquad i \in \mathbb{Z}, \qquad (1)$$
(ξ_i)_{i∈Z} is a strictly stationary stochastic process, (U_i, V_i)_{i∈Z} is a strictly stationary stochastic process with support in [0,1]^2 and h : [0,1]^2 → R is an unknown bivariate regression function. It is assumed that E(ξ_1) = 0, E(ξ_1^2) exists, (U_i, V_i)_{i∈Z} are independent, (ξ_i)_{i∈Z} are independent and, for any i ∈ Z, (U_i, V_i) and ξ_i are independent. In this study, we focus our attention on the case where h is a multiplicative separable regression function: there exist two functions f : [0,1] → R and g : [0,1] → R such that
$$h(x, y) = f(x)\, g(y). \qquad (2)$$

Laboratoire de Mathématiques Nicolas Oresme, Université de Caen Basse-Normandie, Campus II, Science 3, 14032 Caen, France. E-mail: chesneau@math.unicaen.fr


We aim to estimate h from the n random variables (Y_1, U_1, V_1), . . . , (Y_n, U_n, V_n).
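To fix ideas, here is a minimal simulation sketch of model (1) with the separable form (2); the particular f, g, the uniform design on [0,1]^2 and the Gaussian noise law are illustrative assumptions, not specifications from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Illustrative choices (not from the paper): f, g bounded on [0,1],
# uniform design on [0,1]^2 (so q = 1), i.i.d. Gaussian noise with E(xi) = 0.
f = lambda x: 1.0 + 0.5 * np.cos(2 * np.pi * x)
g = lambda y: 1.0 + y ** 2

U = rng.uniform(0.0, 1.0, n)
V = rng.uniform(0.0, 1.0, n)
xi = rng.normal(0.0, 0.1, n)

Y = f(U) * g(V) + xi          # model (1) with h(x, y) = f(x) g(y)
```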

This problem arises in many practical situations, as in utility, production, and cost function applications. See, e.g., Linton and Nielsen (1995), Yatchew and Bos (1997), Pinske (2000), Lewbel and Linton (2007) and Jacho-Chávez et al. (2010).

In this paper, we provide a theoretical contribution to the subject by introducing a new general estimation method for h. A sharp upper bound for its associated mean integrated squared error (MISE) is proved. Then we adapt our methodology to propose an efficient and adaptive wavelet procedure. It is based on two wavelet thresholding estimators that are adaptive for a wide class of unknown functions and enjoy nice MISE properties.

Further details on such wavelet estimators can be found in, e.g., Antoniadis (1997), Vidakovic (1999) and Härdle et al. (1998). Despite the so-called "curse of dimensionality" arising from the bi-dimensionality of (1), we prove that our wavelet estimator attains the standard unidimensional rate of convergence under the MISE over Besov balls (for both the homogeneous and inhomogeneous zones). It completes asymptotic results proved by Linton and Nielsen (1995) via non-adaptive kernel methods for the structured nonparametric regression model.

The paper is organized as follows. Assumptions on (1) and some notations are introduced in Section 2. Section 3 presents our general MISE result. Section 4 is devoted to our wavelet estimator and its performance in terms of rate of convergence under the MISE over Besov balls. Technical proofs are collected in Section 5.

2 Assumptions and notations

For any p ≥ 1, we set
$$L_p([0,1]) = \left\{ v : [0,1] \to \mathbb{R} ;\ \|v\|_p = \left( \int_0^1 |v(x)|^p \, dx \right)^{1/p} < \infty \right\}.$$
We set
$$e_o = \int_0^1 f(x)\, dx, \qquad e = \int_0^1 g(x)\, dx$$
(provided that they exist).

We formulate the following assumptions.

(H1) There exists a known constant C_1 > 0 such that
$$\sup_{x \in [0,1]} |f(x)| \le C_1.$$
(H2) There exists a known constant C_2 > 0 such that
$$\sup_{x \in [0,1]} |g(x)| \le C_2.$$
(H3) The density of (U_1, V_1), denoted by q, is known and there exist two constants c_3 > 0 and C_3 > 0 such that
$$c_3 \le \inf_{(x,y) \in [0,1]^2} q(x, y), \qquad \sup_{(x,y) \in [0,1]^2} q(x, y) \le C_3.$$
(H4) There exists a known constant ω > 0 such that
$$|e_o e| \ge \omega.$$

The assumptions (H1) and (H2), involving the boundedness of h, are standard in nonparametric regression models. The knowledge of q discussed in (H3) is restrictive but plausible in some situations, the most common case being (U_1, V_1) ∼ U([0,1]^2) (the uniform distribution on [0,1]^2). Finally, mention that (H4) is just a technical assumption, more realistic than assuming the knowledge of e_o and e (which depend on f and g respectively).

3 MISE result

Theorem 1 presents an estimator for h and shows an upper bound for its MISE.

Theorem 1 We consider (1) under (H1)-(H4). We introduce the following estimator for h (2):
$$\hat{h}(x, y) = \frac{\tilde{f}(x)\,\tilde{g}(y)}{\tilde{e}}\, \mathbf{1}_{\{|\tilde{e}| \ge \omega/2\}}, \qquad (3)$$
where f̃ denotes an arbitrary estimator for f e in L_2([0,1]), g̃ denotes an arbitrary estimator for g e_o in L_2([0,1]), 1 denotes the indicator function,
$$\tilde{e} = \frac{1}{n} \sum_{i=1}^{n} \frac{Y_i}{q(U_i, V_i)}$$
and ω refers to (H4).

Then there exists a constant C > 0 such that
$$E\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2\, dx\, dy \right) \le C \left( E(\|\tilde{g} - g e_o\|_2^2) + E(\|\tilde{f} - f e\|_2^2) + E(\|\tilde{g} - g e_o\|_2^2\, \|\tilde{f} - f e\|_2^2) + \frac{1}{n} \right).$$

The form of ĥ (3) is derived from the multiplicative separable structure of h (2) and a ratio-type normalization. Other results about such ratio-type estimators in a general statistical context can be found in Vasiliev (2012).

Based on Theorem 1, ĥ is efficient for h if and only if f̃ is efficient for f e and g̃ is efficient for g e_o in terms of MISE. This result motivates the investigation of wavelet methods enjoying adaptivity for a wide class of unknown functions and having optimal properties under the MISE. For details on the interests of wavelet methods in nonparametric statistics, we refer to Antoniadis (1997), Vidakovic (1999) and Härdle et al. (1998).
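As a quick numerical illustration of the normalizer ẽ in (3), whose expectation equals e e_o (this is computation (11) in the proofs), here is a self-contained Monte Carlo sketch; the choices of f, g and the uniform design (so q ≡ 1) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
f = lambda x: 1.0 + 0.5 * np.cos(2 * np.pi * x)   # illustrative f and g,
g = lambda y: 1.0 + y ** 2                        # as in the simulation sketch above
U, V = rng.uniform(size=n), rng.uniform(size=n)
Y = f(U) * g(V) + rng.normal(0.0, 0.1, n)

q = lambda u, v: np.ones_like(u)                  # uniform design density on [0,1]^2
e_tilde = np.mean(Y / q(U, V))                    # the ratio normalizer of (3)

# For these choices e_o = int f = 1 and e = int g = 4/3, so e_tilde should be close to 4/3.
print(e_tilde)
```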


4 Adaptive wavelet estimation

Before introducing our wavelet estimators, let us present some basics on wavelets.

4.1 Wavelet basis on [0,1]

Let us briefly recall the construction of wavelet bases on the interval [0,1] introduced by Cohen et al. (1993). Let N be a positive integer, and let φ and ψ be the initial wavelets of the Daubechies orthogonal wavelets db2N. We set
$$\phi_{j,k}(x) = 2^{j/2} \phi(2^j x - k), \qquad \psi_{j,k}(x) = 2^{j/2} \psi(2^j x - k).$$
With appropriate treatments at the boundaries, there exists an integer τ satisfying 2^τ ≥ 2N such that the collection
$$S = \{\phi_{\tau,k}(\cdot),\ k \in \{0, \ldots, 2^{\tau}-1\};\ \psi_{j,k}(\cdot),\ j \in \mathbb{N} - \{0, \ldots, \tau-1\},\ k \in \{0, \ldots, 2^{j}-1\}\}$$
is an orthonormal basis of L_2([0,1]).

For any integer ℓ ≥ τ, any v ∈ L_2([0,1]) can be expanded on S as
$$v(x) = \sum_{k=0}^{2^{\ell}-1} \alpha_{\ell,k}\, \phi_{\ell,k}(x) + \sum_{j=\ell}^{\infty} \sum_{k=0}^{2^{j}-1} \beta_{j,k}\, \psi_{j,k}(x), \qquad x \in [0,1],$$
where α_{j,k} and β_{j,k} are the wavelet coefficients of v defined by
$$\alpha_{j,k} = \int_0^1 v(x)\, \phi_{j,k}(x)\, dx, \qquad \beta_{j,k} = \int_0^1 v(x)\, \psi_{j,k}(x)\, dx. \qquad (4)$$
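Numerically, this expansion is what a discrete wavelet transform computes. The sketch below uses PyWavelets, where periodized Daubechies wavelets ('db4', i.e. db2N with N = 2) stand in for the boundary-corrected basis of Cohen et al. (1993); the discrete coefficients play the role of the α_{j,k} and β_{j,k} in (4), up to the discretization of the integrals.

```python
import numpy as np
import pywt

# Sample a function v in L2([0,1]) on a dyadic grid.
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
v = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)

# Multiscale decomposition: coeffs = [approximation (alpha-type),
# detail level 5 (beta-type), ..., detail level 1].
coeffs = pywt.wavedec(v, 'db4', mode='periodization', level=5)

# Perfect reconstruction from the coefficients (up to floating point error).
v_rec = pywt.waverec(coeffs, 'db4', mode='periodization')
print(np.max(np.abs(v - v_rec)))
```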

4.2 Besov balls

For the sake of simplicity, we consider the sequential version of Besov balls defined as follows. Let M > 0, s > 0, p ≥ 1 and r ≥ 1. A function v belongs to B^{s}_{p,r}(M) if and only if there exists a constant M* > 0 (depending on M) such that the associated wavelet coefficients (4) satisfy
$$2^{\tau(1/2 - 1/p)} \left( \sum_{k=0}^{2^{\tau}-1} |\alpha_{\tau,k}|^p \right)^{1/p} + \left( \sum_{j=\tau}^{\infty} \left( 2^{j(s + 1/2 - 1/p)} \left( \sum_{k=0}^{2^{j}-1} |\beta_{j,k}|^p \right)^{1/p} \right)^{r} \right)^{1/r} \le M^{*}.$$

In this expression, s is a smoothness parameter and p and r are norm parameters. For particular choices of s, p and r, B^{s}_{p,r}(M) contains the Hölder and Sobolev balls. See, e.g., DeVore and Popov (1988), Meyer (1992) and Härdle et al. (1998).


4.3 Hard thresholding estimators

In the sequel, we consider (1) under (H1)-(H4).

We focus our attention on wavelet hard thresholding estimators for f̃ and g̃ in (3). They are based on a term-by-term selection of estimators of the wavelet coefficients of the unknown function: those greater (in absolute value) than a threshold are kept, the others are removed. This selection is the key to the adaptivity and the good performances of hard wavelet estimators. See, e.g., Donoho et al. (1996), Delyon and Juditsky (1996) and Härdle et al. (1998).

Estimator f̃ for f e. We define the hard thresholding estimator f̃ by
$$\tilde{f}(x) = \sum_{k=0}^{2^{\tau}-1} \hat{\alpha}_{\tau,k}\, \phi_{\tau,k}(x) + \sum_{j=\tau}^{j_1} \sum_{k=0}^{2^{j}-1} \hat{\beta}_{j,k}\, \mathbf{1}_{\{|\hat{\beta}_{j,k}| \ge \kappa C \lambda_n\}}\, \psi_{j,k}(x), \qquad (5)$$
where
$$\hat{\alpha}_{\tau,k} = \frac{1}{a_n} \sum_{i=1}^{a_n} \frac{Y_i}{q(U_i, V_i)}\, \phi_{\tau,k}(U_i), \qquad a_n \text{ is the integer part of } n/2,$$
$$\hat{\beta}_{j,k} = \frac{1}{a_n} \sum_{i=1}^{a_n} W_{i,j,k}\, \mathbf{1}_{\{|W_{i,j,k}| \le C \sqrt{a_n / \ln a_n}\}}, \qquad W_{i,j,k} = \frac{Y_i}{q(U_i, V_i)}\, \psi_{j,k}(U_i),$$
j_1 is the integer satisfying
$$\frac{1}{2}\, \frac{a_n}{\ln a_n} < 2^{j_1} \le \frac{a_n}{\ln a_n},$$
κ is a large enough constant, C = \sqrt{2 (C_3/c_3^2)(C_1^2 C_2^2 + E(\xi_1^2))} and
$$\lambda_n = \sqrt{\frac{\ln a_n}{a_n}}.$$
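A minimal sketch of the hard thresholding construction (5), under the following assumptions: phi(j, k, t) and psi(j, k, t) are user-supplied callables evaluating the boundary-corrected basis functions φ_{j,k} and ψ_{j,k} (the paper does not provide them), the truncation level C·sqrt(m/ln m) follows the reconstruction above, and kappa = 4.0 is an arbitrary stand-in for the "large enough" constant κ.

```python
import numpy as np

def wavelet_hard_estimator(x, R, T, phi, psi, tau, C, kappa=4.0):
    """Hard thresholding estimator in the spirit of (5)/(6).

    R[i] = Y_i / q(U_i, V_i) and T[i] is the design point at which the basis is
    evaluated (the U_i's for f_tilde, the V_i's for g_tilde).  phi(j, k, t) and
    psi(j, k, t) are assumed callables; kappa is an illustrative value.
    """
    m = len(R)                                   # a_n (or b_n) observations
    lam = np.sqrt(np.log(m) / m)                 # lambda_n (or eta_n)
    trunc = C * np.sqrt(m / np.log(m))           # truncation level for W_{i,j,k}
    j1 = int(np.floor(np.log2(m / np.log(m))))   # (1/2) m/ln m < 2^{j1} <= m/ln m

    est = np.zeros_like(np.asarray(x, dtype=float))
    for k in range(2 ** tau):                    # approximation part
        alpha = np.mean(R * phi(tau, k, T))
        est = est + alpha * phi(tau, k, x)
    for j in range(tau, j1 + 1):                 # thresholded detail part
        for k in range(2 ** j):
            W = R * psi(j, k, T)
            beta = np.mean(W * (np.abs(W) <= trunc))
            if abs(beta) >= kappa * C * lam:     # keep only large coefficients
                est = est + beta * psi(j, k, x)
    return est
```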

Estimator g̃ for g e_o. We define the hard thresholding estimator g̃ by
$$\tilde{g}(x) = \sum_{k=0}^{2^{\tau}-1} \hat{\upsilon}_{\tau,k}\, \phi_{\tau,k}(x) + \sum_{j=\tau}^{j_2} \sum_{k=0}^{2^{j}-1} \hat{\theta}_{j,k}\, \mathbf{1}_{\{|\hat{\theta}_{j,k}| \ge \kappa C \eta_n\}}\, \psi_{j,k}(x), \qquad (6)$$
where
$$\hat{\upsilon}_{\tau,k} = \frac{1}{b_n} \sum_{i=1}^{b_n} \frac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})}\, \phi_{\tau,k}(V_{a_n+i}), \qquad a_n \text{ is the integer part of } n/2,\ b_n = n - a_n,$$
$$\hat{\theta}_{j,k} = \frac{1}{b_n} \sum_{i=1}^{b_n} Z_{a_n+i,j,k}\, \mathbf{1}_{\{|Z_{a_n+i,j,k}| \le C \sqrt{b_n/\ln b_n}\}}, \qquad Z_{a_n+i,j,k} = \frac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})}\, \psi_{j,k}(V_{a_n+i}),$$
j_2 is the integer satisfying
$$\frac{1}{2}\, \frac{b_n}{\ln b_n} < 2^{j_2} \le \frac{b_n}{\ln b_n},$$
κ is a large enough constant, C = \sqrt{2 (C_3/c_3^2)(C_1^2 C_2^2 + E(\xi_1^2))} and
$$\eta_n = \sqrt{\frac{\ln b_n}{b_n}}.$$

Estimator for h. From f̃ (5) and g̃ (6), we consider the following estimator for h (2):
$$\hat{h}(x, y) = \frac{\tilde{f}(x)\,\tilde{g}(y)}{\tilde{e}}\, \mathbf{1}_{\{|\tilde{e}| \ge \omega/2\}}, \qquad (7)$$
where
$$\tilde{e} = \frac{1}{n} \sum_{i=1}^{n} \frac{Y_i}{q(U_i, V_i)}$$
and ω refers to (H4).

Let us mention that ĥ is adaptive in the sense that it does not depend on f or g in its construction.

Remark 1 Since f̃ is defined with (Y_1, U_1, V_1), . . . , (Y_{a_n}, U_{a_n}, V_{a_n}) and g̃ is defined with (Y_{a_n+1}, U_{a_n+1}, V_{a_n+1}), . . . , (Y_n, U_n, V_n), thanks to the independence of (Y_1, U_1, V_1), . . . , (Y_n, U_n, V_n), f̃ and g̃ are independent.

Remark 2 The calibration of the parameters in f̃ and g̃ is based on theoretical considerations; thus defined, f̃ and g̃ can attain a fast rate of convergence under the MISE over Besov balls. See (Chaubey et al. 2013, Theorem 6.1). Further details are given in the proof of Theorem 2.
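For completeness, a sketch of how the final estimator (7) could be assembled, with the sample split of Remark 1; it reuses the hypothetical wavelet_hard_estimator helper from the sketch given after (5), and all names (phi, psi, tau, C, omega, q) are assumptions for illustration, not code from the paper.

```python
import numpy as np

def fit_h_hat(Y, U, V, q, phi, psi, tau, C, omega):
    """Assemble the adaptive estimator (7): split the sample as in Remark 1,
    build f_tilde (5) on the first half (design points U_i) and g_tilde (6)
    on the second half (design points V_i), then normalize by e_tilde."""
    a_n = len(Y) // 2                     # integer part of n/2
    R = Y / q(U, V)                       # Y_i / q(U_i, V_i)

    f_t = lambda x: wavelet_hard_estimator(x, R[:a_n], U[:a_n], phi, psi, tau, C)
    g_t = lambda y: wavelet_hard_estimator(y, R[a_n:], V[a_n:], phi, psi, tau, C)
    e = np.mean(R)                        # e_tilde over the whole sample

    def h_hat(x, y):
        fg = f_t(x) * g_t(y)
        # indicator of (7): the estimator vanishes when |e_tilde| < omega/2
        return fg / e if abs(e) >= omega / 2 else 0.0 * fg

    return h_hat
```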

4.4 Rate of convergence

Theorem 2 investigates the rate of convergence attained by ĥ under the MISE over Besov balls.

Theorem 2 We consider (1) under (H1)-(H4). Let ĥ be (7) and h be (2). Suppose that

– f ∈ B^{s_1}_{p_1,r_1}(M_1) with M_1 > 0, r_1 ≥ 1, either {p_1 ≥ 2 and s_1 > 0} or {p_1 ∈ [1,2) and s_1 > 1/p_1},
– g ∈ B^{s_2}_{p_2,r_2}(M_2) with M_2 > 0, r_2 ≥ 1, either {p_2 ≥ 2 and s_2 > 0} or {p_2 ∈ [1,2) and s_2 > 1/p_2}.

Then there exists a constant C > 0 such that
$$E\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2\, dx\, dy \right) \le C \left( \frac{\ln n}{n} \right)^{2s/(2s+1)},$$
where s = min(s_1, s_2).

The rate of convergence (ln n/n)^{2s/(2s+1)} is the near optimal one in the minimax sense for the unidimensional regression model with random design under the MISE over Besov balls B^{s}_{p,r}(M). See, e.g., Tsybakov (2004) and Härdle et al. (1998). In this sense, Theorem 2 proves that our estimator escapes the so-called "curse of dimensionality". Such a result is not possible with the standard bi-dimensional hard thresholding estimator, which attains the rate of convergence (ln n/n)^{2s/(2s+d)} with d = 2 under the MISE over bi-dimensional Besov balls defined with s as smoothness parameter. See Delyon and Juditsky (1996).

Theorem 2 completes asymptotic results proved by Linton and Nielsen (1995), who investigated this problem for the structured nonparametric regression model via another estimation method based on non-adaptive kernels.

Remark 3 In Theorem 2, we take into account both the homogeneous zone of Besov balls, i.e., {p_1 ≥ 2 and s_1 > 0}, and the inhomogeneous zone, i.e., {p_1 ∈ [1,2) and s_1 > 1/p_1}, for the case f ∈ B^{s_1}_{p_1,r_1}(M_1), and the same for g ∈ B^{s_2}_{p_2,r_2}(M_2). This has the advantage of covering a very rich class of unknown regression functions h.

Remark 4 Note that Theorem 2 does not require the knowledge of the distribution of ξ_1; {E(ξ_1) = 0 and the existence of E(ξ_1^2)} is enough.

Remark 5 Our study can be extended to the multidimensional case considered by Yatchew and Bos (1997), i.e., f : [0,1]^{q_1} → R and g : [0,1]^{q_2} → R, q_1 and q_2 denoting two positive integers. In this case, adapting our framework to the multidimensional setting (q_1-dimensional Besov balls, q_1-dimensional (tensorial) wavelet bases, q_1-dimensional wavelet hard thresholding estimators, . . . ; see, for instance, Delyon and Juditsky (1996)), one can prove that (3) attains the rate of convergence (ln n/n)^{2s/(2s+q)}, where s = min(s_1, s_2) and q = max(q_1, q_2).

5 Proofs

In this section, for the sake of simplicity, C denotes a generic constant; its value may change from one term to another.

Proof of Theorem 1. Observe that
$$\hat{h}(x, y) - h(x, y) = \frac{\tilde{f}(x)\tilde{g}(y)}{\tilde{e}}\, \mathbf{1}_{\{|\tilde{e}| \ge \omega/2\}} - f(x) g(y) = \frac{1}{\tilde{e}} \left( \tilde{f}(x)\tilde{g}(y) - f(x) g(y) \tilde{e} \right) \mathbf{1}_{\{|\tilde{e}| \ge \omega/2\}} - f(x) g(y)\, \mathbf{1}_{\{|\tilde{e}| < \omega/2\}}.$$

Therefore, using the triangular inequality, the Markov inequality, (H1), (H2), (H4), {|ẽ| < ω/2} ∩ {|e e_o| ≥ ω} ⊆ {|ẽ − e e_o| ≥ ω/2} and again the Markov inequality, we get
$$|\hat{h}(x, y) - h(x, y)| \le \frac{2}{\omega}\, |\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| + |f(x)||g(y)|\, \mathbf{1}_{\{|\tilde{e}| < \omega/2\}}$$
$$\le C \left( |\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| + \mathbf{1}_{\{|\tilde{e} - e e_o| \ge \omega/2\}} \right) \le C \left( |\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| + |\tilde{e} - e e_o| \right). \qquad (8)$$
On the other hand, we have the decomposition
$$\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e} = f(x)\, e\, (\tilde{g}(y) - g(y) e_o) + g(y)\, e_o\, (\tilde{f}(x) - f(x) e) + (\tilde{g}(y) - g(y) e_o)(\tilde{f}(x) - f(x) e) + f(x) g(y)(e e_o - \tilde{e}).$$
Owing to the triangular inequality, (H1) and (H2), we have
$$|\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| \le C \left( |\tilde{g}(y) - g(y)e_o| + |\tilde{f}(x) - f(x)e| + |\tilde{g}(y) - g(y)e_o|\,|\tilde{f}(x) - f(x)e| + |\tilde{e} - e e_o| \right). \qquad (9)$$
Putting (8) and (9) together, we obtain
$$|\hat{h}(x, y) - h(x, y)| \le C \left( |\tilde{g}(y) - g(y)e_o| + |\tilde{f}(x) - f(x)e| + |\tilde{g}(y) - g(y)e_o|\,|\tilde{f}(x) - f(x)e| + |\tilde{e} - e e_o| \right).$$

Therefore, by the elementary inequality (a+b+c+d)^2 ≤ 8(a^2+b^2+c^2+d^2), (a, b, c, d) ∈ R^4, an integration over [0,1]^2 and taking the expectation, we obtain
$$E\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2\, dx\, dy \right) \le C \left( E(\|\tilde{g} - g e_o\|_2^2) + E(\|\tilde{f} - f e\|_2^2) + E(\|\tilde{g} - g e_o\|_2^2\, \|\tilde{f} - f e\|_2^2) + E((\tilde{e} - e e_o)^2) \right). \qquad (10)$$

Now observe that, owing to the independence of (U_i, V_i)_{i∈Z}, the independence between (U_1, V_1) and ξ_1, and E(ξ_1) = 0, we obtain
$$E(\tilde{e}) = E\left( \frac{Y_1}{q(U_1, V_1)} \right) = E\left( \frac{h(U_1, V_1)}{q(U_1, V_1)} \right) + E(\xi_1)\, E\left( \frac{1}{q(U_1, V_1)} \right) = \int_0^1 \int_0^1 \frac{f(x) g(y)}{q(x, y)}\, q(x, y)\, dx\, dy = \int_0^1 f(x)\, dx \int_0^1 g(y)\, dy = e e_o. \qquad (11)$$

Then, using similar arguments to (11), (a+b)^2 ≤ 2(a^2+b^2), (a, b) ∈ R^2, (H1), (H2), (H3) and E(ξ_1^2) < ∞, we have
$$E((\tilde{e} - e e_o)^2) = V(\tilde{e}) = \frac{1}{n}\, V\left( \frac{Y_1}{q(U_1, V_1)} \right) \le \frac{1}{n}\, E\left( \left( \frac{Y_1}{q(U_1, V_1)} \right)^2 \right) \le \frac{2}{n}\, E\left( \frac{(h(U_1, V_1))^2 + \xi_1^2}{(q(U_1, V_1))^2} \right) \le \frac{2}{c_3^2} \left( C_1^2 C_2^2 + E(\xi_1^2) \right) \frac{1}{n} = C\, \frac{1}{n}. \qquad (12)$$

Equations (10) and (12) yield the desired inequality:
$$E\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2\, dx\, dy \right) \le C \left( E(\|\tilde{g} - g e_o\|_2^2) + E(\|\tilde{f} - f e\|_2^2) + E(\|\tilde{g} - g e_o\|_2^2\, \|\tilde{f} - f e\|_2^2) + \frac{1}{n} \right).$$

Proof of Theorem 2. We aim to apply Theorem 1 by investigating the rates of convergence attained by f̃ and g̃ under the MISE over Besov balls.

First of all, remark that, for γ ∈ {φ, ψ}, any integer j ≥ τ and any k ∈ {0, . . . , 2^j − 1},

– using similar arguments to (11), we obtain
$$E\left( \frac{1}{a_n} \sum_{i=1}^{a_n} \frac{Y_i}{q(U_i, V_i)}\, \gamma_{j,k}(U_i) \right) = E\left( \frac{Y_1}{q(U_1, V_1)}\, \gamma_{j,k}(U_1) \right) = E\left( \frac{h(U_1, V_1)}{q(U_1, V_1)}\, \gamma_{j,k}(U_1) \right) + E(\xi_1)\, E\left( \frac{\gamma_{j,k}(U_1)}{q(U_1, V_1)} \right)$$
$$= \int_0^1 \int_0^1 \frac{f(x) g(y)}{q(x, y)}\, \gamma_{j,k}(x)\, q(x, y)\, dx\, dy = \int_0^1 f(x)\, \gamma_{j,k}(x)\, dx \int_0^1 g(y)\, dy = \int_0^1 (f(x)\, e)\, \gamma_{j,k}(x)\, dx;$$

– using similar arguments to (12) and ||γ_{j,k}||_2^2 = 1, we have
$$\sum_{i=1}^{a_n} E\left( \left( \frac{Y_i}{q(U_i, V_i)}\, \gamma_{j,k}(U_i) \right)^2 \right) = E\left( \left( \frac{Y_1}{q(U_1, V_1)}\, \gamma_{j,k}(U_1) \right)^2 \right) a_n \le 2\, E\left( \frac{(h(U_1, V_1))^2 + \xi_1^2}{(q(U_1, V_1))^2}\, (\gamma_{j,k}(U_1))^2 \right) a_n$$
$$\le \frac{2}{c_3^2} \left( C_1^2 C_2^2 + E(\xi_1^2) \right) E\left( (\gamma_{j,k}(U_1))^2 \right) a_n = \frac{2}{c_3^2} \left( C_1^2 C_2^2 + E(\xi_1^2) \right) \int_0^1 (\gamma_{j,k}(x))^2 \left( \int_0^1 q(x, y)\, dy \right) dx\; a_n$$
$$\le \frac{2 C_3}{c_3^2} \left( C_1^2 C_2^2 + E(\xi_1^2) \right) \|\gamma_{j,k}\|_2^2\; a_n = C^2 a_n,$$
with C^2 = 2 (C_3/c_3^2)(C_1^2 C_2^2 + E(\xi_1^2)).

Applying (Chaubey et al. 2013, Theorem 6.1) (see Appendix) with "n = μ_n = υ_n = a_n", "δ = 0", "θ_γ = C", and f ∈ B^{s_1}_{p_1,r_1}(M_1) (so f e ∈ B^{s_1}_{p_1,r_1}(M_1 e)) with M_1 > 0, r_1 ≥ 1, either {p_1 ≥ 2 and s_1 > 0} or {p_1 ∈ [1,2) and s_1 > 1/p_1}, we prove the existence of a constant C > 0 such that
$$E(\|\tilde{f} - f e\|_2^2) \le C \left( \frac{\ln a_n}{a_n} \right)^{2 s_1/(2 s_1 + 1)} \le C \left( \frac{\ln n}{n} \right)^{2 s_1/(2 s_1 + 1)}, \qquad (13)$$
for n large enough.

The MISE of g̃ can be investigated in a similar way: for γ ∈ {φ, ψ}, any integer j ≥ τ and any k ∈ {0, . . . , 2^j − 1},

– we show that
$$E\left( \frac{1}{b_n} \sum_{i=1}^{b_n} \frac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})}\, \gamma_{j,k}(V_{a_n+i}) \right) = \int_0^1 (g(x)\, e_o)\, \gamma_{j,k}(x)\, dx;$$

– we show that
$$\sum_{i=1}^{b_n} E\left( \left( \frac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})}\, \gamma_{j,k}(V_{a_n+i}) \right)^2 \right) \le C^2 b_n,$$
with always C^2 = 2 (C_3/c_3^2)(C_1^2 C_2^2 + E(\xi_1^2)).

Applying again (Chaubey et al. 2013, Theorem 6.1) (see Appendix) with "n = μ_n = υ_n = b_n", "δ = 0", "θ_γ = C" and g ∈ B^{s_2}_{p_2,r_2}(M_2) with M_2 > 0, r_2 ≥ 1, either {p_2 ≥ 2 and s_2 > 0} or {p_2 ∈ [1,2) and s_2 > 1/p_2}, we prove the existence of a constant C > 0 such that
$$E(\|\tilde{g} - g e_o\|_2^2) \le C \left( \frac{\ln b_n}{b_n} \right)^{2 s_2/(2 s_2 + 1)} \le C \left( \frac{\ln n}{n} \right)^{2 s_2/(2 s_2 + 1)}, \qquad (14)$$
for n large enough.

Using the independence between f̃ and g̃ (see Remark 1), it follows from (13) and (14) that
$$E(\|\tilde{g} - g e_o\|_2^2\, \|\tilde{f} - f e\|_2^2) = E(\|\tilde{g} - g e_o\|_2^2)\, E(\|\tilde{f} - f e\|_2^2) \le C \left( \frac{\ln n}{n} \right)^{2 s_1/(2 s_1 + 1) + 2 s_2/(2 s_2 + 1)}. \qquad (15)$$

Owing to Theorem 1, (13), (14) and (15), we get
$$E\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2\, dx\, dy \right) \le C \left( E(\|\tilde{g} - g e_o\|_2^2) + E(\|\tilde{f} - f e\|_2^2) + E(\|\tilde{g} - g e_o\|_2^2\, \|\tilde{f} - f e\|_2^2) + \frac{1}{n} \right)$$
$$\le C \left( \left( \frac{\ln n}{n} \right)^{2 s_2/(2 s_2 + 1)} + \left( \frac{\ln n}{n} \right)^{2 s_1/(2 s_1 + 1)} + \left( \frac{\ln n}{n} \right)^{2 s_1/(2 s_1 + 1) + 2 s_2/(2 s_2 + 1)} + \frac{1}{n} \right) \le C \left( \frac{\ln n}{n} \right)^{2 s/(2 s + 1)},$$
with s = min(s_1, s_2), since each exponent is at least 2s/(2s+1) and ln n/n ≤ 1 for n large enough.

Theorem 2 is proved.

Appendix

Let us now present in detail (Chaubey et al. 2013, Theorem 6.1), used twice in the proof of Theorem 2.

We consider a general form of the hard thresholding estimator, denoted by f̂_H, for estimating an unknown function f ∈ L_2([0,1]) from n independent random variables W_1, . . . , W_n:
$$\hat{f}_H(x) = \sum_{k=0}^{2^{\tau}-1} \hat{\alpha}_{\tau,k}\, \phi_{\tau,k}(x) + \sum_{j=\tau}^{j_1} \sum_{k=0}^{2^{j}-1} \hat{\beta}_{j,k}\, \mathbf{1}_{\{|\hat{\beta}_{j,k}| \ge \kappa \vartheta_j\}}\, \psi_{j,k}(x), \qquad (16)$$
where
$$\hat{\alpha}_{j,k} = \frac{1}{\upsilon_n} \sum_{i=1}^{n} q_i(\phi_{j,k}, W_i), \qquad \hat{\beta}_{j,k} = \frac{1}{\upsilon_n} \sum_{i=1}^{n} q_i(\psi_{j,k}, W_i)\, \mathbf{1}_{\{|q_i(\psi_{j,k}, W_i)| \le \varsigma_j\}},$$
$$\varsigma_j = \theta_{\psi}\, 2^{\delta j}\, \frac{\upsilon_n}{\sqrt{\mu_n \ln \mu_n}}, \qquad \vartheta_j = \theta_{\psi}\, 2^{\delta j} \sqrt{\frac{\ln \mu_n}{\mu_n}},$$
κ ≥ 2 + 8/3 + 2√(4 + 16/9) and j_1 is the integer satisfying
$$\frac{1}{2}\, \mu_n^{1/(2\delta+1)} < 2^{j_1} \le \mu_n^{1/(2\delta+1)}.$$
Here, we suppose that there exist

– n functions q_1, . . . , q_n with q_i : L_2([0,1]) × R → R for any i ∈ {1, . . . , n},
– two sequences of real numbers (υ_n)_{n∈N} and (μ_n)_{n∈N} satisfying lim_{n→∞} υ_n = ∞ and lim_{n→∞} μ_n = ∞,

such that, for γ ∈ {φ, ψ},

(A1) for any integer j ≥ τ and any k ∈ {0, . . . , 2^j − 1},
$$E\left( \frac{1}{\upsilon_n} \sum_{i=1}^{n} q_i(\gamma_{j,k}, W_i) \right) = \int_0^1 f(x)\, \gamma_{j,k}(x)\, dx;$$

(A2) there exist two constants θ_γ > 0 and δ ≥ 0 such that, for any integer j ≥ τ and any k ∈ {0, . . . , 2^j − 1},
$$\sum_{i=1}^{n} E\left( (q_i(\gamma_{j,k}, W_i))^2 \right) \le \theta_{\gamma}^2\, 2^{2\delta j}\, \frac{\upsilon_n^2}{\mu_n}.$$

Let f̂_H be (16) under (A1) and (A2). Suppose that f ∈ B^{s}_{p,r}(M) with r ≥ 1, {p ≥ 2 and s ∈ (0, N)} or {p ∈ [1,2) and s ∈ ((2δ+1)/p, N)}. Then there exists a constant C > 0 such that
$$E\left( \|\hat{f}_H - f\|_2^2 \right) \le C \left( \frac{\ln \mu_n}{\mu_n} \right)^{2s/(2s+2\delta+1)}.$$

References

Antoniadis, A. (1997). Wavelets in statistics: a review (with discussion), Journal of the Italian Statistical Society Series B, 6, 97-144.

Chaubey, Y.P., Chesneau, C. and Doosti, H. (2013). Adaptive wavelet estimation of a density from mixtures under multiplicative censoring, in revision to Statistics.

Cohen, A., Daubechies, I., Jawerth, B. and Vial, P. (1993). Wavelets on the interval and fast wavelet transforms. Applied and Computational Harmonic Analysis, 24, 1, 54-81.

Delyon, B. and Juditsky, A. (1996). On minimax wavelet estimators, Applied and Computational Harmonic Analysis, 3, 215-228.

DeVore, R. and Popov, V. (1988). Interpolation of Besov spaces, Trans. Amer. Math. Soc., 305, 397-414.

Donoho, D.L., Johnstone, I.M., Kerkyacharian, G. and Picard, D. (1996). Density estimation by wavelet thresholding, The Annals of Statistics, 24, 508-539.

Härdle, W., Kerkyacharian, G., Picard, D. and Tsybakov, A. (1998). Wavelets, Approximation and Statistical Applications, Lecture Notes in Statistics 129, Springer Verlag, New York.

Jacho-Chávez, D., Lewbel, A. and Linton, O. (2010). Identification and Nonparametric Estimation of a Transformed Additively Separable Model, Journal of Econometrics, 156(2), 392-407.

Lewbel, A. and Linton, O. (2007). Nonparametric Matching and Efficient Estimators of Homothetically Separable Functions, Econometrica, 75, 1209-1228.

Linton, O.B. and Nielsen, J.P. (1995). A Kernel Model of Estimating Structured Nonparametric Regression Based on Marginal Integration, Biometrika, 82, 93-100.

Meyer, Y. (1992). Wavelets and Operators. Cambridge University Press, Cambridge.

Pinske, J. (2000). Feasible Multivariate Nonparametric Regression Estimation Using Weak Separability. Preprint, University of British Columbia, Canada.

Tsybakov, A.B. (2004). Introduction à l'estimation non-paramétrique, Springer.

Vasiliev, V.A. (2012). One investigation method of a ratios type estimators. 16th IFAC Symposium on System Identification, Brussels, Belgium, July 11-13, pages 1-6, 2012 (in progress).

Yatchew, A. and Bos, L. (1997). Nonparametric Least Squares Estimation and Testing of Economic Models, Journal of Quantitative Economics, 13, 81-131.
