
Proof of Theorem 5. Note that
$$\mathrm{RSS}_1/n=\sigma^2\{1+O_p(n^{-1/2})+O_p((nh)^{-1})\}.$$
Then it follows from the definition that

$$-\lambda_n(A_0)\sigma^2=-r_n^2\sum_{k=1}^n\sum_{i=1}^n\varepsilon_k\varepsilon_i\,X_i^T\Gamma(U_k)^{-1}X_k\,K\{(U_i-U_k)/h\}$$
$$\qquad+\frac{r_n^4}{2}\sum_{k=1}^n\sum_{i=1}^n\sum_{j=1}^n\varepsilon_i\varepsilon_j\,X_i^T\Gamma(U_k)^{-1}X_kX_k^T\Gamma(U_k)^{-1}X_j\,K\{(U_i-U_k)/h\}K\{(U_j-U_k)/h\}$$
$$\qquad-R_{n1}+R_{n2}+R_{n3}+O_p\{(nh^2)^{-1}\}.$$
Applying Lemmas 7.2, 7.3 and 7.4, we get

$$-\lambda_n(A_0)=-\mu_n+d_{1n}-W(n)h^{-1/2}/2+o_p(h^{-1/2}),$$
where
$$W(n)=\frac{\sqrt h}{n\sigma^2}\sum_{j\ne l}\varepsilon_j\varepsilon_l\,[2K_h(U_j-U_l)-K_h*K_h(U_j-U_l)]\,X_j^T\Gamma(U_l)^{-1}X_l.$$
It remains to show that $W(n)\stackrel{L}{\longrightarrow}N(0,v)$ with $v=2\|2K-K*K\|_2^2\,p\,Ef^{-1}(U)$.

Define $W_{jl}=\frac{\sqrt h}{n\sigma^2}\,b_n(j,l)\,\varepsilon_j\varepsilon_l$ for $j<l$, where $b_n(j,l)$ is written in a symmetric form
$$b_n(j,l)=a_1(j,l)+a_2(j,l)-a_3(j,l)-a_4(j,l)$$
with
$$a_1(j,l)=2K_h(U_j-U_l)\,X_j^T\Gamma(U_l)^{-1}X_l,\qquad a_2(j,l)=a_1(l,j),$$
$$a_3(j,l)=K_h*K_h(U_j-U_l)\,X_j^T\Gamma(U_l)^{-1}X_l,\qquad a_4(j,l)=a_3(l,j).$$
Then $W(n)=\sum_{j<l}W_{jl}$.
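Indeed, since $\varepsilon_j\varepsilon_l$ is symmetric in $(j,l)$ while $a_2$ and $a_4$ merely swap the roles of $j$ and $l$, the symmetrized kernel reproduces $W(n)$:
$$\sum_{j<l}W_{jl}=\frac{\sqrt h}{n\sigma^2}\sum_{j<l}\{a_1(j,l)+a_1(l,j)-a_3(j,l)-a_3(l,j)\}\varepsilon_j\varepsilon_l=\frac{\sqrt h}{n\sigma^2}\sum_{j\ne l}\{a_1(j,l)-a_3(j,l)\}\varepsilon_j\varepsilon_l=W(n).$$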

To apply Proposition 3.2 in de Jong (1987), we need to check:
(1) $W(n)$ is clean [see de Jong (1987) for the definition];
(2) $\mathrm{var}(W(n))\to v$;
(3) $G_I$ is of smaller order than $\mathrm{var}(W(n))$;
(4) $G_{II}$ is of smaller order than $\mathrm{var}(W(n))$;
(5) $G_{IV}$ is of smaller order than $\mathrm{var}(W(n))$, where
$$G_I=\sum_{1\le i<j\le n}EW_{ij}^4,\qquad G_{II}=\sum_{1\le i<j<k\le n}\big(EW_{ij}^2W_{ik}^2+EW_{ji}^2W_{jk}^2+EW_{ki}^2W_{kj}^2\big),$$
$$G_{IV}=\sum_{1\le i<j<k<l\le n}\big(EW_{ij}W_{ik}W_{lj}W_{lk}+EW_{ij}W_{il}W_{kj}W_{kl}+EW_{ik}W_{il}W_{jk}W_{jl}\big).$$
We now check each of these conditions. Condition (1) follows directly from the definition.

To prove (2), we note that
$$\mathrm{var}(W(n))=\sum_{j<l}EW_{jl}^2.$$
Denote by $K^{(m)}(t)=K*\cdots*K(t)$ the $m$-th convolution of $K(\cdot)$ at $t$, for $m=1,2,\ldots$. It can be shown by straightforward calculation that
$$E[a_1^2(j,l)\varepsilon_j^2\varepsilon_l^2]=\frac{4\sigma^4}{h}K^{(2)}(0)\,p\,Ef^{-1}(U)(1+O(h)),$$
$$E[a_1(j,l)a_3(j,l)\varepsilon_j^2\varepsilon_l^2]=\frac{2\sigma^4}{h}K^{(3)}(0)\,p\,Ef^{-1}(U)(1+O(h)),$$
$$E[a_3^2(j,l)\varepsilon_j^2\varepsilon_l^2]=\frac{\sigma^4}{h}K^{(4)}(0)\,p\,Ef^{-1}(U)(1+O(h)).$$
Thus, it follows that
$$E[b_n^2(j,l)\varepsilon_j^2\varepsilon_l^2]=\frac{\sigma^4}{h}\big[16K^{(2)}(0)-16K^{(3)}(0)+4K^{(4)}(0)\big]\,p\,Ef^{-1}(U)(1+O(h)),$$
which entails
$$v=2\int[2K(x)-K*K(x)]^2\,dx\;p\,Ef^{-1}(U).$$
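For instance, for the standard Gaussian kernel $K=\phi$ (taken purely as an illustration; no specific kernel is assumed here), one has $K*K=\phi_{\sqrt2}$, where $\phi_a$ denotes the $N(0,a^2)$ density, and $\int\phi_a(x)\phi_b(x)\,dx=\{2\pi(a^2+b^2)\}^{-1/2}$, so that
$$\|2K-K*K\|_2^2=\frac{4}{\sqrt{4\pi}}-\frac{4}{\sqrt{6\pi}}+\frac{1}{\sqrt{8\pi}}\approx0.4065,\qquad v\approx0.8131\,p\,Ef^{-1}(U).$$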

Condition (3) is proved by noting that
$$E[a_1(1,2)\varepsilon_1\varepsilon_2]^4=O(h^{-3}),\qquad E[a_3(1,2)\varepsilon_1\varepsilon_2]^4=O(h^{-2}),$$
which implies that $EW_{12}^4=n^{-4}h^2\,O(h^{-3})=O(n^{-4}h^{-1})$. Hence $G_I=O(n^{-2}h^{-1})=o(1)$.

Condition (4) is proved by the following calculation:
$$EW_{12}^2W_{13}^2=O(EW_{12}^4)=O(n^{-4}h^{-1}),$$
which implies that $G_{II}=O(1/(nh))=o(1)$.

To prove (5), it suffices to calculate the term $EW_{12}W_{23}W_{34}W_{41}$. By straightforward calculations,
$$E[a_1(1,2)a_1(2,3)a_1(3,4)a_1(4,1)\varepsilon_1^2\varepsilon_2^2\varepsilon_3^2\varepsilon_4^2]=O(h^{-1}),$$
$$E[a_1(1,2)a_1(2,3)a_1(3,4)a_3(4,1)\varepsilon_1^2\varepsilon_2^2\varepsilon_3^2\varepsilon_4^2]=O(h^{-1}),$$
$$E[a_1(1,2)a_1(2,3)a_3(3,4)a_3(4,1)\varepsilon_1^2\varepsilon_2^2\varepsilon_3^2\varepsilon_4^2]=O(h^{-1}),$$
$$E[a_1(1,2)a_3(2,3)a_3(3,4)a_3(4,1)\varepsilon_1^2\varepsilon_2^2\varepsilon_3^2\varepsilon_4^2]=O(h^{-1}),$$
$$E[a_3(1,2)a_3(2,3)a_3(3,4)a_3(4,1)\varepsilon_1^2\varepsilon_2^2\varepsilon_3^2\varepsilon_4^2]=O(h^{-1}),$$
and similarly for the other terms. So
$$EW_{12}W_{23}W_{34}W_{41}=n^{-4}h^2\,O(h^{-1})=O(n^{-4}h),$$
which yields $G_{IV}=O(h)=o(1)$. The proof is completed.
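The following minimal simulation sketch (ours, not part of the paper) illustrates the conclusion $W(n)\to N(0,v)$ in the simplest special case $p=1$, $X_i\equiv1$, $U\sim\mathrm{Uniform}(0,1)$ (so that $\Gamma(u)=f(u)=1$ and $Ef^{-1}(U)=1$), $\sigma^2=1$, with a Gaussian kernel; all names and tuning choices are illustrative.

```python
# Minimal sketch (not from the paper): Monte Carlo check of W(n) -> N(0, v)
# in the special case p = 1, X_i = 1, U ~ Uniform(0,1), sigma^2 = 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def W(n, h):
    """One draw of W(n) = sqrt(h)/(n sigma^2) * sum_{j != l}
    eps_j eps_l [2 K_h - K_h*K_h](U_j - U_l) X_j' Gamma(U_l)^{-1} X_l."""
    U = rng.uniform(size=n)
    eps = rng.standard_normal(n)
    D = U[:, None] - U[None, :]
    Kh = norm.pdf(D, scale=h)                  # K_h(U_j - U_l), Gaussian K
    KhKh = norm.pdf(D, scale=np.sqrt(2) * h)   # K_h * K_h (Gaussian convolution)
    B = 2.0 * Kh - KhKh
    np.fill_diagonal(B, 0.0)                   # keep only the j != l terms
    return np.sqrt(h) / n * (eps @ B @ eps)

n, nsim = 500, 1000
h = n ** (-2.0 / 9.0)                          # bandwidth rate from Theorem 8
draws = np.array([W(n, h) for _ in range(nsim)])

# v = 2 * ||2K - K*K||_2^2 * p * E f^{-1}(U) for the Gaussian kernel:
v = 2.0 * (2.0 / np.sqrt(np.pi) - 4.0 / np.sqrt(6.0 * np.pi)
           + 1.0 / np.sqrt(8.0 * np.pi))
# Rough agreement only: boundary effects enter at O(h), so the sample
# variance is expected to fall somewhat below the asymptotic v.
print(f"sample var = {draws.var():.3f}  vs  v = {v:.3f}")
```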

Proof of Theorem 6. Analogously to the arguments for $\widehat A$, we get
$$\widetilde A_2(u_0)-A_2(u_0)=r_n^2\,\Gamma_{22}^{-1}(u_0)\sum_{i=1}^n\varepsilon_iX_i^{(2)}K\{(U_i-u_0)/h\}\{1+o_p(1)\}.$$
For brevity, write $\widetilde X_i(u)=X_i^{(1)}-\Gamma_{12}(u)\Gamma_{22}^{-1}(u)X_i^{(2)}$, and let $\Gamma_{11\cdot2}=\Gamma_{11}-\Gamma_{12}\Gamma_{22}^{-1}\Gamma_{21}$ denote the usual Schur complement appearing below. Similarly to the proof of Theorem 5, under $H_{0u}$ we have

$$-\lambda_n(A_{20}\mid A_{10})\sigma^2=-r_n^2\sum_{k,i}\varepsilon_k\varepsilon_i\,\widetilde X_i(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_k(U_k)\,K\{(U_i-U_k)/h\}$$
$$\qquad+\frac{r_n^4}{2}\sum_{i,j}\varepsilon_i\varepsilon_j\sum_{k=1}^n\widetilde X_i(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_k(U_k)\,\widetilde X_k(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_j(U_k)\,K\{(U_i-U_k)/h\}K\{(U_j-U_k)/h\}$$
$$\qquad+R_{n4}+R_{n5}+o_p(h^{-1/2})+d_{1n}-d_{1n},$$

where
$$R_{n4}=\frac{r_n^4}{2}\sum_{i,j}\varepsilon_i\varepsilon_j\sum_{k=1}^n\widetilde X_i(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_k(U_k)\,X_k^{(2)T}\Gamma_{22}^{-1}(U_k)X_j^{(2)}\,K\{(U_i-U_k)/h\}K\{(U_j-U_k)/h\},$$
$$R_{n5}=\frac{r_n^4}{2}\sum_{i,j}\varepsilon_i\varepsilon_j\sum_{k=1}^n\widetilde X_j(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_k(U_k)\,X_k^{(2)T}\Gamma_{22}^{-1}(U_k)X_i^{(2)}\,K\{(U_i-U_k)/h\}K\{(U_j-U_k)/h\}.$$
A simple calculation shows that, as $nh^{3/2}\to\infty$,
$$ER_{n4}^2=O\Big(\frac{1}{n^2h^4}\Big)=o(h^{-1}),$$
which yields $R_{n4}=o_p(h^{-1/2})$. Similarly, we can show $R_{n5}=o_p(h^{-1/2})$.
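Explicitly, the Chebyshev step behind this reads: for any $\delta>0$,
$$P\{|R_{n4}|\ge\delta h^{-1/2}\}\le\frac{h\,ER_{n4}^2}{\delta^2}=\frac{1}{\delta^2}\,O\Big(\frac{1}{n^2h^3}\Big)=\frac{1}{\delta^2}\,O\big((nh^{3/2})^{-2}\big)\to0,$$
which is exactly where the condition $nh^{3/2}\to\infty$ enters.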

Therefore,
$$-\lambda_{nu}(A_{10})\sigma^2=-r_n^2\sum_{k,i}\varepsilon_k\varepsilon_i\,\widetilde X_i(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_k(U_k)\,K\{(U_i-U_k)/h\}$$
$$\qquad+\frac{r_n^4}{2}\sum_{i,j}\varepsilon_i\varepsilon_j\sum_{k=1}^n\widetilde X_i(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_k(U_k)\,\widetilde X_k(U_k)^T\Gamma_{11\cdot2}^{-1}(U_k)\widetilde X_j(U_k)\,K\{(U_i-U_k)/h\}K\{(U_j-U_k)/h\}$$
$$\qquad+d_{1nu}+o_p(h^{-1/2}).$$
The remaining proof follows the same lines as those in the proof of Theorem 5.

Proof of Theorem 7. Under $H_{n1}$ and Condition (B), applying Theorem 5, we have
$$-\lambda_n(A_0)=-\mu_n+v_n+v_{2n}-d_{2n}-W(n)h^{-1/2}/2+\Big[\sum_{k=1}^nc_nG_n(U_k)^TX_k\varepsilon_k/\sigma^2\Big]+o_p(h^{-1/2}),$$
where $W(n)$ is defined in the proof of Theorem 5. The rest of the proof is similar to that of Theorem 5, and the details are omitted. The proof is completed.

Proof of Theorem 8. For brevity, we only present Case I in Remark 3.5. To begin with, we note that under $H_{1n}:A=A_0+G_n$ and under Condition (C), it follows from the Chebyshev inequality that, uniformly for $h\to0$ and $nh^{3/2}\to\infty$,
$$-\lambda_n(A_0)\sigma^2=-\{\mu_n+W(n)h^{-1/2}/2\}\sigma^2+o_p(1)h^{-1/2}-\sum_{k=1}^nG_n(U_k)^TX_k\varepsilon_k$$
$$\qquad-\frac12\sum_{k=1}^n\big[G_n(U_k)^TX_kX_k^TG_n(U_k)-E\{G_n(U)^TXX^TG_n(U)\}\big]-\frac n2E\{G_n(U)^TXX^TG_n(U)\}-R_{n1}+R_{n2}+R_{n3}+o_p(h^{-1/2})$$
$$=-\mu_n\sigma^2-\sigma^2W(n)h^{-1/2}/2-\big[nE\{G_n(U)^TXX^TG_n(U)\}\big]^{1/2}O_p(1)-\frac n2E\{G_n(U)^TXX^TG_n(U)\}(1+o_p(1))-R_{n1}+R_{n2}+R_{n3},$$
where $\mu_n$, $W(n)$ and $R_{ni}$, $i=1,2,3$, are defined in the proof of Theorem 5 and its associated lemmas, and the $o_p(1)$ and $O_p(1)$ terms are uniform in $G_n\in\mathcal G_n$ in a sense similar to that of Lemma 7.2. Thus,

$$\beta(G_n)=P\{\sigma_n^{-1}(-\lambda_n(A_0)+\mu_n)\ge c(\alpha)\}$$
$$=P\Big[\sigma_n^{-1}\Big\{-W(n)h^{-1/2}/2-\Big(R_{n1}-R_{n2}-R_{n3}+\frac n2E\{G_n(U)^TXX^TG_n(U)\}(1+o_p(1))\Big)\Big/\sigma^2\Big\}\ge c(\alpha)\Big]$$
$$=P_{1n}+P_{2n},$$
with
$$P_{1n}=P\big\{\sigma_n^{-1}(-W(n)h^{-1/2}/2)+n^{1/2}h^{5/2}b_{1n}+nh^{9/2}b_{2n}-nh^{1/2}b_{3n}\ge c(\alpha),\ |b_{1n}|\le M,\ |b_{2n}|\le M\big\},$$
$$P_{2n}=P\big\{\sigma_n^{-1}(-W(n)h^{-1/2}/2)+n^{1/2}h^{5/2}b_{1n}+nh^{9/2}b_{2n}-nh^{1/2}b_{3n}\ge c(\alpha),\ |b_{1n}|>M\ \text{or}\ |b_{2n}|>M\big\},$$

and
$$b_{1n}=(n^{1/2}h^{5/2}\sigma_n\sigma^2)^{-1}(-R_{n1}+R_{n2}),\qquad b_{2n}=(nh^{9/2}\sigma_n\sigma^2)^{-1}R_{n3},$$
$$b_{3n}=(h^{1/2}\sigma_n\sigma^2)^{-1}\tfrac12E\{G_n(U)^TXX^TG_n(U)\}(1+o_p(1)).$$
When $h\ge c_0^{-1/2}n^{-1/4}$, we have
$$n^{1/2}h^{5/2}\le c_0nh^{9/2},\qquad n^{1/2}h^{5/2}\to0,\qquad nh^{9/2}\to0.$$

Thus, for $h\to0$ and $nh\to\infty$, it follows from Lemma 7.2 that $\beta(G_n)\to0$ only when $nh^{1/2}\rho_n^2\to\infty$. This implies that $\rho_n^2=n^{-1}h^{-1/2}$, and the possible minimum value of $\rho_n$ in this setting is $n^{-7/16}$. When $nh^4\to\infty$, for any $\delta>0$, applying Lemma 7.2 we can find a constant $M>0$ such that $P_{2n}<\delta/2$ uniformly in $G_n\in\mathcal G_n$. Then $\beta(G_n)\le\delta/2+P_{1n}$. Note that $\sup_{G_n\in\mathcal G_n}P_{1n}\to0$ only when $B(h)=nh^{9/2}M-nh^{1/2}\rho^2\to-\infty$; $B(h)$ attains its minimum value $-\tfrac89(9M)^{-1/8}n\rho^{9/4}$ at $h=\{\rho^2/(9M)\}^{1/4}$. Now it is easily seen that in this setting the corresponding minimum value of $\rho_n$ is $n^{-4/9}$, with $h=cn^{-2/9}$ for some constant $c$. This completes the proof.
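For completeness, the minimization of $B(h)$ used above is elementary calculus:
$$B'(h)=\tfrac92nMh^{7/2}-\tfrac12n\rho^2h^{-1/2}=0\iff h^4=\frac{\rho^2}{9M}\iff h=\Big\{\frac{\rho^2}{9M}\Big\}^{1/4},$$
and substituting back,
$$B_{\min}=nM\Big(\frac{\rho^2}{9M}\Big)^{9/8}-n\rho^2\Big(\frac{\rho^2}{9M}\Big)^{1/8}=-\frac89(9M)^{-1/8}n\rho^{9/4}.$$
Hence $B(h)\to-\infty$ exactly when $n\rho^{9/4}\to\infty$, i.e. $\rho_n\gg n^{-4/9}$, and taking $\rho_n=n^{-4/9}$ gives $h\propto n^{-2/9}$.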

Proof of Theorem 9. Let $c$ denote a generic constant. Then, under $H_0$, $\mathrm{RSS}_0-\mathrm{RSS}_1=\sum_{i=1}^n\cdots$. The proof will be completed by showing four steps, the first of which is (1) $D_1=O_p(1)$. The proofs of (2) and (3) are the same as in the proof of Theorem 5; the details are omitted. The last step follows from $\mathrm{RSS}_1=\sum_{i=1}^n\varepsilon_i^2+D_2$ and the inequality $\frac{x}{1+x}\le\log(1+x)\le x$ for $x>-1$.

Before proving Theorem 10, we introduce the following lemma.

Lemma 7.5. Under Conditions (A1)–(A3) and (B1)–(B3), if $n^{(\gamma-1)/\gamma}h\ge c_0(\log n)^{\delta}$ with $\delta>(\gamma-1)/(\gamma-2)$, then we have
$$\widehat A(u_0)-A(u_0)=r_n^2\,\widetilde\Gamma(u_0)^{-1}\sum_{i=1}^nq_1(A(U_i)^TX_i,Y_i)\,X_iK\{(U_i-u_0)/h\}(1+o_p(1))+H_n(u_0),$$
where $r_n=1/\sqrt{nh}$,
$$H_n(u_0)=r_n^2\,\widetilde\Gamma(u_0)^{-1}\sum_{i=1}^n\big[q_1(\beta(u_0)^TZ_i,Y_i)-q_1(A(U_i)^TX_i,Y_i)\big]X_iK\{(U_i-u_0)/h\}(1+o_p(1)),$$
and the $o_p(1)$ terms are uniform with respect to $u_0$.
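Here, consistent with the Taylor expansion in (7.22) below and with the notation of the generalized varying-coefficient literature (e.g. Cai, Fan and Li, 1998), we read $q_1$ and $q_2$ as the first two derivatives of the local log-likelihood, $q_j(s,y)=\partial^j\ell\{g^{-1}(s),y\}/\partial s^j$, $j=1,2$.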

Proof. Let $m(u_0,U_i,X_i)=\beta(u_0)^TZ_i$ and $\theta=r_n^{-1}(\beta-\beta(u_0))$. For any compact set $C$ and $\theta\in C$,
$$\ell_n(\theta)=h\sum_{i=1}^n\big[\ell\{g^{-1}(m(u_0,U_i,X_i)+r_n\theta^TZ_i),Y_i\}-\ell\{g^{-1}(m(u_0,U_i,X_i)),Y_i\}\big]K_h(U_i-u_0)$$
$$=hr_n\sum_{i=1}^nq_1(m(u_0,U_i,X_i),Y_i)\,\theta^TZ_i\,K_h(U_i-u_0)\qquad(7.22)$$
$$\quad+\frac{hr_n^2}{2}\sum_{i=1}^nq_2(m(u_0,U_i,X_i)+\zeta_n\theta^TZ_i,Y_i)\,(\theta^TZ_i)^2\,K_h(U_i-u_0),$$
where $\zeta_n=t_n$ with $0\le t_n\le r_n$. In the following we shall prove that
$$\frac{hr_n^2}{2}\sum_{i=1}^nq_2(m(u_0,U_i,X_i)+\zeta_n\theta^TZ_i,Y_i)\,(\theta^TZ_i)^2\,K_h(U_i-u_0)=-\theta^T\mathrm{diag}\Big\{\widetilde\Gamma(u),\ \widetilde\Gamma(u)\int t^2K(t)\,dt\Big\}\theta+o_p(1).\qquad(7.23)$$

Consider the empirical process indexed by $\mathcal F=\{F_n(u_0,\theta):u_0\in D,\ \|\theta\|\le1\}$, where
$$F_n(u_0,\theta)=q_2(m(u_0,U,X)+\zeta\theta^TZ,Y)\begin{pmatrix}XX^T&XX^T(U-u_0)/h\\ XX^T(U-u_0)/h&XX^T(U-u_0)^2/h^2\end{pmatrix}K_h(U-u_0).$$
Under Conditions (A2), (A4) and (B2), it is easy to show that, for some function $c(X,U,Y)$ with $Ec(X,U,Y)<\infty$,
$$|F_n(u_1,\theta_1)-F_n(u_2,\theta_2)|\le c(X,U,Y)\,h^{-2}\,(\|\theta_1-\theta_2\|+|u_1-u_2|).$$
Following the same arguments as those in Lemma 7.4 of Zhang and Gijbels (1999), we obtain that, when $hn^{(\gamma-1)/\gamma}\ge c_0(\log n)^{\delta}$ and $\delta>(\gamma-1)/(\gamma-2)$,

$$\frac{hr_n^2}{2}\sum_{i=1}^nq_2(m(u_0,U_i,X_i)+\zeta_n\theta^TZ_i,Y_i)\,Z_iZ_i^T\,K_h(U_i-u_0)=\begin{pmatrix}1&\int tK(t)\,dt\\ \int tK(t)\,dt&\int t^2K(t)\,dt\end{pmatrix}\otimes\widetilde\Gamma(u_0)\,(1+o_p(1))$$
uniformly for $u_0$ in a compact set.

The remaining proof is almost the same as that of Theorem 5 if we invoke the following equalities:
$$E\{\varepsilon_i\mid(X_i,U_i)\}=0,\qquad E\{\varepsilon_i^2\mid(X_i,U_i)\}=-E\{q_2(A_0(U_i)^TX_i,Y_i)\mid(X_i,U_i)\}.$$
The proof is completed.
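To make the object of study concrete, the following minimal sketch (ours, not the paper's) computes the local linear varying-coefficient fit $\widehat A(u_0)$ using the local design $Z_i=(X_i^T,\{(U_i-u_0)/h\}X_i^T)^T$ from the proof of Lemma 7.5, specialized to squared-error loss (Gaussian quasi-likelihood with identity link) rather than a general local likelihood; all names and parameter values are illustrative.

```python
# Minimal sketch: local linear varying-coefficient fit by kernel-weighted
# least squares (the Gaussian quasi-likelihood special case).
import numpy as np

def fit_A(u0, U, X, Y, h):
    """Estimate A(u0) in Y = A(U)'X + eps from the local design
    Z_i = (X_i', ((U_i - u0)/h) X_i')'."""
    t = (U - u0) / h
    w = np.exp(-0.5 * t**2)             # Gaussian kernel weights K((U_i-u0)/h)
    Z = np.hstack([X, t[:, None] * X])  # local linear design matrix
    Zw = Z * w[:, None]
    beta = np.linalg.solve(Zw.T @ Z, Zw.T @ Y)  # weighted normal equations
    p = X.shape[1]
    return beta[:p]                     # first block estimates A(u0)

# Toy data: p = 2, A1(u) = sin(2*pi*u), A2(u) = u**2.
rng = np.random.default_rng(1)
n = 2000
U = rng.uniform(size=n)
X = np.hstack([np.ones((n, 1)), rng.standard_normal((n, 1))])
Y = np.sin(2 * np.pi * U) * X[:, 0] + U**2 * X[:, 1] + 0.2 * rng.standard_normal(n)
print(fit_A(0.5, U, X, Y, h=0.08))      # roughly [sin(pi), 0.25] = [0.0, 0.25]
```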

References

Aerts, M., Claeskens, G. and Hart, J.D. (1998). Testing lack of fit in multiple regression. Manuscript.
Azzalini, A. and Bowman, A.W. (1993). On the use of nonparametric regression for checking linear relationships. J. Roy. Statist. Soc. Ser. B, 55, 549–557.
Azzalini, A., Bowman, A.W. and Härdle, W. (1989). On the use of nonparametric regression for model checking. Biometrika, 76, 1–11.
Bickel, P.J. and Ritov, Y. (1992). Testing for goodness of fit: a new approach. In Nonparametric Statistics and Related Topics, Ed. A.K.Md.E. Saleh, pp. 51–57. North-Holland, New York.
Bickel, P.J. and Rosenblatt, M. (1973). On some global measures of the deviation of density function estimates. Ann. Statist., 1, 1071–1095.
Brown, L.D. and Low, M.G. (1996). Asymptotic equivalence of nonparametric regression and white noise. Ann. Statist., 24, 2384–2398.
Cai, Z., Fan, J. and Li, R. (1998). Generalized varying-coefficient models. Manuscript.
Carroll, R.J., Fan, J., Gijbels, I. and Wand, M.P. (1997). Generalized partially linear single-index models. J. Amer. Statist. Assoc., 92, 477–489.
Chen, J.H. and Qin, J. (1993). Empirical likelihood estimation for finite populations and the effective usage of auxiliary information. Biometrika, 80, 107–116.
Cleveland, W.S. and Devlin, S.J. (1988). Locally-weighted regression: an approach to regression analysis by local fitting. J. Amer. Statist. Assoc., 83, 597–610.
Cleveland, W.S., Grosse, E. and Shyu, W.M. (1992). Local regression models. In Statistical Models in S (Chambers, J.M. and Hastie, T.J., eds), pp. 309–376. Wadsworth & Brooks, Pacific Grove.
de Jong, P. (1987). A central limit theorem for generalized quadratic forms. Probab. Theory Related Fields, 75, 261–277.
Eubank, R.L. and Hart, J.D. (1992). Testing goodness-of-fit in regression via order selection criteria. Ann. Statist., 20, 1412–1425.
Eubank, R.L. and LaRiccia, V.M. (1992). Asymptotic comparison of Cramér–von Mises and nonparametric function estimation techniques for testing goodness-of-fit. Ann. Statist., 20, 2071–2086.
Fan, J. (1993). Local linear regression smoothers and their minimax efficiencies. Ann. Statist., 21, 196–216.
Fan, J. (1996). Test of significance based on wavelet thresholding and Neyman's truncation. J. Amer. Statist. Assoc., 91, 674–688.
Fan, J. and Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. Chapman & Hall, London.
Fan, J., Härdle, W. and Mammen, E. (1998). Direct estimation of low-dimensional components in additive models. Ann. Statist., 26, 943–971.
Fan, J. and Huang, L. (1998). Goodness-of-fit test for parametric regression models. Technical report, Department of Statistics, UCLA.
Fan, J., Liu, A. and Zhang, J. (1999). Sieve empirical likelihood ratios for nonparametric functions. Manuscript.
Hall, P. and Owen, A.B. (1993). Empirical likelihood confidence bands in density estimation. J. Comput. Graph. Statist., 2, 273–289.
Härdle, W. and Mammen, E. (1993). Comparing nonparametric versus parametric regression fits. Ann. Statist., 21, 1926–1947.
Hart, J.D. (1997). Nonparametric Smoothing and Lack-of-Fit Tests. Springer-Verlag, New York.
Hastie, T.J. and Tibshirani, R.J. (1990). Generalized Additive Models. Chapman & Hall, London.
Hastie, T.J. and Tibshirani, R.J. (1993). Varying-coefficient models (with discussion). J. Roy. Statist. Soc. Ser. B, 55, 757–796.
Huber, P.J. (1973). Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Statist., 1, 799–821.
Inglot, T., Kallenberg, W.C.M. and Ledwina, T. (1994). Power approximations to and power comparison of smooth goodness-of-fit tests. Scand. J. Statist., 21, 131–145.
Inglot, T. and Ledwina, T. (1996). Asymptotic optimality of data-driven Neyman's tests for uniformity. Ann. Statist., 24, 1982–2019.
Ingster, Yu.I. (1993). Asymptotic minimax hypothesis testing for nonparametric alternatives I–III. Math. Methods Statist., 2, 85–114; 3, 171–189; 4, 249–268.
Kallenberg, W.C.M. and Ledwina, T. (1997). Data-driven smooth tests when the hypothesis is composite. J. Amer. Statist. Assoc., 92, 1094–1104.
Koroljuk, V.S. and Borovskich, Yu.V. (1994). Theory of U-Statistics. Kluwer Academic Publishers, Amsterdam.
Kuchibhatla, M. and Hart, J.D. (1996). Smoothing-based lack-of-fit tests: variations on a theme. J. Nonparametr. Statist., 7, 1–22.
Lepski, O.V. and Spokoiny, V.G. (1999). Minimax nonparametric hypothesis testing: the case of an inhomogeneous alternative. Bernoulli, 5, 333–358.
Li, G., Hollander, M., McKeague, I.W. and Yang, J. (1996). Nonparametric likelihood ratio confidence bands for quantile functions from incomplete survival data. Ann. Statist., 24, 628–640.
Neyman, J. (1937). Smooth test for goodness of fit. Skandinavisk Aktuarietidskrift, 20, 149–199.
Nussbaum, M. (1996). Asymptotic equivalence of density estimation and Gaussian white noise. Ann. Statist., 24, 2399–2430.
Owen, A.B. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75, 237–249.
Owen, A.B. (1990). Empirical likelihood ratio confidence regions. Ann. Statist., 18, 90–120.
Randles, R.H. and Wolfe, D.A. (1979). Introduction to the Theory of Nonparametric Statistics. John Wiley & Sons, New York–Chichester–Brisbane.
Shen, X., Shi, J. and Wong, W.H. (1999). Random sieve likelihood. J. Amer. Statist. Assoc., to appear.
Silverman, B.W. (1984). Spline smoothing: the equivalent variable kernel method. Ann. Statist., 12, 898–916.
Spokoiny, V.G. (1996). Adaptive hypothesis testing using wavelets. Ann. Statist., 24, 2477–2498.
Wilks, S.S. (1938). The large-sample distribution of the likelihood ratio for testing composite hypotheses. Ann. Math. Statist., 9, 60–62.
Zhang, J. and Gijbels, I. (1999). Sieve empirical likelihood and extensions of generalized least squares. Discussion paper, Institute of Statistics, Université Catholique de Louvain.
