
ARKIV FÖR MATEMATIK Band 3 nr 15

Communicated 10 November 1954 by HARALD CRAMÉR

On linear estimates defined by a continuous weight function

By JAN JUNG

With 1 figure in the text

1. Introduction

Let X be a random variable having the cumulative distribution function (cdf) $F\!\left(\frac{x-\mu}{\sigma}\right)$, where F(y) is a known cdf and where μ and σ are unknown parameters. From n independent observations of X, we obtain, by rearranging them in order of magnitude, the order statistics x_{(1)} ≤ x_{(2)} ≤ ⋯ ≤ x_{(n)}. A linear estimate of μ or σ is a systematic statistic

$$\theta^* = \sum_{\nu=1}^{n} h_\nu^{(n)}\, x_{(\nu)}.$$

Assuming the means and the covariances of the standardized order statistics

$$y_{(\nu)} = \frac{x_{(\nu)} - \mu}{\sigma}$$

to be known, the constants h_ν^{(n)} may be determined so as to make θ* unbiased and of minimum variance (Lloyd [4]). In a few special cases explicit solutions for the constants h_ν^{(n)} have been obtained (Downton [3], Sarhan [6]). In the general case much numerical work is needed.

In this note, we put $h_\nu^{(n)} = \frac{1}{n}\,h\!\left(\frac{\nu}{n+1}\right)$, where h(u) is a continuous function depending on F(y) only, and we shall prove that h(u) may often be chosen so as to make the corresponding estimate θ*(h) consistent and asymptotically efficient.

In section 2 we study the properties of the statistic θ*(h) for an unspecified function h(u). In the next section we solve a special minimum problem, and the solution is applied to the determination of the best weight function in section 4, where we also make the transformations necessary for practical use. The last sections deal with the asymptotic efficiency of the estimates obtained and with some applications.

2. Asymptotic expressions for the mean and the variance of a certain type of linear systematic statistics

Let y_{(1)} < y_{(2)} < ⋯ < y_{(n)} be the order statistics from a sample of n independent observations of the random variable Y. We assume that Y has the cdf F(y), a continuous probability density function (pdf) f(y) and a finite second moment.


If h(u) is a real function defined on (0, 1) we are going to study the random variable

$$Z_n = \frac{1}{n}\sum_{\nu=1}^{n} h\!\left(\frac{\nu}{n+1}\right) y_{(\nu)}. \qquad (1)$$

In the sequel we need the following

Lemma 1. If h(u) satisfies the condition

C 1: h(u) is bounded and has at least four derivatives, which are bounded in (0, 1),

then

$$E(Z_n) = \int_0^1 G(u)\,h(u)\,du + O(n^{-1}), \qquad (2)$$

$$D^2(Z_n) = \frac{2}{n}\int_0^1\!\int_0^v u(1-v)\,G'(u)h(u)\,G'(v)h(v)\,du\,dv + O(n^{-2}), \qquad (3)$$

where G(u) is the inverse function F^{-1}(u) of F(y). (4)
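As a modern numerical illustration of (2) — not part of the original text — one may take Y exponential with unit mean, so that G(u) = -log(1-u), and the bounded smooth weight h(u) = u, for which $\int_0^1 G(u)h(u)\,du = 3/4$. A short Python sketch:

```python
import random

def z_n(sample, h):
    """Z_n of eq. (1): average of h(nu/(n+1)) * y_(nu) over the order statistics."""
    n = len(sample)
    return sum(h(nu / (n + 1)) * y
               for nu, y in enumerate(sorted(sample), start=1)) / n

random.seed(1)
h = lambda u: u                     # bounded and smooth: satisfies C 1
n, reps = 200, 2000
# Exp(1): G(u) = -log(1-u); integral of -u*log(1-u) over (0,1) is 3/4
mc = sum(z_n([random.expovariate(1.0) for _ in range(n)], h)
         for _ in range(reps)) / reps
print(round(mc, 3))                 # close to 0.75; the bias is O(1/n)
```

The small discrepancy from 3/4 is of the promised order O(n⁻¹).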

Proof of (2). The expectation of y_{(ν)} can be written

$$E[y_{(\nu)}] = n\int_0^1 G(u)\binom{n-1}{\nu-1}u^{\nu-1}(1-u)^{n-\nu}\,du \qquad (\text{cf. Cramér [1], p. 370}).$$

Thus from (1) the expectation of Z_n is

$$E(Z_n) = \int_0^1 G(u)\,S(u)\,du, \qquad (5)$$

where we have (substituting ν + 1 for ν)

$$S(u) = \sum_{\nu=0}^{n-1} h\!\left(\frac{\nu+1}{n+1}\right)\binom{n-1}{\nu}u^{\nu}(1-u)^{n-1-\nu}. \qquad (6)$$

By C 1 we may expand $h\!\left(\frac{\nu+1}{n+1}\right)$ in a Taylor series about $u_n = \frac{(n-1)u+1}{n+1}$:

$$h\!\left(\frac{\nu+1}{n+1}\right) = \sum_{i=0}^{3} \frac{[\nu-(n-1)u]^i}{i!\,(n+1)^i}\,h^{(i)}(u_n) + \frac{[\nu-(n-1)u]^4}{4!\,(n+1)^4}\,h^{(4)}(\alpha), \qquad (7)$$

where α depends on n, u and ν and satisfies $\frac{1}{n+1} < \alpha < \frac{n}{n+1}$.

If we change the order of summation in (6) and (7), we shall need the sums

$$b_k = \sum_{\nu=0}^{n-1}\big[\nu-(n-1)u\big]^k \binom{n-1}{\nu} u^{\nu}(1-u)^{n-1-\nu},$$

which are the well-known central moments of a binomial distribution, whence

$$b_0 = 1, \quad b_1 = 0, \quad b_2 = (n-1)\,u(1-u), \quad b_3 = (n-1)\,u(1-u)(1-2u)$$

and

$$b_4 = 3(n-1)^2 u^2(1-u)^2 + (n-1)\,u(1-u)\big[1-6u(1-u)\big] < \tfrac{3}{4}(n-1)^2.$$

Combining (6) and (7) we thus have

$$S(u) = h(u_n) + \frac{(n-1)u(1-u)}{2(n+1)^2}\,h''(u_n) + \frac{(n-1)u(1-u)(1-2u)}{6(n+1)^3}\,h'''(u_n) + R_1,$$

where, supposing $|h^{(4)}(u)| \le M_4$ (cf. C 1),

$$|R_1| \le \frac{b_4\,M_4}{4!\,(n+1)^4} = O(n^{-2}).$$

Renewed Taylor expansion about u gives

$$S(u) = h(u) + \frac{1}{n}\left[(1-2u)\,h'(u) + \tfrac{1}{2}\,u(1-u)\,h''(u)\right] + R_2, \qquad (8)$$

where R_2 depends on the upper bounds of the derivatives h''(u), h'''(u) and h^{(4)}(u) and is of the form O(n^{-2}).

As S(u) is bounded, and as $\int_0^1 G(u)\,du = E(Y)$ exists, eq. (2) follows a fortiori.

The approximation of E(Z_n²) is a straightforward generalisation of the above method to two dimensions, and will not be carried out here. Subtracting the square of E(Z_n) as given by (5) and (8), we get

$$D^2(Z_n) = \frac{2}{n}\int_0^1 G(v)\,\frac{d}{dv}\big[(1-v)h(v)\big]\int_0^v G(u)\,\frac{d}{du}\big[u\,h(u)\big]\,du\,dv + \frac{1}{n}\int_0^1 G^2(u)\,h^2(u)\,du + O(n^{-2}).$$

By three successive integrations by parts the right integral is cancelled, the integrated parts vanish, and we get eq. (3).

Introduce in (3) the symmetrical kernel

$$k(u,v) = \begin{cases} u(1-v) & \text{if } 0 \le u \le v \le 1, \\ (1-u)\,v & \text{if } 0 \le v \le u \le 1, \end{cases} \qquad (9)$$

and we obtain

$$D^2(Z_n) = \frac{1}{n}\int_0^1\!\int_0^1 k(u,v)\,G'(u)h(u)\,G'(v)h(v)\,du\,dv + O(n^{-2}). \qquad (10)$$
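A modern check of (10), again outside the original text: for the exponential example with h(u) = u, the double integral in (10) has the exact value 7/12, and n · Var(Z_n) should approach the same constant.

```python
import random

def k(u, v):
    """The symmetrical kernel (9)."""
    return u * (1 - v) if u <= v else (1 - u) * v

g = lambda u: u / (1 - u)           # G'(u) h(u) for Exp(1) with h(u) = u

# Midpoint rule for the double integral in (10); the exact value is 7/12.
m = 800
pts = [(i + 0.5) / m for i in range(m)]
I = sum(k(u, v) * g(u) * g(v) for u in pts for v in pts) / m ** 2
print(round(I, 2))                  # close to 7/12 ~ 0.58

# n * Var(Z_n), estimated by simulation, should approach the same value.
random.seed(2)
n, reps = 200, 2000
def z_n(sample):
    return sum((nu / (n + 1)) * y
               for nu, y in enumerate(sorted(sample), start=1)) / n
zs = [z_n([random.expovariate(1.0) for _ in range(n)]) for _ in range(reps)]
mean = sum(zs) / reps
var = sum((z - mean) ** 2 for z in zs) / (reps - 1)
print(round(n * var, 2))            # also close to 0.58
```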


Corollary. If

$$Z_{1n} = \frac{1}{n}\sum_{\nu=1}^{n} h_1\!\left(\frac{\nu}{n+1}\right) y_{(\nu)} \quad\text{and}\quad Z_{2n} = \frac{1}{n}\sum_{\nu=1}^{n} h_2\!\left(\frac{\nu}{n+1}\right) y_{(\nu)},$$

where h_1(u) and h_2(u) satisfy C 1, then

$$\operatorname{Cov}(Z_{1n}, Z_{2n}) = \frac{1}{n}\int_0^1\!\int_0^1 k(u,v)\,G'(u)h_1(u)\,G'(v)h_2(v)\,du\,dv + O(n^{-2}). \qquad (11)$$

Proof. Putting Z_n = Z_{1n} + Z_{2n}, we have

$$D^2(Z_n) = D^2(Z_{1n}) + 2\operatorname{Cov}(Z_{1n}, Z_{2n}) + D^2(Z_{2n}).$$

Applying (10) to both members of the equation, the corollary follows.

3. On a special minimum problem associated with the asymptotically best linear estimates

Let g_1(u) and g_2(u) be continuous functions in (0, 1) and k(u, v) be the kernel defined in (9). Put

$$I(\varphi, \psi) = \int_0^1\!\int_0^1 k(u,v)\,\varphi(u)\,\psi(v)\,du\,dv.$$

Lemma 2. If g_1(u) and g_2(u) satisfy the conditions for g(u)

C 2: g(0) = g(1) = 0; g''(u) exists, is continuous and finite in the open interval (0, 1); |g''(u)| = O(1/u) as u → 0 and |g''(u)| = O(1/(1-u)) as u → 1,

then there exists one unique function φ(u) = φ_0(u) that makes I(φ, φ) a minimum subject to

$$\int_0^1 \varphi(u)\,g_1(u)\,du = \alpha_1, \qquad \int_0^1 \varphi(u)\,g_2(u)\,du = \alpha_2. \qquad (12)$$

0 0

P r o o f . F r o m t h e calculus of v a r i a t i o n s we quote, t h a t if a solution exists, i t m u s t be of t h e f o r m

go (u) = a 1 ~1 (u) + a s ~ (u), (13)

w h e r e Vj (u) a n d ~p2(u) satisfy

1

fk(u,v)q~,(v)dv=g~(u) ( i = 1 , 2) (14)

0

a n d where a I a n d a z a r e d e t e r m i n e d b y (12).

I n s e r t i n g (9) in (14) a n d d i f f e r e n t i a t i n g twice w i t h r e s p e c t t o u we o b t a i n a n e c e s s a r y condition for Vt (u):

~, (~) = - g;' (~). (15)

By partial integration of the left member of (14) with ψ_i(u) = -g_i''(u), the integrated terms vanish by C 2, and thus (15) is a unique solution of (14).

In the sequel we need the integrals

$$d_{\mu\nu} = I(\psi_\mu, \psi_\nu) = \int_0^1 \psi_\mu(u)\int_0^1 k(u,v)\,\psi_\nu(v)\,dv\,du = \int_0^1 \psi_\mu(u)\,g_\nu(u)\,du =$$
$$= -\int_0^1 g_\mu''(u)\,g_\nu(u)\,du = \int_0^1 g_\mu'(u)\,g_\nu'(u)\,du \qquad (\mu, \nu = 1, 2). \qquad (16)$$

The successive equalities follow from equations (14), (15) and C 2.

Introducing the matrix notations

$$\boldsymbol{\alpha} = (\alpha_1, \alpha_2)', \qquad \mathbf{a} = (a_1, a_2)', \qquad \mathbf{D} = \begin{pmatrix} d_{11} & d_{12} \\ d_{21} & d_{22} \end{pmatrix}, \qquad D = |\mathbf{D}| > 0, \qquad (17)$$

we get from (12), (13) and (16)

$$\mathbf{D}\,\mathbf{a} = \boldsymbol{\alpha}, \qquad \therefore\quad \mathbf{a} = \mathbf{D}^{-1}\boldsymbol{\alpha}. \qquad (18)$$

Having solved for a, we easily have

$$I(\varphi_0, \varphi_0) = \mathbf{a}'\,\mathbf{D}\,\mathbf{a} = \boldsymbol{\alpha}'\,\mathbf{D}^{-1}\boldsymbol{\alpha}. \qquad (19)$$

That I(φ_0, φ_0) is a minimum value can be proved in the usual way.
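A modern numerical illustration (not in the original): for the hypothetical choice g_1(u) = u(1-u), which satisfies C 2, one has ψ_1(u) = -g_1''(u) = 2, and both (14) and the value d_11 = ∫₀¹ g_1'(u)² du = 1/3 from (16) can be verified directly:

```python
def k(u, v):
    """Kernel (9)."""
    return u * (1 - v) if u <= v else (1 - u) * v

m = 2000
pts = [(i + 0.5) / m for i in range(m)]
psi1 = lambda v: 2.0                # -g1''(u) for g1(u) = u(1-u)

# Eq. (14): the integral of k(u, v) psi1(v) dv should reproduce g1(u).
for u in (0.25, 0.5, 0.9):
    lhs = sum(k(u, v) * psi1(v) for v in pts) / m
    print(round(lhs, 4), round(u * (1 - u), 4))   # the two columns agree

# Eq. (16): d11 = integral of (g1'(u))^2 = integral of (1 - 2u)^2 du = 1/3.
d11 = sum((1 - 2 * u) ** 2 for u in pts) / m
print(round(d11, 4))                              # 0.3333
```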

4. Asymptotically best linear estimates of location and scale parameters

X is a random variable with cdf $F\!\left(\frac{x-\mu}{\sigma}\right)$, where μ and σ are unknown parameters, and where F(y) has a finite second moment, a continuous pdf f(y) and an inverse function G(u) = F^{-1}(u). Having observed the order statistics x_{(1)} ≤ x_{(2)} ≤ ⋯ ≤ x_{(n)}, we want to estimate an arbitrary linear combination θ = α_1 μ + α_2 σ of the parameters μ and σ by an estimate of the form

$$\theta^*(h) = \frac{1}{n}\sum_{\nu=1}^{n} h\!\left(\frac{\nu}{n+1}\right) x_{(\nu)},$$

where the function h is to be conveniently chosen. As the inverse of $F\!\left(\frac{x-\mu}{\sigma}\right)$ is μ + σG(u), we obtain, when Lemma 1 is applicable,

$$E[\theta^*(h)] = \int_0^1 \big[\mu + \sigma G(u)\big]\,h(u)\,du + O(n^{-1}) = M(h) + O(n^{-1}), \qquad (20)$$

$$D^2[\theta^*(h)] = \frac{\sigma^2}{n}\int_0^1\!\int_0^1 k(u,v)\,G'(u)h(u)\,G'(v)h(v)\,du\,dv + O(n^{-2}) = \frac{\sigma^2}{n}\,V(h) + O(n^{-2}), \qquad (21)$$

where we have defined M(h) and V(h), and where k(u, v) is defined by (9).


The "best" estimate of this kind is defined by a function h = h_0 that makes V(h) a minimum when M(h) = θ = α_1 μ + α_2 σ.

By introduction of the functions

$$\varphi(u) = G'(u)\,h(u), \qquad g_1(u) = \frac{1}{G'(u)}, \qquad g_2(u) = \frac{G(u)}{G'(u)}, \qquad (22)$$

we get identically the problem of section 3. As G'(u) > 0, we have the following

Theorem 1. Subject to the conditions C 1 and C 2 of the lemmata 1 and 2, the asymptotically best estimate is defined by the function

$$h_0(u) = -\frac{a_1}{G'(u)}\,\frac{d^2}{du^2}\,\frac{1}{G'(u)} - \frac{a_2}{G'(u)}\,\frac{d^2}{du^2}\,\frac{G(u)}{G'(u)} = a_1 h_1(u) + a_2 h_2(u),$$

where a_1 and a_2 are defined by (16), (18) and (22). The asymptotical variance of θ*(h_0) is

$$D^2[\theta^*(h_0)] = \frac{\sigma^2}{n}\left(\boldsymbol{\alpha}'\,\mathbf{D}^{-1}\boldsymbol{\alpha}\right) + O(n^{-2}),$$

with α and D from (16), (17) and (22).

As a rule the function G(u) is difficult to handle, and therefore in practical applications it is easier first to calculate the functions H_i(y) = h_i[F(y)], and then to calculate h_i(u) from F(y). The transformations, given without proof, are elementary but somewhat tedious.

Introduce the auxiliary functions

$$\gamma_1(y) = -\frac{d\log f(y)}{dy}, \qquad \gamma_2(y) = -1 - y\,\frac{d\log f(y)}{dy}. \qquad (23)$$

Then we have for the matrix D

$$d_{\mu\nu} = \int_{-\infty}^{\infty} \gamma_\mu(y)\,\gamma_\nu(y)\,f(y)\,dy. \qquad (24)$$

For the functions H_i we get

$$H_1(y) = \gamma_1'(y), \qquad H_2(y) = \gamma_2'(y), \qquad (25)$$

$$H_0(y) = a_1 H_1(y) + a_2 H_2(y), \qquad\text{where as before}\qquad \mathbf{a} = \mathbf{D}^{-1}\boldsymbol{\alpha}. \qquad (26)$$
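For the standard normal density (a modern illustration of the transformations (23)–(25), not in the original), log f(y) = -y²/2 + const gives γ_1(y) = y and γ_2(y) = y² - 1, hence H_1(y) = 1 and H_2(y) = 2y; the matrix (24) is then d_11 = 1, d_12 = 0, d_22 = 2, so that the weight B(y) = 1 (μ* is the sample mean) and C(y) = y. A Monte Carlo sketch of (24):

```python
import random

random.seed(3)
g1 = lambda y: y                    # gamma_1(y) = -d log f / dy
g2 = lambda y: y * y - 1            # gamma_2(y) = -1 - y d log f / dy
N = 200_000
ys = [random.gauss(0.0, 1.0) for _ in range(N)]
d11 = sum(g1(y) ** 2 for y in ys) / N       # close to 1
d12 = sum(g1(y) * g2(y) for y in ys) / N    # close to 0
d22 = sum(g2(y) ** 2 for y in ys) / N       # close to 2
print(d11, d12, d22)
```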


For the special cases of μ* and σ* we obtain

when θ = μ:

$$H_0(y) \equiv B(y) = \frac{d_{22}}{D}\,H_1(y) - \frac{d_{12}}{D}\,H_2(y), \qquad D^2(\mu^*) = \frac{\sigma^2}{n}\cdot\frac{d_{22}}{D} + O(n^{-2}); \qquad (27)$$

when θ = σ:

$$H_0(y) \equiv C(y) = -\frac{d_{12}}{D}\,H_1(y) + \frac{d_{11}}{D}\,H_2(y), \qquad D^2(\sigma^*) = \frac{\sigma^2}{n}\cdot\frac{d_{11}}{D} + O(n^{-2}). \qquad (28)$$

Note. If μ or σ is known, we may from the estimate θ*(h) subtract the known part of (20), and get only one condition on h_0(u). We then obtain the estimates and weight functions

$$\mu_1^* = \frac{1}{n}\sum_{\nu=1}^{n} b_1\!\left(\frac{\nu}{n+1}\right) x_{(\nu)} - \sigma\int_0^1 G(u)\,b_1(u)\,du, \qquad (29)$$

where

$$B_1(y) = b_1[F(y)] = \frac{1}{d_{11}}\,H_1(y), \qquad D^2(\mu_1^*) = \frac{\sigma^2}{n}\cdot\frac{1}{d_{11}} + O(n^{-2}),$$

and

$$\sigma_1^* = \frac{1}{n}\sum_{\nu=1}^{n} c_1\!\left(\frac{\nu}{n+1}\right) x_{(\nu)} - \mu\int_0^1 c_1(u)\,du, \qquad (30)$$

where

$$C_1(y) = c_1[F(y)] = \frac{1}{d_{22}}\,H_2(y), \qquad D^2(\sigma_1^*) = \frac{\sigma^2}{n}\cdot\frac{1}{d_{22}} + O(n^{-2}).$$

The estimate σ_1^* is asymptotically equivalent with the estimate

$$\sigma_2^* = \frac{1}{n}\sum_{\nu=1}^{n} c_1\!\left(\frac{\nu}{n+1}\right)\big(x_{(\nu)} - \mu\big). \qquad (30\,\text{a})$$

5. On the asymptotical efficiency of the estimates

If g(x; θ_1, …, θ_k) is a pdf depending on k parameters θ_1, …, θ_k, the Cramér–Rao theorem (cf. Cramér [2]) gives the minimum variance of any regular unbiased estimate θ_i* or the minimum generalized variance of any joint estimate (θ_1*, θ_2*, …, θ_k*) in terms of the matrix D defined by

$$d_{\mu\nu} = E\left[\frac{\partial\log g}{\partial\theta_\mu}\cdot\frac{\partial\log g}{\partial\theta_\nu}\right] \qquad (\mu,\nu = 1, \dots, k). \qquad (31)$$

In our case $g(x;\theta_1,\theta_2) = \frac{1}{\theta_2}\,f\!\left(\frac{x-\theta_1}{\theta_2}\right)$, and it is easy to prove that the elements d_{μν} of (24) are identical with those of (31). A simple comparison of the variances then proves


Theorem 2. If the conditions C 1 and C 2 of the lemmata 1 and 2 are satisfied, then

1) (μ*, σ*) as defined by (27) and (28) are asymptotically joint efficient;

2) μ_1^* and σ_1^* defined by (29) and (30) are asymptotically efficient;

3) μ* and σ* are each of asymptotical minimum variance if the other is regarded as an unknown nuisance parameter. They are each asymptotically efficient if the matrix element

$$d_{12} = d_{21} = \int_{-\infty}^{\infty}\frac{f'(y)}{f(y)}\left[1 + y\,\frac{f'(y)}{f(y)}\right] f(y)\,dy = \text{(by C 2)} = \int_{-\infty}^{\infty} y\,\frac{f'^2(y)}{f(y)}\,dy = 0.$$

(This applies e.g. to all symmetric distributions.)
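The last display can be checked numerically; a modern sketch (not in the original) for the logistic density f(y) = e^{-y}/(1+e^{-y})², for which f'(y)/f(y) = -tanh(y/2) and the integrand y f'²/f is odd:

```python
import math

f = lambda y: math.exp(-y) / (1 + math.exp(-y)) ** 2   # logistic pdf
r = lambda y: -math.tanh(y / 2)                        # f'(y) / f(y)

# Midpoint rule on [-20, 20] for d12 = integral of y * f'(y)^2 / f(y) dy.
m, L = 4000, 20.0
h = 2 * L / m
ys = [-L + (i + 0.5) * h for i in range(m)]
d12 = sum(y * r(y) ** 2 * f(y) for y in ys) * h
print(abs(d12) < 1e-9)              # True: the odd integrand integrates to zero
```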

6. Applications

a. Student's distribution with m degrees of freedom. Put

$$F(y) = S_m(y) \quad\text{and}\quad f(y) = s_m(y) = c_m\left(1+\frac{y^2}{m}\right)^{-\frac{m+1}{2}}.$$

From eq. (23) we get

$$\gamma_1(y) = \frac{m+1}{m}\cdot\frac{y}{1+\dfrac{y^2}{m}}, \qquad \gamma_2(y) = \frac{m+1}{m}\cdot\frac{y^2}{1+\dfrac{y^2}{m}} - 1.$$

From (24) we have

$$d_{\mu\nu} = \int_{-\infty}^{\infty} \gamma_\mu(y)\,\gamma_\nu(y)\,f(y)\,dy.$$

Introducing $1+\frac{y^2}{m} = \frac{1}{t}$, the integrals d_{μν} are easily evaluated, and we get

$$d_{11} = \frac{m+1}{m+3}, \qquad d_{12} = d_{21} = 0, \qquad d_{22} = \frac{2m}{m+3}. \qquad (32)$$

From (25) we get


Fig. 1. Weight functions b(u) of μ* (– – –) and c(u) of σ* (———) for Student's distribution with 3 and 30 degrees of freedom.

$$H_1(y) = \frac{m+1}{m}\cdot\frac{1-\dfrac{y^2}{m}}{\left(1+\dfrac{y^2}{m}\right)^{2}}, \qquad H_2(y) = \frac{2(m+1)}{m}\cdot\frac{y}{\left(1+\dfrac{y^2}{m}\right)^{2}},$$

and finally from (27) and (28)

for μ*:

$$B(y) = \frac{m+3}{m}\cdot\frac{1-\dfrac{y^2}{m}}{\left(1+\dfrac{y^2}{m}\right)^{2}}, \qquad D^2(\mu^*) = \frac{\sigma^2}{n}\cdot\frac{m+3}{m+1} + O(n^{-2});$$

for σ*:

$$C(y) = \frac{(m+3)(m+1)}{m^{2}}\cdot\frac{y}{\left(1+\dfrac{y^2}{m}\right)^{2}}, \qquad D^2(\sigma^*) = \frac{\sigma^2}{n}\cdot\frac{m+3}{2m} + O(n^{-2}).$$

Introducing u = S_m(y) in B(y) and C(y), we get the weight functions b(u) and c(u) to be used in the estimate θ*(h). In Fig. 1, b(u) and c(u) are shown for m = 30 and m = 3.
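A modern spot-check of (32) and of the sign change of B(y) visible in Fig. 1 (not part of the original; m = 3 is assumed for illustration): simulate t₃ variates and estimate d_11 and d_22 by Monte Carlo, then evaluate B(y) at a central and an extreme point.

```python
import math
import random

random.seed(4)
m = 3
def t_sample():
    z = random.gauss(0.0, 1.0)                     # t_m = N(0,1)/sqrt(chi2_m/m)
    chi2 = 2.0 * random.gammavariate(m / 2.0, 1.0)
    return z / math.sqrt(chi2 / m)

g1 = lambda y: (m + 1) * y / (m + y * y)           # gamma_1(y)
g2 = lambda y: (m + 1) * y * y / (m + y * y) - 1   # gamma_2(y)
N = 200_000
ys = [t_sample() for _ in range(N)]
d11 = sum(g1(y) ** 2 for y in ys) / N              # close to (m+1)/(m+3) = 2/3
d22 = sum(g2(y) ** 2 for y in ys) / N              # close to 2m/(m+3)   = 1
print(round(d11, 2), round(d22, 2))

B = lambda y: (m + 3) / m * (1 - y * y / m) / (1 + y * y / m) ** 2
print(B(0.0), B(3.0))   # 2.0 and -0.25: central observations up-weighted,
                        # extreme observations get negative weight
```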


For m = 1 or 2, the second moment of X is infinite, and therefore the finite expression (21) does not represent the variance of μ* or σ*. (Leaving out the 4 extreme observations we can, however, obtain consistent estimates for μ and σ.) For m → ∞, b(u) → 1 in the interior of (0, 1), and hence the sample mean x̄ is an asymptotically efficient estimate of the mean of a normal distribution, which is no overstatement.

When m → ∞, |c(u)| tends to infinity at u = 0 or 1. Thus we cannot apply Lemma 1 and are not justified in using as weight function for σ* the limit Φ^{-1}(u), as m → ∞, of c(u) for m degrees of freedom.

b. Distributions of the Pearson type III. With $f(y) = \frac{y^{\lambda-1}e^{-y}}{\Gamma(\lambda)}$ for 0 ≤ y, we obtain formally

$$H_1(y) = \frac{\lambda-1}{y^{2}}, \qquad H_2(y) = 1,$$

$$d_{11} = \frac{1}{\lambda-2}, \qquad d_{21} = d_{12} = 1, \qquad d_{22} = \lambda.$$

As soon as μ is unknown, we cannot avoid the use of H_1(y) or h_1(u), which do not satisfy condition C 1. But in the common case where μ is known, we can apply (30) or (30 a) and obtain the asymptotically efficient estimate

$$\sigma_2^* = \frac{1}{n}\sum_{\nu=1}^{n} c_1\!\left(\frac{\nu}{n+1}\right)\big[x_{(\nu)} - \mu\big] = \frac{\bar{x} - \mu}{\lambda}, \qquad\text{where } c_1(u) = \frac{1}{\lambda},$$

with the variance

$$D^2(\sigma_2^*) = \frac{\sigma^2}{n\lambda} + O(n^{-2}).$$
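A modern simulation sketch (not in the original; the values λ = 4, μ = 0, σ = 2 are assumed for illustration) confirms that σ₂* = (x̄ - μ)/λ is unbiased with variance close to σ²/(nλ):

```python
import random

random.seed(5)
lam, mu, sigma = 4.0, 0.0, 2.0
n, reps = 500, 1000
ests = []
for _ in range(reps):
    # X = mu + sigma * Y with Y ~ Gamma(lambda, 1)
    xs = [mu + sigma * random.gammavariate(lam, 1.0) for _ in range(n)]
    ests.append((sum(xs) / n - mu) / lam)          # sigma_2* for this sample
mean = sum(ests) / reps
var = sum((e - mean) ** 2 for e in ests) / (reps - 1)
print(round(mean, 2))                    # close to sigma = 2
print(round(n * lam * var / sigma ** 2, 1))   # close to 1, i.e. var ~ sigma^2/(n*lam)
```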

7. Remarks

It should be stressed that the conditions C 1 and C 2 are not necessary conditions. The essential part of C 2 is the condition lim g_1(u) = lim g_2(u) = 0 as u → 0 or u → 1. In terms of f(y) this implies that lim f(y) = lim y f(y) = 0 as y tends to the endpoints of the distribution. If these endpoints are finite, this is an essential condition, reminiscent of the conditions for the existence of a regular estimate. (Cf. Cramér [2], p. 485, ex. 4.)

The condition C 1 appears to be unnecessarily restrictive. If it were possible to allow h(u) to tend to infinity at u = 0 or 1 and still have a remainder term in (3) which tends to zero not too slowly, some of the difficulties in the applications could be removed.

Among all the problems left open, the most important one seems to be to find conditions on h under which θ* is asymptotically normal. If such a condition could be derived, we should have a restricted class of BAN (Best Asymptotically Normal) estimates (cf. Neyman [5]).


REFERENCES

1. CRAMÉR, H., Mathematical Methods of Statistics, Princeton 1946.

2. ——, A contribution to the theory of statistical estimation. Skand. Aktuarietidskrift, 1946, p. 85.

3. DOWNTON, F., Least-squares estimates using ordered observations. Ann. Math. Stat., Vol. 25, p. 303.

4. LLOYD, E. H., Least-squares estimation of location and scale parameters using order statistics. Biometrika, Vol. 39, p. 88.

5. NEYMAN, J., Contribution to the theory of the χ² test. Proceedings of the Berkeley Symposium, Berkeley 1949, p. 239.

6. SARHAN, A. E., Estimation of the mean and standard deviation by order statistics. Ann. Math. Stat., Vol. 25, p. 317.

Tryckt den 21 april 1955

Uppsala 1955. Almqvist & Wiksells Boktryckeri AB
