229:2004.08.02

DEVELOPING p-VALUES: A BAYESIAN-FREQUENTIST CONVERGENCE

By D.A.S. FRASER and JUDITH ROUSSEAU

Department of Statistics, University of Toronto, Toronto, Canada M5S 3G3
Biomédicale, Université Paris 5, 75270 Paris Cedex 06, France

dfraser@utstat.toronto.edu, jroussea@biomedicale.univ-paris5.fr

ABSTRACT

Various p-values for a composite null hypothesis have had extensive attention in the Bayesian literature, with some preference shown for two versions designated ppost and pcpred; it has also been indicated that certain candidate p-values can be upgraded to these preferred p-values by the parametric bootstrap. Recent likelihood theory gives a factorization of a statistical model into a marginal density for a full-dimensional ancillary and a conditional density for the maximum likelihood variable. For any given initial or trial statistic that provides a location for a data point we develop: a special version of the Bayesian pcpred, an ancillary-based p-value designated panc, and a bootstrap-based p-value designated pbs. We then show, under moderate regularity, that these are equivalent to third order and are unique as a determination of the statistical location of the data point, as derived of course from the initial location measure; we also show that they have a uniform distribution to third order, as based on calculations in the moderate-deviations region. This gives three very different routes to a unique p-value corresponding to the initial trial measure of location and the composite null hypothesis. We thus have great flexibility for the derivation of p-values from a convergence of Bayesian and frequentist points of view. Examples indicate the ease and flexibility of the approach.

Some key words. Ancillary; Bayesian; Bootstrap; Conditioning; Departure measure; Likelihood; p-value.

1. INTRODUCTION

Consider a statistical model f(y; θ) with observed data y0 and a default or subjective prior π(θ), and suppose that some statistic t(y) is proposed to describe the location of the data with respect to the model. A central issue in statistics is to statistically calibrate t(y), or an appropriate component of it, so as to obtain a p-value that gives a true statistical position of the data relative to the model. For example, with a sample from a normal model with known mean µ0 and a proposed location measure t(y) = ȳ, we might reasonably hope that the indicated p-value would be p0 = H_{n−1}(t0), where H_{n−1} is the Student(n−1) distribution function and t0 is the observed t-statistic for assessing y0 relative to µ0; this p-value is of course the usual Student value recording the percentile position of the data with respect to the normal model located at µ0.
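
As a concrete check of this opening example, the Student p-value p0 = H_{n−1}(t0) can be approximated by Monte Carlo simulation of the t statistic under the null model, which avoids needing the Student distribution function. A sketch in Python; the data and simulation size are invented for illustration:

```python
import math
import random

random.seed(1)

def student_p_value_mc(y, mu0, n_sim=50_000):
    """Approximate p0 = H_{n-1}(t0) by simulating the t statistic under
    the null normal model located at mu0 (any null (mu0, sigma) gives the
    same t distribution, so we simulate from N(0, 1))."""
    n = len(y)
    ybar = sum(y) / n
    s = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))
    t0 = math.sqrt(n) * (ybar - mu0) / s  # observed t statistic
    below = 0
    for _ in range(n_sim):
        z = [random.gauss(0.0, 1.0) for _ in range(n)]
        zbar = sum(z) / n
        sz = math.sqrt(sum((zi - zbar) ** 2 for zi in z) / (n - 1))
        below += math.sqrt(n) * zbar / sz < t0
    return below / n_sim

y = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]   # hypothetical data
p = student_p_value_mc(y, mu0=4.0)   # close to H_5(t0)
```

The Monte Carlo value agrees with the exact Student percentile to within simulation error; for these illustrative data t0 is about 2.5, so p lands near 0.97.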

Bayarri & Berger (2000) and Robins et al. (2000) discuss a wide range of Bayesian methods for developing p-values for the model f(y; θ), with some preference indicated for two versions designated ppost and pcpred. A Bayesian p-value for t(y) is obtained by first determining a posterior density for θ derived from some aspect of the data, designated say Data1,

π(θ | Data1) = c L(θ; Data1) π(θ),

and then using it to eliminate θ from the distribution function, say G, for t(y) derived from another aspect of the data, say Data2,

p0 = ∫ G(t0; θ) π(θ | Data1) dθ.

If the full data y0 is used in both places there is a clear conflict in the probability calculations, often referred to as 'double use' of the data. To avoid this double use of the data, pcpred is obtained by taking π(θ | Data1) = π(θ | U) to be the posterior of the parameter based on some relevant statistic U and taking G(t; θ) = P{t(y) < t | U, θ} to be the conditional distribution function of t given the same statistic U. Bayarri & Berger (2000) and Robins et al. (2000) study the case where U is the conditional maximum likelihood estimator given the test statistic t(y); for this case, Robins et al. (2000) prove that when t(y) is asymptotically normal then pcpred is asymptotically uniform to first order. In many settings, however, this conditional maximum likelihood estimator is extremely difficult to obtain, as it requires an explicit expression for the density of t(y) given the parameter θ. Here we will consider pcpred based on the use of the maximum likelihood value θ̂ as the relevant statistic U to split the data information; in other words, Data1 = θ̂. Robert and Rousseau (2003) prove that for any statistic t(y), pcpred is then first-order equivalent to the conditional p-value P(t(y) < t | θ̂, θ).
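
The double-use conflict can be seen numerically. Below is a hedged Python sketch of the generic Bayesian p-value p0 = ∫ G(t0; θ) π(θ | Data1) dθ for a normal mean with known variance, taking Data1 to be the full data (here equivalent to ȳ = t0); the prior, sample size and t0 are invented for illustration. With a diffuse prior the posterior centres at t0 itself, so the resulting p-value hovers near 1/2 whatever the data:

```python
import math
import random

random.seed(2)

def phi(x):
    """Standard normal distribution function Φ."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def posterior_averaged_p(t0, n, tau=10.0, n_draws=100_000):
    """p0 = ∫ G(t0; θ) π(θ | Data1) dθ for the normal mean model with
    known σ = 1, t(y) = ȳ, prior μ ~ N(0, τ²), and Data1 = ȳ = t0
    (the double-use case discussed in the text).  All numbers are
    hypothetical; the integral is done by Monte Carlo over the posterior."""
    # conjugate posterior: μ | ȳ ~ N(m, v)
    v = 1.0 / (n + 1.0 / tau ** 2)
    m = v * n * t0
    total = 0.0
    for _ in range(n_draws):
        mu = random.gauss(m, math.sqrt(v))
        total += phi(math.sqrt(n) * (t0 - mu))   # G(t0; μ) = P(ȳ < t0 | μ)
    return total / n_draws

p = posterior_averaged_p(t0=0.8, n=20)
```

For a diffuse prior the answer is close to 1/2 essentially independently of t0, one symptom of the conflict: the same data both centre the posterior and supply the observed statistic.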

As a simple frequentist first-order p-value, we have the plug-in value

p0plug = G(t0; θ̂0),   (1.1)

where G(t; θ) = P(t(y) < t | θ) is the unconditional distribution function for t(y) and the parameter value has been replaced by the observed maximum likelihood value. This plug-in p-value can also be viewed as the result of a single parametric bootstrap calculation. For the simple normal example the plug-in p-value is Φ{t0(1 − 1/n)^{1/2}}, where Φ is the standard normal distribution function. This p-value has been shown to behave poorly in some cases; see Bayarri & Berger (2000) and Robins et al. (2000).
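
The plug-in value (1.1) is easy to realize by simulation: fit the maximum likelihood estimate, then draw samples from the fitted model and locate t0 in the simulated distribution of t. A Python sketch, with the model (normal), the statistic (sample maximum) and the data all chosen for illustration only:

```python
import math
import random

random.seed(3)

def plug_in_p(y, t=max, n_boot=20_000):
    """p_plug = G(t0; θ̂0) estimated by a single level of parametric
    bootstrap: fit the MLE, simulate from f(y; θ̂0), and locate t0.
    Model assumed here: N(μ, σ²); t defaults to the sample maximum."""
    n = len(y)
    mu_hat = sum(y) / n
    sigma_hat = math.sqrt(sum((yi - mu_hat) ** 2 for yi in y) / n)  # MLE, divisor n
    t0 = t(y)
    below = 0
    for _ in range(n_boot):
        ystar = [random.gauss(mu_hat, sigma_hat) for _ in range(n)]
        below += t(ystar) < t0
    return below / n_boot

y = [random.gauss(0.0, 1.0) for _ in range(30)]  # hypothetical data
p = plug_in_p(y)
```

For the degenerate choice t(y) = ȳ the plug-in value is 1/2 by construction, since the fitted model is centred at ȳ itself; this is one simple way to see how the plug-in value can fail to reflect the actual position of the data.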

Recent likelihood theory has produced a factorization of the model into a marginal density g(a) for a full ancillary a(y) and a conditional density h(θ̂ | a; θ) for θ̂ given a, where θ̂ has the dimension p of the parameter θ and a has the data dimension n less p; this is available to third order in the moderate-deviations region, under moderate regularity. We can then think of the model as h(θ̂ | a; θ) g(a) on a product space for (θ̂, a).

Typically it is not convenient to isolate n − p coordinates for the ancillary a; instead we use the full coordinates for points, say yc, on the observed maximum likelihood surface

S_{θ̂0} = {y : θ̂(y) = θ̂0};

such a surface has n − p dimensions, is cross-sectional to the contours of the ancillary, and thus indexes the ancillary contours. Accordingly we take a = yc, where yc designates a point on the surface S_{θ̂0}, and let da = dyc designate Euclidean measure on the surface S_{θ̂0}. Expressions for g(a) and h(θ̂ | a; θ) are recorded as (A.2) and (A.1) in the Appendix at point (i).

Although the ancillary a(y) does not have uniqueness to third order, the distribution g(a) does have such uniqueness (Fraser & Reid, 1995, 2001). The analysis here is examined in the moderate-deviations region; terms or components to order O(n^{-1}) are retained and those of higher order are omitted; considerations of the large-deviations region are postponed.

For the simple normal example with scaling σ, the maximum likelihood value is σ̂ = (Σ y_i²/n)^{1/2}, the ancillary a = (y1, ..., yn)′ is a point restricted to the sphere σ̂ = σ̂0, the distribution of σ̂ given a is that of σχ_n/n^{1/2} where χ_n designates a chi variable on n degrees of freedom, and a is uniform on σ̂ = σ̂0.
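
The split y → (σ̂, a) in this example is easy to exhibit numerically: σ̂ carries the p = 1 maximum likelihood coordinate, and the rescaled point a = y σ̂0/σ̂(y) is the representative of y on the observed sphere σ̂ = σ̂0. A small deterministic Python check with made-up numbers:

```python
import math

def sigma_hat(y):
    """MLE of σ in the N(0, σ²) model: σ̂ = (Σ y_i² / n)^{1/2}."""
    return math.sqrt(sum(yi * yi for yi in y) / len(y))

y0 = [1.2, -0.7, 0.4, -2.1, 0.9]        # hypothetical observed data
s0 = sigma_hat(y0)                       # observed σ̂0: fixes the sphere

# any other sample, mapped to its representative on the observed
# surface σ̂ = σ̂0 by rescaling along its ray
y1 = [0.3, 1.8, -0.5, 0.6, -1.1]
a1 = [yi * s0 / sigma_hat(y1) for yi in y1]
```

By the scaling property σ̂(c y) = |c| σ̂(y), the mapped point a1 satisfies σ̂(a1) = σ̂0 exactly: the direction of the data is the ancillary, and only the radius σ̂ responds to σ.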

For the Bayesian p-value we follow Robert & Rousseau (2000) as indicated above and use the posterior distribution based on the marginal f(θ̂; θ) for θ̂,

π(θ | θ̂0) = π(θ) f(θ̂0; θ) / ∫ π(α) f(θ̂0; α) dα,

to eliminate θ from the conditional distribution f(a | θ̂0; θ), thus giving the density

g̃(a) = ∫ f(a | θ̂0; θ) π(θ | θ̂0) dθ

on S_{θ̂0}. The proposed Bayesian pcpred is then

pcpred = ∫_{t(a)<t0} g̃(a) da.   (1.2)

This Bayesian p-value is examined in Section 4 and shown to be third-order equivalent to a frequentist p-value now to be defined.

The frequentist p-value is obtained directly from the ancillary distribution:

p0anc = Pg{t(a) < t0} = ∫_{t(a;θ̂0)<t0} g(a) da = Gg(t0; θ̂0),   (1.3)

where Pg designates probability using the ancillary density g(a) on S_{θ̂0} and Gg(t; θ̂0) designates the related distribution function. This ancillary p-value is examined in Section 2.

For the bootstrap p-value we let Gi designate the distribution function for a variable indexed by i, as calculated from the model f(y; θ). Then with p0 designating some initial function t(y) and with the iteration p_{i+1} = Gi(pi; θ̂), we have that p1 = pplug is the plug-in p-value and p4 = pbs is the proposed bootstrap p-value. In Section 3 we show that this bootstrap pbs is third-order equivalent to the ancillary panc.
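
The iteration p_{i+1} = Gi(pi; θ̂) can be sketched by nesting parametric bootstrap layers, each layer treating the previous p-value as the statistic to be recalibrated (Beran's prepivoting idea). The sketch below uses a normal scale model with t(y) = ȳ and small Monte Carlo sizes purely for illustration; all numbers are invented and the estimates are deliberately rough:

```python
import math
import random

random.seed(5)

def sigma_hat(y):
    """MLE of σ in the N(0, σ²) model."""
    return math.sqrt(sum(v * v for v in y) / len(y))

def t_stat(y):
    """Trial location statistic t(y) = ȳ."""
    return sum(y) / len(y)

def G(stat, y, n_sim):
    """Empirical distribution function of `stat` under θ = σ̂(y),
    evaluated at stat(y): one step p_{i+1} = G_i(p_i; θ̂)."""
    n, s = len(y), sigma_hat(y)
    t0 = stat(y)
    below = sum(stat([random.gauss(0.0, s) for _ in range(n)]) < t0
                for _ in range(n_sim))
    return below / n_sim

y0 = [0.9, -0.3, 1.4, 0.2, -0.6, 1.1, 0.5, -0.2]   # hypothetical data
p1 = G(t_stat, y0, n_sim=400)                       # plug-in step
p2 = G(lambda y: G(t_stat, y, 400), y0, n_sim=400)  # second iteration
```

Each further step recalibrates the previous p-value under the fitted model; in this sketch p1 and p2 already agree closely, and the paper's point is that by the fourth step the value matches panc to third order.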

For the simple normal example, the ancillary p-value panc is just the standard Student p-value mentioned above and accordingly has good properties. Also, in the case where t(a, θ̂0) = t(a), we have directly that p0anc is a true p-value in the sense that it is uniform. We prove in this paper that this is a general feature: the p-value is first-order asymptotically uniform (to order O_P(n^{-1/2})) for any statistic t(y); under some mild conditions on the asymptotic behaviour of t(y) it is second-order asymptotically uniform (to order O_P(n^{-1})); and under stronger asymptotic conditions on t(y) it is third-order asymptotically uniform. We thus extend the results of Robins et al. (2000) in two ways: first by relaxing the hypothesis on the statistic t(y), and second by obtaining higher-order results. The first aspect is important because in many cases the test statistic is complicated, with no available asymptotic distribution; see for instance the goodness-of-fit tests in Robert and Rousseau (2003). Moreover, a p-value is a way to obtain a universal scale for a decision procedure and can be considered from a Bayesian perspective as a calibration of the test procedure; see Robert and Rousseau (2003). Hence it is important to be as close as possible to the uniform distribution.
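
The uniformity property can be illustrated exactly in the normal scale example: with known σ0 and t(y) = σ̂, the p-value is p = P(χ²_n < nσ̂²/σ0²), which is exactly Uniform(0, 1) under the null. A Python sketch, restricted to even n so that the χ² distribution function has a closed form; the sample size, σ0 and repetition count are arbitrary:

```python
import math
import random

random.seed(6)

def chi2_cdf_even(x, n):
    """Exact P(χ²_n ≤ x) for even n, via the closed-form Poisson sum
    P(χ²_{2m} > x) = exp(-x/2) Σ_{k<m} (x/2)^k / k!."""
    assert n % 2 == 0
    s = sum((x / 2.0) ** k / math.factorial(k) for k in range(n // 2))
    return 1.0 - math.exp(-x / 2.0) * s

def p_value_scale(y, sigma0):
    """Student-type p-value in the N(0, σ²) scale model with t(y) = σ̂:
    p = P(χ²_n < n σ̂² / σ0²)  (illustration of the uniformity claim)."""
    return chi2_cdf_even(sum(v * v for v in y) / sigma0 ** 2, len(y))

# repeated sampling under the null: the p-values should be Uniform(0, 1)
n, sigma0, reps = 10, 2.0, 20_000
ps = [p_value_scale([random.gauss(0.0, sigma0) for _ in range(n)], sigma0)
      for _ in range(reps)]
mean_p = sum(ps) / reps
frac_below_10pct = sum(p < 0.1 for p in ps) / reps
```

The empirical mean sits near 0.5 and about 10% of the simulated p-values fall below 0.1, as exact uniformity requires; the paper's content is that the same behaviour holds to third order for general t(y).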

The presentation of the ancillary in terms of points on the surface S_{θ̂0} leads to convenient and accessible formulas. But it is somewhat nonstandard in that the marginal distribution for the ancillary a is recorded where one might reasonably expect to find the conditional distribution for a given the observed θ̂0. Such a conditional distribution has been used by Robert & Rousseau (2004) for simulations and has second-order dependence on θ; for θ = θ̂0 it can be given as J^{-1} |ĵθθ|^{1/2} g(a), where J = ∫ |ĵθθ|^{1/2} g(a) da is the mean value of the root information calculated on S_{θ̂0}; more generally it is available as

f(a | θ̂0; θ) = k(θ) J^{-1}(θ) exp{ℓ(θ; a) − ℓ(θ̂0; a)} |ĵθθ|^{1/2} g(a),   (1.4)

where the norming constant k(θ) has somewhat the role of a transform; for some further details see Fraser & Reid (1995). We note that a change in θ̂0 means a change in the coordinate system for the ancillary and does need recognition at points in the analysis below. Nonetheless the maximum likelihood surface can be viewed as a convenient index for the ancillary contours.

2. THE ANCILLARY p-VALUE

Consider a scalar statistic t(y) proposed as a measure of the location of the data point relative to the model f(y; θ). We assume regularity for the statistic t(y) in addition to that for the model f(y; θ). A convenient way to do this is by introducing an additional parameter γ and an enlarged model, say

f̃(y; θ, γ) = f(y; θ) exp{γ t(y) − κ(θ, γ)},

obtained by exponential tilting relative to t(y). Other enlarged models are of course possible, but we need just the presence of a corresponding ancillary d(y) of dimension n − (p + 1), a scalar reduction from the ancillary a(y) for the initial null model. This allows a one-dimensional conditional analysis of t that mimics to first order the marginal analysis of t and is central to the present calculations. We assume asymptotic properties for the enlarged model.

As a scalar variable to complement d(y) for given a(y), we define t̄(y) in terms of the γ-score variable in the enlarged model: on S_{θ̂0} we take t̄(y) = t(y); on other surfaces we obtain t̄(y) by lifting, using the conditional structure given a. We then have that y = (θ̂, a) can be given equivalently as (θ̂, t̄, d), with dimensions (p, 1, n − p − 1), in the moderate-deviations region.

Recent likelihood theory shows that the general route to the elimination of a nuisance parameter, here θ, is by marginalization over a conditional distribution describing the nuisance parameter effect (Fraser, 2004). Accordingly, with θ̂ thus removed, we will be concerned with the distribution of (t̄, d) and will focus on the conditional distribution of t̄ given d. From likelihood theory, as noted in the Appendix at point (ii), the effect of the ancillary to say third order is limited to a finite number, say k, of characteristics of the ancillary. Thus we can marginalize over unneeded characteristics and take the effective dimensions for (θ̂, t̄, d) to be (p, 1, k − 1 − p) with k fixed.

For the normal example the enlarged model is just that for a sample from the Normal(µ, σ), the statistic d(y) corresponds to the location-scale standardized residuals, and the p-value from t̄, either given d or marginally, is just the familiar Student p = H_{n−1}(t0) mentioned in Section 1.

Now consider briefly the effective form of the ancillary p-value panc given by (1.3). An observed y0 gives a surface S_{θ̂0} and values t̄0 and d0. The related ancillary set

T0 = {(θ̂, t̄, d) : t̄ < t0}   (2.1)

has probability content panc free of θ; but its boundary is based on the partition induced by t(y) on S_{θ̂0}, and different surfaces can have different partitions that need not correspond under the lifting that defined t̄. We now link the panc values and obtain the effective statistic, say t̃(y):

t̃(y) = Gg^{-1}[Gg{t(y); θ̂(y)}; θ̂0],   (2.2)

and the related set

T∗ = {y : t̃(y) < t0}.   (2.3)

This set in terms of y is free of the initial S_{θ̂0} and is shown in Section 5 to be third-order ancillary. We note that t̃(y) differs from t̄(y) by an adjustment of order O(n^{-1/2}).

3. BOOTSTRAP p-VALUE

The bootstrap p-value mentioned in Section 1 is obtained in four iterations, p_{i+1} = Gi(pi; θ̂), from an initial statistic p0 = t(y). We examine the iteration process using the coordinates (θ̂, t̄, d) described in the preceding section.

For an initial θ = θ̂0 we have an asymptotic distribution for (θ̂, t̄, d) in k dimensions, which can be standardized to be first-order Nk(0, I). Let r be the asymptotic correlation of t(y) with the vector θ̂(y); we assume that r, as a multiple correlation coefficient, is bounded away from 1. The ancillary p-value is panc = Φ(t̄) to first order, both conditionally and marginally.

The first-order bootstrap is p1(y) = Φ{t̄(1 − r²)^{1/2}}, as obtained from the first-order normal distribution of (θ̂, t̄). As a p-value it is O(1) relative to panc; but as a statistic it is equivalent to t̄(y) to order O(n^{-1/2}) and can be reexpressed as t1 = t̄(y) + B(y)/n^{1/2}.

The second-order bootstrap is then obtained from t1(y). Consider a data point y0 on S_{θ̂0} and for computational convenience write y0 as (θ̂, t̄0, d0) with d0 at its modal value under θ = θ̂0. A contour for t(y) expressed implicitly as, say, t1(y) = t1(y0) can be recorded explicitly by expressing t̄ in terms of the remaining coordinates: thus t̄ = t̄0 + b(θ̂, d)/n^{1/2}, where the order of the adjustment term is recorded explicitly. For calculations to order O(n^{-1}), and when examining the O(n^{-1/2}) adjustment terms, it suffices to use the first-order standard normal for θ̂ and d. Expanding b(θ̂, d) in the deviations we obtain

t̄ = t0 + b10 θ̂/n^{1/2} + b01 d/n^{1/2} + b20 θ̂²/2n^{1/2} + b11 θ̂ d/n^{1/2} + b02 d²/2n^{1/2},   (3.1)

where the order of the linear terms derives from that of b(θ̂, d)/n^{1/2}. Integrating over the standard normal distributions for θ̂ and d we obtain

p2(y) = Gg(t0 + b20/2n^{1/2} + b02/2n^{1/2}; θ̂0).   (3.2)

As a p-value it is O(n^{-1/2}) relative to panc; but as a statistic it is equivalent to, say, t2(y) = t̄(y) + C(y)/n.

The third-step bootstrap can be calculated from t2(y) in the manner above. The explicit expression recording t̄ as a cubic version of (3.1), with terms of order O(n^{-1}), leads to

p3(y) = G(t0 + c20/2n^{1/2} + c02/2n^{1/2}; θ̂),   (3.3)

where the linear and cubic terms integrate out directly. As a p-value it is O(n^{-1}) relative to panc; but as a statistic it is equivalent to t3(y) = t̄(y) to order O(n^{-3/2}).

The fourth-step p-value then has p4(y) = panc.

The use of the bootstrap (Beran, 1988; Davison & Hinkley, 1997) was mentioned by Robins et al. (2000) as a means of upgrading various Bayesian p-values.

4. BAYESIAN p-VALUES

Consider the statistical model f(y; θ) and a candidate statistic t(y) for assessing location relative to the model. We first examine the distribution of the probability in an interval θ̂(y0) ± δ/2 about the observed maximum likelihood surface S_{θ̂0}. From the Appendix at point (i) we have

exp{ℓ(θ; y)} |ℓθ;y(θ̂; a)|^{-1} |ĵ| da · δ,   (4.1)

where ℓ(θ; y) here is the log-density log f(y; θ). We now average this with respect to an indicated prior π(θ) using standard Laplace integration:

c |ĵ|^{-1/2} exp{ℓ(θ̂0; y)} |ℓθ;y(θ̂; a)|^{-1} |ĵ| da · δ = g(a) da · δ,   (4.2)

where c = 1 + O(n^{-1}) as obtained from third and fourth derivatives at the maximum, and g(a) is given as (A.2) in the Appendix. Thus the average, using the prior, of the unnormed distribution of probability in the ±δ/2 interval around the surface S_{θ̂0} is just the marginal distribution of the ancillary a.

Now consider pcpred based on the maximum likelihood statistic as the statistic separating the data information, as at (1.4). The posterior is obtained from the maximum likelihood statistic and is proportional to π(θ) f(θ̂0; θ). This is then used to average f(a | θ̂0; θ), giving

∫ π(θ) f(θ̂0; θ) f(a | θ̂0; θ) dθ = ∫ π(θ) f(y0; θ) |ℓθ;y(θ̂0; a)|^{-1} |ĵ| dθ,

which reproduces the ancillary distribution g(a) as in (4.2). It follows that pcpred = panc.

5. THE DEVELOPED p-VALUES ARE THIRD ORDER

On any particular maximum likelihood surface S_{θ̂0}, an observed data point y0 determines an ancillary/bootstrap/Bayesian region T0 that has probability content p0 = p0anc and is third-order free of θ. Under repetitions, however, with different maximum likelihood values and thus different maximum likelihood surfaces, the sets T0 may not be mutually consistent, as a particular set T0 is used only where it intersects the corresponding surface S_{θ̂0}. As recorded at (2.3), the operative set is given by T∗ and from its definition is seen to be free of the initial defining surface; thus T∗ is a statistic in structure, and it suffices to show that it has probability content p0 to third order for any particular parameter value θ = θ̂0.

For notational ease we work with scalar θ̂ and d, but the calculations extend to the vector case. We compare the probability left of t̃(y) = t0 with the probability left of t̄(y) = t0, which is of course p0; let ∆ be the difference. We express the boundary of t̃(y) = t0 in explicit form as, say, t̄ = t0 + b(θ̂, d); the difference in probability content is then

∆ = ∫_d ∫_θ̂ ∫_{t̄=t0}^{t0+b(θ̂,d)} f(t̄, θ̂, d) dt̄ dθ̂ dd,   (5.1)

where the densities are those for θ = θ̂0 and the inner integral is negative if b(θ̂, d) is negative. For convenience we assume that (θ̂, t̄, d) has been standardized so the first-order limiting distribution is N(0, I).

We expand the boundary t̄ = t0 + b(θ̂, d) in a Taylor series about (0, 0):

t̄ = t0 + {a0/n^{1/2} + a1 d/n^{1/2} + a2(d² − 1)/n} θ̂ + {b0/n^{1/2} + b1 d/n} θ̂²/2 + {c0/n} θ̂³/6,   (5.2)

where quadratic terms are O(n^{-1/2}), cubic terms are O(n^{-1}), and other terms in the braces are O(n^{-1/2}) due to the definition of t̃(y) relative to t̄(y), as noted after (2.3). Then combining (5.1) and (5.2) we obtain

∆ = ∫_θ̂ ∫_d [{·}1 θ̂ + {·}2 θ̂²/2 + {·}3 θ̂³/6] f(t0, θ̂, d) dd dθ̂.   (5.3)

The g(a) density at (1.3) is given by f(t̄, d) and, if we integrate it with the square-bracket expression in (5.3), we get zero for all values of θ̂, as again a consequence of the definition (2.2) of the boundary contour on each maximum likelihood surface. It follows that: 1) we have c0/n = O(n^{-3/2}); 2) as d is first-order N(0, 1), for terms of order O(n^{-1}) we have that b1 d/n has no effect and thus b0/n^{1/2} = O(n^{-3/2}). It follows that we can rewrite ∆:

∆ = ∫_θ̂ [∫_d [{a0/n^{1/2} + a1 d/n^{1/2} + a2(d² − 1)/n} θ̂ + {b1 d/n} θ̂²/2] f(d | θ̂, t0) dd] f(θ̂, t0) dθ̂.   (5.4)

As again d is first-order N(0, 1), for terms of order O(n^{-1}) we find that there is no contribution from the terms with coefficients a2 and b1; accordingly we can rewrite the integral in the square brackets of (5.4) as

∫_d {a0/n^{1/2} + a1 d/n^{1/2}} [f(d | θ̂, t0)/f(d | θ̂)] f(d | θ̂) dd.   (5.5)

The quotient of conditional densities can be expanded as 1 + A t0 d/n^{1/2}. The contribution of the 1 to the expression for ∆ is zero, as again a consequence of the definition (2.2). Then, combining A t0 d/n^{1/2} with the remainder of the integrand, we can rewrite (5.5) as

∫_d {A1/n + A2 d t0/n} f(d | θ̂) dd = A1/n,

which is a constant to order O(n^{-3/2}). The integral (5.4) then becomes

∆ = ∫_θ̂ θ̂ (A1/n) f(θ̂, t0) dθ̂.

As θ̂ is first-order N(0, 1), it follows that ∆ = 0, that t̃(y) is third-order ancillary, and that the developed Bayesian, frequentist, and bootstrap p-values are Uniform(0, 1) to third order.

ACKNOWLEDGEMENT

The Natural Sciences and Engineering Research Council of Canada has provided support for this research.

APPENDIX

(i) The ancillary and conditional distributions. Recent likelihood theory has developed third-order p-values for scalar parameters in quite general contexts, substantially generalizing what is available using the saddlepoint method with exponential models; see for example Fraser, Reid & Wu (1999), Fraser & Reid (1995, 2001), and Cakmak et al. (1998). We report results from this development and assume the corresponding regularity conditions; calculations are to third order in the moderate-deviations region.

The conditional distribution of the maximum likelihood value θ̂ given an assumed third-order ancillary is available from Barndorff-Nielsen's (1983) p∗ formula; it gives density at a point from likelihood information at that point. For our purposes here we want the full conditional distribution for θ̂ from likelihood information on the surface S_{θ̂0}, where the ancillary density is recorded; this is given by the tangent exponential model (Fraser & Reid, 1993, 1995):

f(θ̂ | a; θ) dθ̂ = (c/(2π)^{p/2}) exp{ℓ⁰(θ) − ℓ⁰(θ̂0) + (φ − φ̂0)′s} |ĵφφ|^{1/2} dφ̂,   (A.1)

where φ(θ) is a nominal reparameterization obtained as the gradient of likelihood for given a on S_{θ̂0}, ĵφφ is the observed information function for φ, ℓ⁰(θ) = ℓ{θ(φ); y0} is the observed likelihood treated as a function of φ, and s is the score variable at θ = θ̂0.

The marginal distribution for the ancillary a = yc is then obtained by dividing the full-model probability differential f(y; θ) dy by the preceding conditional model differential (Fraser & Reid, 1995), giving

g(a) da = ((2π)^{p/2}/c) exp{ℓ(θ̂; a)} |ℓθ;y(θ̂; a)|^{-1} |ĵθθ|^{1/2} da,   (A.2)

where |ℓθ;y| = |ℓθ;y(θ̂; y) ℓ′θ;y(θ̂; y)|^{1/2} is the nominal volume of the p row vectors in the p × n matrix ℓθ;y = (∂²/∂θ∂y′) ℓ(θ; y).

(ii) Coordinates for the ancillary. Under moderate regularity, the observed information ĵ varies to order O(n^{-1/2}). The analysis in Fraser & Reid (1995) gives background for examining such variation and how it affects the conditional distribution of θ̂ given a. If θ is locally recorded in location-type coordinates (Fraser, Reid, Wong, & Yun Yi, 2003) then the conditional model is location with variance matrix (ĵ⁰)^{-1}; this has close connections to the Bayesian-frequentist discussions in Section 4, and implies that the Welch & Peers (1963) result is available, at least to second order. Here we comment just on a corresponding simplification of coordinates for the ancillary. As just indicated, and following Fraser & Reid (ibid.), the effect of the ancillary a on the conditional distribution is dominated by the observed information function, with lesser effect from certain third- and fourth-derivative characteristics. From this, the ancillary can be summarized in terms of, say, k coordinates, and the other original coordinates can be marginalized out and ignored. Accordingly, we take a to have fixed dimension k and equivalently (t̄, d) to have dimensions (1, k − 1).

REFERENCES

Barndorff-Nielsen, O.E. (1983). On a formula for the distribution of the maximum likelihood estimator. Biometrika 70, 343-65.

Barndorff-Nielsen, O.E. (1986). Inference on full or partial parameters based on the standardised signed log likelihood ratio. Biometrika 73, 307-22.

Bayarri, M.J. and Berger, J.O. (2000). p-values for composite null models. J. Amer. Statist. Assoc. 95, 1127-1142.

Beran, R.J. (1988). Prepivoting test statistics: a bootstrap view of asymptotic refinements. J. Amer. Statist. Assoc. 83, 687-697.

Cakmak, J., Fraser, D.A.S., McDunnough, P., Reid, N., and Yuan, X. (1998). Likelihood centered asymptotic model exponential and location model versions. J. Statist. Planning and Inference 66, 211-222.

Davison, A.C., and Hinkley, D.V. (1997). Bootstrap Methods and their Application. Cambridge: Cambridge University Press.

Fraser, D.A.S. (2003). Likelihood for component parameters. Biometrika 90, 327-339.

Fraser, D.A.S. (2004). Ancillaries and conditional inference. Statistical Science. To appear.

Fraser, D.A.S., and Reid, N. (1995). Ancillaries and third order significance. Utilitas Mathematica 47, 33-53.

Fraser, D.A.S., and Reid, N. (2001). Ancillary information for statistical infer-ence. In S.E. Ahmed and N. Reid (Eds), Empirical Bayes and Likelihood Inference, 185-207. Springer-Verlag.

Fraser, D.A.S., Reid, N., Wong, A., and Yun Yi, G. (2003). Direct Bayes for interest parameters. Valencia 7, 529-533.

Fraser, D.A.S., Reid, N., and Wu, J. (1999). A simple general formula for tail probabilities for frequentist and Bayesian inference. Biometrika 86, 249-264.

Robert, C., and Rousseau, J. (2000).

Robert, C., and Rousseau, J. (2003).

Robert, C., and Rousseau, J. (2004).

Robins, J.M., van der Vaart, A., and Ventura, V. (2000). J. Amer. Statist. Assoc. 95, 1143-1156.

Welch, B.L. and Peers, H.W. (1963). On a formula for confidence points based on integrals of weighted likelihoods. J. R. Statist. Soc. B 25, 318-29.
