HAL Id: hal-00893855
https://hal.archives-ouvertes.fr/hal-00893855
Submitted on 1 Jan 1990
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Original article

Variance estimation from integrated likelihoods (VEIL)

D Gianola 1, JL Foulley 2

1 University of Illinois, Department of Animal Sciences, Urbana, IL 61801, USA;
2 Institut National de la Recherche Agronomique, Station de Génétique Quantitative et Appliquée, Centre de Recherches de Jouy-en-Josas, 78352 Jouy-en-Josas Cedex, France

(Received 17 January 1990; accepted 22 August 1990)
Summary - A method of variance component estimation in univariate mixed linear models based on the exact or approximate posterior distributions of the variance components is presented. From these distributions, posterior means, modes and variances can be calculated exactly, via numerical methods, or approximately, using inverted χ² distributions. Although particular attention is given to a Bayesian analysis with flat priors, informative prior distributions can be used without great additional difficulty. Implementation of the exact analysis can be taxing from a numerical point of view, but the computational requirements of the approximate method are not greater than those of REML.
variance components / maximum likelihood / Bayesian methods
Résumé - Variance estimation from integrated likelihoods (VEIL). This article presents a method for estimating the variance components of univariate mixed linear models, based on exact and approximate expressions for the posterior distribution of these components. The means, modes and variances of these distributions can be calculated exactly, by numerical methods, or approximately, by using inverted χ² distributions. Although attention is centred on a Bayesian analysis involving uniform priors, recourse to informative prior distributions is possible without notable additional difficulty. Implementation of the exact analysis can be demanding from a numerical standpoint, but the approximate method requires no more computation than REML.

variance components / maximum likelihood / Bayesian methods
INTRODUCTION
Harville and Callanan (1990) noted that likelihood-based methods of variance component estimation have gained favor among quantitative geneticists. In particular, the procedure known as "restricted maximum likelihood", or REML for short (Thompson, 1962; Patterson and Thompson, 1971), is now widely regarded as the method of choice. REML maximizes with respect to the variances only the part of the likelihood function (normality is assumed) that does not depend on fixed effects. In so doing, Patterson and Thompson (1971) obtained an estimator that "accounts for the degrees of freedom lost in the estimation of fixed effects" which, according to their reasoning, is not accomplished by full maximum likelihood (ML). With balanced data, the REML estimating equations reduce to those used in estimation by analysis of variance (ANOVA), so that if the ANOVA estimates are within the parameter space, these are REML as well.
The basic idea underlying REML is obtaining a likelihood-based estimator (thus retaining the usual asymptotic properties) while reducing the bias of ML. However, REML estimators are biased as well and, because these are constructed from a partial likelihood, one should expect larger variance of estimates than with ML. A more reasonable comparison is in terms of a loss function such as mean-squared error, but Monte Carlo studies (for example, Swallow and Monahan, 1984) have given ambiguous results. In fact, in a purely fixed model with a single location parameter and 1 variance component, the ML estimator has smaller mean-squared error than REML throughout the whole parameter space, ie, REML is inadmissible under quadratic loss for this model. Lacking evidence to the contrary, a similar sampling behavior can be expected at least in some other models. Searle (1988) puts it concisely: "It is difficult to be anything but inconclusive about which of ML and REML is the preferred method". The 2 procedures have the same asymptotic properties (although their asymptotic variance is different) and unknown small sample distributions, a feature shared by all sampling theory estimators of variance components. Although ML and REML estimates are defined inside the appropriate parameter space, their asymptotic distribution (normal) can generate interval estimates that include negative values. This potentially embarrassing phenomenon is often overlooked in discussions of likelihood-based methods.
Harville (1974) showed that REML corresponds to the mode of the marginal posterior density of the variance components in a Bayes analysis with flat priors for fixed effects and variance components. The fixed effects (β) are viewed in REML as nuisance parameters and, therefore, are eliminated from the full likelihood by integration, so that the restricted likelihood is a marginal likelihood (Dawid, 1980), seemingly a more appropriate term. This likelihood does take into account the error incurred in estimating β, because it can be viewed as a weighted average of likelihoods of variance components evaluated at all possible values of β, the weight function being the posterior density of the fixed effects (proportional to the marginal likelihood of β in this context). Gianola et al (1986) gave a Bayesian justification of REML when estimating breeding values with unknown variance components. The same, by virtue of the symmetry of the Bayesian analysis, applies to estimation of fixed effects.

There
are 2 aspects of REML that have not received sufficient discussion. First, the method produces joint modes rather than marginal modes. If the loss function is quadratic, the optimum Bayes estimator is the posterior mean, and marginal modes provide better approximations to this than joint modes (O'Hagan, 1976). Second, in some problems not all variance parameters have equal importance. For example, suppose that there is interest in making inferences about the amount of additive genetic variance; this may require a model that, in addition to fixed effects, includes random herd, additive, dominance, permanent environmental and temporary effects. In this situation, the marginal or restricted likelihood of the variance components involves 5 dimensions, 1 for each of the variances, yet all components other than the additive one should be regarded as nuisance parameters. Carrying the logic of Patterson and Thompson (1971) 1 step further, REML would not take into account the error incurred in estimating the nuisance variances and, therefore, only the part of the likelihood that is a function of the additive genetic variance should be maximized. Construction of this likelihood employing classical statistical arguments seems impossible.
The objective of this note is to present a new method of variance component estimation in univariate mixed linear models based on the principle of finding the marginal posterior density of each of the variances. From this, means, modes, variances and higher order moments of the distribution of variance components can be calculated exactly, using numerical methods, or analytically via approximations. Particular attention is given to the Bayesian analysis with flat priors for fixed effects and variance components.

MODEL AND DEFINITIONS
Model
Consider the mixed linear model:

y = Xβ + Z_1 u_1 + Z_2 u_2 + ... + Z_c u_c

where:
y: data vector;
X, Z_i: known incidence matrices. For i = c, Z_i = I, an identity matrix of appropriate order;
β: p x 1 vector of uniquely defined fixed effects (X has full-column rank);
u_i: q_i x 1 random vector distributed as N(0, A_i σ_i²), where A_i is a known square, positive-definite matrix and σ_i² (i = 1, 2, ..., c) is a variance component.
It is assumed that all u_i and u_j (i ≠ j) are mutually independent. For i = c, A_c is often taken to be an identity matrix of appropriate order; this will be assumed in the present paper.
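As a small illustration of the model just defined, the sketch below simulates records with c = 2: one cluster effect (A_1 = I) plus a residual (Z_2 = I). The dimensions and variance values are made up for the example.

```python
import random

# Simulate y = X*beta + Z1*u1 + u2, with u1 ~ N(0, I*sigma1^2) (cluster
# effects, A_1 = I) and u2 ~ N(0, I*sigma2^2) (residual, Z_2 = I).
# All dimensions and variances are illustrative.
random.seed(1)
beta, sigma1, sigma2 = 10.0, 2.0, 1.0
q1, n_per = 5, 4                                  # 5 clusters, 4 records each
u1 = [random.gauss(0.0, sigma1) for _ in range(q1)]
y = [beta + u1[j] + random.gauss(0.0, sigma2)
     for j in range(q1) for _ in range(n_per)]
print(len(y))  # 20 records
```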
The marginal distribution:

y ~ N(Xβ, Σ_{i=1}^{c} Z_i A_i Z_i' σ_i²)

gives rise to the "full" likelihood function, from which ML estimators of β and of each of the variance components can be obtained.

Bayesian elements

In a Bayesian setting, the normal distributions assigned independently to the u_i's are viewed as prior probability distributions as well. To complete the model, assignment of prior probability distributions to the variance components is necessary. In this paper, and for the sole purpose of showing how a higher degree of marginalization can be achieved than in REML, independent "flat" prior distributions are assigned to each of the variances σ_i². With this formulation, the joint posterior density of β and of the variances becomes:
so that there is no information about fixed effects and variance parameters beyond that contained in the likelihood function. If (4) is maximized jointly with respect to β, σ_1², σ_2², ..., σ_c², one obtains the maximum likelihood estimates of all parameters. If (4) is integrated with respect to β, the marginal (restricted) likelihood function of the variance components is obtained (Harville, 1974). This contains no information about β, and maximization of this marginal likelihood with respect to the variances gives restricted maximum likelihood estimates of these parameters.

REPRESENTATION OF THE MARGINAL (RESTRICTED) LIKELIHOOD
As shown by Gianola et al (1990a, 1990b), integration of (4) with respect to β gives:

where:

and

is the solution to Henderson's mixed model equations.
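To make the reference to Henderson's mixed model equations concrete, the following sketch builds and solves the system [X'X, X'Z; Z'X, Z'Z + αI][β̂; û] = [X'y; Z'y] for a toy model with one fixed mean, one random effect and an assumed variance ratio α; all numbers are hypothetical.

```python
# Henderson's mixed model equations for y = X*beta + Z*u + e, with
# u ~ N(0, I*sigma_u^2), e ~ N(0, I*sigma_e^2), alpha = sigma_e^2/sigma_u^2.
# Toy data: 6 records, 1 fixed mean, 2 levels of the random effect.

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

y = [3.1, 2.9, 3.4, 2.5, 2.7, 3.0]
Z = [[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]]
alpha = 2.0                                   # assumed variance ratio
W = [[1.0] + [float(v) for v in Z[i]] for i in range(6)]   # W = [X Z], X = 1
C = [[sum(W[r][i] * W[r][j] for r in range(6)) for j in range(3)]
     for i in range(3)]
for j in (1, 2):
    C[j][j] += alpha                          # add alpha to the Z'Z block
rhs = [sum(W[r][i] * y[r] for r in range(6)) for i in range(3)]
sol = gauss_solve(C, rhs)                     # [beta_hat, u1_hat, u2_hat]
print([round(v, 4) for v in sol])             # [2.9333, 0.12, -0.12]
```

Note how the random-effect solutions are shrunken toward zero and sum to zero here, a familiar property of these equations in a balanced layout.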
Although (5) is a posterior density, it is strictly proportional to the marginal likelihood of the variance components.

POSTERIOR DENSITY OF THE RESIDUAL VARIANCE
AND OF RATIOS OF VARIANCE COMPONENTS

Consider the one-to-one reparameterization from σ_1², σ_2², ..., σ_c² to

α_c = σ_c²   (6a)

and

α_i = σ_c² / σ_i²,   i = 1, 2, ..., c − 1   (6b)
Because maximum likelihood estimates are invariant under one-to-one reparameterizations, maximization of (5) with respect to the σ²- or to the α-parameters gives the same estimates. However, and arguing from a likelihood viewpoint, it is unclear how (5) can be factorized into separate components for each of the old or new parameters. Hence, estimates of a particular σ_i² or α_i from (5) are obtained in the presence of the remaining σ²'s or α's. To the extent that these are viewed as nuisance parameters, REML would not take into account the "degrees of freedom lost" in the estimation of such parameters.
In the Bayesian approach parameters are viewed as random variables, so a change from a σ²- to an α-parameterization must be made using the theory of transformation of random variables. The absolute value of the determinant of the Jacobian matrix of the transformations indicated in (6a) and (6b) is:

Using this in (5) gives as joint posterior density of the α's:

The range of the α's depends on the model in question. The α_c parameter is always larger than 0. In an "animal" model with 2 variance components, the ratio between the environmental and the additive genetic variances can vary between 0 and infinity. On the other hand, in a "sire" model, the corresponding ratio varies between 3 and infinity.
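The lower bound of 3 quoted for the sire model follows from σ_s² = (h²/4)σ_p² and σ_e² = σ_p² − σ_s², so that the ratio equals (4 − h²)/h², which is at least 3 whenever heritability h² ≤ 1. A minimal numerical check (the symbols and grid are illustrative):

```python
# Residual-to-sire variance ratio in a sire model: with sire variance
# sigma_s^2 = (h2/4)*sigma_p^2 and residual sigma_e^2 = sigma_p^2 - sigma_s^2,
# the ratio is (4 - h2)/h2, decreasing in h2 and equal to 3 at h2 = 1.
def sire_ratio(h2, sigma_p2=1.0):
    sigma_s2 = (h2 / 4.0) * sigma_p2
    sigma_e2 = sigma_p2 - sigma_s2
    return sigma_e2 / sigma_s2

ratios = [sire_ratio(k / 100.0) for k in range(1, 101)]   # h2 in (0, 1]
print(min(ratios))  # 3.0, attained at h2 = 1
```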
POSTERIOR DENSITY OF THE RATIOS OF VARIANCE COMPONENTS
With the above parameterization, α_c can be integrated out analytically using:

Although the joint posterior density (8) is not in any obviously recognizable form, the components of the modal vector, or α-values maximizing (8), can be calculated. Let the logarithm of (8) be L, so that:

where:
The mixed model "residual" sum of squares B(.) depends on the variance ratios through the solution vector b̂. Following Macedo and Gianola (1987), and suppressing the dependence of B(.) on the α's in the notation, we have for i = 1, 2, ..., c − 1:

Setting (10) to zero and rearranging:

with:
Above:

is a block-diagonal matrix, and

where C_ii is defined below. Using (12a) and (12b) in (11) and rearranging gives, for i = 1, 2, ..., c − 1:

Above,

and C_ii is the partition of (W'W + Σ)⁻¹ corresponding to the u_i effects. Note that q_i ≥ 4 and n > p + 2c are required.
The expression in (13) can be interpreted as the ratio between an "estimator" of σ_c² and an "estimator" of σ_i². Because (13) is not explicit in the α's, the expression must be iterated upon, thus generating a sequence of successive approximations. For example, starting with α_i^[0] (i = 1, 2, ..., c − 1), one can solve the mixed model equations, evaluate the right hand-side of (13), obtain a new set of α's and repeat the calculations until convergence.
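As an illustration of this "solve, evaluate, repeat" scheme, the sketch below iterates on a single variance ratio α = σ_e²/σ_u² for a toy one-random-effect model, using the familiar EM-REML-type updates at each round; this is an assumed stand-in of the same fixed-point flavor, not the paper's exact expression (13), and the data are hypothetical.

```python
# Successive approximations for alpha = sigma_e^2 / sigma_u^2 in
# y = X*beta + Z*u + e (one random effect).  Each round: solve Henderson's
# equations at the current alpha, then apply EM-REML-type updates
# (illustrative stand-in, not the paper's eq. (13)).

def inverse(A):
    """Invert a small matrix by Gauss-Jordan elimination with pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for r in range(n):
            if r != k:
                f = M[r][k]
                M[r] = [vr - f * vk for vr, vk in zip(M[r], M[k])]
    return [row[n:] for row in M]

# Toy data: 8 records, overall mean (p = 1), 2 random-effect levels (q = 2).
y = [5.1, 4.8, 5.3, 4.6, 1.2, 0.9, 1.5, 0.8]
Z = [[1, 0]] * 4 + [[0, 1]] * 4
n, p, q = 8, 1, 2
W = [[1.0] + [float(v) for v in Z[i]] for i in range(n)]

sig_e, sig_u = 1.0, 1.0              # starting values
delta = 1.0
for _ in range(500):
    alpha = sig_e / sig_u            # current variance ratio
    C = [[sum(W[r][i] * W[r][j] for r in range(n)) for j in range(p + q)]
         for i in range(p + q)]
    for j in range(p, p + q):
        C[j][j] += alpha             # Henderson's coefficient matrix
    rhs = [sum(W[r][i] * y[r] for r in range(n)) for i in range(p + q)]
    Cinv = inverse(C)
    sol = [sum(Cinv[i][j] * rhs[j] for j in range(p + q)) for i in range(p + q)]
    u_hat = sol[p:]
    tr_cuu = sum(Cinv[p + i][p + i] for i in range(q))
    new_u = (sum(v * v for v in u_hat) + sig_e * tr_cuu) / q
    new_e = (sum(v * v for v in y)
             - sum(sol[i] * rhs[i] for i in range(p + q))) / (n - p)
    delta = abs(new_u - sig_u) + abs(new_e - sig_e)
    sig_u, sig_e = new_u, new_e
    if delta < 1e-12:
        break
print(round(sig_e, 4), round(sig_u, 4))
```

Each round needs only the solution vector and the inverse of the coefficient matrix, the same quantities required by REML algorithms, which is the point made later in the Discussion.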
In addition to first order algorithms such as the one suggested above, computing strategies based on second derivatives are possible. These are tedious and are not presented here. The important point is that the components of the mode of the joint posterior density of α_1, α_2, ..., α_{c−1} can be obtained without great conceptual difficulty. Hereafter, these components will be denoted as α̂_1, α̂_2, ..., α̂_{c−1}. Contrary to REML, account is being taken here of the "degree of freedom" lost in estimating α_c (σ_c² in the original parameterization).
EXACT MARGINAL DISTRIBUTION OF EACH OF THE VARIANCE COMPONENTS

Residual component
Consider density (7) and write it as:

The first conditional density in (14) is obtained from (7) by regarding α_c as a variable and the remaining α's as constants. Hence:

This is exactly the kernel of the density of an inverted χ² random variable with parameters:

as before. The complete density (Zellner, 1971) is:

and the mean, mode and variance of this distribution can be shown by standard techniques to be:

provided that n − p − 2c > 4. From (14), the marginal density of α_c is:

This is a weighted average of an infinite number of inverted χ² densities as in (16), the weight function being the posterior density (8).
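For reference, an inverted χ² variable with "degree of belief" ν and scale s² has density proportional to x^{−(ν/2+1)} exp(−νs²/2x), with mean νs²/(ν − 2) for ν > 2 and mode νs²/(ν + 2). The sketch below, with illustrative values ν = 10 and s² = 2, checks the normalization and the mean by direct numerical integration:

```python
import math

# Inverted chi-square density with "degree of belief" nu and scale s2:
# p(x) = (nu*s2/2)^(nu/2) / Gamma(nu/2) * x^(-(nu/2 + 1)) * exp(-nu*s2/(2x))
def inv_chi2_pdf(x, nu, s2):
    a = nu * s2 / 2.0
    return (a ** (nu / 2.0) / math.gamma(nu / 2.0)
            * x ** (-(nu / 2.0 + 1.0)) * math.exp(-a / x))

nu, s2 = 10, 2.0
h = 0.001
grid = [h * k for k in range(1, 200000)]            # x in (0, 200)
mass = h * sum(inv_chi2_pdf(x, nu, s2) for x in grid)
mean = h * sum(x * inv_chi2_pdf(x, nu, s2) for x in grid)
print(round(mass, 3), round(mean, 3))   # ~1.0 and ~2.5 = nu*s2/(nu - 2)
```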
Whereas the marginal density of α_c cannot be written explicitly, its mean and variance can be calculated by numerical means. For example, note that:

because the expectation is calculated with respect to the distribution α_1, α_2, ..., α_{c−1} | y. This is the Bayes estimator minimizing expected posterior quadratic loss.
Evaluating the expectation in (19) requires solving Henderson's mixed model equations a large, technically an infinite, number of times. Monte-Carlo (Kloek and van Dijk, 1978) or numerical (Smith et al, 1987) integration procedures are needed to complete the calculation of (19).
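Schematically, such a computation discretizes the variance ratios on a grid, evaluates the conditional posterior mean at each point, and averages under normalized weights. In the sketch below both the conditional-mean function and the weight function are hypothetical stand-ins for the quantities in (16)-(19):

```python
import math

# Grid approximation to E[sigma_c^2 | y] as a weighted average of
# conditional means over the ratio alpha; both functions are made up.
def cond_mean(alpha):        # stand-in for the conditional posterior mean
    return 1.0 + 0.5 / (1.0 + alpha)

def weight(alpha):           # stand-in for the posterior density of the ratio
    return math.exp(-0.5 * (math.log(alpha) - 1.0) ** 2) / alpha

grid = [0.05 * k for k in range(1, 2001)]          # alpha in (0, 100]
w = [weight(a) for a in grid]
total = sum(w)
post_mean = sum(cond_mean(a) * wa for a, wa in zip(grid, w)) / total
print(round(post_mean, 3))
```

In the real computation, each grid point would require one solution of the mixed model equations, which is why Monte Carlo or quadrature schemes are needed for anything beyond small problems.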
Other components

Now consider the parameterization:

(i different from c)

The absolute value of the Jacobian of this transformation is α_i. The joint posterior density of σ_i² and α_1, α_2, ..., α_{c−1} is then, from (7):

As in the preceding section, given the ratios of variance components, and provided that each of the variances σ_i² takes values between 0 and ∞, one obtains:

where:
Again, (22) is the density of an inverted χ² random variable with the features:

Using the argument of (18), the marginal density of σ_i² is a weighted average of an infinite number of inverted χ² densities, the posterior density of the ratios of variance components, (8), being the weight function. The marginal posterior density cannot be written in explicit form, but its moments can be evaluated numerically. For example:

where the expectation is taken with respect to the distribution α_1, α_2, ..., α_{c−1} | y. As discussed earlier, carrying out this calculation requires numerical integration or Monte Carlo techniques.
APPROXIMATE MARGINAL DISTRIBUTION OF EACH OF THE VARIANCE COMPONENTS
Because of the numerical difficulties arising in the marginal analysis of the variance components, it is of interest to develop an approximate analytical treatment of the resulting distributions. If the posterior density of the ratios of variance components is peaked or symmetrical about its mode, one can resort to the well known approximations (Box and Tiao, 1973; Gianola and Fernando, 1986; Gianola et al, 1986):

and
where the α̂'s are the components of the mode of the posterior distribution of the variance ratios, with density as in (8). The marginal densities of the variance components can be approximated using densities (16) and (22), but with the parameters now being:

where b̂ is the solution to Henderson's mixed model equations evaluated at α̂_1, α̂_2, ..., α̂_{c−1}. Because the densities are in inverted χ² form, one can readily obtain their moments and construct point estimates of the variance components in a Bayesian decision-theoretical context. For example, for a quadratic loss function, the estimator that minimizes expected quadratic loss is the posterior mean, or:

and
Estimators minimizing "all or none" posterior loss are:

and

The posterior variances can be approximated by evaluating the corresponding expressions. The distributions can be depicted graphically by plotting densities (16) and (22), evaluated at the α̂ values, thus giving a full (approximate) solution to the problem of inferences regarding variance components in a univariate mixed linear model.

DISCUSSION
Harville (1990) stated: "A more extensive use of Bayesian ideas by animal breeders and other practitioners is desirable and is more feasible from a computational standpoint than commonly thought... The Bayesian approach can be used to devise prediction procedures that are more sensible - from both a Bayesian and frequentist perspective - than those in current use".

In this paper we have used the Bayesian mechanism of integrating nuisance parameters to construct marginal likelihoods from which marginal inferences about each of the variance components can be made. In this respect, the method advanced in this paper goes beyond REML in degree of marginalization. In REML, the fixed effects are viewed as nuisance parameters and are integrated out of the likelihood (Harville, 1974), so inferences about variance components are carried out jointly. In the present paper, fixed effects and other variance components also regarded as nuisances are integrated out, so that inferences about individual variances of interest can be completed after taking into account, exactly or approximately, the error due to not knowing all nuisance parameters. Hence, practitioners that accept REML in terms of the argument advanced by Patterson and Thompson (1971), ie, taking into account degrees of freedom "lost", should also feel comfortable with our approach, because additional degrees of freedom (stemming from the nuisance variance components) are also taken into account.

When fixed effects are viewed as nuisance parameters, it is fortunate that the likelihood can be separated into a component that depends on the fixed effects plus an "error contrast" part (Patterson and Thompson, 1971). However, in likelihood inference it seems impossible to eliminate nuisance parameters other than by ad hoc methods or without recourse to Bayesian ideas. A possible method for dealing with nuisance variances would be treating all random effects other than that of interest as fixed, and then estimating the appropriate component by some translation invariant procedure such as REML. This suggests that the distribution of the resulting statistic would not depend on parameters of the distribution of the nuisance random effects. On the other hand, the variance of interest would be estimated jointly with the residual variance; in most instances this should cause little difficulty because of the large amount of information available about the residual component. It seems intriguing to examine the sampling properties of this method.

An interesting question is the extent to which the proposed method differs from REML. It is known that ML and REML can differ appreciably when the number of fixed effects is large relative to the number of observations. Likewise, the difference between estimates obtained with the new method and REML should be a function of the number of variance components in the model and of the amount of information about such components contained in the sample. For the sake of brevity, we will refer to the new method as "VEIL" (standing for "variance estimation from integrated likelihoods").

The finite sampling properties such as bias, variance and mean squared error of VEIL are unknown, but the same holds for REML and ML, except for some trivial models. Differences among methods depend on data structures and parameter values, and can only be assessed via simulation experiments.
When the approximate analysis described in this paper is used, the goodness of VEIL will depend on the extent to which the density of α_1, α_2, ..., α_{c−1} | y is peaked; this assumption surely does not hold when there is limited information in the data about the variance ratios. In this case, the exact approach is strongly recommended over the approximate one. With respect to large sample properties, the work of Cox (1975) on partial likelihoods suggests that the properties of maximum likelihood estimates apply to maximum marginal likelihood estimates as well. A good illustration is precisely REML relative to ML. Hence, the large sample properties of estimates obtained from the highly marginalized likelihoods presented here should be the same as those of REML although, of course, the parameters of the asymptotic distribution must differ. For example, the asymptotic variance-covariance matrix of ML estimates of variance components is different from that of REML estimates. From a Bayesian viewpoint, the method described in this paper is a marked improvement over ML and REML provided the objective of the analysis is making marginal inferences about individual components of variance, as the other parameters are treated as nuisances. As stated earlier, the marginal densities of the variance components are approximately inverted χ².
This allows one to generate a complete posterior distribution from which moments can be calculated as needed, and probability statements can be obtained at least by numerical procedures. The inverted χ² distribution is defined only over the positive part of the real line, so the embarrassing situations caused by encountering point estimates that fall outside of the parameter space, or negative values of the variances yielding a positive likelihood (Harville and Callanan, 1990), are avoided.

It is interesting to observe that the (approximate) marginal posterior densities of the variance components appear in the inverted χ² form. In a fixed model, the marginal posterior distribution of the residual variance is exactly inverted χ², and its modal value is the usual unbiased estimator (identical to REML in this case). This implies that this distribution is suitable for specifying marginal prior knowledge about variance parameters, as done, for example, by Lindley and Smith (1972) and Höschele et al (1987). For instance, suppose that the prior distribution of σ_i² is taken as inverted χ² with parameters ν_0 and s_0². Then the marginal posterior density of σ_i² can be written as:

Taking into account that the integrated likelihood is approximately proportional to

where:

and
Hence, the marginal posterior density of component σ_i² remains in the inverted χ² form. This "conjugate" property of the inverted χ² density can be used to advantage when computing the marginal density of a variance component from a large dataset. If the data can be partitioned into independent pieces, repeated use of (30) with parameters updated as in (31a) and (31b) will lead to the (approximate) marginal posterior density of the variance components. Each of these pieces of data should have a relatively large amount of information about the variance ratios, so that replacing the variance ratios α_i by the modal values α̂_i can be justified. Note in (31a) and (31b) that while the "degree of belief" parameter accumulates additively, the parameter s² is updated in the form of a weighted average, which is a common feature in Bayesian posterior analysis.
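The updating behavior described here can be sketched directly: degrees of belief add, and the scale parameter is a df-weighted average, so processing batches sequentially matches pooling them at once (all numbers are hypothetical):

```python
# Sequential update of inverted chi-square parameters (nu, s2): the "degree
# of belief" nu accumulates additively and s2 is a weighted average
# (the (31a)-(31b) style of behavior described in the text).
def update(nu_old, s2_old, nu_batch, s2_batch):
    nu_new = nu_old + nu_batch
    s2_new = (nu_old * s2_old + nu_batch * s2_batch) / nu_new
    return nu_new, s2_new

nu, s2 = update(4, 1.0, 6, 2.0)                 # -> (10, 1.6)
nu, s2 = update(nu, s2, 10, 3.0)                # -> (20, 2.3)
# Pooling both batches at once gives the same result:
nu_pool, s2_pool = update(4, 1.0, 16, (6 * 2.0 + 10 * 3.0) / 16)
print(nu, round(s2, 10), nu_pool, round(s2_pool, 10))   # 20 2.3 20 2.3
```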
The computations required for completing the approximate analysis are similar to those of REML. As illustrated in (13), an iteration constructed on the basis of successive approximations requires at each round the solution vector and the inverse of the coefficient matrix of Henderson's equations. These are also the computing requirements arising in first and second order algorithms for REML (Harville and Callanan, 1990). Derivative free algorithms should be explored as well. On the other hand, carrying out the exact analysis requires numerical integration, and the mixed model equations would need to be solved at each of the steps of integration. This may not be feasible in a large class of models. In summary, the exact method can be applied only in data sets of moderate size, but it is precisely in this situation where it would be most useful.
As stated by Searle (1988), the sampling distributions of analysis of variance and likelihood based estimates of variance components are unknown, and will probably never be known because of analytical intractability. It would be intriguing to study the extent to which the inverted χ² distributions with densities as in (16) and (22) can provide a description of the variability encountered over repeated sampling among estimates of variance components obtained with the present method. In such a study the σ̂_i²'s would be estimates obtained in different (independent) samples, and ν and s² would be parameters of the sampling distribution. In other words, perhaps the small sample distribution of point estimates obtained with VEIL can be suitably fitted by an inverted χ² distribution.
ACKNOWLEDGMENTS
This work was carried out while the senior author was at INRA, and on sabbatical leave from the University of Illinois. The support of these 2 institutions is gratefully acknowledged.
REFERENCES
Box GEP, Tiao GC (1973) Bayesian inference in statistical analysis. Addison-Wesley, Reading
Cox DR (1975) Partial likelihood. Biometrika 62, 269-276
Dawid AP (1980) A Bayesian look at nuisance parameters. In: Bayesian Statistics (Bernardo JM, De Groot MH, Lindley DV, Smith AFM, eds) Univ Press, Valencia, 167-175
Gianola D, Fernando RL (1986) Bayesian methods in animal breeding theory. J Anim Sci 63, 217-244
Gianola D, Foulley JL, Fernando RL (1986) Prediction of breeding values when variances are not known. Genet Sel Evol 18, 485-498
Gianola D, Im S, Fernando RL, Foulley JL (1990a) Mixed model methodology and the Box-Cox theory of transformations: a Bayesian approach. In: Advances in statistical methods for genetic improvement of livestock (Gianola D, Hammond K, eds) Springer-Verlag, New York-Heidelberg-Berlin, 15-40
Gianola D, Im S, Macedo FWM (1990b) A framework for prediction of breeding value. In: Advances in statistical methods for genetic improvement of livestock (Gianola D, Hammond K, eds) Springer-Verlag, New York-Heidelberg-Berlin, 210-239
Harville DA (1974) Bayesian inference for variance components using only error contrasts. Biometrika 61, 383-385
Harville DA (1990) BLUP (Best linear unbiased prediction) and beyond. In: Advances in statistical methods for genetic improvement of livestock (Gianola D, Hammond K, eds) Springer-Verlag, New York-Heidelberg-Berlin, 239-276
Harville DA, Callanan TP (1990) Computational aspects of likelihood based inference for variance components. In: Advances in statistical methods for genetic improvement of livestock (Gianola D, Hammond K, eds) Springer-Verlag, New York-Heidelberg-Berlin, 136-176
Höschele I, Gianola D, Foulley JL (1987) Estimation of variance components with quasi-continuous data using Bayesian methods. J Anim Breed Genet 104, 334-349
Kloek T, Van Dijk H (1978) Bayesian estimates of equation system parameters: an application of integration by Monte Carlo. Econometrica 46, 1-19
Lindley DV, Smith AFM (1972) Bayes estimates for the linear model. J Roy Stat Soc B 34, 1-41
Macedo FWM, Gianola D (1987) Bayesian analysis of univariate mixed models with informative priors. Eur Assoc Anim Prod, 38th Annu Meet, Lisbon, Portugal, 35 p
O'Hagan A (1976) On posterior joint and marginal modes. Biometrika 63, 329-333
Patterson HD, Thompson R (1971) Recovery of inter-block information when block sizes are unequal. Biometrika 58, 545-554
Searle SR (1988) Mixed models and unbalanced data: wherefrom, whereat and whereto? Comm Stat 17, 935-968
Smith AFM, Skene AM, Shaw JEH, Naylor JC (1987) Progress with numerical and graphical methods for practical Bayesian statistics. Statistician 36, 75-82
Thompson WA (1962) The problem of negative estimates of variance components. Ann Math Stat 33, 273-289
Zellner A (1971) An introduction to Bayesian inference in econometrics. Wiley, New York