HAL Id: hal-00893998
https://hal.archives-ouvertes.fr/hal-00893998
Submitted on 1 Jan 1993
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Original article

Marginal maximum likelihood estimation of variance components in Poisson mixed models using Laplacian integration

RJ Tempelman¹, D Gianola²

¹ Louisiana State University, Department of Agricultural Statistics, 53 Agricultural Administration Building, Baton Rouge, LA 70803-5606, USA;
² University of Wisconsin, Department of Dairy Science, 266 Animal Sciences Building, Madison, WI 53706, USA

(Received 10 November 1992; accepted 18 May 1993)
Summary - An algorithm for computing marginal maximum likelihood (MML) estimates of variance components in Poisson mixed models is presented. A Laplacian approximation is used to integrate fixed and random effects out of the joint posterior density of all parameters. This approximation is found to be identical to that invoked in the more commonly used expectation-maximization type algorithm for MML. Numerically, however, a different sequence of iterates is obtained, although the same variance component estimates should result. The Laplacian algorithm is precisely DFREML (derivative-free REML) optimization when applied to normally distributed data, and could then be termed DFMML (derivative-free marginal maximum likelihood). Because DFMML is based on an approximation to the marginal likelihood of the variance components, it provides a mechanism for testing hypotheses about such components via posterior odds ratios or marginal likelihood ratio tests. Also, asymptotic posterior standard errors of the variance components can be computed with DFMML. A Tierney-Kadane procedure for computing the posterior mean of a variance component is also developed; however, it requires 2 joint maximizations, and consequently may not be expected to perform well in many linear and non-linear mixed models. An example of a Poisson model is presented in which the null estimate commonly found when jointly estimating variance components with fixed and random effects is observed; thus, the Tierney-Kadane procedure for computing the posterior mean failed. On the other hand, the Laplacian method succeeded in locating the mode of the marginal distribution of the variance component in a Bayesian model with flat priors for fixed effects and variance components; that is, the MML estimate.

variance component / Poisson distribution / generalized linear model / marginal maximum likelihood / Laplacian integration
INTRODUCTION
Non-linear models for quantitative genetic analysis of categorically scored phenotypes have been developed in recent years (Gianola and Foulley, 1983; Harville and Mee, 1984). In these models, it is assumed that the observed polychotomies correspond to realizations of an underlying normal variate inside intervals of the real line that are delimited by fixed thresholds. The mathematical link between the underlying and the discrete scales is, thus, the probit function. Although threshold models have been used for analysis of different types of discrete data (eg, Meijering, 1985; Weller et al, 1988; Weller and Gianola, 1989; Manfredi et al, 1991), counted variates are probably better modelled using Poisson or related distributions such as the negative binomial distribution. Non-linear Poisson models for counted variates, important traits. This is not entirely satisfactory for discrete characters because REML relies on the assumption of multivariate normality; the degree of robustness of this method to departures from normality has not been sufficiently studied. Further, unless there is a large amount of statistical information in the data about the variance parameters, the sampling performance of REML when applied to discrete traits may be unsatisfactory, as suggested by the simulation study of Tempelman and Gianola (1991).
The procedure for estimating variance components suggested by Foulley et al (1987) in their Poisson model is marginal maximum likelihood (MML). In a Bayesian context with flat priors for variances and fixed effects, this method gives as point estimates the components of the mode of the marginal posterior distribution of all variance components (Foulley et al, 1990). With normal data, MML is identical to REML. With discrete traits, such as in the Poisson model of Foulley et al (1987), approximations to MML must be used, because the exact integration of nuisance parameters (fixed and random effects) out of the joint posterior distribution is onerous. In Foulley et al (1987), the posterior distribution of fixed and random effects, given the variance component, is approximated by a multivariate normal process when computing MML estimates.

The objective of this paper is to describe another approximation to marginal maximum likelihood estimation of variance components in a Poisson mixed model based on Laplace's method of integration, as suggested by Leonard (1982) for calculating posterior modes, and by Tierney and Kadane (1986) for computing posterior means. A model with a single variance component is considered in the present study, and the relationship of Laplacian integration to derivative-free methods for computing REML with normal data is highlighted. A numerical example is presented.
THE POISSON MIXED MODEL
Foulley et al (1987) employ a Bayesian approach to make inferences in a Poisson mixed model. Given a location parameter vector, θ, the conditional distribution f(y_i | θ) of a counted variate y_i is assumed to be Poisson:

p(y | θ) = Π_{i=1}^{n} e^{−λ_i} λ_i^{y_i} / y_i!   [1]

where e denotes the natural exponent, n is the number of observations, and λ_i is the Poisson parameter for observation i. By definition, the Poisson parameter must be positive; however, the transformation η_i = ln λ_i, defined as the canonical link function for Poisson variables (McCullagh and Nelder, 1989), can take any value on the real line. Foulley et al (1987) introduce the linear relationship

η_i = w_i'θ   [2]

where θ' = [β', u'], and w_i' = [x_i', z_i'] is the ith row of the n × (p + q) incidence matrix W = [X, Z]. X and Z are known incidence matrices of dimensions n × p and n × q, respectively, relating β and u to each observation. Under the Poisson model, the mean and variance of an observation, given θ, is equal to the Poisson parameter λ_i. Hence, the residual variance in this model is precisely λ_i.
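As an illustration of the conditional likelihood [1] under the canonical log link [2], the log likelihood can be evaluated numerically; the following sketch is not part of the original paper, and all data values and names are invented:

```python
import numpy as np
from math import lgamma

def poisson_loglik(theta, W, y):
    """Log of the conditional Poisson likelihood [1] under the canonical
    log link [2]: eta_i = w_i' theta, lambda_i = exp(eta_i)."""
    eta = W @ theta                     # linear predictors
    lam = np.exp(eta)                   # Poisson parameters (always > 0)
    logfact = sum(lgamma(yi + 1.0) for yi in y)   # sum_i log(y_i!)
    return float(np.sum(y * eta - lam) - logfact)

# Invented toy data: 4 records, p = 1 fixed effect, q = 2 random effects
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0]])
y = np.array([3.0, 5.0, 2.0, 4.0])
theta = np.array([np.log(4.0), 0.1, -0.1])   # [beta, u_1, u_2]
print(poisson_loglik(theta, W, y))           # about -6.92
```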
The vectors β and u are distinct in the following sense. Typically, the elements of β pertain to levels of fixed effects such as herd, year and season, whereas those of the vector u pertain to "random" effects of the animals being recorded and of their known relatives. In a Bayesian context, a flat prior density is assigned to β and a multivariate normal prior distribution is assumed for u (Foulley et al, 1987). If u is a vector of breeding values,

u | A, σ_u² ~ N(0, Aσ_u²)   [3]

Above, A is a matrix of additive relationships, and σ_u² is the additive genetic variance.
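The prior structure in [3] can be sketched numerically by drawing breeding values with covariance Aσ_u² through a Cholesky factor of A; the 3-animal relationship matrix below (sire, dam, progeny) and the seed are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2_u = 0.05                      # additive genetic variance (illustrative)
# Additive relationship matrix for an unrelated sire and dam and their progeny
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
L = np.linalg.cholesky(A)            # A = L L'
z = rng.standard_normal(3)
u = np.sqrt(sigma2_u) * (L @ z)      # u ~ N(0, A * sigma2_u)
print(u)
```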
If the dispersion parameter σ_u² is unknown, it can be estimated from its marginal posterior distribution so as to provide a parametric empirical Bayes approach to joint estimation of β and u. When the prior density assigned to σ_u² is flat, then the mode of the marginal posterior distribution of σ_u² is identical to the maximum of the marginal likelihood of σ_u².

The unknown parameters are thus β, u, and σ_u². In animal breeding applications, often p + q > n. For example, in 'animal' models with a single observation per recorded individual, the dimension of u is often greater than the number of observations, that is, q > n. This leads to a highly parameterized model. When the elements of u are strongly intercorrelated, a potentially low degree of orthogonality can seriously slow down convergence of Markov chain Monte Carlo methods, such as Gibbs sampling, as a means of estimating marginal densities, modes, or means (Smith, 1991). Under these conditions, approximating the marginal density of σ_u² by Laplacian integration procedures may be attractive from a numerical point of view.
ESTIMATION OF THE VARIANCE COMPONENT FROM THE MODE OF ITS MARGINAL POSTERIOR DISTRIBUTION
We first assume that, conditionally on θ, the observations are independent, following a Poisson distribution as in [1]. Let ψ' = [θ', σ_u²] = [β', u', σ_u²] represent all parameters of interest. Assigning a flat prior to the variance component σ_u² and to β, such that the joint prior density of ψ is proportional to that of u, we can write the log of the joint posterior density of β, u, and σ_u² as

L(ψ) = const + Σ_{i=1}^{n} [y_i w_i'θ − exp(w_i'θ)] − ½ log|Aσ_u²| − u'A⁻¹u/(2σ_u²)   [5]

Because |Σ| = |Aσ_u²| = |A|(σ_u²)^q, where Σ = Aσ_u² is the prior covariance matrix of u, and A does not depend on the parameters, it follows that [5] is expressible as:

L(ψ) = const + Σ_{i=1}^{n} [y_i w_i'θ − exp(w_i'θ)] − (q/2) log σ_u² − u'A⁻¹u/(2σ_u²)   [6]

The joint posterior density of the full parameter set can be written as:

p(ψ | y) = p(θ | σ_u², y) p(σ_u² | y)   [7]

where p(θ | σ_u², y) is the posterior density of θ, given that the variance components are known, and p(σ_u² | y) is the marginal density of the variance parameter. Define:

θ̂ = [β̂', û']' = argmax_θ L(θ, σ_u²)   [8a]

to be the mode of the joint posterior density of θ, given σ_u². Ignoring third and higher order terms, the asymptotic approximation is then made that

θ | σ_u², y ~ N(θ̂, H⁻¹(θ̂))   [8b]

which is also used by Foulley et al (1987) to obtain approximate MML. In order to compute [8a], these authors employ the Newton-Raphson algorithm, which can be shown to lead to the iteration:

[W'Λ^{[t]}W + S] θ^{[t+1]} = W'Λ^{[t]} [Wθ^{[t]} + (Λ^{[t]})⁻¹ v^{[t]}]   [9]

where [t] indicates iterate number, Λ = diag{λ_i}, S is a (p + q) × (p + q) matrix whose only nonzero block is A⁻¹/σ_u² in the lower right q × q position, and v = {y_i − λ_i} is a residual. Note that Λ⁻¹v = {(y_i − λ_i)/λ_i} can be interpreted as a residual vector expressed in units of residual variance, or relative to the mean of the conditional distribution of the observations. It can also be shown that:

H(θ) = −∂²L(ψ)/∂θ∂θ' = W'ΛW + S   [10]
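The Newton-Raphson iteration [9] can be sketched as follows; this is an illustration rather than the authors' code, and the toy incidence matrices, counts, and variance are invented:

```python
import numpy as np

def joint_mode(W, y, Ainv, p, sigma2_u, n_iter=30):
    """Newton-Raphson for the joint posterior mode [8a] of theta = (beta, u)
    at fixed sigma2_u, iterating the mixed-model-like system [9]:
    (W' Lam W + S) theta_new = W' Lam (W theta + Lam^{-1}(y - lam)),
    where Lam = diag(lambda_i) and S carries Ainv/sigma2_u in the
    random-effects block; the coefficient matrix is the negative Hessian [10]."""
    q = Ainv.shape[0]
    S = np.zeros((p + q, p + q))
    S[p:, p:] = Ainv / sigma2_u
    theta = np.zeros(p + q)
    for _ in range(n_iter):
        lam = np.exp(W @ theta)
        H = (W.T * lam) @ W + S                 # negative Hessian [10]
        rhs = W.T @ (lam * (W @ theta) + (y - lam))
        theta = np.linalg.solve(H, rhs)
    return theta

# Invented toy data: 1 fixed effect, 2 random effects, A = I
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0]])
y = np.array([3.0, 5.0, 2.0, 4.0])
theta_hat = joint_mode(W, y, np.eye(2), p=1, sigma2_u=0.05)
print(theta_hat)
```

Because the log joint posterior [6] is concave in θ, the iteration converges rapidly from a null start in this small example.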
Note that the solution to system [9] resembles Henderson's mixed model equations. We wish to find the mode of the marginal distribution of σ_u² (or maximum of the marginal likelihood of the data) by recourse to Laplacian integration, as in Leonard (1982). Now:

p(σ_u² | y) = ∫ p(θ, σ_u² | y) dθ   [11]
 ∝ ∫ exp{L(θ, σ_u²)} dθ   [12]

A second-order Taylor series expansion of the log joint posterior density about θ̂, at a fixed σ_u², gives:

L(θ, σ_u²) ≈ L(θ̂, σ_u²) − ½ (θ − θ̂)' H(θ̂) (θ − θ̂)   [13]

Employing [13] in [12] and letting p_A(·) denote an approximate density,

p_A(σ_u² | y) ∝ exp{L(θ̂, σ_u²)} ∫ exp{−½ (θ − θ̂)' H(θ̂) (θ − θ̂)} dθ   [14]

Using this in [11] and recalling that θ | σ_u², y is approximately normal,

p_A(σ_u² | y) ∝ exp{L(θ̂, σ_u²)} (2π)^{(p+q)/2} |H(θ̂)|^{−1/2}   [15]

Taking logs of [15], and using [6], we note that apart from a constant,

L_A(σ_u² | y) = Σ_{i=1}^{n} [y_i w_i'θ̂ − λ̂_i] − (q/2) log σ_u² − û'A⁻¹û/(2σ_u²) − ½ log|H(θ̂)|   [16]

where λ̂_i = exp{w_i'θ̂} is computed from the mode of the joint posterior density of β and u, given σ_u². One can find the posterior marginal mode of σ_u² by establishing a grid of points {σ_u², L_A(σ_u² | y)} and then interpolating with a second order polynomial as in Smith and Graser (1986).
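The grid-and-interpolation search over {σ_u², L_A(σ_u² | y)} can be sketched generically: fit a parabola through three bracketing evaluations and take its vertex as the next candidate mode. The function LA below is a made-up stand-in for L_A, not the paper's density:

```python
import numpy as np

def quadratic_mode_step(s, LA_vals):
    """One Smith-Graser-type interpolation step: fit a parabola through
    three points {sigma2, L_A(sigma2 | y)} and return its maximizer."""
    a, b, _ = np.polyfit(s, LA_vals, 2)   # L_A(s) ~ a s^2 + b s + c
    return -b / (2.0 * a)                 # vertex of the parabola

# Stand-in for L_A(sigma2 | y): unimodal with its mode at 0.0347
LA = lambda s: -((s - 0.0347) ** 2) / 2e-4
s = np.array([0.01, 0.03, 0.06])
print(quadratic_mode_step(s, LA(s)))      # recovers 0.0347
```

In practice each evaluation of L_A requires solving system [9] to convergence at the trial σ_u², so the search is derivative-free in the variance parameter only.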
It is interesting to note that if the data were normally distributed, the algorithm just described reduces to that suggested by Graser et al (1987), or DFREML (Meyer, 1989).
VARIANCE COMPONENT ESTIMATION FROM THE POSTERIOR MEAN
Theory
The posterior mean is an attractive point estimator; from a decision theory viewpoint, it can be shown to minimize expected posterior quadratic loss (Lee, 1989). The mean of the marginal posterior distribution of the variance component can be written as:

E[σ_u² | y] = ∫_Ω σ_u² p(ψ | y) dψ / ∫_Ω p(ψ | y) dψ   [17]

where Ω is the space of the entire parameter vector. In this section we consider developments for computing posterior means presented by Tierney and Kadane (1986) and derived in detail by Cantet et al (1992); these are extensions of Laplacian procedures introduced by Leonard (1982). Let:

L*(ψ) = L(ψ) + log(σ_u²)   [18]

The posterior mean of the variance component can then be represented as

E[σ_u² | y] = ∫ exp{L*(ψ)} dψ / ∫ exp{L(ψ)} dψ   [19]

Note that the denominator assures that the joint posterior integrates to one. Define the maximizers

ψ̂ = argmax_ψ L(ψ)   [20a]
ψ̂* = argmax_ψ L*(ψ)   [20b]

The negative joint Hessians above can be written as:

H(ψ) = −∂²L(ψ)/∂ψ∂ψ',  H*(ψ) = −∂²L*(ψ)/∂ψ∂ψ'   [21]

The upper left blocks in both negative Hessians pertaining to the vector of location parameters, θ, are as in [10]. The remaining terms are:

−∂²L/∂u ∂σ_u² = −A⁻¹u/(σ_u²)²
−∂²L/∂(σ_u²)² = −q/(2(σ_u²)²) + u'A⁻¹u/(σ_u²)³

Further:

−∂²L*/∂(σ_u²)² = −∂²L/∂(σ_u²)² + 1/(σ_u²)²

so that, applying Laplace's method to the numerator and denominator of [19],

∫ exp{L(ψ)} dψ ≈ (2π)^{(p+q+1)/2} |H(ψ̂)|^{−1/2} exp{L(ψ̂)}   [22a]
∫ exp{L*(ψ)} dψ ≈ (2π)^{(p+q+1)/2} |H*(ψ̂*)|^{−1/2} exp{L*(ψ̂*)}   [22b]

Using [22a] and [22b] in [19], the posterior mean is approximately

E[σ_u² | y] ≈ [|H(ψ̂)| / |H*(ψ̂*)|]^{1/2} exp{L*(ψ̂*) − L(ψ̂)}   [23]

This approximation has been deemed to be highly accurate. The errors of the approximations to the integrals in the denominator and the numerator in [19] are of order O(n⁻¹). This would also be the order of the error incurred when approximating the joint posterior by a normal distribution. However, the leading terms in the 2 errors are nearly identical and cancel when the ratio in [19] is taken (Tierney and Kadane, 1986), thereby leading to an error term that is proportional to O(n⁻²).
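As a sanity check on the ratio approximation in [23], the sketch below applies the Tierney-Kadane formula to a one-parameter case with a known answer: a Gamma(a, b) posterior, whose exact mean is a/b. This example is not from the paper; all names and values are illustrative:

```python
from math import log, exp, sqrt

def tk_mean(logpost, d2_logpost, mode, mode_star):
    """Tierney-Kadane posterior mean of a positive scalar theta:
    E[theta | y] ~ (|-l*''|^{-1/2} e^{l*}) / (|-l''|^{-1/2} e^{l}),
    where l*(theta) = l(theta) + log(theta), each term evaluated at
    its own maximizer (cf [19]-[23])."""
    l  = logpost(mode)
    ls = logpost(mode_star) + log(mode_star)
    h  = -d2_logpost(mode)                              # -l'' at mode of l
    hs = -(d2_logpost(mode_star) - 1.0 / mode_star**2)  # -l*'' at its mode
    return sqrt(h / hs) * exp(ls - l)

# Gamma(a, b) posterior: l(theta) = (a - 1) log(theta) - b theta
a, b = 6.0, 2.0
logpost = lambda t: (a - 1.0) * log(t) - b * t
d2 = lambda t: -(a - 1.0) / t**2
mode, mode_star = (a - 1.0) / b, a / b    # maximizers of l and l + log(theta)
approx = tk_mean(logpost, d2, mode, mode_star)
print(approx)                             # near the exact mean a/b = 3
```

Note that two separate maximizations are required, mirroring the two joint maximizations [20a] and [20b] demanded in the variance component setting.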
Computational considerations

Consider the ratio of integrals in [19], and the maximizers ψ̂ and ψ̂* in [20a] and [20b]. Computing the denominator entails evaluating L(ψ̂), which is equivalent to maximizing the joint log-posterior density in [6]. Likewise, computing the numerator would entail the same procedure, except that log(σ_u²) is added to the log joint posterior. From [6], it follows that:

∂L(ψ)/∂σ_u² = −q/(2σ_u²) + u'A⁻¹u/(2(σ_u²)²)   [24a]
∂L*(ψ)/∂σ_u² = −(q − 2)/(2σ_u²) + u'A⁻¹u/(2(σ_u²)²)   [24b]

Setting the first derivatives to zero leads to the expressions:

σ_u² = u'A⁻¹u/q   [25a]
σ_u² = u'A⁻¹u/(q − 2)   [25b]

which would be used in conjunction with system [9] (evaluated at the 'current' values of σ_u²) to obtain the joint posterior mode. We obtained estimates of σ_u² equal to zero in several simulation tests of this algorithm, when applied to Poisson models. This implies that the joint density is maximum when σ_u² = 0. As noted by Lindley and Smith (1972), Harville (1977), Thompson (1980), and Gianola and Fernando (1986), joint maximization of a joint posterior density with respect to fixed and random effects and the variance components in a linear model often leads to a sequence of iterates for the latter converging towards zero. Harville (1977) attributed the problem to 'severe dependencies' between u and σ_u²; clearly, the problem also arises when searching for the mode of p(β, u | y) or p(u | y), where any 'dependency' would be eliminated by integration of σ_u². In general, the problem does not occur when informative priors are employed for σ_u². Höschele et al (1987) also found that many of their variance component iterations were drifting towards zero when using a first order algorithm for maximizing the joint posterior density in threshold models.
It is instructive to contrast the log of the joint posterior density in [6], L(ψ), with the approximate marginal density of σ_u², L_A(σ_u² | y), in [16]. Apart from the constant terms, these 2 functions differ in that in [16], half the value of the log of the determinant of the negative Hessian matrix is subtracted. Ritter (1992) views this as an important 'width or variance adjustment' in the estimation of σ_u² from its marginal distribution; this supports the claim made by O'Hagan (1976) that marginal modes are better estimators than joint modes.

Because the Tierney and Kadane (1986) approximation to the posterior mean fails whenever σ_u² goes to zero in the joint maximization algorithm, alternative strategies must be sought. One possibility would be to evaluate the approximate marginal density of the variance component as in [16] and then compute the posterior mean by cubic spline fitting (de Boor, 1978) or by Gaussian quadrature involving 'strategic' evaluation points of σ_u².
NUMERICAL EXAMPLE
Data on embryo yields within a nucleus scheme were simulated with a Poisson animal model according to procedures given in Tempelman and Gianola (1991). The underlying mean on the log scale was log(4). Two 'fixed' factors, one with 5 levels and the other with 20 levels, were generated from a N(0, 0.10) distribution on the canonical log scale. Additive genetic effects were generated from a N(0, 0.05) distribution for a base population of 16 sires and 128 cows. Cows were superovulated and mated at random to outside sires also drawn at random from the population at large. The number of embryos produced per cow was a drawing from the Poisson distribution, with the value of its parameter depending on the fixed effects and the additive genetic value of the female in question. The sex ratio in the embryos was 50:50, and sexes were assigned at random, using the binomial distribution. Male embryos were discarded, and the genetic value of female embryos was obtained as:

a_O = ½ a_S + ½ a_D + z_O √(σ_u²/2)

where a_S is the breeding value of an outside sire, a_D is the breeding value of the donor cow, and z_O ~ NID(0, 1). The female embryos were 'raised' (probability of survival to an embryo collection was 0.70), and mated at random to nucleus sires, to produce a new generation. Records on embryo yields obtained from these matings were simulated as before. Thus, information on embryo yields was available on foundation cows and their surviving female progeny. The simulation involved a 'natural selection' process because donor cows without embryos recovered left no progeny at all, whereas donor cows with higher embryo yields left more female progeny.

In the simulation, p = 24 (25 levels of fixed effects minus 1 dependency) and q = 242 (16 sires, 128 dams and 98 surviving progeny). The mode of the approximate marginal density of σ_u² was located employing [16]. An iterative quadratic fit led to σ_u² = 0.0347 as maximum, and the approximate log marginal density is depicted in figure 1, with a cubic spline fitted through the iterates. The EM-type algorithm of Foulley et al (1987) gave a modal value of σ_u² = 0.0343. Convergence criteria of the 2 algorithms were not directly comparable, thereby contributing to some of the discrepancy between the 2 estimates.

As noted previously, the Tierney and Kadane (1986) approach gave σ_u² = 0 as the value of the variance component that maximized the joint density. To illustrate, the log joint density [6] was evaluated at the 'current' value of σ_u² (and of the resulting solution to system [9]) during iteration, and the plot is shown in figure 2. Clearly, σ_u² = 0 would be the maximizer, giving a density value of plus infinity. The degeneracy of this log density highlights the importance of the Hessian adjustment in [16].
DISCUSSION
The Laplacian procedure for finding the mode of the marginal posterior distribution of a single variance component in a Poisson mixed model was found to be analogous to 'derivative-free' methods employed for computing REML estimates of variance components in a mixed linear model (Smith and Graser, 1986; Graser et al, 1987). In fact, if Laplacian marginalization is applied to variance estimation with normally distributed data in a Bayesian model with flat priors for fixed effects and variance components, Laplacian integration is then exact. Although a single variance component was considered in this paper, the algorithm generalizes in a straightforward manner to a Poisson model with several variances, and one obtains MML estimates of variance components. Because of the analogy noted above, we suggest DFMML (derivative-free marginal maximum likelihood) as a generic term for this algorithm, since the procedure extends beyond the class of mixed linear models.

The Laplacian technique used for finding the mode of the marginal posterior distribution of σ_u² is theoretically, although not numerically, equivalent to the EM-type algorithm suggested by Foulley et al (1987). In order to obtain the mode of the marginal distribution of σ_u², these authors employ the relationship:

∂L_A(σ_u² | y)/∂σ_u² = E[∂L(ψ)/∂σ_u² | σ_u², y]   [26]

The first term of [26] is obtained by differentiating [6] with respect to σ_u², with λ_i and u replaced by the numerical quantities λ̂_i and û, respectively. Then

∂L_A(σ_u² | y)/∂σ_u² ≈ −q/(2σ_u²) + [û'A⁻¹û + tr(A⁻¹C_uu)]/(2(σ_u²)²)   [27]

where C_uu is the random by random block of the inverse of the conditional negative Hessian [10]. Finally, setting [27] to zero and solving for σ_u² gives the iteration

σ_u²^{[t+1]} = [û'A⁻¹û + tr(A⁻¹C_uu)]/q   [28]

which is precisely the algorithm of Foulley et al (1987).
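One round of the EM-type update [28] can be sketched as follows; in practice û and C_uu would come from solving system [9] at the current σ_u², but here they are plugged in as invented numbers purely for illustration:

```python
import numpy as np

def em_update(u_hat, Cuu, Ainv):
    """EM-type variance update [28]:
    sigma2_new = (u' Ainv u + tr(Ainv Cuu)) / q,
    where Cuu is the random-by-random block of the inverse of the
    conditional negative Hessian [10]."""
    q = len(u_hat)
    return float(u_hat @ Ainv @ u_hat + np.trace(Ainv @ Cuu)) / q

u_hat = np.array([0.12, -0.05, 0.08])  # predicted random effects (illustrative)
Ainv = np.eye(3)                       # identity relationships (illustrative)
Cuu = 0.01 * np.eye(3)                 # prediction error (co)variances
print(em_update(u_hat, Cuu, Ainv))
```

The trace term is what distinguishes [28] from the degenerate joint-mode expression [25a]: it accounts for the uncertainty in û rather than treating the prediction as error-free.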
It is important to note that the algorithm developed in this paper in connection with [16] is numerically different from [28]; ie, a different sequence of iterates is to be expected. However, within the limits of numerical precision, as determined by the local curvature of the marginal log likelihood, convergence to the same maximum should be attained, as verified in the numerical application discussed previously.

Application of the Laplacian procedure to the threshold model of Gianola and Foulley (1983) and Harville and Mee (1984) is straightforward. One would simply replace the log-likelihood and the conditional Hessian given in this paper by the corresponding terms in the threshold model. For multidimensional problems, ie, more than one variance parameter, procedures suggested for DFREML by Meyer (1989), such as the simplex or quasi-Newton algorithms, could be used. Optimization procedures that incorporate information on the vector of first derivatives and on the function to be optimized would be expected to be most useful.

Approximate posterior standard errors for the variance components may be computed using a quadratic fit of the log posterior density near the mode as in Graser and Smith (1987).
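The quadratic-fit standard error can be sketched generically: fit a parabola a s² + b s + c to the log posterior near its mode and use SD = (−2a)^{−1/2}, since a Gaussian log density has a = −1/(2σ²). The log density below is a made-up stand-in with known SD 0.2, not the paper's posterior:

```python
import numpy as np

def quadratic_fit_se(s, logpost_vals):
    """Approximate posterior SD from a quadratic fit of the log posterior
    near its mode: logpost ~ a s^2 + b s + c, SD = sqrt(-1/(2a))."""
    a, _, _ = np.polyfit(s, logpost_vals, 2)
    return np.sqrt(-1.0 / (2.0 * a))

# Stand-in log posterior: Gaussian with mode 0.0347 and SD 0.2
logpost = lambda s: -((s - 0.0347) ** 2) / (2 * 0.2 ** 2)
s = np.array([-0.1, 0.0347, 0.2])
print(quadratic_fit_se(s, logpost(s)))   # recovers 0.2
```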
The computation of the log-posterior densities also allows one to construct marginal likelihood ratio tests, or posterior odds ratios, for assessing the importance of different sources of genetic and environmental variation.

Harville (1977) asserted that the posterior mode is an attractive estimator, being less sensitive than the mean to the tails of the posterior density. However, under a squared-error loss, the posterior mean is optimum. Unfortunately, the Laplacian procedure of Tierney and Kadane (1986) for computing the posterior mean would be expected to fail in many instances. Joint maximization should work well when there is a large amount of information on each random effect, eg, sire models, but not in animal models. Hence, alternative numerical procedures should be sought for computing posterior means. Further enhancements to marginal estimation of parameters involving Laplacian integration are given by Kass and Steffey (1989) and Leonard et al (1989).
REFERENCES
Cantet RJC, Fernando RL, Gianola D (1992) Bayesian inference about dispersion
de Boor C (1978) A Practical Guide to Splines. Springer-Verlag, New York
Foulley JL, Gianola D, Im S (1987) Genetic evaluation of traits distributed as Poisson-binomial with reference to reproductive characters. Theor Appl Genet 73, 870-877
Foulley JL, Gianola D, Im S (1990) Genetic evaluation for discrete polygenic traits in animal breeding. In: Advances in Statistical Methods for Genetic Improvement of Livestock (Gianola D, Hammond K, eds) Springer-Verlag, Heidelberg, 361-409
Gianola D, Fernando RL (1986) Bayesian methods in animal breeding theory. J Anim Sci 63, 217-244
Gianola D, Foulley JL (1983) Sire evaluation for ordered categorical data with a threshold model. Genet Sel Evol 15, 201-224
Gianola D, Im S, Macedo FW (1990) A framework for prediction of breeding value. In: Advances in Statistical Methods for Genetic Improvement of Livestock (Gianola D, Hammond K, eds) Springer-Verlag, Heidelberg, 210-238
Graser HU, Smith SP, Tier B (1987) A derivative-free approach for estimating variance components in animal models by restricted maximum likelihood. J Anim Sci 64, 1362-1370
Harville DA (1977) Maximum likelihood approaches to variance component estimation and to related problems. J Am Stat Assoc 72, 320-338
Harville DA, Mee RW (1984) A mixed model procedure for analyzing ordered categorical data. Biometrics 40, 393-408
Höschele I, Gianola D, Foulley JL (1987) Estimation of variance components with quasi-continuous data using Bayesian methods. J Anim Breed Genet 104, 334-349
Kass RE, Steffey D (1989) Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). J Am Stat Assoc 84, 717-726
Lee PM (1989) Bayesian Statistics: An Introduction. Oxford University Press, NY
Leonard T (1982) Comment on 'A simple predictive density function'. J Am Stat Assoc 77, 657-658
Leonard T, Hsu JSJ, Tsui KW (1989) Bayesian marginal inference. J Am Stat Assoc 84, 1051-1058
Lindley DV, Smith AFM (1972) Bayes estimates for the linear model. J R Stat Soc B 34, 1-18
Manfredi ET, San Cristobal M, Foulley JL (1991) Some factors affecting the estimation of genetic parameters for cattle dystocia under a threshold model. Anim Prod 53, 151-156
McCullagh P, Nelder JA (1989) Generalized Linear Models. Chapman and Hall, NY, 2nd edn
Meijering A (1985) Sire evaluation for calving traits by best linear unbiased prediction and nonlinear methodology. J Anim Breed Genet 102, 95-105
Meyer K (1989) Restricted maximum likelihood to estimate variance components for animal models with several random effects using a derivative-free algorithm. Genet Sel Evol 21, 317-330
Ritter C (1992) Modern inference in nonlinear least-squares regression. PhD thesis, Department of Statistics, University of Wisconsin-Madison, WI
Smith AFM (1991) Bayesian computational methods. Phil Trans R Soc Lond A 337, 369-386
Smith SP, Graser HU (1986) Estimating variance components in a class of mixed models by restricted maximum likelihood. J Dairy Sci 69, 1156-1165
Tempelman RJ, Gianola D (1991) Evaluation of a Poisson animal model for the genetic evaluation of embryo yields. J Dairy Sci 74 (suppl 1), 157
Thompson R (1980) Maximum likelihood estimation of variance components. Math Operationsforsch Stat 11, 545-561
Tierney L, Kadane JB (1986) Accurate approximations for posterior moments and marginal densities. J Am Stat Assoc 81, 82-86
Weller