


2.6.1 Coverage Ratios for the Linear Predictor in GLMM study

                        1 − α = 0.90            1 − α = 0.95            1 − α = 0.99
Method   v    ûᵢ      7      14      21       7      14      21       7      14      21
RWLB    v1    cm   0.7203  0.8481  0.8898  0.7796  0.8979  0.9386  0.8537  0.9475  0.9795
              ep   0.8889  0.9204  0.9147  0.9185  0.9483  0.9549  0.9557  0.9726  0.9853
        v2    cm   0.7922  0.8673  0.8949  0.8402  0.9122  0.9421  0.8977  0.9548  0.9808
              ep   0.7957  0.8682  0.8958  0.8521  0.9154  0.9429  0.9202  0.9618  0.9820
        v3    cm   0.8160  0.8732  0.8982  0.8688  0.9191  0.9446  0.9308  0.9637  0.9827
              ep   0.8923  0.9121  0.9099  0.9301  0.9468  0.9523  0.9666  0.9774  0.9856
REBE    v1    cm   0.9201  0.9497  0.9862  0.9409  0.9652  0.9919  0.9650  0.9816  0.9969
              ep   0.9847  0.9545  0.9873  0.9903  0.9692  0.9927  0.9954  0.9841  0.9973
        v2    cm   0.9841  0.9558  0.9868  0.9898  0.9705  0.9924  0.9951  0.9857  0.9973
              ep   0.9906  0.9573  0.9873  0.9941  0.9718  0.9928  0.9973  0.9865  0.9975
PBE           cm   0.8869  0.9194  0.9139  0.9170  0.9478  0.9543  0.9544  0.9723  0.9851
              ep   0.8903  0.9114  0.9093  0.9284  0.9463  0.9519  0.9658  0.9772  0.9855

Table 2.6: Average Coverage Ratios of Prediction Intervals for η with n = 300.

                        1 − α = 0.90            1 − α = 0.95            1 − α = 0.99
Method   v    ûᵢ      7      14      21       7      14      21       7      14      21
RWLB    v1    cm   0.7149  0.8454  0.8888  0.7750  0.8961  0.9377  0.8499  0.9458  0.9790
              ep   0.8885  0.9198  0.9145  0.9182  0.9476  0.9545  0.9551  0.9721  0.9849
        v2    cm   0.7855  0.8643  0.8939  0.8343  0.9098  0.9411  0.8930  0.9530  0.9803
              ep   0.7797  0.8660  0.8948  0.8390  0.9140  0.9422  0.9110  0.9603  0.9816
        v3    cm   0.7936  0.8707  0.8973  0.8509  0.9173  0.9439  0.9192  0.9620  0.9823
              ep   0.8918  0.9119  0.9097  0.9291  0.9461  0.9520  0.9661  0.9767  0.9853
REBE    v1    cm   0.8507  0.9314  0.9770  0.8837  0.9493  0.9846  0.9247  0.9704  0.9928
              ep   0.9783  0.9347  0.9782  0.9856  0.9528  0.9856  0.9930  0.9734  0.9935
        v2    cm   0.9766  0.9367  0.9775  0.9844  0.9542  0.9851  0.9922  0.9745  0.9933
              ep   0.9886  0.9377  0.9781  0.9928  0.9552  0.9856  0.9967  0.9753  0.9936
PBE           cm   0.8874  0.9195  0.9142  0.9174  0.9472  0.9542  0.9545  0.9721  0.9849
              ep   0.8907  0.9116  0.9095  0.9284  0.9459  0.9518  0.9656  0.9765  0.9852

Table 2.7: Coverage Ratios of Prediction Intervals for η with n = 600.

List of Acronyms

PDF Probability Density Function
PMF Probability Mass Function
LM Linear Models
LMM Linear Mixed Models
GLM Generalized Linear Models
GEE Generalized Estimating Equations
GLMM Generalized Linear Mixed Models
ML Maximum Likelihood
REML Residual or Restricted Maximum Likelihood
LAML Laplace-approximated Maximum Likelihood
LALL Laplace-approximated log-Likelihood
LAWLL Laplace-approximated Weighted log-Likelihood
RE Random Effect
BLUP Best Linear Unbiased Predictor
REB Random Effect Bootstrap
EP Empirical Predictions
CM Conditional Modes
EBP Empirical Best Predictors
CB Cluster Bootstrap
GCB Generalized Cluster Bootstrap
PB Parametric Bootstrap
RWLB Random Weighted Laplace Bootstrap
BCI Bootstrap-based Confidence Intervals
CR Coverage Ratios
MSEP Mean Squared Error of the Prediction
CV Conditional Variances
BP Best Predictor
MC Monte Carlo

List of Tables

1.1 Parameters in the Simulation Study for LMM
1.2 Coverage ratios for 1 − α = 0.95 in the LMM Study
1.3 Parameters in the second set of simulation studies
1.4 Coverage ratios for 1 − α = 0.95 in the Mixed Logit Study
1.5 Coverage ratios for 1 − α = 0.90 in the LMM Study
1.6 Coverage ratios for 1 − α = 0.99 in the LMM Study
1.7 Coverage ratios for 1 − α = 0.90 in the Mixed Logit Study
1.8 Coverage ratios for 1 − α = 0.99 in the Mixed Logit Study
2.1 Average relative bias of estimators of MSEP
2.2 Average RRMSE of estimators of MSEP
2.3 Average coverage ratios for Prediction Intervals of level 1 − α = 0.95
2.4 Average Coverage Ratios of Prediction Intervals for uᵢ with n = 300
2.5 Average Coverage Ratios of Prediction Intervals for uᵢ with n = 600
2.6 Average Coverage Ratios of Prediction Intervals for η with n = 300
2.7 Coverage Ratios of Prediction Intervals for η with n = 600

List of Figures

1.1 Original trajectories in the Orthodontic dataset
1.2 Q-Q plots of √n(β̂ₖ∗ − β̂ₖ) vs. √n(β̂ₖ − βₖ) for k = {0, 1, 2, 3} and nᵢ = 4 in the LMM study
1.3 Q-Q plots of √n(σ̂ₖ²∗ − σ̂ₖ²) vs. √n(σ̂ₖ² − σₖ²) for k = {1, 2} and nᵢ = 4 in the LMM study
1.4 Q-Q plots of √n(φ̂∗ − φ̂) vs. √n(φ̂ − φ) in the LMM study
1.5 Evolution of the average infection rate over the visits in the Toenail dataset
1.6 Q-Q plots of √n(β̂ₖ∗ − β̂ₖ) vs. √n(β̂ₖ − βₖ) for k = {0, 1, 2, 3}, n = 300 in the Mixed Logit Model
1.7 Q-Q plots of √n(β̂ₖ∗ − β̂ₖ) vs. √n(β̂ₖ − βₖ) for k = {0, 1, 2, 3}, n = 600 in the Mixed Logit Model
1.8 Q-Q plots of √n(σ̂²∗ − σ̂²) vs. √n(σ̂² − σ²) in the Mixed Logit Model
1.9 Q-Q plots of √n(β̂ₖ∗ − β̂ₖ) vs. √n(β̂ₖ − βₖ) for k = {0, 1, 2, 3} in the LMM study for nᵢ = 12
1.10 Q-Q plots of √n(σ̂ₖ²∗ − σ̂ₖ²) vs. √n(σ̂ₖ² − σₖ²) for k = {1, 2} for nᵢ = 12 in the LMM study

References

Ole E Barndorff-Nielsen and David Roxbee Cox. Asymptotic techniques for use in statistics. Chapman & Hall, 1989.

D. Bates. lme4: Mixed-effects modeling with R. URL http://lme4.r-forge.r-project.org/book, 2010.

D. Bates, M. Mächler, B. Bolker, and S. Walker. Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1):1–48, 2015.

J. G. Booth and J. P. Hobert. Standard errors of prediction in generalized linear mixed models. Journal of the American Statistical Association, 93(441):262–272, 1998.

Miguel Boubeta, María José Lombardía, and Domingo Morales. Empirical best prediction under area-level Poisson mixed models. Test, 25(3):548–569, 2016.

N. E. Breslow and D. G. Clayton. Approximate inference in generalized linear mixed models. Journal of the American Statistical Association, 88(421):9–25, 1993.

Norman E Breslow and Xihong Lin. Bias correction in generalised linear mixed models with a single component of dispersion. Biometrika, 82(1):81–91, 1995.

F. B. Butar and P. Lahiri. On measures of uncertainty of empirical Bayes small-area estimators. Journal of Statistical Planning and Inference, 112(1):63–76, 2003.

J. R. Carpenter, H. Goldstein, and J. Rasbash. A novel bootstrap procedure for assessing the relationship between class size and achievement. Journal of the Royal Statistical Society: Series C (Applied Statistics), 52(4):431–443, 2003.

Arindam Chatterjee and Soumendra Nath Lahiri. Bootstrapping lasso estimators. Journal of the American Statistical Association, 106(494):608–625, 2011.

S. Chatterjee and A. Bose. Generalized bootstrap for estimating equations. The Annals of Statistics, 33(1):414–436, 2005.

S. Chatterjee, P. Lahiri, and H. Li. Parametric bootstrap approximation to the distribution of EBLUP and related prediction intervals in linear mixed models. The Annals of Statistics, 36(3):1221–1245, 2008.

K. Das, J. Jiang, and JNK Rao. Mean squared error of empirical predictor. The Annals of Statistics, 32(2):818–840, 2004.

G.S. Datta and P. Lahiri. A unified measure of uncertainty of estimated best linear unbiased predictors in small area estimation problems. Statistica Sinica, 10(2):613–628, 2000.

A.C. Davison and D.V. Hinkley. Bootstrap methods and their application. Number 1. Cambridge University Press, 1997.

M De Backer, C De Vroey, Emmanuel Lesaffre, I Scheys, and P De Keyser. Twelve weeks of continuous oral therapy for toenail onychomycosis caused by dermatophytes: a double-blind comparative trial of terbinafine 250 mg/day versus itraconazole 200 mg/day. Journal of the American Academy of Dermatology, 38(5):S57–S63, 1998.

Nicolaas Govert De Bruijn. Asymptotic methods in analysis, volume 4. Courier Corporation, 1970.

Pauline Ding and Alan H Welsh. Bootstrapping longitudinal data with multiple levels of variation. Working Paper, 2017.

Randal Douc, Eric Moulines, Tobias Rydén, et al. Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime. The Annals of Statistics, 32(5):2254–2304, 2004.

B Efron. Bootstrap methods: Another look at the jackknife. The Annals of Statistics, pages 1–26, 1979.

C.A. Field and Alan H. Welsh. Bootstrapping clustered data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3):369–390, 2007.

C.A. Field, Z. Pang, and Alan H Welsh. Bootstrapping data with multiple levels of variation. Canadian Journal of Statistics, 36(4):521–539, 2008.

C.A. Field, Z. Pang, and Alan H Welsh. Bootstrapping robust estimates for clustered data. Journal of the American Statistical Association, 105(492):1606–1616, 2010.

Daniel Flores-Agreda. On the Inference of Random Effects in Generalized Linear Mixed Models. PhD thesis, Université de Genève, 2017.

David A Fournier, Hans J Skaug, Johnoel Ancheta, James Ianelli, Arni Magnusson, Mark N Maunder, Anders Nielsen, and John Sibert. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models. Optimization Methods and Software, 27(2):233–249, 2012.

David A Freedman and Stephen C Peters. Bootstrapping a regression equation: Some empirical results. Journal of the American Statistical Association, 79(385):97–106, 1984.

W González-Manteiga, MJ Lombardía, I Molina, D Morales, and L Santamaría. Bootstrap mean squared error of a small-area EBLUP. Journal of Statistical Computation and Simulation, 78(5):443–462, 2008.

Wenceslao González-Manteiga, María José Lombardía, Isabel Molina, Domingo Morales, and L Santamaría. Estimation of the mean squared error of predictors of small area linear parameters under a logistic mixed model. Computational Statistics & Data Analysis, 51(5):2720–2733, 2007.

Peter J Green. Penalized likelihood for general semi-parametric regression models. International Statistical Review / Revue Internationale de Statistique, pages 245–259, 1987.

Andreas Griewank and Andrea Walther. Evaluating derivatives: principles and techniques of algorithmic differentiation. SIAM, 2008.

Peter Hall and Tapabrata Maiti. On parametric bootstrap methods for small area prediction. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(2):221–238, 2006.

D.A. Harville. Decomposition of prediction error. Journal of the American Statistical Association, pages 132–138, 1985.

C.R. Henderson. Estimation of genetic parameters. Biometrics, 6:186–187, 1950.

Feifang Hu and John D Kalbfleisch. Estimating equations and the bootstrap. Lecture Notes-Monograph Series, pages 405–416, 1997.

P. Huber, E. Ronchetti, and M.P. Victoria-Feser. Estimation of generalized linear latent variable models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66(4):893–908, 2004.

Woncheol Jang and Johan Lim. PQL estimation biases in generalized linear mixed models. Institute of Statistics and Decision Sciences, Duke University, Durham, NC, USA, pages 05–21, 2006.

J. Jiang. Consistent estimators in generalized linear mixed models. Journal of the American Statistical Association, 93(442):720–729, 1998a.

J. Jiang. Empirical best prediction for small-area inference based on generalized linear mixed models. Journal of Statistical Planning and Inference, 111(1-2):117–127, 2003.

J. Jiang. Linear and generalized linear mixed models and their applications. Springer Verlag, 2007.

J. Jiang and P. Lahiri. Empirical best prediction for small area inference with binary data. Annals of the Institute of Statistical Mathematics, 53(2):217–243, 2001.

J. Jiang, P. Lahiri, and S.M. Wan. A unified jackknife theory for empirical best prediction with m-estimation. Annals of Statistics, pages 1782–1810, 2002.

Jiming Jiang. Asymptotic properties of the empirical BLUP and BLUE in mixed linear models. Statistica Sinica, pages 861–885, 1998b.

Harry Joe. Accuracy of Laplace approximation for discrete response mixed models. Computational Statistics & Data Analysis, 52(12):5066–5074, 2008.

R N Kackar and D Harville. Approximations for standard errors of estimators of fixed and random effects in mixed linear models. Journal of the American Statistical Association, pages 853–862, 1984.

Robert E Kass and Duane Steffey. Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). Journal of the American Statistical Association, 84(407):717–726, 1989.

Kasper Kristensen, Anders Nielsen, Casper Berg, Hans Skaug, and Bradley Bell. TMB: Automatic differentiation and Laplace approximation, 2016. ISSN 1548-7660. URL https://www.jstatsoft.org/v070/i05.

Tatsuya Kubokawa and Bui Nagashima. Parametric bootstrap methods for bias correction in linear mixed models. Journal of Multivariate Analysis, 106:1–16, 2012.

Pierre Lahiri et al. On the impact of bootstrap in survey sampling and small-area esti-mation. Statistical Science, 18(2):199–210, 2003.

Kung-Yee Liang and Scott L Zeger. Longitudinal data analysis using generalized linear models. Biometrika, pages 13–22, 1986.

Dennis V Lindley. Approximate Bayesian methods. Trabajos de estadística y de investigación operativa, 31(1):223–245, 1980.

Qing Liu and Donald A Pierce. Heterogeneity in Mantel-Haenszel-type models. Biometrika, pages 543–556, 1993.

Jan R Magnus, Heinz Neudecker, et al. Matrix differential calculus with applications in statistics and econometrics. John Wiley & Sons, 1995.

Peter McCullagh. Resampling and exchangeable arrays. Bernoulli, pages 285–301, 2000.

C.E. McCulloch and S.R. Searle. Generalized, Linear, and Mixed Models. New York: Wiley, 2001.

C N Morris. Parametric empirical Bayes inference: theory and applications. Journal of the American Statistical Association, pages 47–55, 1983.

Jeffrey S Morris. The BLUPs are not “best” when it comes to bootstrapping. Statistics & Probability Letters, 56(4):425–430, 2002.

J A Nelder and R W M Wedderburn. Generalized linear models. Journal of the Royal Statistical Society. Series A (General), pages 370–384, 1972.

Michael A Newton and Adrian E Raftery. Approximate Bayesian inference with the weighted likelihood bootstrap. Journal of the Royal Statistical Society. Series B (Methodological), pages 3–48, 1994.

H. Ogden. On asymptotic validity of naive inference with an approximate likelihood. ArXiv e-prints, January 2016.

Zhen Pang and A. H. Welsh. The generalised bootstrap for clustered data. Int. J. Data Anal. Tech. Strateg., 6(4):407–415, December 2014. ISSN 1755-8050. doi: 10.1504/IJDATS.2014.066604. URL http://dx.doi.org/10.1504/IJDATS.2014.066604.

J.C. Pinheiro and D.M. Bates. Mixed-effects models in S and S-PLUS. Springer Verlag, 2009.

José C Pinheiro and Douglas M Bates. Approximations to the log-likelihood function in the nonlinear mixed-effects model. Journal of Computational and Graphical Statistics, 4(1):12–35, 1995.

José C Pinheiro and Edward C Chao. Efficient Laplacian and adaptive Gaussian quadrature algorithms for multilevel generalized linear mixed models. Journal of Computational and Graphical Statistics, 2012.

Richard F Potthoff and SN Roy. A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika, pages 313–326, 1964.

N. Prasad and J. Rao. The estimation of the mean squared error of small-area estimators. Journal of the American Statistical Association, 85(409):163–171, 1990.

R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2015. URL https://www.R-project.org/.

Sophia Rabe-Hesketh and Anders Skrondal. Multilevel and longitudinal modeling using Stata. STATA press, 2008.

Sophia Rabe-Hesketh, Anders Skrondal, Andrew Pickles, et al. Reliable estimation of generalized linear mixed models using adaptive quadrature. The Stata Journal, 2(1):1–21, 2002.

Sophia Rabe-Hesketh, Anders Skrondal, and Andrew Pickles. Maximum likelihood estimation of limited and discrete dependent variable models with nested random effects. Journal of Econometrics, 128(2):301–323, 2005.

C Radhakrishna Rao and LC Zhao. Approximation to the distribution of M-estimates in linear models by randomly weighted bootstrap. Sankhyā: The Indian Journal of Statistics, Series A, pages 323–331, 1992.

Stephen W Raudenbush, Meng-Li Yang, and Matheos Yosef. Maximum likelihood for generalized linear models with nested random effects via high-order, multivariate Laplace approximation. Journal of Computational and Graphical Statistics, 9(1):141–157, 2000.

G.K. Robinson. That BLUP is a good thing: The estimation of random effects. Statistical Science, pages 15–32, 1991.

Donald B Rubin et al. The Bayesian bootstrap. The Annals of Statistics, 9(1):130–134, 1981.

Mayukh Samanta and A.H. Welsh. Bootstrapping for highly unbalanced clustered data. Computational Statistics & Data Analysis, 2012.

S.R. Searle, G. Casella, and C.E. McCulloch. Variance components. Wiley Online Library, 1992.

Jun Shao and Dongsheng Tu. The jackknife and bootstrap. Springer Science & Business Media, 2012.

Zhenming Shun and Peter McCullagh. Laplace approximation of high dimensional integrals. Journal of the Royal Statistical Society. Series B (Methodological), pages 749–760, 1995.

Avinash C Singh, DM Stukel, and D Pfeffermann. Bayesian versus frequentist measures of error in small area estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(2):377–396, 1998.

Luke Tierney and Joseph B Kadane. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81(393):82–86, 1986.

Luke Tierney, Robert E Kass, and Joseph B Kadane. Fully exponential Laplace approximations to expectations and variances of nonpositive functions. Journal of the American Statistical Association, 84(407):710–716, 1989.

G. Verbeke and G. Molenberghs. Linear mixed models for longitudinal data. Springer Verlag, 2009.

Chien-Fu Jeff Wu. Jackknife, bootstrap and other resampling methods in regression analysis. The Annals of Statistics, pages 1261–1295, 1986.