Results: Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the dataset.
Conclusions: Given how markedly results varied with dataset characteristics, it proved impossible to specify a discrepancy threshold above which results should be considered biased. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results.
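As a rough illustration of the quantities involved, the percent bias of an estimated odds ratio (relative to a known truth) and the percent discrepancy between the PQL and QUAD estimates can be computed as below. These are generic textbook definitions, not necessarily the exact metrics used in the study:

```python
def abs_percent_bias(estimate, truth):
    """Absolute percent bias of an odds-ratio estimate relative to the truth."""
    return abs(estimate - truth) / abs(truth) * 100.0

def pql_quad_discrepancy(or_pql, or_quad):
    """Absolute percent discrepancy between PQL and QUAD odds-ratio
    estimates, taking the QUAD estimate as the reference."""
    return abs(or_pql - or_quad) / abs(or_quad) * 100.0

bias = abs_percent_bias(1.5, 2.0)        # 25% bias against the true OR
disc = pql_quad_discrepancy(1.8, 2.0)    # ~10% PQL-vs-QUAD discrepancy
```

In a simulation study the first quantity requires the true odds ratio; on real data only the second is observable, which is why it serves as the diagnostic.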


An interesting direction for further research would be to develop the statistical methodology for semi-Markov switching generalized linear mixed models. Since the hidden semi-Markov chain likelihood cannot be written as a simple product of matrices, the MCEM algorithm proposed by Altman (2007) for the MS-GLMM cannot be directly extended to the semi-Markovian case. In our MCEM-like algorithm proposed for the MS-LMM and SMS-LMM, the difficulty lies mainly in the prediction of the random effects.

The estimation algorithms proposed in this paper can be directly transposed to other families of hidden Markov models, such as hidden Markov tree models; see Durand et al. (2005) and references therein. Another interesting direction for further research would be to develop the statistical methodology for semi-Markov switching generalized linear mixed models, to take into account non-normally distributed response variables (for instance, the number of growth units, apex death/life, or the non-flowering/flowering character in the plant architecture context). Since the conditional expectation of the random effects given the state sequences cannot be analytically derived, the MCEM-like algorithm proposed for the semi-Markov switching linear mixed model cannot be transposed to the case of non-normally distributed observed data; other conditional restoration steps, for instance based on a Metropolis-Hastings algorithm, have to be derived for the random effects.
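Such a conditional restoration step based on Metropolis-Hastings can be sketched generically. The toy target below is a scalar random effect with a standard-normal prior and a Poisson likelihood with log-mean b; the data, function names, and tuning constants are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_restore(b0, log_target, n_iter=5000, step=0.5):
    """Random-walk Metropolis-Hastings draws of a scalar random effect b
    from a conditional density known only up to a normalizing constant."""
    b, draws = b0, []
    for _ in range(n_iter):
        prop = b + step * rng.normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(b):
            b = prop  # accept the proposal
        draws.append(b)
    return np.array(draws)

# Toy conditional: N(0, 1) prior on b times a Poisson likelihood for
# counts y with log-mean b (purely illustrative data).
y = np.array([2, 3, 1])

def log_target(b):
    return -0.5 * b**2 + np.sum(y * b - np.exp(b))

draws = mh_restore(0.0, log_target)
posterior_mean = draws[2500:].mean()   # first half discarded as burn-in
```

In an actual MCEM iteration, such draws would replace the unavailable analytic conditional expectation of the random effects given the restored state sequence.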

frederic.mortier@cirad.fr
Abstract
We address the component-based regularisation of a multivariate Generalized Linear Mixed Model (GLMM). A set of random responses Y is modelled by a GLMM, using a set X of explanatory variables, a set T of additional covariates, and random effects used to introduce the dependence between statistical units. Variables in X are assumed many and redundant, so that regression demands regularisation. By contrast, variables in T are assumed few and selected so as to require no regularisation. Regularisation is performed by building an appropriate number of orthogonal components that both contribute to modelling Y and capture relevant structural information in X. To estimate the model, we propose to maximise a criterion specific to the Supervised Component-based Generalised Linear Regression (SCGLR) within an adaptation of Schall's algorithm. This extension of SCGLR is tested on both simulated and real data, and compared to Ridge- and Lasso-based regularisations.

In this paper we explore some new possibilities based on separated space-time representations, closely inspired by some existing and well-established strategies [4], [12]. In the next section we motivate the use of separated representations and their connection with other, more established techniques, such as model reduction techniques based on proper orthogonal decompositions. In section 3 we illustrate the application of the Proper Generalized Decomposition on an academic parabolic model. In section 4 we consider the coupling between global and many-species kinetic local models, and the issue raised by the different characteristic times of the local and global models. This issue was addressed in the context of proper generalized decompositions in [17]. Finally, in section 5 we present a numerical example.
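The idea of a separated space-time representation, u(x, t) ≈ Σ_i X_i(x) T_i(t), can be illustrated a posteriori with a truncated SVD of a snapshot matrix, which is the optimal separated representation in the least-squares sense (the POD viewpoint mentioned above; the PGD builds such pairs progressively without knowing u). The field below is a synthetic rank-2 example chosen purely for illustration:

```python
import numpy as np

# Synthetic space-time field u(x, t) sampled on a grid (rank 2 by construction).
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 40)
U = np.outer(np.sin(np.pi * x), np.exp(-t)) + 0.1 * np.outer(x**2, t)

# Truncated SVD: columns of Xs are spatial modes, rows of Tt temporal modes.
Xs, s, Tt = np.linalg.svd(U, full_matrices=False)
rank = 2
U2 = (Xs[:, :rank] * s[:rank]) @ Tt[:rank, :]

rel_err = np.linalg.norm(U - U2) / np.linalg.norm(U)  # ~ machine precision here
```

On genuinely high-dimensional parametric problems this a-posteriori construction is unaffordable, which is precisely the motivation for building the separated pairs on the fly as the PGD does.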

variables. In this context, the set of λ_i (resp. of U_i) is understood as a reduced basis of random variables (resp. of deterministic functions). Optimal decompositions could easily be defined if the solution u were known. Such a decomposition can for example be obtained by a KL expansion (or classical spectral decomposition) of u, which is the optimal decomposition with respect to a classical inner product. The GSD method consists in defining an optimality criterion for the decomposition which is based on the equation(s) solved by the solution, but not on the solution itself. The construction of the decomposition therefore does not require knowing the solution a priori, nor providing a surrogate (an approximation on a coarser mesh or a low-order Neumann expansion), as pointed out previously. The GSD method was first proposed in [27] in the context of linear stochastic problems. In the case of linear symmetric elliptic coercive problems, by defining an optimal decomposition with respect
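A minimal numerical sketch of the KL expansion mentioned above: the eigenpairs of a discretized covariance operator give the deterministic functions U_i, and the eigenvalues rank how much variance each mode carries. The exponential covariance kernel and correlation length are arbitrary illustrative choices:

```python
import numpy as np

# 1-D grid and an exponential covariance matrix C(x, x') = exp(-|x - x'| / l).
n, l = 100, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / l)

# KL modes = eigenvectors of C, sorted by decreasing eigenvalue.
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

m = 10
captured = eigval[:m].sum() / eigval.sum()   # variance fraction in m modes
```

This construction needs the covariance of u, i.e. information about the solution itself; the point of the GSD method is to obtain a comparable reduced basis from the governing equations alone.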

In this paper, we have considered the problem of correcting the significance level for a series of several codings of an explanatory variable in a Generalized Linear Model with several adjusting variables. The methods developed, based on resampling, allow more flexible categorical transformations to be considered in order to explore the unknown shape of the effect of an explanatory variable on a dependent variable. The simulation studies presented above show, firstly, that the resampling method provides results for Type-I error rate control and power similar to those found with the exact method proposed by Liquet and Commenges [14] for dichotomous and Box-Cox transformations. Secondly, in the situation of categorical transformations, these simulations demonstrate the good performance of our proposed approaches. Finally, we observed that the resampling methods estimate the p-value robustly.
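A generic resampling correction of this kind can be sketched with a max-statistic permutation test: the best statistic over all candidate codings is referred to the permutation distribution of that same maximum, which controls the Type-I error rate over the whole family of codings. The statistic (absolute correlation) and the codings below are illustrative stand-ins, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_stat_pvalue(y, x, codings, n_perm=2000):
    """Permutation p-value of the best coding, corrected for trying them all."""
    def max_stat(yv):
        # Best absolute correlation over all candidate codings of x.
        return max(abs(np.corrcoef(yv, code(x))[0, 1]) for code in codings)
    obs = max_stat(y)
    perm = np.array([max_stat(rng.permutation(y)) for _ in range(n_perm)])
    return (1 + np.sum(perm >= obs)) / (1 + n_perm)

# Illustrative data with a real effect, and three candidate codings of x.
x = rng.normal(size=80)
y = 0.8 * x + rng.normal(size=80)
codings = [lambda v: v, lambda v: v**2, lambda v: (v > 0).astype(float)]
p = max_stat_pvalue(y, x, codings)
```

Because each permutation recomputes the maximum over all codings, the correction automatically accounts for the correlation between the candidate tests, unlike a plain Bonferroni adjustment.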

Third set of take-home messages, after the description of the strengths of individual-based and distance-dependent models
Individual-based, distance-dependent models, whether G&Y or process-based, are useful tools to test hypotheses on species interactions (light, water, nutrients) and give insights into community ecology and functional ecology.

not clear if, in practice, a homogeneous controller could perform better than a well-tuned linear regulator.
1.2 Homogeneity vs linearity
The quality of any control system is assessed by many quantitative indexes (see e.g. [14], [15], [16]), which reflect control precision, energetic effectiveness, robustness of the closed-loop system with respect to disturbances, etc. From a mathematical point of view, the design of a "good" control law is a multi-objective optimization problem. The mentioned criteria frequently contradict each other: e.g. a time-optimal feedback control may not be energetically optimal, but it may be efficient for disturbance rejection [8]. In practice, the adjustment of a guaranteed (small enough) convergence time can be considered instead of the minimum-time control problem, and exact convergence of the system states to a set-point is relaxed to convergence into a sufficiently small neighborhood of this set-point.

The analysis of the residual correlations in Section 4.2.2 fails to give independent normalized residuals, suggesting that a more complex correlation matrix should be introduced. Unfortunately, as far as we know, although the nlme library provides a large set of classes of correlation structures (the corStruct classes), it does not allow such a modelling. To deal with this issue, an extension of our work would be to develop a new corStruct class integrating a more complex correlation matrix. Thus, the difficulty of dealing with complex data through linear mixed effects models is clearly illustrated, and the need for further work on the capabilities of this tool is demonstrated.

(ii) independence of the responses given the random effects, i.e. independence of the errors; (iii) normality of the errors;
(iv) homoscedasticity of the errors.
Several studies have shown that maximum likelihood inference on fixed effects is robust to non-Gaussian random-effects distributions (Butler and Louis, 1992; Verbeke and Lesaffre, 1997; Zhang and Davidian, 2001). Some results also suggest robustness to misspecification of the covariance structure. First, Liang and Zeger (1986) demonstrated convergence of fixed-effects estimates obtained by Generalized Estimating Equations (GEE) regardless of the working covariance matrix. Given that, for the linear model, the estimating equations obtained by differentiating the likelihood are identical (except for the covariance estimator) to GEE with the appropriate covariance structure, this result implies convergence of the MLE for fixed effects in the linear mixed model even when the covariance structure is incorrect. On the other hand, Liang and Zeger (1986) showed that variance estimates of fixed effects may be biased when the covariance structure is incorrect, and they recommend the use of the robust sandwich estimate (Royall, 1986). However, this robust estimate may be unstable for small sample sizes.
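The robust sandwich estimate mentioned above can be sketched for ordinary least squares, where heteroscedastic errors bias the model-based standard errors while the sandwich form remains valid. The data-generating choices below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear model with error variance growing with |x|: the homoscedasticity
# assumption behind the model-based covariance is deliberately violated.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sigma = 0.5 + 1.5 * np.abs(X[:, 1])
y = X @ np.array([1.0, 2.0]) + sigma * rng.normal(size=n)

beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

model_based = XtX_inv * (resid @ resid) / (n - 2)  # assumes constant variance
meat = X.T @ (X * resid[:, None] ** 2)             # sum of x_i x_i' e_i^2
sandwich = XtX_inv @ meat @ XtX_inv                # robust covariance
```

Here the sandwich slope variance exceeds the model-based one because the error variance is largest where the leverage is largest; with a correctly specified covariance the two estimates would agree asymptotically.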

Comparison of the estimated Gaussian hidden semi-Markov chain (GHSMC) parameters (i.e. where the influence of covariates and the inter-individual heterogeneity are not taken into account) with the estimated semi-Markov switching linear mixed model (SMS-LMM) parameters (state occupancy distributions and marginal observation distributions). The regression parameters, the cumulative rainfall effect and the variability decomposition are given.

We study a generic minimization problem with separable non-convex piecewise linear costs, showing that the linear programming (LP) relaxation of three textbook mixed int[r]

resulting in unbiased estimates of variance components in many situations [2, 3].
The standard errors (SE) of parameter estimates are obtained asymptotically from the inverse of the Fisher information matrix [2, 3]. These estimates of the SE might be biased when the asymptotic approximation is incorrect, for example when the sample size is small. Sometimes they cannot be obtained at all, when the model is complex or the design is too sparse. Bootstrap methods represent an alternative approach for estimating the SE of parameters, as well as for providing a confidence interval without assuming it is symmetrical. The bootstrap was first introduced by Efron (1979) for independent and identically distributed (iid) observations. Its principal idea is to resample the observed data repeatedly to create datasets similar to the original one, and then fit the model to each of them to construct the distribution of an estimator or a statistic of interest [4, 5]. Four main bootstrap approaches have been proposed for simple linear regression: case bootstrap, residual bootstrap, parametric bootstrap and wild bootstrap [6, 7, 8, 9]. The case bootstrap is the simplest and most intuitive form, and consists in resampling the entire vector of observations with replacement. The residual bootstrap resamples the residuals after model fitting. The parametric bootstrap adopts the principle of the residual bootstrap but, instead of directly resampling the observed residuals, simulates the residuals from the fitted model.
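Of the four approaches, the residual bootstrap is easy to sketch for simple linear regression: fit once, resample the residuals with replacement, rebuild responses from the fitted values, and refit. All data choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def residual_bootstrap_se(X, y, n_boot=1000):
    """Bootstrap standard errors of regression coefficients by resampling
    the residuals of an initial least-squares fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ beta
    resid = y - fitted
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        boots[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return boots.std(axis=0, ddof=1)

n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
se = residual_bootstrap_se(X, y)   # close to the analytic SEs in this setting
```

Swapping the inner resampling line for row resampling of (X, y) gives the case bootstrap, and simulating residuals from a fitted error distribution gives the parametric bootstrap.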

July 21, 2016
Abstract
We introduce a mixed generalized Dynkin game/stochastic control problem with E^f-expectation in a Markovian framework. We study both the case when the terminal reward function is only Borelian and the case when it is continuous. Using the characterization of the value function of a generalized Dynkin game via an associated doubly reflected BSDE (DRBSDE), first provided in [16], we obtain that the value function of our problem coincides with the value function of an optimization problem for DRBSDEs. Using this property, we establish a weak dynamic programming principle by extending some results recently provided in [17]. We then show a strong dynamic programming principle in the continuous case, which cannot be derived from the weak one. In particular, we have to prove that the value function of the problem is continuous with respect to time t, which requires some technical tools of stochastic analysis and new results on DRBSDEs. We finally study the links between our mixed problem and generalized Hamilton-Jacobi-Bellman variational inequalities in both cases.

Or, when discussing whether a linear transformation is an isomorphism, they wrote:
S4 and S5: T(0,0,0) = (0,0); T(1,0,1)=(1,0); T(1,0,2)=(1,0). As T(1,0,1)=T(1,0,2)=(1,0) ==> T is not injective. T is not an isomorphism.
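The students' argument can be checked mechanically. The matrix below is one hypothetical map consistent with their three computations, T(x, y, z) = (x, 0); equal images of two distinct vectors (equivalently, a nonzero kernel vector) certify non-injectivity:

```python
import numpy as np

# A hypothetical T consistent with T(0,0,0)=(0,0), T(1,0,1)=T(1,0,2)=(1,0).
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

img1 = T @ np.array([1.0, 0.0, 1.0])
img2 = T @ np.array([1.0, 0.0, 2.0])
same_image = np.allclose(img1, img2)          # distinct inputs, same output

# Equivalently: (0,0,1) lies in the kernel of T.
in_kernel = np.allclose(T @ np.array([0.0, 0.0, 1.0]), 0.0)
```

By rank-nullity, any linear map from R^3 to R^2 has a kernel of dimension at least one, so the students' conclusion holds for every such T, whatever its matrix.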
Most students in both experiments showed difficulties when facing rotations, since the rule was not easy to find from the picture or from a table of values. Most students in Brazil had not been introduced to the matrix form of linear transformations; when the teacher introduced it, they treated it as a novelty and used it without problems, again showing encapsulation of linear transformations. Mexican students struggled with the rule using trigonometric functions; they had not been introduced to the matrix representation of the transformation. After some time, students in team A found a possible rule (Figure 3a), and only students in group C realized those equations could be written as the product of a matrix and a vector (Figure 3b). These students also showed encapsulation when they realized that a composition of transformations was needed:

In this paper, we are not concerned with deterministic computer models but with stochastic numerical models, i.e. models in which the same set of input variables leads to different output values. The model itself relies on probabilistic methods (e.g. Monte Carlo) and is therefore intrinsically stochastic because of some "uncontrollable variables". For uncertainty analysis, Kleijnen (1997) raised this question, giving an example concerning a queueing model. In the nuclear engineering domain, examples are given by Monte Carlo neutronic models used to calculate elementary particle trajectories, and by Lagrangian stochastic models for simulating large numbers of particles inside turbulent media (in atmospheric or hydraulic environments). In our study, "uncontrollable" variables correspond to variables that are known to exist but are unobservable, inaccessible or indescribable for some reason. This includes the important case in which observable vector variables are too complex to be described or synthesized by a reasonable number of scalar parameters. This last situation can concern codes in which simulations of complex random processes are used. For example, one can cite the resolution of partial differential equations in heterogeneous random media simulated by geostatistical techniques (e.g. fluid flows in oil reservoirs, Zabalza et al., 1998, and acoustical wave propagation in turbulent fluids, Iooss et al., 2002), where the uncontrollable variable is the simulated spatial field, involving several thousand scalar values for each realization. Of course, in this case, behind this uncontrollable variable stands a fully controllable parameter: the random seed. However, the effect of the random seed on the computer code output is totally chaotic, because a slight modification of the random seed leads to a very different realization of the random medium.
For simplicity and generality, we use the expression "uncontrollable variable" in this paper.
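A minimal sketch of such a stochastic simulator: a Monte Carlo estimator whose controllable input is the sample size and whose "uncontrollable variable" is the random seed. Repeating a run with the same seed reproduces the output exactly, while a different seed gives a different realization:

```python
import numpy as np

def mc_pi(n, seed):
    """Stochastic simulator: Monte Carlo estimate of pi from n random points.
    The seed plays the role of the (normally uncontrollable) random input."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, 2))
    return 4.0 * np.mean(np.sum(pts**2, axis=1) <= 1.0)

a = mc_pi(10_000, seed=1)
b = mc_pi(10_000, seed=2)   # same controllable input, different seed
c = mc_pi(10_000, seed=1)   # identical seed reproduces the first run exactly
```

As the passage notes, fixing the seed makes the run reproducible but does not make the seed a useful design variable: a tiny change in the seed produces an entirely different realization of the output.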


1 Introduction and results
The mathematical study of spin glass models is by now more than 20 years old and still a flourishing subject. These models were first introduced in theoretical physics in an attempt to understand the singular low-temperature behavior of disordered magnetic materials. Historically, the first model developed to this end was a random-interactions version of the classical nearest-neighbor Ising model, called the Edwards-Anderson model [12]. Soon after, a mean-field version of the EA model was introduced by Sherrington and Kirkpatrick [23]: it played and continues to play a central role in the development of the subject. Although it was announced as being "solvable", the SK model turned out to be the source of many very difficult problems, and it is only recently that its thermodynamic limit was proved to exist for all values of the temperature and the external field [14] and its free energy was computed [27].
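For concreteness, the SK model places N Ising spins σ_i = ±1 with i.i.d. Gaussian couplings g_ij between every pair and Hamiltonian H(σ) = -(1/√N) Σ_{i<j} g_ij σ_i σ_j. A minimal sketch of one disorder realization (size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

N = 200
J = rng.normal(size=(N, N))     # i.i.d. Gaussian couplings; only i<j is used

def energy(sigma, J, N):
    """SK Hamiltonian H = -(1/sqrt(N)) * sum_{i<j} J_ij sigma_i sigma_j."""
    return -(sigma @ np.triu(J, 1) @ sigma) / np.sqrt(N)

sigma = rng.choice([-1, 1], size=N)
e_per_spin = energy(sigma, J, N) / N  # O(1/sqrt(N)) for a random configuration
```

The 1/√N scaling makes the energy extensive: a typical random configuration has energy per spin of order 1/√N, while low-lying configurations reach an O(1) negative value, and characterizing that optimum is exactly the free-energy problem resolved in [27].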
