Therefore, a potential variety bias in our sample cannot be completely excluded.
This is the first study to establish the reliability and validity of the GOHAI scale in a representative French sample of patients with schizophrenia, making it a suitable tool to measure OHrQoL and its multiple aspects in this population. Having a validated scale to evaluate OHrQoL in patients with schizophrenia is a positive step towards future multidimensional research on mental and physical well-being in this group. Evaluating self-perceived oral health is necessary to help caregivers and researchers develop ways to improve oral and overall health in patients with schizophrenia. In the future, the GOHAI could support research towards a schizophrenia-specific oral health scale.
Study design and sample
A cross-sectional population-based survey was conducted by telephone from a random sample of White Pages listings in the state of Victoria, Australia. In order to reflect general population occupational group proportions, quotas were set to match Australian Bureau of Statistics (ABS) census proportions of upper white-collar, lower white-collar, and blue-collar groups (29%, 30%, and 41%, respectively). We also quota sampled for urban/Melbourne (72%) versus rural/regional Victoria (28%). The inclusion criteria were 1) being aged 18 years or older, and 2) working at the time of the survey for profit or pay (including self-employment). Interviews were completed in November 2003 with a 66% response rate from in-frame households (i.e., those with one or more working residents aged 18 or over), yielding a representative sample of 1,101 working Victorians (526 men and 575 women).
Fig. 10. Variance D²t(L) of the thickness of PS layers depending on the sample size L, computed from image analysis.
The variance of the PS layer thickness as a function of the sample size is shown in Figure 10. The γ exponents of the scaling law for each morphological property were estimated from the results of image analysis by fitting the slope of the variance curves. Values of K are estimated from Figures 9 and 10 based on Eq. (18). From Figure 9, two slopes are identified for the power law, indicating the existence of two scales of heterogeneities. The first-scale, or local, variability is intrinsic to the microstructure induced by the extrusion process: it encompasses the effects of short-range physical phenomena, such as flow nonlinearities, local thermal inhomogeneity and interfacial interactions. This first scale of variability is always present, although its effects become less pronounced for a larger system; it is characterized by a consistent γ exponent of 0.66–0.75 for both properties, which should be compared to the theoretical value of 0.5 obtained for random fibres in 2D (Jeulin, 2015). The second scale of variability is seen only for sample sizes larger than 10⁴ nm; its origin could be ascribed to boundary-effect patterns arising during the process. Indeed, due to the higher shear rate prescribed to the melt at the wall while passing through multiplying elements, layers in the vicinity of the wall become thinner than the others. If this phenomenon occurs at each multiplying step, the final sample is constituted of patterns with long-range varying layer-thickness sequences. The tipping point between the slopes could then be interpreted as the characteristic length of such a pattern. In our case, the pattern dimension can be estimated at 2×10⁴ nm, corresponding to approximately 100 layers, i.e. about 10% of the film thickness.
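The exponent estimation described above reduces to a straight-line fit in log-log coordinates, since D²(L) ≈ K·L^(−γ). A minimal sketch of such a fit, using purely illustrative synthetic values rather than the paper's image-analysis data:

```python
import numpy as np

def fit_gamma(L, var):
    """Estimate the scaling exponent gamma and prefactor K from
    D^2(L) ~ K * L**(-gamma) by a linear fit in log-log coordinates."""
    slope, intercept = np.polyfit(np.log(L), np.log(var), 1)
    return -slope, np.exp(intercept)

# Synthetic variance curve with gamma = 0.7 (illustrative values only)
L = np.logspace(2, 4, 20)      # sample sizes, e.g. in nm
var = 3.0 * L ** -0.7
gamma, K = fit_gamma(L, var)
```

In practice the two regimes discussed above would each be fitted on their own range of L, on either side of the tipping point.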
Rather than considering this a limitation of the statistical approach invoked for the case of films with finite dimensions, we propose to use this method for the characterization of microstructural variability, in order to study the effect of process parameters on the quality of nanolayered films. As a
With regard to the results of this study, the specificity of the model developed was only 49.3%. Although the authors argue that 75% of 'false-positive gamblers' (i.e., the gamblers wrongly classified as problem gamblers, 50.7%) had responded positively to at least one question on the PGSI, a more discriminant model might have been achieved by using another type of algorithm. In particular, the authors explain that quantitative variables were categorised into quartiles, which leads to a loss of information compared with treating them as continuous. Furthermore, from a prediction perspective, it is often worthwhile to train multiple models and select the best one. Moreover, like most studies using real gambling data, this work focused on only one type of gambling from a single operator, whose clients may not be representative of all online gamblers in a given country.
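For illustration, the quartile categorisation criticised above can be sketched as follows (data and names are hypothetical); every value inside a bin collapses to a single code, which is exactly the information loss relative to a continuous treatment:

```python
import numpy as np

def to_quartiles(x):
    """Recode a continuous variable into quartile categories 0..3,
    as done when quantitative predictors are categorised."""
    edges = np.percentile(x, [25, 50, 75])
    return np.digitize(x, edges)

# A hypothetical continuous predictor (e.g. amount wagered)
x = np.arange(100.0)
q = to_quartiles(x)
# Within each quartile, all distinct values of x map to one code,
# so within-bin variation can no longer inform the model.
```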
Most individuals in the study sample were men (MSM: 39%, heterosexual men: 27%) and were over 40 years old (78%) (Table 1). Nine percent were chronically co-infected with HCV. More than one third reported feeling lonely, with a mean index of experience of discrimination of 0.36. Fig 1 presents the percentage of PLHIV who reported experiencing discrimination in each of the six different social contexts, as a function of suicide risk. The percentage of discriminated PLHIV, irrespective of the social context considered, was systematically higher among individuals with suicide risk than among the other PLHIV. Medical care (health services) and family were the two social contexts of discrimination most reported among PLHIV with suicide risk.
Methods: This is a cross-sectional study based on data from a nationally representative study of health and use of healthcare resources in France (ESPS 2012). The number of frailty criteria was assessed among exhaustion, unintentional weight loss, muscle weakness, impaired mobility, and low level of physical activity. Polypharmacy and PIMs were assessed from National Health Insurance reimbursement data covering the whole of 2012. PIMs were defined according to the Laroche list plus additional criteria dealing with inappropriate prolonged use of medications. The analyses used Poisson regression models, with the number of frailty criteria as the dependent variable.
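A Poisson regression with a count outcome, as used in these analyses, can be sketched with a small iteratively reweighted least squares (IRLS) fit. This is a generic illustration on synthetic data, not the ESPS analysis itself:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression (log link) by iteratively reweighted
    least squares; returns the coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu        # working response
        XtW = X.T * mu                 # IRLS weights for Poisson are mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Synthetic example: a count outcome (e.g. number of frailty criteria)
# regressed on an intercept, a continuous and a binary covariate.
rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
true_beta = np.array([0.2, 0.5, -0.3])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_irls(X, y)
```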
The question of representativity has been a topic of interest in scientific communities for half a century, especially in the fields of materials science, micromechanics and microscopy. Indeed, microstructural heterogeneities play a critical role in the macroscopic physical properties of materials. One common way to account for this underlying complexity is to resort to homogenization techniques. Many approaches, including analytical and computational ones, are available for determining the homogenized properties of random media. Most of them necessitate the existence of a representative volume element (RVE). More refined definitions have been given for the RVE over the past 50 years, mostly within the context of micromechanics of elastic media. A review of this topic can be found in Gitman et al. (2007) and Dirrenberger et al. (2014). The clas-
Naturalistic data are a useful source for language acquisition research. Recently, the importance of denser corpora has been emphasized in order to capture an accurate picture of child language development. However, working with large amounts of data raises resource issues, since recording and transcription are time-consuming. In this article, we focus on the ideal duration of a naturalistic recording for it to be considered a representative sample of children's linguistic behaviors when observing the acquisition of words and sounds. Some of our results suggest that 30 minutes of recording may be enough to capture these specific developments, but these results are discussed from the perspective of what an ideal session could be.
Keywords: method, language acquisition, naturalistic data, lexical development, phonological development
well be related to individual differences in socio-economic characteristics and social norms of behavior.
The empirical question we address in this paper is to measure how variations in social norms and in the economic and social characteristics of individuals affect their propensities to provide and sustain social capital. In order to perform our measurements, we combine the strengths of survey and experimental methods by having a large representative sample of the Dutch population play a computerized version of a two-player game similar to that presented by Berg, Dickhaut and McCabe (1995) (henceforth BDMc). The structure of the game allows concerns for social efficiency and motives of trust, trustworthiness, positive reciprocity, and altruism to emerge from the players' decisions. In this game, two players are given an equal endowment, with one player randomly assigned to the role of sender and the other to the role of responder. The sender must decide how much of his endowment to invest. This amount is doubled and transferred to the responder, who must choose how much of his total wealth, i.e., the amount received plus his endowment, to return to the sender. It is easy to see that investments are socially desirable in this game, as they increase the overall social surplus. An element of trust is involved, as senders bear the risk that responders return nothing. Trustworthiness and reciprocity are involved, as responders have the possibility to reward the trust placed by senders. Moreover, senders and responders may also invest or return, regardless of the action of the other player, out of pure altruism (see e.g., Cox, 2004).
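The payoff structure of the game described above can be made concrete in a few lines (the endowment and the amounts sent and returned are illustrative):

```python
def trust_game_payoffs(endowment, sent, returned):
    """Payoffs in the BDMc-style game described above: the sender's
    investment is doubled and added to the responder's endowment,
    from which the responder returns an amount to the sender."""
    assert 0 <= sent <= endowment
    responder_wealth = endowment + 2 * sent
    assert 0 <= returned <= responder_wealth
    sender_payoff = endowment - sent + returned
    responder_payoff = responder_wealth - returned
    return sender_payoff, responder_payoff

# With endowment 10: sending everything and splitting the responder's
# wealth evenly yields (15, 15), versus (10, 10) if nothing is sent,
# showing why investment raises the total surplus (20 + sent).
```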
Taken together, to our knowledge, no data appear to be available on the relationship between PA and SED-time and hsCRP in patients with arthritis or fibromyalgia. Furthermore, no study has investigated the specific level of PA that is associated with hsCRP levels below the clinical cut-point of 3 mg/L in any population. Therefore, the aim of the present study was to examine the cross-sectional association between objectively measured PA and SED-time and hsCRP levels in adults with arthritis and fibromyalgia. We also investigated the level of PA associated with lower clinical levels of hsCRP. We hypothesized that higher daily levels of PA and lower daily time spent in sedentary behaviors in adults with arthritis and fibromyalgia would be associated with lower hsCRP levels (i.e., <3 mg/L), and that the shape of these associations would not be linear. This is the first study to identify a specific level of objectively measured PA associated with lower hsCRP levels in people with arthritis or fibromyalgia in a nationally representative sample.
Feature selection, topological representative subgraphs, frequent subgraphs, graph databases.
With the emergence of graph databases, the task of frequent subgraph discovery has been extensively addressed. Many approaches have been proposed in the literature allowing the extraction of frequent subgraphs in an efficient way. Yet, the number of discovered frequent subgraphs is extremely high, causing an information overload that may hinder, or even make unfeasible, further exploration. Feature selection is a way to tackle this information overload problem. As structural similarity represents one major cause of redundancy in frequent subgraphs, many works have proposed subgraph selection based on exact or approximate structural similarity [1, 2, 3, 4]. Other works are based on closed and maximal subgraphs, such as [1, 2, 6, 5]. Although the set of closed or maximal subgraphs is much smaller than that of frequent ones, the number of subgraphs is still very high. In some applications, slight differences between subgraphs do not matter. Yet, in real-world cases, very similar subgraphs sometimes differ slightly in structure. Exact structural isomorphism does not help to overcome this issue.
Identifying representative muscle synergies in overhead football throws
A.L. Cruz Ruiz a,b *, C. Pontonnier a,b,c , A. Sorel a , G. Dumont a,b
a IRISA/INRIA MimeTIC, Rennes, France; b ENS Rennes, Bruz, France; c Ecoles de Saint-Cyr Coëtquidan, Guer, France
Our purely extrinsic parcellation shows good agreement with anatomical and functional parcellations from the literature. In particular, the motor and sensory cortices appear to be identified. Having the tractograms in a vectorial space allowed us to work with them efficiently and to create a population-representative parcellation.
Abstract: A good sample is a point set such that any ball of a given radius contains a constant number of points. The Delaunay triangulation of a good sample is proved to have linear size; unfortunately, this is not enough to ensure a good time complexity for the randomized incremental construction of the Delaunay triangulation. In this paper we prove that a random Bernoulli sample of a good sample has a triangulation of linear size. This result allows us to prove that the randomized incremental construction uses expected linear space and expected O(n log n) time.
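The random Bernoulli sample in question keeps each point of the input independently with a fixed probability p; a minimal sketch (the point set and probability are illustrative):

```python
import random

def bernoulli_sample(points, p, seed=0):
    """Keep each point independently with probability p, as in the
    random Bernoulli sample of a good sample discussed above."""
    rng = random.Random(seed)
    return [q for q in points if rng.random() < p]

# A 100 x 100 grid as a stand-in point set; the subsample has
# expected size p * n, here 0.1 * 10000 = 1000 points.
pts = [(i, j) for i in range(100) for j in range(100)]
sub = bernoulli_sample(pts, 0.1)
```

The paper's result concerns the Delaunay triangulation of such a subsample, whose expected size is shown to be linear.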
1/√n (Gretton et al., 2006), where n is the number of samples. In contrast, it is well known that standard OT suffers from the curse of dimensionality (Dudley, 1969): its sample complexity is exponential in the dimension of the ambient space. Although it was recently proved that this result can be refined to consider the implicit dimension of the data (Weed and Bach, 2017), the sample complexity of OT now appears to be the major bottleneck for the use of OT in high-dimensional machine learning problems. A remedy to this problem may lie, again, in regularization. Divergences defined through regularized OT, known as Sinkhorn divergences, seem indeed to be less prone to over-fitting: a certain amount of regularization seems to improve performance in simple learning tasks (Cuturi, 2013). Additionally, recent papers (Ramdas et al., 2017; Genevay et al., 2018) have pointed out that Sinkhorn divergences in fact interpolate between OT (when regularization goes to zero) and MMD (when regularization goes to infinity). However, aside from a recent central limit theorem in the case of measures supported on discrete spaces (Bigot et al., 2017), the convergence of empirical Sinkhorn divergences, and more generally their sample complexity, remains an open question.
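The entropy-regularised OT cost underlying Sinkhorn divergences can be sketched with a plain Sinkhorn fixed-point iteration on the Gibbs kernel. The measures and cost below are illustrative, and no log-domain stabilisation is included, so very small regularisation values would underflow:

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps, n_iter=200):
    """Entropy-regularised OT cost <P, C> obtained by Sinkhorn
    iterations on the Gibbs kernel K = exp(-C/eps); a and b are
    probability vectors over the two supports."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(P * C)

# Two small discrete measures on the line, squared-distance cost
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5])
C = (x[:, None] - y[None, :]) ** 2
a = np.full(3, 1 / 3)
b = np.full(2, 1 / 2)
cost = sinkhorn_cost(a, b, C, eps=0.05)   # close to the OT cost 0.25
```

As eps grows the plan spreads out and the cost moves away from the unregularised OT value, consistent with the interpolation towards MMD mentioned above.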
VI. CONCLUSION
In this paper we have shown the importance of performing automated web browsing sessions through different real on-market web browsers, different Internet protocols and user-representative residential network access, in order to shed light on the parameters which can increase or decrease web browsing quality. Our monitoring website tracks a set of websites and the induced temporal web browsing quality, i.e., the evolution of the remote websites' structure, through which Internet protocol the objects are delivered, and from where they are downloaded. Among the measured websites, 61.02% are Google websites; when using Google Chrome, an average of 3 resources are pre-fetched at the browser's start-up, thus decreasing the overall download time. Through benchmarking techniques, we found that Google Chrome is best at graphical rendering and contributes to increased web browsing quality (compared to Mozilla Firefox).
Given a network N with rule collection R having c atoms, any classical verification task can obviously be centrally checked in O(cℓ Σ_{u∈V} |T(u)|) time once an optimal representative collection for R has been computed. The reason is that the action taken by a router u ∈ V with forwarding table T(u) on any header of an atom A is the action associated with the first rule of T(u) containing s, where s is the atom representative of A (or drop if no rule contains s), and it can be identified in O(|T(u)| ℓ) time. Classical verification tasks including NO-LOOP, NO-BLACKHOLE, REACHABILITY(u, v) and CONSISTENCY(u, v) can then be performed in linear time on the graph resulting from the actions of all routers.
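The first-matching-rule lookup described above can be sketched as follows; the table contents are hypothetical, with each rule represented as a pair of a match set over atom representatives and an action:

```python
def forward_action(table, s):
    """Action taken by a router on atom representative s: the action
    of the first rule in the forwarding table whose match set
    contains s, or drop if no rule matches."""
    for match, action in table:
        if s in match:
            return action
    return "drop"

# Hypothetical two-rule table: representative 3 matches both rules,
# but table order decides, so it takes the first rule's action.
table = [({1, 2, 3}, "fwd:A"), ({3, 4}, "fwd:B")]
```

Summing this per-router lookup cost over all routers and all c atoms recovers the O(cℓ Σ_{u∈V} |T(u)|) bound stated above.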