The Standard Model contains every elementary particle that has ever been seen experimentally. With the discovery of the Higgs boson, the converse is now also true: every particle predicted by the Standard Model has been discovered. However, a complete theory of nature should, among other things, account for neutrino masses, dark matter and dark energy, and provide an explanation for the origin of the matter-antimatter asymmetry of the Universe. Thus, the Standard Model has to be embedded into a bigger picture. In the lepton sector, low-energy phenomena could provide an interesting probe of new physics. For instance, the observation of neutrinoless double beta decay would be an unmistakable sign that neutrinos are Majorana particles, while flavour-violating processes in the sector of charged leptons are so strongly constrained in the Standard Model that their observation would be a very clear signal of new physics. A good example of the links that can exist between high-energy and low-energy physics is leptogenesis, which provides a common origin for neutrino masses and the matter-antimatter asymmetry of the Universe. Leptogenesis is in general difficult to probe directly due to the large mass of the particles involved, but it can be related to properties of neutrinos: for instance, any indication that neutrinos are Majorana fermions would strongly advocate for such scenarios. In chapter 3, we showed the importance of flavour effects in the context of leptogenesis with a scalar triplet, even in a temperature regime in which the Yukawa couplings of leptons do not allow them to be distinguished. In this respect, leptogenesis with a scalar triplet differs from scenarios involving hierarchical right-handed neutrinos. These flavour effects significantly enlarge the parameter space available for successful leptogenesis.
We also studied a model in which the CP violation responsible for the lepton asymmetry can be expressed straightforwardly in terms of neutrino parameters, which makes it very predictive.
PACS numbers: 12.15.Hh, 12.15.Ji, 12.60.Fr, 13.20.-v, 13.38.Dg
Flavour physics looks back on a quarter-century of precision studies at the B factories, with a parallel theoretical effort addressing the Standard Model (SM) predictions for the measured quantities. With the parameters of the Cabibbo-Kobayashi-Maskawa (CKM) matrix overconstrained by many measurements, one can predict as yet unmeasured quantities. Still, the global fit to the CKM unitarity triangle reveals some discrepancies with the SM, driven by a conflict between B(B → τν) and sin(2β) measured from Bd → J/ΨK [4, 5]. Furthermore, in May 2010 the DØ experiment reported a deviation of the semileptonic CP asymmetry (dimuon asymmetry) in Bd,s decays from its SM prediction [6, 7] by 3.2σ. In June 2011 this discrepancy increased to 3.9σ. In summer 2010 the data could be interpreted in well-motivated scenarios with New Physics (NP) in B − B̄ mixing amplitudes. In this letter we present novel analyses which include the new data of 2011, in particular from the LHCb experiment.
III. PIC MCC MODEL AND SIMULATION CONDITIONS
A. Simulation conditions
In this paper we consider conditions that are similar (in the sense of discharge similarity laws) to those of the magnetron discharge experiment of Ito et al. [10] at Stanford University. The conditions of the experiments of Ito et al. were simulated in a recent letter [39]. In this relatively simple experiment, Ito et al. observed, with the help of a fast CCD camera, well-defined, self-organized structures rotating in the azimuthal direction of a miniature magnetron. These experiments were performed in a small, 2 mm gap dc magnetron discharge at a pressure of 20 Pa (0.15 torr) in argon, with a magnetic field decaying from about 1 T at the cathode surface to 0.1 T at the anode surface. In this miniature magnetron discharge, several regions of enhanced luminosity were seen to rotate in the −𝐸 × 𝐵 direction at velocities on the order of 10 km/s. The number of rotating structures (or mode number) was 5 for an applied voltage of 260 V and decreased to 3 for an applied voltage of 274 V. We consider here discharge dimensions and pressure that are closer to those used in practical applications of magnetrons than in the discharge of Ito et al. [10]. The dimensions are multiplied by a factor of 10 with respect to those of the discharge of Ito et al., and the pressure and magnetic field are divided by 10. According to the classical similarity laws, gas discharges with the same pd product (pressure times dimensions) and the same B/p ratio (magnetic field over pressure) are "similar", i.e., the distributions of n/p² and j/p² (ratios of charged particle densities and current densities to pressure squared) and of E/p (electric field strength over pressure), as functions of px and py (products of pressure and the spatial coordinates x and y), are identical. The relation between the time scales in two similar discharges is such that the product pt (pressure × time) is conserved.
The various velocities (thermal, drift, etc.) and the particle energies are identical in similar discharges.
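The similarity laws quoted above can be checked with a short numerical sketch. The reference values are the Ito et al. conditions as given in the text, and the factor-of-10 scaling is the one applied in this work; the code itself is purely illustrative and not part of the simulation.

```python
# Checking the classical similarity laws for the scaled-up magnetron discharge.
ref = {"gap_m": 2e-3, "p_Pa": 20.0, "B_T": 1.0}   # Ito et al. conditions (2 mm, 20 Pa, 1 T)
scale = 10.0                                       # geometric scaling factor used here

scaled = {
    "gap_m": ref["gap_m"] * scale,   # dimensions multiplied by 10
    "p_Pa": ref["p_Pa"] / scale,     # pressure divided by 10
    "B_T": ref["B_T"] / scale,       # magnetic field divided by 10
}

def pd(d):        return d["p_Pa"] * d["gap_m"]   # pd product (Pa·m)
def b_over_p(d):  return d["B_T"] / d["p_Pa"]     # B/p ratio (T/Pa)

# Similar discharges keep pd and B/p invariant:
assert abs(pd(ref) - pd(scaled)) < 1e-12
assert abs(b_over_p(ref) - b_over_p(scaled)) < 1e-12

# Time scales follow pt = const: a feature of duration t_ref in the reference
# discharge corresponds to t_ref * p_ref / p_scaled in the scaled discharge.
t_ref = 1e-6
t_scaled = t_ref * ref["p_Pa"] / scaled["p_Pa"]
print(pd(ref), b_over_p(ref), t_scaled)
```

With these numbers, the scaled discharge evolves ten times more slowly than the reference one, as expected from pt conservation.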
Howard and Vance (2007) mentioned that a successful virtual assembly environment requires virtual parts to emulate real-world part behaviour. According to Seth et al. (2011), this can be achieved by means of physics-based modelling (PBM), which uses physics simulation engines (PSEs) to simulate real-world physical properties, such as friction, gravity and contact forces, to perform the assembly. The use of PBM results in a better appreciation and understanding of part functionality and can also lead to improved training of manual tasks (Wang et al., 2001; Zerbato et al., 2011). However, there are several challenges when integrating haptics with PSEs, e.g. synchronization, ineffective collision detection, high computational cost and a negative impact on the performance of the application (Seugling and Rölin, 2006), mainly because simulation engines have not been developed for haptic rendering, where the update frequency is over 1 kHz while the physics simulation update rate is around 100 Hz (Ritchie et al., 2008a, b; Glondu et al., 2010). The aim of this paper is to present a methodology to evaluate the performance of PSEs by identifying their strengths, limitations and weaknesses when used in haptically
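The rate mismatch described above (haptics at ~1 kHz, physics at ~100 Hz) can be illustrated with a minimal sketch. A common workaround, shown here only as an assumed example and not necessarily the approach of the cited systems, is to interpolate between the last two physics states so the haptic loop always has a fresh pose.

```python
# Illustrative sketch: one 100 Hz physics step is covered by ten 1 kHz haptic frames.
PHYSICS_HZ = 100                              # physics engine update rate
HAPTIC_HZ = 1000                              # haptic rendering rate
FRAMES_PER_STEP = HAPTIC_HZ // PHYSICS_HZ     # haptic frames per physics step (10)

def interpolate(prev_pos, next_pos, alpha):
    """Linearly interpolate between two physics states, 0 <= alpha <= 1."""
    return tuple(p + alpha * (n - p) for p, n in zip(prev_pos, next_pos))

# Two consecutive physics states (hypothetical 3D positions):
prev_state, next_state = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)

# Intermediate poses served to the haptic loop between physics updates:
haptic_frames = [
    interpolate(prev_state, next_state, k / FRAMES_PER_STEP)
    for k in range(FRAMES_PER_STEP)
]
print(len(haptic_frames))
```

Interpolation smooths the rendered force at the cost of one physics step of latency; extrapolation is the usual alternative when latency matters more than stability.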
Another consequence of calorimetry-driven reconstruction is that stray ECAL clusters produced by mechanisms other than pp collisions can be misidentified as photons. In particular, beam halo muons that accompany the proton beams and penetrate the detector longitudinally, and the interaction of particles in the ECAL photodetectors ("ECAL spikes"), have been found to produce spurious photon candidates at nonnegligible rates. To reject these backgrounds, the ECAL signal in the seed crystal of the photon cluster is required to be within ±3 ns of the arrival time expected for particles originating from a collision. In addition, the candidate cluster must comprise more than a single ECAL crystal. Furthermore, the maximum of the total energy along all possible paths of beam halo particles passing through the cluster is calculated for each photon candidate. This quantity, referred to as the halo total energy, is required to be below a threshold defined to retain 95% of the true photons, while rejecting 80% of the potential halo clusters.
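The three cleaning requirements above can be summarized in a short sketch. The field names and candidate structure are invented for illustration; only the cut values (±3 ns seed timing, more than one crystal, halo total energy below the 95%-efficiency threshold) come from the text.

```python
# Hypothetical photon-cleaning selection combining the three cuts described above.
def passes_cleaning(candidate, halo_energy_threshold):
    seed_in_time = abs(candidate["seed_time_ns"]) <= 3.0      # within ±3 ns of collision time
    multi_crystal = candidate["n_crystals"] > 1               # rejects single-crystal spikes
    low_halo_energy = candidate["halo_total_energy"] < halo_energy_threshold
    return seed_in_time and multi_crystal and low_halo_energy

# Invented example candidates (units: ns for time, GeV for energy):
photon = {"seed_time_ns": 0.8, "n_crystals": 7, "halo_total_energy": 2.1}
spike = {"seed_time_ns": 0.2, "n_crystals": 1, "halo_total_energy": 0.5}
print(passes_cleaning(photon, 5.0), passes_cleaning(spike, 5.0))
```

In practice the halo-energy threshold would be derived from simulation or data control samples so that 95% of true photons survive.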
interviewers, the test period. Participants were assured that their answers would be treated with confidentiality and all respondents were chosen from the top management staff.
Ben Youssef et al. (2011) identified three waves of IT using the same database. All indicators (adoption level of IT, depth of usage of IT and time required to use a particular IT) show that there are three waves. The first wave consists of "generally used technologies", assumed to be relatively widespread (more than 80%), intensively used (between 4 and 5 on a Likert scale) and rapidly introduced in all business sectors. These technologies are: fixed phones, telecopy, office computers and general-purpose software. The second wave is formed by "intermediary technologies" with a high potential of use. In the mid-1990s they were called "new" IT: Internet, e-mail, specific software, free software, and mobile phones. The third wave is based on networking technologies. They are among the latest technological generations of IT. Most of them require costly investment, know-how and qualified human resources for their use to be optimized. These technologies are Intranet, laptops, videoconferencing (VC) and Electronic Data Interchange (EDI). All firms use the technologies of the first wave. We focused on the second and third waves of these technologies.
Title: Building observables in cosmology: towards new probes for the dark sector
Abstract: The nature of dark energy and dark matter is still a mystery. Future space missions will allow us to observe the properties and distribution of billions of galaxies. But what is the best way to constrain the physics of these unknown components with such a quantity of data? The goal of this thesis is to search for new probes of the dark sector of the universe in the linear and non-linear regimes of structure formation. The physics of the dark sector leaves imprints in the distribution of large-scale structures at a given time ("real" space). However, their apparent distribution as seen by an observer (redshift space) is slightly different from that in real space. Indeed, the messengers (such as light) are perturbed during their journey from the source to the observer. What, then, is the relation between real space and redshift space? How can cosmological information be extracted from this transformation? The core of my work was to simulate observables while taking into account all relativistic effects at first order in the weak-field approximation. Weak gravitational lensing modifies the apparent position of sources as well as their properties (shape, luminosity), while redshift perturbations change the apparent radial distance of objects. To address these questions, we performed a large, highly resolved N-body simulation, ideal for studying halos with sizes ranging from that of the Milky Way to that of galaxy clusters. We then followed the trajectories of photons in the simulation by directly integrating the geodesic equations, with the weak-field approximation as the only approximation. We also developed an algorithm that allows us to connect an observer to sources via null geodesics.
The lensing matrix is thus evaluated through the deformation of a light beam, while the redshift is computed directly from its definition in general relativity. Thanks to this ray-tracing library, we were able to build halo catalogues that take relativistic effects into account.
GENERAL CONCLUSION
In this thesis, we have shown the importance of the uncertainties related to dark matter searches. First we presented a study of the constraints derived from dark matter searches and SUSY searches at colliders applied to the Minimal Supersymmetric extension of the Standard Model, focusing on neutralino dark matter. It showed that the various types of dark matter constraints, namely from the relic density and from direct and indirect detection, are very complementary, as they exclude neutralinos of different natures. More precisely, the upper bound on the relic density excludes mostly bino-like neutralinos, whereas direct and indirect detection rather exclude Higgsinos and winos, respectively. Concerning direct detection, the constraints are limited, in particular, by the uncertainties on the local dark matter density. As for indirect detection, the constraints suffer from our poor knowledge of the dark matter density profile and of the propagation of charged cosmic rays through the galactic medium. When combined with collider constraints, which are obtained in an environment under control, direct detection constraints become quite robust with respect to the mentioned uncertainties. This is not the case for indirect detection, whose constraints are still undermined by cosmic ray propagation uncertainties. Nevertheless, even in the most conservative case, indirect detection excludes compressed scenarios which evade collider constraints.
The new regulation on financial stability lists global systemically important financial institutions (G-SIFIs), which have to comply with specific regulatory requirements. One of the criteria for a bank to be identified as systemically important is its interconnectedness. In this respect, our paper studies how an external adverse shock impacts the financial situations of banks and insurance companies and how it diffuses among these companies. In particular, we explain how to disentangle the direct and indirect (contagion) effects of such a shock, how to exhibit the contagion network and how to detect the most important firms involved in the contagion process, that is, the "superspreaders", especially the institutions which are "too interconnected to fail".
Interestingly, most, if not all, active particle systems found in nature take place, at all scales, in heterogeneous media: from bacterial motion in natural habitats, such as the gastrointestinal tract and the soil, among other complex environments, to the migration of herds of mammals across forests and steppes. Despite this evident fact, active matter research has focused almost exclusively, at both the experimental and the theoretical level, on homogeneous active systems [25,26,27,23,28]. Non-equilibrium, large-scale properties of active systems, such as long-range order in two dimensions, as Vicsek et al. reported in their pioneering paper, the emergence of high-order, high-density traveling bands [30,31], and the presence of giant number fluctuations in ordered phases [26,32,33], are all non-equilibrium features either predicted or discovered in perfectly homogeneous systems. Here we show that most of these non-equilibrium features are strongly affected by the presence of spatial heterogeneities. Moreover, we show that these properties vanish in strongly heterogeneous media. More specifically, we extend previous results on the large-scale collective properties of interacting self-propelled particles (SPPs) moving at constant speed in a heterogeneous space. We model the spatial heterogeneity as a random distribution of undesirable areas or "obstacles" that the SPPs avoid. The degree of heterogeneity
Composite Higgs models
All the measurements in the Higgs sector so far are aligned with the SM predictions. Yet, it is not known whether at small distances the Higgs boson is a fundamental scalar field or a composite bound state like all the other scalar particles observed thus far. Composite Higgs models are the particle physics version of the BCS theory of superconductivity. They also solve the hierarchy problem of the Standard Model, owing to compositeness form factors taming the divergent growth of the Higgs boson mass under quantum effects. Furthermore, the measured Higgs boson mass could well be consistent with the fact that such a (now composite) object arises as a pseudo Nambu-Goldstone boson (pNGB) from a particular coset of a global symmetry breaking [133, 134]. Models with a Higgs state as a pNGB generally also predict modifications of its couplings to both bosons and fermions of the SM; hence the measurement of these quantities, at either a hadronic or a leptonic collider, represents a powerful way to test its possible non-fundamental nature [135]. In addition to deviations in the Higgs couplings, composite Higgs models also predict vector resonances at a scale of a few TeV; heavy vector-like fermionic top partners that could mix with the top quarks and induce sizeable deviations in the EW couplings of the top quark; and heavy vector-like top partners with exotic charges that could be searched for directly, for instance in same-sign di-lepton channels. The synergy and complementarity between these direct and indirect signatures have been discussed in the literature [136, 137].
Parallel to these achievements, recent technological developments suggest that the future Linear Collider will be an ambitious project in terms of the expected detector efficiency, as well as electron and positron beam polarisations. It is also expected that such a collider will run at (very) large centre-of-mass energies, leading to a very high luminosity; see e.g. [45, 46]. In view of all the above, in this study we revisit the potential manifestations of cLFV at a future LC assuming a type I SUSY seesaw as the unique source of LFV, discussing how the possible direct signals and their synergy with other strongly correlated cLFV observables can contribute to probing the mechanism of neutrino mass generation.
This latter case leads us to organizations found in the overlap between the state and the third sector, where we can note the increased blurring between public and private that results in new or hybrid organizations. One example is found in quasi-public organizations, or what Streeck and Schmitter (1985) refer to as "private interest government", where the division between public and private almost disappears. Private non-profit organizations are explicitly given official public responsibilities in terms of defining, deciding and implementing public policy. Such quasi-public organizations often comprise the nexus of networks of public and private bodies with a strong mutual interest in regulating a certain field (Kenis, 1990). This overlapping area also includes the increasingly important partnerships between TSOs and public authorities. For instance, in the fields of education and health, as well as various others, it is quite common for the state to delegate the provision of social services and to have contractual arrangements with private non-profit schools, hospitals or mutuals that it heavily finances. The strict regulation and supervision it imposes on them explain why such TSOs appear closer to the public sector than to the very centre of the third sector.
However, in less exotic circumstances, for instance for systems displaying collective motion or for materials that can be described by thermodynamics, it is possible to select, among the huge set of microscopic variables, some subset of relevant variables which obey autonomous equations of motion. The latter dynamical equations are often identified with phenomenological equations. Their establishment from microphysics involves approximations, such as the short-memory approximation, but they can hold with high precision if there is a clear-cut separation of time scales which allows the elimination of irrelevant variables. The existence of these variables manifests itself through quasi-instantaneous dissipative effects. Such a reduction of the description brings in new features, which differ qualitatively from those of microphysics, especially for macroscopic systems: continuity and extensivity of matter at our scale, irreversibility and non-linearity of the equations of motion, existence of phase transitions, enormous variety of behaviours in spite of the simplicity of the elementary constituents and in spite of the unicity of the microscopic laws. Due to the change of scale, the statistical fluctuations of the relevant observables are small compared to experimental or computational errors. The very nature of our description is thus changed. Although the underlying, more fundamental description yields predictions having subjective features due to the necessary use of probabilities, it gives way to a reduced description that no longer involves the observer. Since the variances are negligible, the physical quantities do not need to be regarded as random variables. The expectation values ⟨Â_i⟩
primarily explained by the fact that some countries, like France, are sources of CO2, while others are carbon sinks. Thus, the net global balance of CO2 emissions from agricultural soils is relatively low.
Bio-energies are the prime mitigation catalyst, both in France and the World
A major portion of the mitigation potential comes from bio-energies, half of which is accounted for by bio-fuels. This estimate, which reflects the targets of the French Climate Plan adopted in 2004, probably underestimates the mitigation potential of bio-fuels. This is because Leseur (2006) takes into account the targets set by the 2003 European Directive, which aimed to include 5.75% of bio-fuels in the fuel mix by 2010. In fact, this target was broadly met, and the new target is to include 10% of bio-fuels between now and 2020. The other half of the mitigation potential corresponds to the use of crop residues, such as straw, and to growing dedicated crops like miscanthus, in order to produce heat or electricity.
The principal result of this work is to show the impact of the full nuclear calculation on the spectral shape. Thus we compare our 214Pb and 212Pb spectra to those used in the background model of Ref. [9], shown here in blue. As only rank-1 operators contribute to these transitions, β electrons can only be created with κ = ±1; i.e., the atomic exchange correction as for an allowed transition in Eq. (10) is a good approximation. However, the spectra were calculated as allowed in Ref. [9], without any nuclear shape adjustment, unlike in the present work.
We propose two different and complementary observables for singling out possible signals of physics beyond the standard model in the semi-leptonic decays Λb → Λc ℓ ν̄ℓ, both with the τ lepton and with a light lepton. The two observables are the partial decay width and a T-odd asymmetry, whose respective sensitivities to scalar and/or pseudo-scalar couplings are calculated as functions of the parameters characterizing new
• k — the ratio of the expected background in the control region to the expected background in the signal region. If the uncertainty δk on k is not negligible, it should also be included.
In the case of a multi-bin analysis, the above numbers should be given for each bin. An important complication to address is how to account for systematic uncertainties. For the single-bin analysis, numbers should be reported with and without the inclusion of systematic uncertainties. The same holds for theoretical uncertainties of various types: it would be useful if the experiments also provided results obtained without the inclusion of the theoretical uncertainties for well-specified theoretical inputs (such as parton distribution functions (PDFs), top mass, etc.). In particular, since theoretical uncertainties are not static, this has the advantage of facilitating their re-assessment at a later stage in a straightforward manner. Systematic uncertainties on the signal, δS, should be given separately for detector-specific sources and for SM theory uncertainties, such as PDFs. The systematic uncertainty stemming purely from the calculation of the signal model prediction should be left out.
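The role of the transfer ratio k can be made concrete with a minimal sketch. Since k is the expected control-region background divided by the expected signal-region background, the signal-region estimate is the control-region yield divided by k. The quadrature combination of the uncertainties below is a standard assumption for illustration, not something prescribed here.

```python
import math

def background_in_sr(n_cr, k, dn_cr=0.0, dk=0.0):
    """Estimate the signal-region background and its uncertainty from the
    control-region yield n_cr and the transfer ratio k (hypothetical inputs)."""
    b_sr = n_cr / k
    # Relative uncertainties combined in quadrature; dk enters only if non-negligible.
    rel = math.hypot(dn_cr / n_cr if n_cr else 0.0, dk / k if k else 0.0)
    return b_sr, b_sr * rel

# Invented example numbers: 200 control-region events, k = 4, 7% yield
# uncertainty and 10% uncertainty on k.
b, db = background_in_sr(n_cr=200.0, k=4.0, dn_cr=14.0, dk=0.4)
print(b, db)
```

If δk is negligible the second term drops out, which is exactly the simplification allowed in the bullet above.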