
Formation of large-scale structure in various cosmological scenarios


Introduction

1.1 Dark energy problem and Λ-CDM model

Before discussing possible modifications to Einstein’s General Relativity (GR), it is worth stressing what an astonishing accomplishment GR is, both in terms of its profound theoretical foundations and of the huge variety of phenomena it can describe. On the one hand, its description of a Lorentz-invariant space-time, couched in the language of differential geometry, is meaningful and elegant and remains unchanged more than a century after its first formulation. On the other hand, GR has proven spectacularly successful [1] when tested against experiments and observations, which range from millimeter-scale laboratory tests to Solar System tests, and include strong-regime tests such as binary-pulsar dynamics. Within the standard model, GR governs the expansion of the Universe, the behavior of black holes, the propagation of gravitational waves, and cosmological structure formation from planets and stars to galaxy clusters. Having such an outstanding theory of gravity, one may wonder why there is such a huge number of alternative theories in the literature, and why there are different experimental and observational projects to test GR. Despite its success, there are (at least) two major reasons why it is interesting to study possible modifications of GR. The first is the lack of a widely accepted quantum field theory of gravity (QFTG): even if different proposals for a QFTG exist, none has proven completely satisfactory in reconciling GR with quantum field theory [2]. The second is that most of our current results in cosmology rest on a huge extrapolation of our knowledge of gravity up to scales where GR has never been tested.

K-mouflage Cosmology: Formation of Large-Scale Structures


(Dated: September 22, 2018) We study structure formation in K-mouflage cosmology, whose main feature is the absence of screening effects on quasilinear scales. We show that the growth of structure at the linear level is affected by both a new time-dependent Newton constant and a friction term, both of which depend on the background evolution. These combine with the modified background evolution to change the growth rate by up to ten percent since z ∼ 2. At the one-loop level, we find that the nonlinearities of the K-mouflage models are mostly due to the matter dynamics and that the scalar perturbations can be treated at tree level. We also study the spherical collapse in K-mouflage models and show that the critical density contrast deviates from its Λ-CDM value and that, as a result, the halo mass function is modified for large masses by an order-one factor. Finally, we consider the deviation of the matter power spectrum from Λ-CDM on nonlinear scales, using a halo model. We find that the discrepancy peaks around k ∼ 1 h Mpc⁻¹, with a relative difference that can reach fifty percent. Importantly, these features persist at higher redshifts, contrary to models of the chameleon-f(R) and Galileon types.
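Schematically, the modified linear growth described above takes the familiar modified-gravity form. The sketch below uses generic placeholder notation, not the paper's own symbols: ε_f stands for the extra friction term and ε_G for the correction to Newton's constant, both functions of the background scalar evolution.

```latex
% Linear growth of the matter density contrast \delta (schematic):
\ddot{\delta} + 2H\left[1+\epsilon_f(t)\right]\dot{\delta}
  - \tfrac{3}{2}\,\Omega_m(t)\,H^2\left[1+\epsilon_G(t)\right]\delta = 0 .
% Setting \epsilon_f = \epsilon_G = 0 recovers the standard
% \Lambda\text{-CDM} growth equation; nonzero values modify the
% growth rate through both the friction and the source term.
```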

Bayesian large-scale structure inference and cosmic web analysis


Participating in various international conferences and summer schools has always been a fruitful and enjoyable experience. I want to thank the organizers of the ICTP summer school on cosmology and workshop on large-scale structure (2012), the Varenna summer school (2013), the Les Houches summer school (2013), the Rencontres de Moriond (2014, Cosmology session), the IAU symposia 306 and 308 in Lisbon and Tallinn (2014), the CCAPP workshop on cosmic voids (2014), COSMO 2014 in Chicago, the MPA-EXC workshop on the dynamic Universe (2014, Garching), the ICTP workshop on cosmological structures (2015), the ESO-MPA-EXC large-scale structure conference (2014, Garching), and the Rencontres du Vietnam in Quy Nhon (2015, Cosmology session). Best greetings, in particular, to the Les Houches’ cosmologists group; thanks also to the organizers of the student conferences I attended: Elbereth 2012, 2013, 2014, and the SCGSC 2013. On various occasions, I have had the chance to have friendly and interesting discussions (even if sometimes short) within the cosmology community. In particular, my work benefited from interactions with Niayesh Afshordi, Raul Angulo, Stephen Bailey, Robert Cahn, Olivier Doré, Torsten Enßlin, Luigi Guzzo, Oliver Hahn, Jean-Christophe Hamilton, Alan Heavens, Shirley Ho, Mike Hudson, Eiichiro Komatsu, Ofer Lahav, Mark Neyrinck, Nelson Padilla, Bruce Partridge, Will Percival, David Schlegel, Uroš Seljak, Sergei Shandarin, Ravi Sheth, Svetlin Tassev, and Rien van de Weygaert (among many others). At this point, it also seems needed to acknowledge the decisive contribution of a familiar ∼ 20 Mpc/h void (at coordinates x ≈ −100, y ≈ 200 in the slice that I usually show), which very nicely makes my point during presentations.

Probing Cosmology with the homogeneity scale of the Universe through large scale structure surveys.


different species X, C[f_X]. For the interaction between photons and leptons, we consider the classical non-relativistic Thomson scattering, l∓ + γ ↔ l∓ + γ, with an interaction rate Γ ≃ n_l σ_T, where σ_T ≃ 2 × 10⁻³ MeV⁻² is the Thomson cross section. For cold dark matter, we consider a collisionless non-relativistic approach, as done in various well-known models of structure formation history. These are the simplest models that agree with observational large-scale structure data. For baryon-lepton interactions, we assume Coulomb scattering, b± + l∓ ↔ b± + l∓, in the Quantum ElectroDynamics (QED) approach. Neutrinos, finally, we treat only as a massless relativistic particle fluctuation overdensity, and we therefore assume that they do not interact with matter; this is true only in the linear regime on large scales. Adopting a Fourier-transform framework to simplify the equations in question, we end up with a set of six linear differential equations describing the linear evolution of the three different species of density fluctuations (baryons, photons and neutrinos, and dark matter) and their corresponding velocities on large scales, as functions of conformal time, η, and wavenumber, k⃗. However, this system is coupled to the two degrees of freedom, Φ(η, k⃗) and Ψ(η, k⃗), defined by the perturbations of the curved metric. Thus, in order to completely specify the system, one may solve the time-time component and the spatial trace of the Einstein equations using the perturbed metric defined via Eq. 1.22. We thus end up with the coupled Boltzmann-Einstein equations that completely specify the system on large-scale structures, i.e. the evolution of the density and temperature fluctuations,
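For the collisionless cold dark matter component, for instance, the two linear equations take the standard conformal-Newtonian-gauge form. The sketch below uses the common notation of linear perturbation theory (𝓗 = a′/a), which need not coincide exactly with the conventions of Eq. 1.22.

```latex
% Linearized continuity and Euler equations for cold dark matter,
% in Fourier space and conformal time \eta:
\delta_c' = -\theta_c + 3\Phi' ,
\qquad
\theta_c' = -\mathcal{H}\,\theta_c + k^2 \Psi ,
% where \delta_c is the CDM density contrast and \theta_c its velocity
% divergence. Analogous pairs for baryons, photons and neutrinos
% (supplemented by their collision terms C[f_X]) close the system
% together with the perturbed Einstein equations for \Phi and \Psi.
```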

The XMM Large Scale Structure Survey and its multi-lambda follow-up


Sloan, 2dF and VIRMOS surveys. This gives the best possible mapping of structures traced by galaxies, together with strong constraints on models for structure evolution. Unfortunately, it is extremely data-intensive. Moreover, the results depend on both the global cosmological parameters and the details of galaxy formation. Breaking the degeneracy between these two factors is nontrivial. The study of structure using only clusters of galaxies can offer significant advantages, both because it is easier to define a complete sample of objects over a very large volume of space and because the objects themselves are in some respects “simpler” to understand (at least in terms of their formation and evolution). Consequently, with currently available observational resources, larger volumes of the universe can be studied to substantially greater depth, and the interpretation of the results is less dependent on models of how galaxies form. Such studies can independently check cosmological parameter values determined from the CMB and SN studies, can break the degeneracy between the shape of the power spectrum and the matter density, and can check other fundamental assumptions of the standard paradigm (e.g. that the initial fluctuations were Gaussian). Unfortunately, clusters of galaxies become increasingly difficult to identify optically with increasing distance because their contrast against foreground and background galaxies is strongly reduced. This has greatly hampered investigations of high-redshift optically selected clusters.

Precision cosmology with the large-scale structure of the universe


The experiments on neutrino flavor oscillations demonstrating that neutrinos are indeed massive are thus of crucial importance, and it is necessary to examine minutely the impact of those masses on various cosmological observables. Understandably, such a discovery has triggered a considerable effort in theoretical, numerical and observational cosmology to infer the consequences for cosmic structure growth. The first study in which massive neutrinos are properly treated in the linear theory of gravitational perturbations dates back to ref. [3] (see also its companion paper, ref. [4]). The consequences of these results are thoroughly presented in ref. [5], where the connection between neutrino masses and cosmology - in the standard case of three neutrino species - is investigated in full detail. It is shown that CMB anisotropies are indirectly sensitive to massive neutrinos, whereas the late-time large-scale structure growth rate, via its time and scale dependences, offers a much more direct probe of the neutrino mass spectrum. To a large extent, current and future cosmology projects aim at exploiting these dependences to put constraints on the neutrino masses. Indeed, the impact of massive neutrinos on structure growth has proved significant enough to make such constraints possible, as shown for instance in [6-11]. These physical interpretations are based on numerical experiments, the early incarnations of which date back to the work of ref. [12] and which have witnessed renewed interest in recent years [13-16], and also on theoretical investigations such as [17-20], where the effect of massive neutrinos in the non-linear regime is investigated with the help of Perturbation Theory. An important point is that it is potentially possible to get better constraints than what the predictions of linear theory offer.
Observations of the large-scale structure within the local universe are indeed sensitive to the non-linear growth of structure, and thus also to the impact of mode-coupling effects on this growth. Such a coupling is expected to strengthen the role played by the matter and energy content of the universe in cosmological perturbation properties. This is true, for instance, for the dark energy equation of state [21] or for the masses, even if small, of the neutrino species, as shown in numerical experiments [14].

K-mouflage Cosmology: Formation of Large-Scale Structures


the one of Λ-CDM. At the perturbative level, and first in the linear theory, deviations from GR occur on scales smaller than the Compton wavelength of the scalar field [29]. As Solar System tests and the screening of the Milky Way imply that the cosmological range of the scalar must be less than 1 Mpc [30], the effects of these models on linear scales are suppressed, and only in the quasilinear to mildly nonlinear regimes can one expect to see significant deviations. Symmetrons and dilatons screen gravity more strongly in the local environment, implying that constraints on these models are less severe than on chameleon-f(R) theories. This implies that the effects of the symmetron, and to a lesser extent of the dilaton, on large-scale structures are enhanced compared to chameleon-f(R) models. Typically, one expects to see a peak in the deviations from GR on the scales corresponding to the range of the scalar field, especially in the power spectrum of density fluctuations [9]. On small and large scales, the models converge towards GR: on small scales this is due to the screening effect, and on large scales to the suppression of the scalar force outside the Compton radius.

Rearrangement Scenarios Guided by Chromatin Structure


pulicani@lirmm.fr Abstract. Genome architecture can be drastically modified through a succession of large-scale rearrangements. In the quest to infer these rearrangement scenarios, it is often the case that the parsimony principle alone does not impose enough constraints. In this paper we make an initial effort towards computing scenarios that respect chromosome conformation, by using Hi-C data to guide our computations. We confirm the validity of a model - along with the optimization problems Minimum Local Scenario and Minimum Local Parsimonious Scenario - developed in previous work, which is based on a partition into equivalence classes of the adjacencies between syntenic blocks. To accomplish this, we show that the quality of a clustering of the adjacencies based on Hi-C data is directly correlated to the quality of a rearrangement scenario that we compute between Drosophila melanogaster and Drosophila yakuba. We evaluate a simple greedy strategy to choose the next rearrangement based on Hi-C, and motivate the study of the solution space of Minimum Local Parsimonious Scenario.

Consequences of various landscape-scale ecosystem management strategies and fire cycles on age-class structure and harvest in boreal forests


The issue of “landscape legacies” (Wallin et al. 1994) is highlighted by our findings. The cumulative impacts of past disturbance and management (logging, fire suppression, etc.) have resulted in the present age-class structure. This legacy may pose challenges and (or) opportunities for management objectives (Östlund et al. 1997). In all of the AC scenarios, there is a time lag of over a century before the targeted AC structure is reached, as there is a large discrepancy between the initial age-class structure and any of the target distributions. This long time lag required to shape the age-class structure implies a need for proactive management, since it has significant consequences for the stand ages and spatial pattern of harvest and may lead to a potential conflict between the harvest flow and target AC structure objectives. Given the uncertainty of changes in climate, economies, and social values over such a long time frame (Kaufmann et al. 1994; Chapin and Whiteman 1998), the focus should be on the transition period and how the current forest state can be shaped into a desired condition. If short-term costs (economic or ecological) are too significant, a plan is unlikely to be acceptable regardless of the long-term benefits. In the study area, the period 50 years in the future is most critical to conservation objectives, since all scenarios that do not specify hard AC targets pass through a phase of very little old forest on the landscape as the current old forest is depleted before the young crop ages into older classes.

Cosmological simulations of galaxy formation


where star formation is most efficient. For higher and lower halo masses, the star formation rates are reduced due to feedback processes. Modern large-volume simulations reproduce the stellar-to-halo mass relationship at low and high redshifts reasonably well [124, 309]. Gas around galaxies: One of the key advantages of hydrodynamical simulations compared to semi-analytic models (see Box 1) is their ability to make detailed predictions for the distribution and properties of gas around galaxies, including the circumgalactic medium, the intracluster medium, and the intergalactic medium. The circumgalactic and intergalactic media are quite diffuse (n ∼ 10⁻³-10⁻⁷ cm⁻³) and cool (T ∼ 10⁴-10⁶ K), and observations in emission, like Lyman-α and metal lines, are therefore rather challenging. However, absorption-line observations from background quasars can probe the distribution, enrichment, and ionization state of this gas. One of the first successes of hydrodynamical simulations has been the reproduction of the declining trend of the number of absorbing clouds per unit redshift and linear interval of H I column density with column density in the Lyman-α forest [288]. Reproducing properties of the circumgalactic medium, however, is significantly more challenging. Observations of this gas indicate that it features a rich multi-phase structure where individual lines of sight simultaneously contain highly ionized, warm, and cool atomic species [310, 311]. The coolest and densest parts of this gas have spatial scales of 10-100 pc [312], although the coherence scale can reach up to ∼ 1 kpc [313]. These spatial scales are below the typical circumgalactic gas resolution limits of galaxy formation simulations. More recently, cosmological simulations with special circumgalactic gas refinement schemes have been employed to overcome some of the resolution limitations.
Such simulations increase the numerical resolution in the circumgalactic gas, reaching smaller spatial scales [314-317]. At z = 2 such simulations can reach a spatial resolution below ∼ 100 pc [316], and at z = 0 below ∼ 1 kpc within the circumgalactic medium [315]. In addition to resolution

Generation of frazil ice in large-scale laboratory experiments


~0.5 °C to its freezing point.
• No other external seeding was required to initiate the generation of frazil ice.
• When supercooling started, frazil production increased rapidly and large amounts of crystals were produced over a short period of time.


A statistical mechanics framework for the large-scale structure of turbulent von Kármán flows


= γ(σ₀). (A.6) The formulation (A.4)-(A.6) is, however, merely formal, as the optimization problem (A.4) is in this case ill-defined. The trouble comes from the poloidal degrees of freedom being, in a sense, not constrained enough by the macrostate constraints of Equation (A.2). The problem is apparent in the definition of the partition function (A.5): the integral over ξ involved there does not in general converge. We think that this behavior is an avatar of the UV catastrophe encountered in the statistical theories of ideal 3D flows. A phenomenological taming of the problem can be achieved by further constraining the set of macrostate fields over which the optimization problem (A.4) is solved. This requires the use of additional ansätze, some of which we discuss below. In order to carry out some explicit calculations and retrieve the equations of Table 2, we will consider two simplified sets of macro-constraints (A.2). (i) In the two-level modeling, we prescribe the toroidal field to be a two-level, symmetric distribution, viz., α(σ) = pδ(σ − 1) + (1 − p)δ(σ + 1). Only five constraints then remain from the set of constraints (A.2): the energy, two toroidal areas A±, and two partial circulations Γ±.

Model Averaging in Large Scale Learning


Other applications are of paramount importance in our society. The healthcare industry has strongly benefited from statistical algorithms. The diagnosis of various diseases can be automated or assisted. The review in Kononenko (2001) mentions several statistical methods such as naive and semi-naive Bayesian classifiers, k-nearest neighbors, neural networks or decision trees. The article Dreiseitl et al. (2001) compares algorithms such as logistic regression, artificial neural networks, decision trees, and support vector machines on the task of diagnosing pigmented skin lesions, in order to distinguish common nevi from dysplastic nevi or melanoma. The authors of Shipp et al. (2002) describe a method to ease the detection of blood cancer. The discovery of new drugs in the pharmaceutical industry is a growing challenge. The more active compounds are discovered, the less likely it is to discover a new drug with positive impact. In order to keep innovating, the pharmaceutical industry needs to increase the capacity of screening active compounds. Standard high-throughput screening methods become more and more costly as the number of active compounds already tested increases. As in ranking

Large-scale survey of lithium concentrations in marine organisms


1. Introduction Numerous trace metals are released to rivers, coastal areas, and ultimately the oceans by human activities, and their behaviors in many marine taxa such as bivalves, cephalopods, crustaceans, and fish have been thoroughly investigated in past decades (e.g. Bustamante et al., 2003; Eisler, 2010; Metian et al., 2013). However, lithium (Li) concentrations in marine organisms have received little attention, despite the exponentially increasing use of this element in high-tech industries due to its unique physicochemical properties. Li in the oceans is dominantly derived from two natural sources, i.e. high-temperature hydrothermal fluxes at mid-ocean ridges and river inputs. It exits the ocean mainly via the formation of marine authigenic aluminosilicate clays on the seafloor (Chan et al., 2006). Due to its long oceanic residence time (~1.2 million years) and its weak capacity to adsorb onto marine particles (Decarreau et al., 2012), Li is homogeneously distributed throughout the water column (Misra and Froelich, 2012). Thus, the oceanic concentration of dissolved Li is constant at 0.183 ± 0.003 µg/mL, irrespective of latitude and depth (Aral and Vecchio-Sadus, 2011; Misra and Froelich, 2012; Riley and Tengudai, 1964).

Large-Scale Quantum Monte Carlo Electronic Structure Calculations on the EGEE Grid


Quantum Monte Carlo (QMC) methods are known to be powerful stochastic approaches for solving the Schrödinger equation [1]. Although they have been widely used in computational physics during the last twenty years, they are still of marginal use in computational chemistry [2]. Two major reasons can be invoked for this: (i) the N-body problem encountered in chemistry is particularly challenging (a set of strongly interacting electrons in the field of highly attractive nuclei); (ii) the level of numerical accuracy required is very high (the so-called “chemical accuracy”). In computational chemistry, the two standard approaches presently used are the Density Functional Theory (DFT) approaches and the various post-Hartree-Fock wavefunction-based methods (Configuration Interaction, Coupled Cluster, etc.). In practice, DFT methods are the most popular approaches, essentially because they combine a reasonable accuracy with a favorable scaling of the computational effort as a function of the number of electrons. On the other hand, post-HF methods are also employed, since they lead to a greater and more controlled accuracy than DFT. Unfortunately, the price to pay for such accuracy is too high for them to be of practical use for large molecular systems.

Large scale precision


telescope mount. However, instead of a telescope there is a special projector, which sends out a narrow, well-defined light beam. The radio telescope is equipped with detectors to sense this light beam and to follow it. It does not matter if there are irregularities in the gears; the telescope just follows the light beam. In the days when computers were expensive, this system worked well without needing one. However, there was one area where a computer would have been helpful: helping the telescope find the light beam. Maintenance and other needs often result in the Master Equatorial pointing in one direction and the telescope pointing in another. To start making observations, the telescope has to be slaved to the light beam. This was accomplished using a device known as the Coordinate Converter, or Co-Co. This unit, a strange mixture of motors, synchros, gears and other devices, converts the antenna position information into the same units as those used by the Master Equatorial. The Telescope Operator, sitting in the Control Room, can see displays of the telescope and Master Equatorial coordinates, and then drive the telescope until it “sees the light” and locks in.

Large-scale validation of SCIAMACHY reflectance in the ultraviolet


Ideally, the relative difference between observation and simulation is dR = 0, though in practice we expect some scatter, caused by inaccuracies of the RTM input. Biases are either caused by the calibration of the measurement or by systematic deviations of the input data. The probability of the latter has been minimised by the selection of the spectral range of the study, where the sensitivity to most input parameters is small, and by a very strict cloud mask. The only ingredient of the simulation that is difficult to filter is the ozone profile. A deviation in the ozone profile, however, gives a clear spectral

Large-scale confinement and small-scale clustering of floating particles in stratified turbulence


2 Université de Nice Sophia Antipolis, CNRS, LJAD, UMR 7351, 06100 Nice, France (Dated: September 14, 2015) We study the motion of small inertial particles in stratified turbulence. We derive a simplified model, valid within the Boussinesq approximation, for the dynamics of small particles in the presence of a mean linear density profile. By means of extensive direct numerical simulations, we investigate the statistical distribution of particles as a function of the two dimensionless parameters of the problem. We find that vertical confinement of particles is mainly ruled by the degree of stratification, with a weak dependence on the particle properties. Conversely, small-scale fractal clustering, typical of inertial particles in turbulence, depends on the particle relaxation time and is almost independent of the flow stratification. The implications of our findings for the formation of thin phytoplankton layers are discussed.

Decision process in large-scale crisis management


3.1.5 Preference systems Actions cannot be compared one by one because of their generic definition. To accomplish this comparison, decision-makers, or the analyst acting in their name, must develop a relational preference system. This system reflects diverse views that can be opposed, or even contradictory. Thus, the system must tolerate ambiguity, contradiction, and learning wherever possible (Roy 1985). Preference systems are also called the ‘‘approach and the dominant culture’’ (Merad 2010). They are a set of beliefs, attitudes and assumptions shared by a group as a result of past experiences (Merad 2010). We have determined the preference system for decision-makers in Table 2.

The Anatomy of Large Scale Systems


Flexibility Flexibility has not been studied to the same extent as complexity, and has not developed as varied a set of definitions. For us a system is flexible if one can implement classes of changes in its specifications with relative ease. Note that we do not claim that even small changes in specifications always result in ease of implementation, just that the system is designed to handle certain classes of changes in an easy manner. How does this translate to system implementation? Again we do not yet know the general answer. However, if we consider the paths of interconnections in a system, a system with many alternate paths from the inputs to the outputs will usually be able to handle certain changes in specifications relatively well. Moreover, a system with many alternate paths can increase the number of paths a great deal with a small increase in the number of interconnections and additions of new nodes or modifications of existing ones. We shall therefore define the flexibility of a system as the number of paths in it, counting loops just once. In some architectures, notably the network structure and the layered hierarchy, the number of potential paths that keeps the generic structural form unchanged is very high. This, however, cannot be said of the tree structured hierarchy.
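The path-count definition of flexibility above can be made concrete with a small sketch (a hypothetical illustration, not from the original text; `count_simple_paths` and the toy graphs are made-up names). Counting only simple, cycle-free paths matches the convention of counting loops just once.

```python
def count_simple_paths(graph, source, target):
    """Count simple (cycle-free) paths from source to target.

    graph: dict mapping a node to the list of nodes it connects to.
    Loops are counted just once because a visited node is never re-entered.
    """
    def dfs(node, visited):
        if node == target:
            return 1
        return sum(dfs(nxt, visited | {nxt})
                   for nxt in graph.get(node, ())
                   if nxt not in visited)
    return dfs(source, {source})

# A narrow chain (tree-like structure) has exactly one input-to-output path...
chain = {"in": ["a"], "a": ["b"], "b": ["out"]}
# ...while a layered hierarchy multiplies alternatives: 2 x 2 = 4 paths here.
layered = {"in": ["a1", "a2"],
           "a1": ["b1", "b2"], "a2": ["b1", "b2"],
           "b1": ["out"], "b2": ["out"]}
print(count_simple_paths(chain, "in", "out"))    # 1
print(count_simple_paths(layered, "in", "out"))  # 4
```

By this measure the layered hierarchy is four times as flexible as the chain despite having only three more nodes, illustrating the claim that the number of paths can grow much faster than the number of interconnections.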
