
Future galaxy surveys will probe scales comparable to the horizon. The advent of all this new data will require very careful analyses and appropriate modelling of the statistical properties of the matter density field. In this chapter we have investigated the impact of lensing convergence on constraints on cosmological parameters.

We have utilised both MCMC and Fisher matrix approaches to study the importance of including lensing convergence when analysing galaxy clustering data. Although the Fisher matrix formalism provides qualitative information about degeneracies in the parameter space, it fails badly at providing quantitative estimates of the shifts in the best-fitting parameters. When performing the analysis with an MCMC approach we have found that the biases on the best-fitting cosmological parameters can reach several standard deviations when lensing convergence is neglected in the analysis. Since the Fisher matrix technique is reliable only for relatively small shifts, this explains why the Fisher matrix results misestimate the bias on cosmological parameters: for shifts larger than 2σ the linear approximation underlying the Fisher formalism breaks down.

An MCMC method is therefore the appropriate tool, as we have shown.
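The breakdown of the linear Fisher bias estimate can be illustrated with a one-parameter toy problem. This sketch is purely illustrative and not the chapter's actual likelihood: the model y = exp(θx), the neglected quadratic "systematic" term standing in for the lensing contribution, and all numbers are assumptions of this example.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
theta_fid, c, sigma = 1.0, 2.0, 0.05

model = lambda th: np.exp(th * x)
# Data generated with a neglected "systematic" term c*x^2 (noise-free for clarity)
data = model(theta_fid) + c * x**2

# Linear (Fisher-level) prediction for the best-fit shift:
#   delta_theta = F^{-1} * sum_i (dmu_i/dtheta) * Delta_i / sigma^2
dmu = x * model(theta_fid)                      # dmu/dtheta at the fiducial
F = np.sum(dmu**2) / sigma**2                   # 1x1 Fisher "matrix"
delta_lin = np.sum(dmu * (data - model(theta_fid))) / sigma**2 / F

# Exact best fit of the (incomplete) model from a dense grid search,
# standing in for the MCMC posterior maximum
grid = np.linspace(0.5, 2.0, 15001)
chi2 = np.sum((data - np.exp(np.outer(grid, x)))**2, axis=1) / sigma**2
delta_exact = grid[int(np.argmin(chi2))] - theta_fid

print(f"Fisher-predicted shift: {delta_lin:.3f}")
print(f"Exact best-fit shift:   {delta_exact:.3f}")
```

Because the model is non-linear in θ, the Fisher prediction overestimates the actual best-fit shift by a noticeable margin once the shift is many standard deviations, which is the regime encountered in the chapter.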

We have seen that analyses neglecting lensing convergence lead to biased cosmological constraints. The spectral index n_s and the neutrino mass m_ν are particularly affected. When lensing is neglected, a degeneracy between the amplitude of scalar fluctuations A_s and the galaxy bias parameter b_0 appears. One understands this by noting that the product b_0² A_s determines the amplitude of the matter power spectrum, so b_0² A_s increases in analyses neglecting lensing in order to enhance power on all scales.
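The b_0² A_s degeneracy can be made explicit in a couple of lines; the spectral shape T(k) below is an illustrative stand-in, not the chapter's actual transfer function.

```python
import numpy as np

k = np.logspace(-3, 0, 200)            # wavenumbers in h/Mpc (illustrative)
T = k / (1.0 + (k / 0.02)**2)          # toy spectral shape with a turnover

def P_g(b0, As):
    """Toy galaxy power spectrum: amplitude fixed by the product b0^2 * As."""
    return b0**2 * As * T

# Rescaling As -> lam*As and b0 -> b0/sqrt(lam) leaves P_g(k) unchanged,
# so only the combination b0^2 * As is constrained by the amplitude.
lam = 1.3
print(np.allclose(P_g(1.5, 2.1e-9), P_g(1.5 / np.sqrt(lam), lam * 2.1e-9)))  # True
```

Any observable that depends on A_s and b_0 only through this product cannot break the degeneracy; the lensing contribution, which enters with a different bias dependence, can.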

Since the density–lensing contribution to the number-counts angular power spectrum can become negative for small redshift bins, thereby decreasing power on small scales, its effect can be mimicked by changes in n_s and m_ν: by decreasing the spectral index and increasing the neutrino mass one obtains a matter power spectrum damped on small scales.

Finally, we conclude by noting that the specific shifts in the best-fitting cosmological parameters depend on the details of the survey. Although in this project we have used specifications for a Euclid-like survey, the important point is that a consistent analysis of galaxy clustering data must take into account the magnification bias, defined by

s(z) ≡ ∂ log₁₀ N(z, m < m*) / ∂m* |_{m* = m_lim} ,   (2.24)

where m_lim is the limiting magnitude of the survey and N(z, m) is the galaxy luminosity function of the survey at redshift z. Our results show that a correct modelling of number counts might lead to a detection of lensing with high significance, as in the case of the CMB. The fact that deep galaxy surveys are so sensitive to lensing, however, is not only a curse but also a blessing: it means that these surveys will allow us to determine a map of the lensing potential at different redshifts, i.e. to perform 'lensing tomography' with galaxy clustering. This will be a very interesting alternative to lensing tomography with shear measurements, proposed e.g. in [112]. Both techniques are challenging, but they have different systematic errors and allow valuable cross-checks. Clearly both paths should be pursued.
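In practice s(z) can be estimated numerically from the cumulative number counts. A minimal sketch of Eq. (2.24) follows; the power-law form of N(z, m < m*) and the limiting magnitude 24.5 are illustrative assumptions, chosen so that the exact answer (s = α) is known in advance.

```python
import numpy as np

# Toy cumulative luminosity function N(z, m < m*) = A(z) * 10**(alpha * m*).
# For this power law, Eq. (2.24) gives s = alpha exactly.
alpha = 0.45
N_cum = lambda m_star: 1.0e3 * 10.0**(alpha * m_star)

def magnification_bias(N, m_lim, dm=0.01):
    """Central finite difference of log10 N(z, m < m*) at m* = m_lim."""
    return (np.log10(N(m_lim + dm)) - np.log10(N(m_lim - dm))) / (2.0 * dm)

s = magnification_bias(N_cum, m_lim=24.5)   # m_lim = 24.5 is an illustrative value
print(f"s(z) ≈ {s:.3f}")                    # → 0.450
```

For a real survey one would replace the toy N_cum by the measured cumulative counts in each redshift bin and difference them near the survey's actual magnitude limit.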

Chapter 3

Determining H0 with Bayesian hyper-parameters

3.1 Introduction

Pinning down the Hubble constant H0 is crucial for our understanding of the standard model of cosmology. It sets the scale for all cosmological times and distances, and it helps to constrain other cosmological parameters by breaking degeneracies among them (e.g., between the equation of state of dark energy and the mass of neutrinos). The expansion rate of the universe can either be measured directly or be inferred, for a given cosmological model, from cosmological probes such as the cosmic microwave background (CMB). Although accurate direct measurements of the Hubble constant have proven difficult (e.g., control of systematic errors, relatively small data sets, full consistency of different methods for measuring distances), significant progress has been achieved over the past decades [113, 114]. The Hubble Space Telescope (HST) Key Project made it possible to measure H0 with an accuracy of 10% by significantly improving the control of systematic errors [115]. More recently, in HST Cycle 15, the Supernovae and H0 for the Equation of State (SHOES) project has reported measurements of H0 accurate to 4.7% (74.2 ± 3.6 km s⁻¹ Mpc⁻¹) [116], then to 3.3% (73.8 ± 2.4 km s⁻¹ Mpc⁻¹) [78] and very recently to 2.4% (73.24 ± 1.74 km s⁻¹ Mpc⁻¹) [32]. This remarkable progress has been achieved thanks to an enlarged sample of SN Ia with Cepheid-calibrated distances, a reduction in the systematic uncertainty of the maser distance to NGC 4258, and an increase in infrared observations of Cepheid variables in the Large Magellanic Cloud (LMC).

The 2015 release of temperature, polarization and lensing measurements of the CMB by the Planck satellite leads to a present expansion rate of the universe of H0 = 67.81 ± 0.92 km s⁻¹ Mpc⁻¹ for the base six-parameter ΛCDM model [21]. As is well known, estimates of H0 derived from CMB experiments are indirect and highly model-dependent (requiring, e.g., assumptions about the nature of dark energy, the properties of neutrinos, and the theory of gravity) and therefore do not substitute for a direct measurement in the local universe. Moreover, indirect determinations (in a Bayesian approach) rely on prior probability distributions for the cosmological parameters, which might have an impact on the results.

The Planck Collaboration used a 'conservative' prior on the Hubble constant (H0 = 70.6 ± 3.3 km s⁻¹ Mpc⁻¹) derived from a reanalysis of the Cepheid data used in [78], performed by G. Efstathiou in [117]. In this reanalysis, a different rejection algorithm was used (with respect to that in [78]) for outliers in the Cepheid period–luminosity relation (the so-called Leavitt Law); in addition, [117] used the revised geometric maser distance to NGC 4258 of [118]. Although consistent with the Planck TT estimate at the 1σ level, this determination of H0 assumes that there is no metallicity dependence in the Leavitt Law. Furthermore, it discards data (i) from both Large Magellanic Cloud (LMC) and Milky Way (MW) Cepheid variables, and (ii) from the sample of Cepheid variables in [78], through an upper period cut of 60 days.1

As discussed in [114], the sensitivity of the Leavitt Law to metallicity is still an open question. In fact, a metallicity dependence of the Cepheid period–luminosity relation is expected, owing to changes in the atmospheric metal abundance. Discarding data involves somewhat arbitrary choices (e.g., Chauvenet's criterion, the period cut, the threshold T in [117]) and might hinder our understanding of the physical basis behind the incompatibility of data sets (if any) [119]. Therefore, neither assuming no metallicity dependence in the Leavitt Law nor disregarding data seems a very conservative choice.

Once systematics are under control (such as unmodelled systematic errors or biases in the outlier rejection algorithm for Cepheid variables), a reliable estimate of H0 is very important also on theoretical grounds. Confirmation of significant discrepancies between direct and indirect estimates of H0 would suggest evidence of new physics. Discrepancies could arise if the local gravitational potential at the position of the observer is not consistently taken into account when measuring the Hubble constant; nevertheless, an unlikely fluctuation would be required to explain an offset as big as 2.4σ [120]. Second-order corrections to the background distance–redshift relation could bias estimates of the Hubble constant derived from the CMB [121]. However, it was shown in [122] that those corrections are already taken into account in current CMB analyses.

1 In [117] G. Efstathiou also shows results utilizing the outlier rejection algorithm used in [78], but with the revised geometric maser distance to NGC 4258 [118], which is about 4% higher than that adopted by [78] in their analysis. Note that in [78] (see their page 13) the authors
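For orientation, the Gaussian tension between the latest local value quoted above [32] and the Planck 2015 value [21] can be computed in a few lines. This assumes independent Gaussian errors; the precise figure depends on which measurements and error budgets one adopts.

```python
import math

h_local,  s_local  = 73.24, 1.74   # km/s/Mpc, SHOES [32]
h_planck, s_planck = 67.81, 0.92   # km/s/Mpc, Planck 2015 [21]

# Number of standard deviations separating the two, for independent Gaussians
tension = abs(h_local - h_planck) / math.sqrt(s_local**2 + s_planck**2)
print(f"tension ≈ {tension:.1f} sigma")
```

With these particular inputs the offset comes out somewhat larger than the 2.4σ figure discussed in [120], which is based on a different choice of measurements.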

It is clear from [117] that the statistical analysis performed when measuring H0 plays a part in the final result (for instance, through the outlier rejection algorithm, the data sets included, the anchor distances included, the period cut on the sample of Cepheid variables, and the prior on the parameters of the period–luminosity relation). Given the relevance of the Hubble constant for our understanding of the universe, it is necessary to confirm previous results and prove them robust against different statistical approaches.

The goal of this project is to determine the Hubble constant H0 by using Bayesian hyper-parameters to reanalyse the Cepheid data set used in both [78] and [32]. In Section 3.2 we explain both our notation and the statistical method employed. We then apply the method to different data sets and determine H0 in Section 3.3. We conclude and discuss our results in Section 3.4.
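To give a flavour of the hyper-parameter approach before the formalism of Section 3.2, here is a deliberately simplified sketch. It uses one marginalised variant discussed in the hyper-parameters literature, in which each data set's χ²_j is replaced by n_j ln χ²_j in the effective log-likelihood, so that poorly fitting sets are automatically down-weighted; this specific form, the toy "measurements", sample sizes and the +8 km/s/Mpc offset are all assumptions of this illustration, not the chapter's actual data or likelihood.

```python
import numpy as np

sigma = 2.0
# Two toy "H0 data sets" with deterministic scatter; set B carries an
# unmodelled +8 km/s/Mpc offset (all numbers are illustrative).
set_a = 70.0 + np.linspace(-2.0, 2.0, 40)
set_b = 78.0 + np.linspace(-2.0, 2.0, 10)

def chi2(data, h0):
    return np.sum((data - h0)**2) / sigma**2

grid = np.linspace(60.0, 85.0, 5001)
# Standard joint analysis: minimise chi2_A + chi2_B
standard = np.array([chi2(set_a, h) + chi2(set_b, h) for h in grid])
# Hyper-parameter analysis (marginalised variant): minimise
#   n_A ln chi2_A + n_B ln chi2_B
hyper = np.array([len(set_a) * np.log(chi2(set_a, h))
                  + len(set_b) * np.log(chi2(set_b, h)) for h in grid])

h_std = grid[int(np.argmin(standard))]
h_hyp = grid[int(np.argmin(hyper))]
print(f"standard joint best fit:  {h_std:.2f}")   # pulled towards the offset set
print(f"hyper-parameter best fit: {h_hyp:.2f}")   # offset set down-weighted
```

The point of the sketch is only qualitative: the effective likelihood reacts to internally inconsistent data sets instead of averaging them blindly, which is the behaviour exploited in the analysis of Section 3.3.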
