
Black Holes and the Dark Sector

Doctoral Thesis in Theoretical Physics

Fabio Capela

Université Libre de Bruxelles

Service de Physique Théorique

Academic advisor:

Prof. Peter Tinyakov

Deposit: 24 March 2014

Private Defense: 29 April 2014


Jury:

Prof. Glenn Barnich

Service de Physique Mathématique des Interactions Fondamentales, ULB, Brussels

Prof. Malcolm Fairbairn

Theoretical Particle Physics and Cosmology Group, Physics Department, King’s College London

Prof. Christophe Ringeval

Center of Cosmology, Particle Physics and Phenomenology, Institute of Mathematics and Physics, Louvain University

Prof. Michel Tytgat

Service de Physique Théorique, ULB, Brussels


Acknowledgements

It’s time to thank the people who have helped me to grow throughout these years. First, I would like to thank my supervisor Prof. Peter Tinyakov. During the last four years, I have enjoyed all the physics that I have learned from him and the fact that his door is always open. Peter has been able to guide me while giving me the freedom to pursue my own research projects. I was able to have a very good time during my PhD thanks to Peter, who is always smiling while doing physics. Peter, I don’t have many words to say but thank you. You always pushed me to do better. I was lucky to have you as a supervisor.

I would also like to thank my collaborators Germano Nardini and Maxim Pshirkov. From Germano, I learned to think more deeply about physics and to be meticulous. I have also enjoyed the activities that we had during my first year in Brussels, in particular the PES parties. From Maxim, I have learned how to do quick estimates in astrophysics and that vodka does not give any headache if drunk appropriately. Thanks to Maxim, discussions on any kind of subject are always lively. This is certainly something that I am missing.

The corridor of the seventh floor would certainly be less enjoyable without its Professors. A special thank you goes to Prof. Michel Tytgat, Prof. Thomas Hambye and Prof. Jean-Marie Frère for sharing with us their savoir-faire in physics in particular and in life in general. I would like to thank Michel in particular for all the time that I stole from him during the postdoctoral applications and the administrative work. I have to say that I enjoy his presence and the discussions that we have had so far. His ears are open and I feel that he cares about my future. Thank you. I would also like to thank Prof. Malcolm Fairbairn, Prof. Michel Tytgat, Prof. Christophe Ringeval and Prof. Glenn Barnich for their availability to be part of the jury and their interest in my work.

Of course, I would like to thank all the past and present members of the Service: Mikael, Michael, Lorenzo, Germano, Yong, Jose Mi, Federico, Federica, Simon, Chaimae, Maxim, Laura, Chiara, Fu-Sin, Narendra, Isabelle . . . for all the parties, the discussions and the badminton matches. I will miss you (and I am already missing some of you).


… to you, Arlindo, for the discussions about our future, which is knocking at the door. To finish, I want to say a big thank you to the person who puts up with me every day (and she knows it is not easy). Certainly, this thesis would not have had the same quality without your patience, affection and love. Thank you, my lover, love and friend.


Não sou nada. Nunca serei nada.

Não posso querer ser nada.

À parte isso, tenho em mim todos os sonhos do mundo.

Fernando Pessoa

I am nothing. I shall never be anything.

I cannot wish to be anything.

Aside from that, I have within me all the dreams of the world.


Black Holes and the Dark Sector

Short abstract:

This thesis is divided into two parts. The first part is dedicated to the study of black hole solutions in a theory of modified gravity, called massive gravity, that may be able to explain the current stage of accelerated expansion of the Universe, while in the second part we focus on constraining primordial black holes as dark matter candidates.

In particular, in the first part we study the thermodynamical properties of specific black hole solutions in massive gravity. We conclude that such black hole solutions do not follow the second and third laws of thermodynamics, which may signal a problem in the model. For instance, a naked singularity may be created as a result of the evolution of a singularity-free state.

In the second part, we constrain primordial black holes as dark matter candidates. To do that, we consider the effect of primordial black holes when they interact with compact objects, such as neutron stars and white dwarfs. The idea is as follows: if a primordial black hole is captured by a compact object, then the accretion of the neutron star or white dwarf’s material onto the hole is so fast that the black hole destroys the star in a very short time. Therefore, observations of long-lived compact objects impose constraints on the fraction of dark matter in primordial black holes. Considering both direct capture and capture during star formation of primordial black holes by compact objects, we are able to rule out primordial black holes as the main component of dark matter under certain assumptions that are discussed.

To better understand the relevance of these subjects in modern cosmology, we begin the thesis by introducing the standard model of cosmology and its problems. We give particular emphasis to modifications of gravity, such as massive gravity, and black holes in our discussion of the dark sector of the Universe.


Contents

1 Introduction
  1.1 The Big Bang model and its problems
    1.1.1 Facing observations: the need for a dark universe
    1.1.2 The challenges
    1.1.3 Modifying gravity: the hope for a solution?
  1.2 Massive gravity
    1.2.1 The vDVZ discontinuity
    1.2.2 The Vainshtein mechanism
    1.2.3 Massive gravity: Lorentz invariant & Lorentz breaking
  1.3 Black holes
    1.3.1 No-hair, thermodynamics and cosmic censorship
    1.3.2 Primordial black holes and dark matter

I BLACK HOLES AND MASSIVE GRAVITY

2 Massive gravity and black hole mechanics
  2.1 Black hole solutions in massive gravity
  2.2 Properties of the black hole solutions
    2.2.1 The black hole temperature
    2.2.2 The Noether charge as the entropy
    2.2.3 The second and third laws of thermodynamics
  2.3 Discussion

II BLACK HOLES AND DARK MATTER

3 Constraints on black holes as dark matter: direct capture
  3.1 Capture of black holes by compact stars
    3.1.1 Energy loss
    3.1.2 Capture rate
  3.2 Constraints
    3.2.1 Globular clusters
    3.2.2 Dwarf spheroidal galaxies
    3.2.3 Results
  3.3 Conclusions

4 Constraints on black holes as dark matter: star formation I
  4.1 Capture of dark matter during star formation
    4.1.2 Adiabatic contraction
    4.1.3 DM bound to a baryonic cloud
    4.1.4 Parameters for constraints
  4.2 Constraints
    4.2.1 Star model
    4.2.2 Sinking of the black hole in the star
    4.2.3 Results
  4.3 Conclusions

5 Constraints on black holes as dark matter: star formation II
  5.1 Adiabatic contraction during star formation
  5.2 Constraints on PBHs
  5.3 Conclusions

6 Discussion and Conclusions

A Parameters and acronyms


Preface

Our current understanding of the Universe reveals two serious unresolved questions: why is the Universe in a phase of accelerated expansion, and what is its matter content? The aim of this thesis is to address these two issues through the study of black holes. Our task is twofold:

1. Infrared modifications of gravity may potentially account for the accelerated expansion of the Universe without introducing a "dark energy" component. Massive gravity, as we will explain, is a rather interesting candidate among such modified-gravity models.

In this thesis, we study the thermodynamical properties of specific black hole solutions in massive gravity. If black holes cannot be described as thermodynamical systems, then it is probable that a UV completion of the low-energy effective theory has rather unusual quantum properties, such as lack of unitarity. Indeed, the second law of thermodynamics, which states that the entropy of an isolated system never decreases, is the result of the unitary evolution in quantum mechanics.

Closely related to the thermodynamical properties of black holes is the weak cosmic censorship conjecture, which states that no naked singularities can be observed in the Universe, since they are hidden behind event horizons. If an infrared modification of gravity predicts the creation of a naked singularity from gravitational collapse, the classical theory becomes inapplicable, since quantum effects have to be taken into account and a UV completion is needed. We test the weak cosmic censorship conjecture in the framework of massive gravity and discuss the implications of our results for this class of modified theories of gravity.

2. Black holes of primordial origin may constitute the dark matter of the Universe. We constrain this possibility by studying the capture of such black holes by compact objects, namely neutron stars and white dwarfs. The idea is as follows. If a black hole is captured by a compact object, then the accretion onto the hole becomes sufficiently fast to destroy the neutron star or white dwarf in a very short time. As a consequence, the region of parameter space where this happens with large probability is excluded by observations of the existing compact objects. The mechanisms of capture that we study are of two sorts: black holes may be captured during the formation of a main-sequence star or directly captured by compact objects. If black holes are captured during the formation of a main-sequence star, like our Sun, nothing happens until the star dies and reaches the stage of a neutron star or a white dwarf. The two mechanisms of capture lead to different and complementary constraints. The reliability of the resulting constraints and the working assumptions will be discussed thoroughly.

Note that the black holes discussed in this part are believed to have formed in the early Universe and are, as a consequence, called primordial black holes. They are usually considered to emerge when the amplitude of density fluctuations in the early Universe exceeded some threshold value, leading to an unstoppable gravitational collapse.

To better understand the relevance of the previous subjects in modern cosmology and astrophysics, we first introduce the standard model of cosmology and its problems. We emphasize the role that modifications of gravity (in particular massive gravity) and black holes may play in our understanding of the dark sector of the Universe. We also give a detailed discussion of the thermodynamical aspects of black holes and an introduction to primordial black holes and the existing constraints on them.


1 Introduction

Somewhere, something incredible is waiting to be known.

Carl Sagan

Contents

1.1 The Big Bang model and its problems
    1.1.1 Facing observations: the need for a dark universe
    1.1.2 The challenges
    1.1.3 Modifying gravity: the hope for a solution?
1.2 Massive gravity
    1.2.1 The vDVZ discontinuity
    1.2.2 The Vainshtein mechanism
    1.2.3 Massive gravity: Lorentz invariant & Lorentz breaking
1.3 Black holes
    1.3.1 No-hair, thermodynamics and cosmic censorship
    1.3.2 Primordial black holes and dark matter


Every single model in physics relies on some working assumptions. In the case of the Big Bang model, one of the axioms is the cosmological principle, meaning the premise that the Universe is spatially homogeneous and isotropic at large scales. The second postulate of the Big Bang model is that the evolution of space-time is described by Einstein’s theory of general relativity (GR) [5]. Based upon those assumptions, a very clear picture of the evolution of the Universe and its constituents has been constructed.

In the next section, we are not aiming to give a precise picture of the history of the Universe, but rather to present in words what the standard model of cosmology tells us. We will first present a brief history of the Universe, and then delve into the evidence pointing to the presence of a dark sector in the Universe.

1.1 The Big Bang model and its problems

At very high temperatures, the Universe was in a symmetric phase where the electromagnetic and weak forces were a single long-range force, called the electroweak interaction. Moreover, at that point the elementary particles were massless. As the Universe expanded, the temperature dropped. When the temperature reached a value of the order of 100 GeV (10¹⁵ K), a symmetry-breaking phase transition took place that led the standard model scalar (SMS) to acquire a vacuum expectation value. This had two immediate consequences: first, all the particles interacting with the SMS acquired a mass; second, the electromagnetic and weak interactions became two distinct forces, as we conceive them today. This generation of the masses of the elementary particles is the well-known Brout-Englert-Higgs mechanism [6,7,8]. Since the elementary particles had acquired masses, the most massive of them began to annihilate efficiently: first the top quark, followed by the SMS, then by the gauge bosons W± and Z⁰.


As the Universe continued to cool down, the weak interaction became weaker and weaker. Eventually, the interaction rate of neutrinos became smaller than the expansion rate, and they fell out of thermal equilibrium, meaning they were able to move freely without interactions. This is called neutrino decoupling. It is analogous to the photon decoupling that produces the cosmic microwave background (CMB). In principle, it would therefore be possible to observe a cosmic neutrino background that would give us a picture of the Universe when it was two seconds old. However, this cosmic neutrino background has not been observed so far: it is notoriously difficult to detect because of the very weak interactions of neutrinos with matter.

Just after neutrino decoupling, the temperature of the Universe dropped below 1 MeV, the threshold energy for creating electron-positron pairs. Therefore, very shortly after the decoupling of neutrinos, the electron-positron annihilation into photons was no longer compensated by pair creation. This implied a huge decrease in the number of electrons and positrons. Moreover, for a short time the temperature dropped more slowly than before, due to the injection of energy from the electron-positron annihilation.

The photons’ energy decreases with the expansion of the Universe. At some point, photons were no longer able to disrupt deuterium (which protons and neutrons tend to produce easily), since their energy dropped below its binding energy. Consequently, deuterium survived long enough to produce other elements, like helium-4. This phase is called Big Bang nucleosynthesis. As the Universe continued to expand and cool, nuclear fusion became inefficient at producing light elements and the abundances froze out. Today, most of the mass of baryonic matter is in the form of hydrogen (∼ 75%) and helium (∼ 25%) produced during this epoch. Much heavier elements are produced in a much later phase, during the natural evolution of stars, called stellar nucleosynthesis. A number of astrophysical observations can put constraints on the abundances of light elements at the primordial nucleosynthesis epoch.

Until the Universe was 60,000 years old, radiation dominated over matter, meaning that most of the energy of the Universe was in the form of photons and neutrinos. Particles like electrons were constantly scattering photons, and the Universe was therefore opaque. When the temperature of the Universe reached the binding energy of hydrogen atoms, free electrons and protons began to bind together, letting the photons travel freely in a neutral environment. This is the origin of the CMB that we can measure today.


structures in the Universe, like galaxies and clusters of galaxies. Our present understanding is that the smallest structures formed first, i.e. stars, through gravitational instabilities. As these objects began to radiate energy, they ionized the plasma (mainly composed of hydrogen) that was neutral at that moment. As the Universe evolved, larger and larger structures were formed: galaxies, clusters and superclusters of galaxies.

As we approach redshift zero, our Universe begins to accelerate. The cause of the accelerating expansion of the Universe is unknown so far.

Table 1.1: Some major events in the history of the Universe

Event                             Temperature    Time
Electroweak phase transition      100 GeV        20 ps
QCD phase transition              150 MeV        20 µs
Neutrino decoupling               1 MeV          1 s
Electron-positron annihilation    0.5 MeV        10 s
Nucleosynthesis                   50-100 keV     10 min
Radiation-matter equality         0.8 eV         60,000 yrs
Photon decoupling                 0.3 eV         380,000 yrs
Today                             1 meV          14 × 10⁹ yrs

This is the general picture of the history of the Universe. However, confronted with observations, the standard model of cosmology needs two otherwise undetected components filling the Universe: dark matter and dark energy. Dark matter is needed to explain some of the astrophysical and cosmological observations, while dark energy is mainly required to account for the accelerating expansion of the Universe.

Let us review some of the observations that have led the standard model of cosmology to be known nowadays as ΛCDM, Λ standing for a cosmological constant and CDM for cold dark matter.

1.1.1 Facing observations: the need for a dark universe


Type Ia supernovae (SN Ia) can be considered as standard candles and are used to measure astronomical distances. This is possible because of the relation between the apparent magnitude m and the luminosity distance dL:

m − M = 5 log₁₀(dL/Mpc) + 25.  (1.1)

The luminosity distance is an important quantity in the previous relation. It is a function of the redshift z, but it also depends on the density parameters Ω ≡ ρ/ρc of the fluids filling the Universe with equation of state w = p/ρ, where p corresponds to the pressure of the fluid, ρ to the energy density and ρc to the critical density required to have a spatially flat Universe. By observing the apparent magnitude and the redshift of the SN Ia, the two research teams (the Supernova Cosmology Project and the High-z Supernova Search Team) were able to deduce independently that the expansion of the Universe is accelerating. To be precise, the best fit to all the SN Ia observations is given by a present matter density parameter Ω0m = 0.28 (+0.09/−0.08, 1σ statistical) (+0.05/−0.04, identified systematics) [12] and a cosmological-constant-like fluid (w = −1) that constitutes around 70% of the energy density of the Universe.
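Equation (1.1) is straightforward to evaluate numerically. The following Python sketch is our own illustration (function names and the sample distance are not from the thesis); it computes the distance modulus and inverts it:

```python
import math

def distance_modulus(d_l_mpc):
    """Distance modulus m - M of Eq. (1.1) for a luminosity distance in Mpc."""
    return 5.0 * math.log10(d_l_mpc) + 25.0

def luminosity_distance_mpc(mu):
    """Invert Eq. (1.1): luminosity distance in Mpc for a distance modulus mu."""
    return 10.0 ** ((mu - 25.0) / 5.0)

# A source at 100 Mpc has m - M = 5*log10(100) + 25 = 35:
mu_100 = distance_modulus(100.0)
```

Measuring m for a standard candle of known M thus fixes dL, which is the quantity the supernova teams compared with the redshift.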

The age of the Universe Another convincing way to conclude that the Universe is not only filled with matter nowadays is by estimating its age. Some stars in globular clusters orbiting the Milky Way have been estimated to be 12.7 ± 0.7 Gyr (2σ) old [15]. Therefore, the Universe has to be older than that, i.e. tU > 12-13 Gyr.

One can make a simple estimate of the age of the Universe by neglecting the radiation, which is justified since the radiation epoch lasted much less than the age of the Universe. Doing so, we are able to rule out a Universe filled only with matter, since in that case tU ≃ 9.7 Gyr [16], which is less than the age of the stars in globular clusters. On the other hand, if we consider a spatially flat Universe with Ω0m = 0.3, a present density parameter for the cosmological constant Ω0Λ = 0.7 and the Hubble constant from the observations of the Planck satellite [16], then

tU = (1/H0) ∫₀^∞ dz / [(1 + z) (Ω0m(1 + z)³ + Ω0Λ)^(1/2)] ≃ 14 Gyr,  (1.2)
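The integral in Eq. (1.2) can be checked with a few lines of numerical quadrature. The sketch below is our own illustration, using the parameter values quoted in the text and the conversion 1/H0 ≈ 977.8/h0 Gyr for h0 in km/s/Mpc:

```python
import math

def t_universe_gyr(omega_m=0.3, omega_l=0.7, h0=67.4):
    """Age of a flat matter + Lambda Universe, Eq. (1.2), by midpoint quadrature.

    h0 is in km/s/Mpc; the Hubble time 1/H0 is roughly 977.8/h0 Gyr.
    """
    hubble_time_gyr = 977.8 / h0
    n, z_max = 100_000, 1000.0   # the integrand is negligible beyond z ~ 1000
    dz = z_max / n
    integral = 0.0
    for i in range(n):
        z = (i + 0.5) * dz       # midpoint rule
        integral += dz / ((1.0 + z) * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_l))
    return hubble_time_gyr * integral

age = t_universe_gyr()           # close to the ~14 Gyr quoted in the text
```

Setting omega_m = 1 and omega_l = 0 in the same function reproduces the matter-only age of about 9.7 Gyr that the globular-cluster stars rule out.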


CMB The observations of the CMB have independently confirmed the need for an extra contribution in the form of a cosmological constant, amounting to 70% of the energy density of the Universe.

As we already said, the CMB is extremely homogeneous and isotropic. However, small temperature fluctuations with observed amplitude of ∆T/T ∼ 10⁻⁵ are present across the whole sky [16,17,18]. Such fluctuations are considered to have their origin in an early phase of the Universe called inflation. They turned out to be a very powerful tool to understand the content, the geometry and the origin of the Universe.

Table 1.2: Cosmological parameters from the Planck Collaboration [16]

Parameter             Best fit    68% limits
Ωb h²                 0.022068    0.02207 ± 0.00033
ΩΛ                    0.6825      0.686 ± 0.020
Ωm                    0.3175      0.314 ± 0.020
H0 (km s⁻¹ Mpc⁻¹)     67.11       67.4 ± 1.4
Age/Gyr               13.819      13.813 ± 0.058

Before recombination, i.e. when the Universe was filled with electrons and protons that were not bound together, baryons and photons formed a single fluid, with a high rate of interactions between photons and electrons acting as a glue. Two main effects were contributing to the dynamics of this fluid: gravity, which compresses the fluid, and photon pressure, which counteracts gravity. Therefore, perturbations, instead of growing like they do after photon decoupling, were undergoing acoustic oscillations. When the photons finally decoupled from the fluid, the patterns produced by these sound waves were imprinted onto the last scattering surface. The resulting anisotropy spectrum exhibits a rich structure. The positions and heights of the acoustic peaks give crucial information about cosmological parameters. The first peak corresponds to an acoustic wave that had time to compress once before recombination and is maximally compressed. The higher-order peaks have gone through several oscillations and are damped with respect to the first one. Even peaks are at maximal rarefaction and odd peaks are at maximal compression.

The study of the CMB is a powerful way of constraining the most interesting cosmological parameters. Put together with the SN Ia observations, we have a convincing set of evidence that our Universe is spatially flat and dominated by a cosmological-constant-like fluid.


Large-scale structure The distribution of galaxies at large scales can be used to reconstruct the matter power spectrum [19,20]. Useful information can be extracted from this spectrum. In particular, the imprints of the baryon acoustic oscillations (BAO) have been identified in the matter power spectrum [21]. Comparing the BAO scale at recombination with its present value in the matter power spectrum gives useful information about the late-time accelerated expansion of the Universe.

From the CMB observations, something rather strange has emerged: the value of the baryonic density Ωb is different from the value of the matter density Ωm (see Table 1.2). This means that most of the matter in the Universe turns out to be non-baryonic. Such a non-baryonic component is called dark matter (DM), and its nature is one of the most intriguing problems in high-energy physics nowadays.

Big Bang Nucleosynthesis Another way to see that Ωb does not have the same value as the matter density parameter Ωm is by considering the Big Bang nucleosynthesis (BBN). The standard model of cosmology offers reliable predictions for the abundances of the light elements D, ³He, ⁴He, ⁷Li. In the Big Bang model, such abundances are directly related to the baryon-to-photon ratio η ≡ nb/nγ. The computation of the final abundance of deuterium and other elements reduces to solving coupled Boltzmann equations numerically. The resulting abundances as functions of the baryon-to-photon ratio η [22] are:

XD = 2.60 × 10⁻⁵ (1 ± 0.06) (η / 6 × 10⁻¹⁰)^(−1.6),  (1.3)

X⁷Li = 4.82 × 10⁻¹⁰ (1 ± 0.10) (η / 6 × 10⁻¹⁰)².  (1.4)

As the abundances are functions of η, measuring the relic abundances serves as a baryometer. To observe the abundances, we need to look at sources with low metallicities, so that the observations are as close as possible to the primordial values.

To infer the primordial abundance of ⁴He, observations have focused on dwarf galaxies. Based on a considerable amount of data [23], the value

X⁴He = 0.249 ± 0.009  (1.5)

has been inferred. Deuterium is rather difficult to observe, since it is easily destroyed. However, D has been found in high-redshift quasar absorption systems [24, 25]. The seven observations have led to a value:



Figure 1.1: Relic abundances as a function of the baryon-to-photon ratio η. Both the results of the Planck and WMAP collaborations are shown (vertical bands), with the horizontal green bands corresponding to the spectroscopic measurements. The red dot-dashed lines represent the extreme values of the effective number of neutrino families coming from the Planck collaboration. Taken from [26].

where the error is statistical only. As shown in Fig. 1.1, there is good agreement between predictions and observations if a value of η = (5.1-6.5) × 10⁻¹⁰ is taken. In the case of lithium, observations favour a lower value of η compared to the other elements. This may point to new physics, or may simply be related to sources of systematic error that have not been taken into account. In any case, observations of metal-poor population II stars of our galaxy give a value [27]:

X⁷Li = (1.7 ± 0.06 ± 0.44) × 10⁻¹⁰.  (1.7)


The inferred abundances translate into a baryon density in agreement with CMB studies [27]:

0.019 ≤ Ωb h² ≤ 0.024 (95% CL).  (1.8)

As a consequence, we have two independent pieces of evidence, coming from different epochs, that the baryonic density is indeed much smaller than the matter density. The conclusion is unambiguous: most of the matter in the Universe is non-baryonic. Several observed effects in astrophysical systems allow us to infer the presence of this non-baryonic component.

Astrophysics There are several observables that are considered to require the presence of dark matter. We will list some of the most compelling of them.

1. Flat rotation curves: Several observations have concluded that the orbital velocity of stars in galaxies remains constant at very large distances from the galactic center [28], while Newtonian gravity applied to the luminous matter predicts a Keplerian fall-off, v(r) ∝ 1/√r. To solve this apparent paradox, we may either consider that gravity is modified at large distances [29,30] (the so-called MOND models) or that extra (dark) matter contributes much more than the luminous matter at these scales.

2. Lensing: Clusters of galaxies curve spacetime around them so that light emitted by objects behind such clusters travels along curved geodesics and appears several times in our telescopes. The deflection angle of light is related to the mass of the lensing cluster and the impact parameter. By measuring both the impact parameter and the deflection angle one is able to infer that the total mass of the cluster is much larger than the baryonic mass indicated by X-rays from the gas.

This line of reasoning has been applied to the so-called bullet cluster [31]: two colliding clusters where the location of the baryonic gas does not trace the gravitational potential obtained from lensing. This observation is telling us that dark matter is collisionless, contrary to the gas.

3. Hydrostatic equilibrium: after relaxation, the gas in a cluster has a pressure gradient that follows the equation of hydrostatic equilibrium

dp/dr = −G M(r) ρgas(r) / r².  (1.9)

The pressure p(r) ∝ ne(r) kB T(r) can be inferred, since we can get the temperature T(r) and the electron density ne(r) from the X-ray luminosity. Out of this, the total mass enclosed by a radius r can be obtained and compared with the baryonic mass. Once again, the baryonic component does not dominate the matter component [32,33].

Unless we believe that gravity is modified at large distances, there is a convincing set of evidence pointing to the presence of a non-baryonic dark matter component. From what has been said, any dark matter candidate needs to have the following properties: it needs to be stable enough to survive until today, "dark", i.e. interacting (very) weakly with standard model particles, and collisionless, as indicated by the bullet cluster observations. Based on N-body simulations, we are also able to conclude that dark matter should not be relativistic ("hot"), since in that case the top-down scenario is favoured, in which large structures form first. Rather, comparison between simulations and observations supports a bottom-up scenario [34,35]. Such a scenario of structure formation is well reproduced in simulations with cold (non-relativistic) dark matter.

1.1.2 The challenges


Dark Energy There are two big problems related to the accelerated expansion of the Universe: the cosmological constant problem and the coincidence problem.

• Cosmological constant problem If we consider that dark energy corresponds to a cosmological constant, then the vacuum energy density needed to fit the data is given by [38]:

ρΛ = ρvac(obs) ≃ 10⁻⁴⁷ GeV⁴.  (1.10)

However, from quantum field theory estimates we would expect a much bigger value for the vacuum energy density. Summing up the zero-point energies of all the normal modes of an electromagnetic field, we easily conclude from quantum mechanics that ρvac(theory) ≃ ½ ωmax⁴. We can get an order-of-magnitude estimate by taking the ultraviolet (UV) cut-off to be of the order of ωmax ≃ 100 GeV, where the electromagnetic interaction is believed to merge with the weak interaction into a single electroweak force. By doing so, we obtain ρvac(theory) ≈ 10⁵³ ρvac(obs). Depending on the choice of UV cut-off, we may alleviate or aggravate this mismatch, but in any case the theoretical prediction does not match at all (by many orders of magnitude) the observed value. This is referred to as the cosmological constant problem.

• Coincidence problem A rather interesting coincidence is that we live at an age of the Universe where the matter and the cosmological constant have similar abundances. As the Universe expands, we have

ΩΛ/Ωm = ρΛ/ρm ∝ a³,  (1.11)

which implies that at early times the contribution of the cosmological constant to the energy density of the Universe was completely negligible compared to the matter contribution. On the other hand, in the future Λ will take a predominant role. As a consequence, we are nowadays witnessing a sharp transition between the period where ΩΛ was close to zero and the period where it will be close to 100%. Why is that the case?
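The sharpness of the transition implied by Eq. (1.11) is easy to see numerically. The following sketch (our own, assuming Ω0m = 0.3 and Ω0Λ = 0.7 today) tracks the Λ fraction as the scale factor grows:

```python
def lambda_fraction(a, omega_m0=0.3, omega_l0=0.7):
    """Fraction of the energy density in Lambda at scale factor a (a = 1 today).

    rho_m scales as a^-3 while rho_Lambda stays constant, so their
    ratio grows as a^3, Eq. (1.11).
    """
    rho_m = omega_m0 * a ** -3
    return omega_l0 / (omega_l0 + rho_m)

# Lambda is negligible at a = 0.1, comparable today, and dominant at a = 10:
early, today, late = lambda_fraction(0.1), lambda_fraction(1.0), lambda_fraction(10.0)
```

A change of only one order of magnitude in the scale factor on either side of today takes the Λ fraction from well below a percent to essentially 100%, which is why observing the transition now looks like a coincidence.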


1.1.3 Modifying gravity: the hope for a solution?

All the previous problems rely on the initial assumptions upon which the standard model of cosmology is built. One of these hypotheses is that the evolution of our Universe is well described by GR. Of course, this is somewhat questionable, since the scales where the problems begin to appear are far beyond the scales where GR has been tested so far. Therefore, it appears natural to modify gravity at large scales in order to explain the accelerating expansion of the Universe and/or some of the astrophysical observations without any dark matter or dark energy.

As it turns out, some models like Modified Newtonian Dynamics (MOND) [39], or a relativistic version of it [29], are able to explain with high accuracy the galaxy rotation curves without the need to introduce dark matter. More recently, a model that modifies GR has been proposed and claims to reproduce cold dark matter, imitated by a conformal degree of freedom present in the gravitational sector [40]. As a consequence, modifications of gravity may provide a plausible explanation to the dark matter problem.

On the dark energy side, an interesting approach to explaining the accelerating expansion of the Universe is also to resort to new gravitational degrees of freedom. In particular, in such a framework a new low-energy scale appears, which is technically natural, i.e. stable under quantum corrections. Such an approach poses technically challenging problems and is moreover falsifiable, since it provides specific observational predictions. Among the few interesting examples of modifications of gravity belonging to this category are models like DGP, where it is assumed that we reside on a 3-brane embedded in an infinitely large five-dimensional spacetime [41]. In the DGP model, self-accelerating cosmological solutions appear due to the helicity-0 component of a five-dimensional graviton that is perceived as a massive resonance [42].

Even though such self-accelerating solutions of the DGP model are plagued by ghost instabilities [43], constructing a theory of massive gravitons may still be a viable way to dispense with dark energy (or dark matter). Indeed, other examples of theories of massive gravity exist and, as we will see in the next section, are consistent with all observations so far. Moreover, unlike DGP models, they possess self-accelerating cosmological solutions with no instabilities [44].



A massive graviton produces an infrared modification of gravity, which may provide a possible explanation of the dark sector of the Universe.

In the next section, we will present two examples of theories with massive gravitons: a Lorentz invariant model [45] and a Lorentz violating one [46, 47]. The Lorentz invariant model will be shown for completeness and the Lorentz violating massive gravity will be of particular interest for us. Both are self-consistent, healthy and phenomenologically acceptable. More importantly, they may provide a conceivable explanation to the dark sector of the Universe.

1.2 Massive gravity

Attempts at constructing a healthy and self-consistent theory of massive gravitons have a long history. The first model of a massive graviton is due to the original work of Pauli and Fierz [48]. They constructed a Lorentz-invariant model of massive gravity in the linear regime with the following action [46]:

S = M_pl² ∫ d⁴x [ L_EH − (m²/4) ( h_μν h^μν − h² ) ],   (1.12)

where small fluctuations about Minkowski spacetime, i.e. g_μν = η_μν + h_μν with |h_μν| ≪ 1, have been considered at quadratic level in the action. In the previous equation, L_EH corresponds to the Einstein-Hilbert action at quadratic level in h_μν, and h ≡ h^μ_μ. The Fierz-Pauli action (1.12) has been constructed in such a way that it has no ghost around the Minkowski background. Indeed, when constructing a theory of massive gravity we must be careful about the propagating degrees of freedom.


Had we instead considered a generic mass term m² ( h_μν h^μν + α h² ) with α ≠ −1, then apart from the 5 degrees of freedom we would also have an extra scalar mode that would necessarily be a ghost [46], i.e. a mode whose kinetic term has a negative sign. Ghosts should be avoided, since they correspond to modes with negative energy and induce a vacuum instability: in their presence, the vacuum can decay into ghosts and normal particles. The case α = −1 of the Fierz-Pauli model is unique, since it gives rise to an extra constraint that kills the ghost, so that the only scalar degree of freedom is the helicity-0 mode, with a healthy sign for its kinetic term. A simple way of counting the degrees of freedom is to recast the action (1.12) in terms of Hamiltonian variables, i.e. the canonical momenta π_ij, the spatial metric h_ij, the temporal component h_00 and the spatio-temporal components h_0i of the metric. In this ADM formulation [49], the Lagrangian density in (1.12) takes the form [50]:

L_FP = π_ij ḣ_ij − H + 2h_0i ∂_j π_ij + m² h_0i h_0i + h_00 ( ∇² h_ii − ∂_i∂_j h_ij − m² h_ii ),   (1.13)

where the dot denotes the time derivative and summation over repeated indices is assumed. Moreover, we have that

H = (1/2) π_ij² − (1/4) π_ii² + (1/2) ∂_k h_ij ∂_k h_ij − ∂_i h_jk ∂_j h_ik + ∂_i h_ij ∂_j h_kk − (1/2) ∂_i h_jj ∂_i h_kk + (1/2) m² ( h_ij h_ij − h_ii² ).   (1.14)

From equation (1.13), we see that h_00 appears linearly, multiplying a term without time derivatives; it is therefore a Lagrange multiplier enforcing the constraint C = ∇² h_ii − ∂_i∂_j h_ij − m² h_ii = 0. Out of the 12 degrees of freedom (6 of h_ij and 6 of π_ij) spanning the phase space at each point, the constraint C reduces this number to 11 dynamical degrees of freedom. However, the Hamiltonian H = ∫ d³x H is not first class, and a second constraint arises by imposing that the first constraint be independent of time, i.e. that the Poisson bracket {H, C} ≈ 0 vanishes on the constraint surface. Out of the 12-dimensional phase space, the two constraints reduce the number of degrees of freedom to 10: the 5 helicity modes of the massive graviton and their conjugate momenta.

On the other hand, if the structure of the mass term is not that of Fierz-Pauli in (1.12), then h_00 appears quadratically in the action and does not enforce any constraint. As a consequence, 12 degrees of freedom span the phase space at each point, the extra 2 being the ghost and its conjugate momentum.



In the massless limit m → 0, not only is h_00 a Lagrange multiplier but also h_0i, since it appears linearly in front of ∂_j π_ij and the term quadratic in h_0i is eliminated by sending the mass m to zero. This implies 4 constraints, ∂_j π_ij = 0 and ∇² h_ii − ∂_i∂_j h_ij = 0. Moreover, after imposing these constraints, 4 gauge transformations can still be used to eliminate 4 degrees of freedom. Out of the 12-dimensional phase space, only 4 degrees of freedom remain, corresponding to the two polarizations of the massless graviton and their conjugate momenta.

1.2.1 The vDVZ discontinuity

Having a healthy model of massive gravity is good, but at the end of the day one should be able to test it against experiments. In gravity, one of the simplest tests we can carry out is to measure the strength of the interaction between two massive bodies, or the angle of deflection of light by a massive body like the Sun. To estimate such an interaction, we have to couple the massive spin-2 graviton to matter through a term T^μν h_μν, where T^μν is some stress-energy tensor. If we vary the action (1.12) with the extra coupling term T^μν h_μν, the equations of motion read [51]:

E_μν = −(m²/2) ( h_μν − h η_μν ) + M_pl^{−2} T_μν,   (1.15)

where E_μν corresponds to the linearization around the Minkowski background of the Einstein tensor G_μν:

E_μν = −(1/2) ∂_μ∂_ν h − (1/2) □h_μν + (1/2) ∂_ρ∂_μ h^ρ_ν + (1/2) ∂_ρ∂_ν h^ρ_μ − (1/2) η_μν ( ∂^ρ∂^σ h_ρσ − □h ).   (1.16)

From the Bianchi identities, it turns out that E_μν is divergenceless; assuming that the stress-energy tensor is conserved, i.e. ∂^μ T_μν = 0, we obtain by taking the divergence of the equations of motion (1.15):

∂^ρ h_ρμ = ∂_μ h.   (1.17)

Such an equation is in fact a constraint that eliminates 4 of the 10 components of the symmetric metric perturbation h_μν. Taking once again a derivative of the previous equation, we obtain a vanishing linearized Ricci scalar, which, when reinserted into the trace of the equations of motion (1.15), gives a relation for h:

h = −(2/(3m²)) M_pl^{−2} T.   (1.18)


We therefore see that the trace of the metric does not propagate, and in the absence of sources we recover our previous Hamiltonian analysis, with a massive graviton propagating 5 degrees of freedom. Reintroducing the two equations (1.17) and (1.18) into the equations of motion (1.15) and going to Fourier space, we obtain an expression for the propagator of the massive graviton. Dropping terms dependent on momenta, which give no contribution when contracted with a conserved energy-momentum tensor, the propagator takes a rather unusual form:

D^{(m≠0)}_{μνρσ} ∼ (1/(k² + m²)) [ η_ρμ η_σν + η_ρν η_σμ − (2/3) η_ρσ η_μν ],   (1.19)

which, when compared to the propagator of the massless graviton of GR,

D^{(m=0)}_{μνρσ} ∼ (1/k²) [ η_ρμ η_σν + η_ρν η_σμ − η_ρσ η_μν ],   (1.20)

reveals that the third term inside the brackets differs from GR even when the graviton mass vanishes. This is the source of the so-called vDVZ discontinuity [52, 53, 54] (vDVZ standing for van Dam-Veltman-Zakharov). The vDVZ discontinuity implies that, however small the mass of the graviton, a subset of predictions of the Fierz-Pauli model always differs from those of GR. For example, the gravitational potential of a point source turns out to be 4/3 larger in the Fierz-Pauli model than in GR, reflecting the extra contribution from the exchange of the scalar degree of freedom; the vector degrees of freedom decouple when the limit m → 0 is taken. A rather simple way to see this is to take two sources, with associated conserved currents T^μν and T′^μν, and compute the tree-level amplitude A for both the massive and massless cases. The result reads as follows:

A^{(m=0)} = ∫ d⁴x T^μν D^{(m=0)}_{μναβ} T′^αβ = ∫ d⁴x (2/k²) [ T^μν T′_μν − (1/2) T T′ ],

A^{(m≠0)} = ∫ d⁴x T^μν D^{(m≠0)}_{μναβ} T′^αβ = ∫ d⁴x (2/k²) [ T^μν T′_μν − (1/3) T T′ ].

For the massive case, we took the large-momentum limit. If we take two static massive bodies with T^μν ∝ diag(M_1, 0, 0, 0) and T′^μν ∝ diag(M_2, 0, 0, 0), we obtain A^{(m≠0)} = (4/3) A^{(m=0)}.



This result naively contradicts Solar-system observations. The resolution lies, however, in the linear approximation upon which the construction of the action (1.12) relies. Even if GR is rather well described by a linear approximation at Solar-system scales, this is not the case for the Fierz-Pauli model. Indeed, as the graviton mass approaches zero, the linear approximation breaks down and non-linear interactions have to be taken into account. Let us explain more thoroughly what is happening.
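The 4/3 factor can be verified by brute-force contraction of the amplitude structures quoted above. A minimal sketch, with arbitrary illustrative masses:

```python
# Numerical check of the vDVZ 4/3 mismatch for static sources
# T ∝ diag(M1,0,0,0), T' ∝ diag(M2,0,0,0), as in the text.
M1, M2 = 2.0, 5.0   # illustrative masses (any positive values work)

eta = [[(1 if i == j else 0) * (1 if i == 0 else -1) for j in range(4)]
       for i in range(4)]                       # Minkowski metric diag(1,-1,-1,-1)
T  = [[M1 if (i, j) == (0, 0) else 0 for j in range(4)] for i in range(4)]
Tp = [[M2 if (i, j) == (0, 0) else 0 for j in range(4)] for i in range(4)]

# contractions T^{mu nu} T'_{mu nu} and the traces T, T'
TT_full = sum(T[m][n] * Tp[a][b] * eta[m][a] * eta[n][b]
              for m in range(4) for n in range(4)
              for a in range(4) for b in range(4))
trT  = sum(T[m][n] * eta[m][n] for m in range(4) for n in range(4))
trTp = sum(Tp[m][n] * eta[m][n] for m in range(4) for n in range(4))

A_massless = TT_full - trT * trTp / 2   # ∝ amplitude, massless graviton
A_massive  = TT_full - trT * trTp / 3   # ∝ amplitude, m -> 0 limit
print(A_massive / A_massless)           # → 1.333... = 4/3
```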

1.2.2 The Vainshtein mechanism

In GR, by constructing spherically symmetric solutions, we can isolate the small parameter that controls the validity of the linearized approximation: the Schwarzschild radius r_s = 2M/M_pl². Since the metric around the Sun is well described by a spherically symmetric solution, the linear approximation is valid only for distances

r ≫ 2M/M_pl² ∼ 3 km.   (1.21)
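For reference, the quoted 3 km is the Sun's Schwarzschild radius; a quick sketch in SI units, with standard constants assumed:

```python
# The Sun's Schwarzschild radius, r_s = 2GM/c^2 in SI units
# (the text's r_s = 2M/M_pl^2 in natural units).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

r_s = 2 * G * M_sun / c**2
print(f"r_s(Sun) = {r_s/1e3:.2f} km")   # → about 2.95 km
```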

The radius of the Sun is much larger than 3 km, and as such, the linear approximation is a good approximation in GR to describe the gravitational field around the Sun. To see if that’s the case for massive gravity or not, we need to have a full non-linear theory whose linear expansion around Minkowski background corresponds to the Fierz-Pauli model to guarantee the absence of ghosts. Since the addition of the graviton mass breaks the gauge invariance of GR, a non-linear theory with massive gravitons is not defined uniquely. However, a rather simple example of a non-linear model of massive gravity that approaches the Fierz-Pauli model is just given by a deformation of the fully non-linear GR with the addition of a Fierz-Pauli term [50]:

L = √−g R − √(−g^(0)) (m²/4) g_(0)^{μα} g_(0)^{νβ} ( h_μν h_αβ − h_μα h_νβ ),   (1.22)

where g^(0)_μν corresponds to a fixed metric on which the massive graviton propagates; the indices of h_μν are therefore raised and lowered with it, and g_μν = g^(0)_μν + h_μν.

The study of spherically symmetric solutions in this model was first carried out by Vainshtein [56]. For spherically symmetric solutions, the fixed metric g^(0)_μν is taken to be the Minkowski metric η_μν. The conclusion was rather surprising.


Taking a static, spherically symmetric ansatz of the following form,

g_μν dx^μ dx^ν = −F(r) dt² + G(r) dr² + J(r) r² dΩ²,   (1.23)

an expansion of the type F(r) = F_0(r) + εF_1(r) + … was carried out, revealing that the expansion parameter ε = r_V/r is singular in the graviton mass m, with

r_V = ( 2M / (M_pl² m⁴) )^{1/5}   (1.24)

being called the Vainshtein radius. The conclusion that emerges is that for a very small graviton mass the linear expansion can only be trusted in the region r ≫ r_V ≫ r_s, which means that the quadratic action (1.12) cannot accurately describe the Solar system. Indeed, if one takes the graviton mass to be of the order of the present Hubble scale, m ∼ 10⁻³³ eV, an interesting value to explain the accelerating expansion of the Universe, then r_V ∼ 100 kpc for M = M_⊙.

As a consequence, one may expect that the vDVZ discontinuity is related to the illegitimate use of the linear expansion. Several numerical studies have obtained the full non-linear solutions in certain setups, and concluded that non-linearities can indeed restore continuity with GR for r ≪ r_V, with solutions entering the linear regime for r ≫ r_V [57, 58]. In the next section, we present two non-trivial examples of non-linear theories of massive gravity, one with Lorentz-invariant terms and one breaking Lorentz invariance.

1.2.3 Massive gravity: Lorentz invariant & Lorentz breaking

Constructing a non-linear theory of massive gravity is rather non-trivial: even if the linear expansion around the Minkowski background is the ghost-free Fierz-Pauli model, ghosts generically reappear when non-linearities are taken into account. As an example, let us take the previous action (1.22) with g^(0)_μν being the Minkowski metric η_μν. If we recast such an action in terms of the ADM variables, where

g_μν = ( −N² + N_i N_j g^ij    g_ij N^j
          g_ij N^j              g_ij ),   (1.25)

then the Lagrangian density L takes the form

L = π^ij ġ_ij − N C − N_i C^i − (m²/4) η^{μα} η^{νβ} ( h_μν h_αβ − h_μα h_νβ ).   (1.26)



N and N_i are called the lapse and shift, respectively. In GR, the functions N and N_i are Lagrange multipliers enforcing the constraints C and C^i, leading to two real propagating degrees of freedom. Instead, in our non-linear model of massive gravity, the action is not linear in the lapse and shift functions. The non-linear action therefore has no constraints or gauge symmetries and propagates 6 real degrees of freedom. As argued by Boulware and Deser [59], the extra degree of freedom leads to a Hamiltonian unbounded from below; for this reason, the extra mode is generically called the Boulware-Deser ghost.

From this example, we see that non-linearities indeed change the constraint structure of the theory, since at linear level the theory (1.22) is just the Fierz-Pauli model with no ghosts. Boulware and Deser considered a rather general class of mass terms and concluded that the ghost is always propagating [60], but they didn’t work out the most general case.

It turns out that a covariant non-linear theory of massive gravity exists [61, 62, 63] and has been proved to be free of the Boulware-Deser ghost [64].

Lorentz invariant massive gravity As we have discussed so far, one of the difficulties faced when constructing a non-linear theory of massive gravity is the presence of a ghost mode. This is linked to the fact that a mass term breaks diffeomorphism invariance, so that the lapse N and shift N_i become non-dynamical degrees of freedom enforcing no constraints. Integrating out N and N_i leaves a theory of massive gravitons with 6 propagating degrees of freedom, the sixth degree of freedom generically being a ghost. A way of getting rid of the ghost is to construct a mass term such that one constraint always remains; this extra constraint is used to eliminate the ghost mode. In general terms, the mass term is a complicated function L_m(N, N^i, h_ij) of the lapse N, the shift N^i and h_ij. If it is not possible to solve the equations leading to the direct determination of the lapse N and shift N^i, then there is hope that, when plugging the shift functions N^i(N) back into the action, a constraint remains. This is equivalent to stating that the Hessian

H^{ab} = ∂² L_m(N, N^i, h_ij) / (∂N^a ∂N^b)   (1.27)

has a vanishing determinant, i.e. det(H^{ab}) = 0 [61, 62, 63]. In practical terms, once the algebraic equations for the shift functions N^i are obtained, they are reinserted into the action, leaving the lapse N as a Lagrange multiplier that enforces a constraint. A secondary

constraint imposes that the first constraint be independent of time on the constraint surface, just as in the Fierz-Pauli model.

The whole problem thus reduces to finding the right mass term with det(H^{ab}) = 0, leaving a non-linear theory of massive gravitons with 5 propagating degrees of freedom. In order to construct the most general realization of the non-linear Fierz-Pauli mass term, we need to introduce an extra non-dynamical metric f_μν, which may be taken to be the Minkowski metric η_μν. The basic building block of the ghost-free massive gravity model is √(g^{−1}f), where the square-root matrix is defined by √(g^{−1}f) √(g^{−1}f) = g^{μλ} f_{λν}. Note that the convention √−g ≡ √(−det g) still applies, even though in other cases √E denotes not the determinant but the square-root matrix.

The most general non-linear massive gravity theory that turns out to be ghost-free is given by [64]

S = M_pl² ∫ d⁴x √−g [ R + 2m² Σ_{n=2}^{4} α_n e_n(K) ],   (1.28)

where the matrix K is defined such that √(g^{−1}f) = I + K and α_2 = 1. In the previous action, the e_n(K) are polynomials in the eigenvalues of K:

e_0(K) = 1,
e_1(K) = [K],
e_2(K) = (1/2) ( [K]² − [K²] ),
e_3(K) = (1/6) ( [K]³ − 3[K][K²] + 2[K³] ),
e_4(K) = (1/24) ( [K]⁴ − 6[K]²[K²] + 3[K²]² + 8[K][K³] − 6[K⁴] ),
e_k(K) = 0 for k > 4,   (1.29)

where the square brackets denote the trace. Based on the action (1.28), models of bigravity have also been constructed [65]. Even though the structure of the action (1.28) may seem complicated, there are only 4 terms and 2 free parameters, α_3 and α_4.
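The trace formulas in (1.29) are the elementary symmetric polynomials of the eigenvalues of K. This can be checked numerically with a sketch using an illustrative diagonal K, so the eigenvalues can be read off directly:

```python
# Check that the trace formulas for e_n(K) reproduce the elementary
# symmetric polynomials of the eigenvalues (illustrative values only).
from itertools import combinations
from math import prod

lam = [0.3, -1.2, 2.0, 0.7]                    # eigenvalues of a diagonal K
tr = lambda n: sum(l**n for l in lam)          # [K^n] = sum of lam_i^n

e_trace = {
    2: 0.5 * (tr(1)**2 - tr(2)),
    3: (tr(1)**3 - 3*tr(1)*tr(2) + 2*tr(3)) / 6,
    4: (tr(1)**4 - 6*tr(1)**2*tr(2) + 3*tr(2)**2 + 8*tr(1)*tr(3) - 6*tr(4)) / 24,
}
# direct definition: e_n = sum over products of n distinct eigenvalues
e_direct = {n: sum(prod(c) for c in combinations(lam, n)) for n in (2, 3, 4)}

for n in (2, 3, 4):
    assert abs(e_trace[n] - e_direct[n]) < 1e-12
print("trace formulas match elementary symmetric polynomials")
```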



However, Lorentz invariant massive gravity is rather difficult to handle, since strong nonlinearities are present around macroscopic sources, spoiling the perturbative control of the theory in the solar system [68]. Moreover, several issues in this model still need clarification, such as problems of causality [69], as well as the absence of healthy homogeneous and isotropic cosmological solutions [70]. These are among the reasons why we will focus on a theory of massive gravity with breaking of Lorentz invariance, where such problems are not present.

Lorentz breaking massive gravity There is another interesting class of models of massive gravity that is healthy and has compelling phenomenological consequences [46, 47, 71]. Such models are based on a rather audacious hypothesis: giving up Lorentz invariance in the gravitational sector. It may seem ludicrous to abandon Lorentz invariance, since it is one of the fundamental ingredients of the standard model of particle physics and has been verified with enormous accuracy by several tests [72, 73]. When Lorentz invariance is broken, different species propagate with different maximum velocities even in flat space [72]. The extremely tight bounds on the differences between the maximum velocities of standard model particles [74] can easily be satisfied by not coupling the Lorentz breaking fields directly to the standard model ones, apart from graviton loops.

Breaking Lorentz invariance may even be a good path towards a theory of quantum gravity, as suggested by the Horava-Lifshitz model of gravity [75]. In such a model, Lorentz invariance is considered to be an emergent symmetry of spacetime at low energies that does not exist in the UV regime. With that in mind, a model has been constructed that achieves power-counting renormalizability by adding terms with higher-order spatial derivatives of the metric, without introducing higher-order time derivatives, so as to preserve unitarity.

Instead of breaking Lorentz invariance explicitly, which leads to the non-conservation of the matter stress-energy tensor, it is more judicious to break Lorentz invariance dynamically, with fields whose equations of motion leave the matter stress-energy tensor conserved. An explicit example of a model employing such a mechanism is the ghost condensate model [76]. In that case, Lorentz symmetry is broken by the time-dependent vacuum expectation value of a scalar field, although the graviton remains massless.


The most general Lorentz violating graviton mass term, at quadratic level, is given by [46, 47]

L_m = (1/4) ( m_0² h_00 h_00 + 2m_1² h_0i h_0i − m_2² h_ij h_ij + m_3² h_ii h_jj − 2m_4² h_00 h_ii ).   (1.30)

The Fierz-Pauli model is recovered if we take m_0² = 0 and m_1² = m_2² = m_3² = m_4². By doing a (3+1) decomposition, it is rather easy to conclude that the mass term in the tensor sector is given by [46, 47]:

L_m = −(m_2²/4) h^TT_ij h^TT_ij,   (1.31)

where TT stands for transverse and traceless, as usual. Taking into account the kinetic term coming from the Einstein-Hilbert action, we have two tensor degrees of freedom propagating with the relativistic dispersion relation ω² = p² + m_2², where m_2 is the mass of the tensor modes. The requirement of no tachyonic instability leads to the condition m_2² ≥ 0.

Interestingly enough, if we take m_1 = 0 but generic m_i with i ≠ 1, then constraints lead to no propagating modes in the scalar sector, nor in the vector sector. This implies that the only propagating modes are the tensor ones, with mass m_2. Two immediate consequences follow: there is no ghost and no vDVZ discontinuity, since both originate from the extra degrees of freedom present when a mass term is added.

Of course, those conclusions are true as long as we are at the level of the quadratic action with perturbations about the Minkowski background. However, as we saw for the case of the Fierz-Pauli model, the fine-tuning m1 = 0 is generically spoiled by non-linear interactions, leading to the reappearance of the Boulware-Deser ghost and the extra degrees of freedom.

There is, however, a graceful exit to this fine-tuning problem. Instead of imposing fine-tuned relations, they may just be the consequence of an unbroken part of the gauge invariance of GR. In this way, we may protect the Lorentz breaking model from becoming pathological on curved backgrounds, leading to a healthy model of massive gravity. There are several residual symmetries, interesting in different contexts. The residual symmetry that ensures m_1 = 0, while leaving the other masses unconstrained, is [46, 47]:

x^i → x^i + ξ^i(t).   (1.32)



Since in a four-dimensional space-time there are four reparametrizations, related to the four coordinates, each of these symmetries can be broken by the vacuum expectation value of a scalar field depending on a particular coordinate. In this framework, the scalar fields correspond to the Goldstone bosons of the Brout-Englert-Higgs mechanism. In Lorentz breaking massive gravity, we therefore have four scalar fields whose space-time dependent vacuum expectation values break the Lorentz symmetry. These fields are minimally coupled to gravity through derivative couplings and will be referred to as the Goldstone fields.

To ensure that m1= 0, we may only break some specific reparametrization symmetries, while still keeping the residual invariance (1.32). This residual symmetry (1.32) translates into the following symmetry in the Goldstone sector:

φ^i → φ^i + Ξ^i(φ^0),   (1.33)

with arbitrary functions Ξ^i, and where φ^α, α = 0, 1, 2, 3, are the Goldstone fields. We thus start with a generally covariant theory with extra Goldstone fields φ^α. These fields then acquire background values that depend on the space-time coordinates, breaking the Lorentz symmetry. In this framework, Lorentz invariance is therefore broken spontaneously.

By imposing homogeneity and isotropy of space, we require the action to be invariant under SO(3) rotations of the fields, i.e. φ^i → Λ^i_j φ^j, and under the shift symmetry φ^α(x) → φ^α(x) + λ^α with constant λ^α. The most general action invariant under the residual symmetry (1.32) and the Euclidean symmetry of 3-dimensional space is given by [46, 47]

S = ∫ d⁴x √−g [ −M_pl² R + Λ⁴ F(X, W^ij) ],   (1.34)

where the functions X and W^ij are given by

X = Λ^{−4} g^{μν} ∂_μφ^0 ∂_νφ^0,   (1.35)

W^ij = Λ^{−4} ∂_μφ^i ∂^μφ^j − ( ∂_μφ^i ∂^μφ^0 ∂_νφ^j ∂^νφ^0 ) / ( Λ^8 X ).   (1.36)

The first term in the action (1.34) is the usual Einstein-Hilbert term, and the second is a function of the space-time derivatives of the four scalar fields φ^α, minimally coupled to gravity. This model of massive gravity is viewed as a low-energy effective theory valid below the cutoff scale Λ, with a graviton mass of order m ∼ Λ²/M_pl [46, 47]. As already said, the model (1.34) admits a vacuum solution of the form

φ̄^0 = Λ² t,   φ̄^i = Λ² x^i,   (1.37)


that breaks spontaneously the Lorentz symmetry. As expected from the initial requirements, the vacuum possesses rotational symmetry.

A nonperturbative Hamiltonian analysis [77, 78] has proved that the model (1.34) does not propagate ghosts and that the vDVZ discontinuity is completely absent. As a consequence, the Vainshtein mechanism does not need to be invoked to obtain a phenomenologically viable theory.

It is also interesting to note that, although the cutoff of the effective theory (1.34) is rather low, Λ = (mM_pl)^{1/2} ∼ (10⁻² cm)⁻¹, it is still much higher than the cutoff scale of the ghost-free Lorentz invariant massive gravity (1.28), Λ = (m²M_pl)^{1/3} ∼ (10⁸ cm)⁻¹ [46]. This makes the Lorentz invariant massive gravity theory hard to handle: quantum corrections are rather important at macroscopic scales [68], and strong nonlinearities around macroscopic sources render perturbation theory inappropriate in the solar system. Such technical inconveniences disappear for Lorentz breaking massive gravity.
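The two quoted length scales can be reproduced with a short sketch, converting each cutoff to a length via L = ħc/Λ. The inputs m ~ 10⁻³³ eV and M_pl ~ 1.2 × 10²⁸ eV are assumptions (conventions shift O(1) factors):

```python
# Strong-coupling scales of the two massive gravity models, as lengths.
hbar_c_eV_cm = 1.973e-5   # hbar*c in eV*cm
m, M_pl = 1e-33, 1.2e28   # assumed graviton mass and Planck mass, in eV

cutoff_LB = (m * M_pl) ** 0.5          # Lorentz-breaking cutoff, eV
cutoff_LI = (m**2 * M_pl) ** (1 / 3)   # Lorentz-invariant (Lambda_3), eV
print(f"LB cutoff length ~ {hbar_c_eV_cm / cutoff_LB:.0e} cm")   # ~ 1e-2 cm
print(f"LI cutoff length ~ {hbar_c_eV_cm / cutoff_LI:.0e} cm")   # ~ 1e8 cm
```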

From the phenomenological side, it has been found that the cosmological expansion is driven to an attractor point that, in a certain range of parameters, gives rise to the accelerating expansion of the Universe [79]. More refined studies on the cosmology of the model (1.34) may lead to a better understanding of the recent phase of accelerating expansion of the Universe and may provide some enlightenment about the cosmological constant problem.

To conclude this section on massive gravity, we should point out a rather particular property of the model (1.34): the presence of instantaneous gravitational interactions. Their existence in a model with Lorentz breaking is relatively easy to understand. In GR, the gravitational potentials are all instantaneous; however, there is no instantaneous interaction, due to a subtle cancellation between them in the graviton propagator. This cancellation does not occur in Lorentz breaking massive gravity, because of the presence of the Lorentz breaking fields. Instantaneous gravitational interactions have been studied in detail in this model [80].



1.3 Black holes

As we will see in the next chapters, black holes may lead to interesting insights into models of modified gravity, and they are also interesting candidates for dark matter. Before we introduce the main subjects treated in this thesis, we should first define what a black hole is.

One of the unambiguous definitions considered in the literature is the following: a spacetime that contains a black hole is a spacetime with two distinct regions, the interior and exterior of the black hole. These two regions are causally disconnected: whatever happens inside the black hole can never reach an observer outside of it, even if the observer waits an infinite amount of time. A more precise mathematical definition could be given; however, we will not follow this path. The interested reader can have a look at [82] and references therein.

Out of this definition of black hole, a question emerges: how to define the event horizon, i.e. the surface that separates the two causally disconnected regions? There are several definitions of the event horizon. However, for the cases that we will be interested in, i.e. static and stationary black holes, it has been shown by Carter [83] and Hawking and Ellis [84], that the event horizon must be a Killing horizon. In order to understand what a Killing horizon is, we first need to define the notion of a Killing field.

If a spacetime has an isometry, i.e. a diffeomorphism (coordinate transformation) that leaves the metric g_μν invariant, then the metric stays unchanged when moving along the vector field corresponding to the infinitesimal generator of this symmetry (L_ξ g_μν = 0). Such a vector field is called a Killing vector field and satisfies the following equation:

L_ξ g_μν ≡ ∇_μ ξ_ν + ∇_ν ξ_μ = 0.   (1.38)

A Killing horizon is simply a hypersurface H on which the Killing vector field is null, i.e. ξ^μ ξ_μ |_H = 0. It is important to point out that not all Killing horizons correspond to event horizons, an example being Minkowski spacetime, which contains Killing horizons but no event horizons.


The star S2 has been observed orbiting very near the center of our galaxy, with a period of 15.2 years and a pericenter of 1.8 × 10¹³ cm from the center of an object that has the characteristics of a supermassive black hole [86]. From the motion of S2, the mass of the central object is estimated to be of the order of 4.1 × 10⁶ M_⊙ [87]. On the other hand, the central object must have a radius (much) smaller than the pericenter of S2, otherwise the star would be accreted. This leads to the conclusion that the central object, called Sagittarius A*, has to be a black hole, since no known astrophysical entity can contain such an amount of matter in such a restricted volume. Based on the study of radial velocities of X-ray binaries, several stellar-mass black hole candidates have also been observed [88]. Nowadays, black holes are a reality for astrophysicists, and this opens an interesting window on the study of gravity in the strong-field regime.
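The mass estimate follows from Kepler's third law, M = 4π²a³/(GT²). A sketch using the quoted 15.2-year period and an assumed S2 semi-major axis of ~970 AU (a published value, not quoted above):

```python
# Kepler-law mass estimate for Sagittarius A* from the S2 orbit.
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
AU = 1.496e11            # m
yr = 3.156e7             # s
M_sun = 1.989e30         # kg

a = 970 * AU             # assumed S2 semi-major axis
T = 15.2 * yr            # orbital period of S2 (from the text)

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"M(Sgr A*) ~ {M / M_sun:.1e} M_sun")   # ~ 4e6 solar masses
```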

It is believed that existing black holes may span a very large range of possible masses, from 10⁻⁵ g to 10⁴⁰ g (10⁷ M_⊙). Their masses are determined by the way they are formed, but also by the subsequent history of infalling matter. Certainly, the most common way of forming a black hole is through the evolution of a star: the history of a 25 M_⊙ star, for instance, naturally leads to the creation of a black hole. Basically, when the star has exhausted all of its fuel at the end of its life, it collapses. Depending on the mass of the collapsing star, a force can counteract the effect of gravity, leading to the creation of objects like white dwarfs or neutron stars; in those cases, electron or neutron degeneracy pressure is the restoring force, respectively. However, such a pressure can only be effective if the mass of the object is not too high, and white dwarfs and neutron stars consequently have a maximum allowed mass, which for neutron stars is no more than 3 M_⊙ [89]. Therefore, if the collapsing star is massive enough (≳ 20 M_⊙), no force can defeat gravity and a black hole is the only endpoint of the collapse. For supermassive black holes, the formation process is still unknown, but several models exist, considering for example the collapse of a cloud of gas in the early Universe, the collapse of a supermassive star that has accreted a considerable amount of matter through time, or mergers of several black holes [90].



1.3.1 No-hair, thermodynamics and cosmic censorship

In GR, we know that stationary black holes are characterized by three parameters: the mass, the angular momentum and the electric charge [91, 92, 93]. This "no-hair" theorem means that black holes are described by the Kerr-Newman solutions [94], which for astrophysical black holes simplify to the Kerr solutions [95] (characterized by mass and angular momentum), since any electric charge is rapidly neutralized. The no-hair theorem provides a way of testing GR by looking at the geometry of spacetime around astrophysical black holes. Some studies have considered analysing specific spectral lines, seen in X-ray emission from stellar-mass and supermassive black holes, which are sensitive to the spacetime geometry around the black hole [96]. However, this is difficult, since one needs to disentangle the information about the black hole metric from the effects taking place in the accretion disk. Gravitational waves are another way of probing the black hole metric. In particular, the time dependence of the gravitational waves emitted after a merger, together with a detailed study of the multipole moments of the black hole, may lead to precision tests of the geometry around the black hole. This opens fascinating perspectives for the study of black hole solutions and their possible deviations from GR.

It turns out that Lorentz-violating models of massive gravity evade this no-hair theorem. Indeed, as we previously pointed out, instantaneous interactions are present in this class of models. As a consequence, it is not a surprise that black holes in Lorentz-breaking massive gravity are characterized by parameters other than the mass, the electric charge and the angular momentum [97, 98]. This leads to the conclusion that the higher multipole moments of black holes are no longer universal in this model, and the geometry around these hairy black holes could be tested by gravitational-wave observatories or other experiments sensitive to the black hole metric. It is important to point out that testing long-distance modifications of gravity using the black hole metric may be rather tricky (and useless when the no-hair theorems [97] hold); in this sense, the Lorentz-breaking theory of massive gravity stands out as a promising candidate to be tested.


Hawking showed that black holes emit thermal radiation in the framework of quantum field theory in curved spacetime, i.e. he considered quantum fields in a classical black hole background [99, 100]. The origin of this thermal radiation may be understood as follows: the gravitational field near the horizon may create a virtual particle-antiparticle pair. The positive-energy particle escapes, while the negative-energy particle gets absorbed. The net effect seen by an exterior observer is the emission of a real particle, and since the negative-energy particle has been absorbed, the black hole loses mass. The Hawking temperature associated with this blackbody radiation is given by

$$T_H = \frac{\hbar \kappa}{2\pi}, \qquad (1.39)$$

where $\kappa$ is the so-called surface gravity and $\hbar$ is the reduced Planck constant. Notice the presence of $\hbar$ in the Hawking temperature, telling us that $T_H$ has been obtained using quantum physics. The surface gravity is defined as the magnitude of the acceleration of a Killing field $\xi^\mu$, i.e.

$$\xi^\mu \nabla_\mu \xi^\nu = \kappa\, \xi^\nu, \qquad (1.40)$$

where the equation is evaluated at the horizon $\mathcal{H}$. The surface gravity is essentially the strength of the force per unit mass, applied from infinity, that is needed to hold an object at the event horizon. It is important to note that the surface gravity can only be defined for stationary black holes, for which the event horizon is a Killing horizon; there is therefore no notion of surface gravity for a non-stationary black hole.
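For a Schwarzschild black hole, the surface gravity is the standard result $\kappa = c^4/(4GM)$, so that Eq. (1.39), with $c$ and $k_B$ restored, reads $T_H = \hbar c^3/(8\pi G M k_B)$. A minimal numerical sketch (an illustration, not part of the thesis) for a solar-mass black hole:

```python
# Illustrative estimate (not from the thesis): Hawking temperature of a
# Schwarzschild black hole, for which kappa = c^4 / (4 G M), so that
# Eq. (1.39), with c and k_B restored, reads T_H = hbar c^3 / (8 pi G M k_B).
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m / s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J / K
M_SUN = 1.989e30    # solar mass, kg

def surface_gravity(mass_kg):
    """Surface gravity kappa of a Schwarzschild horizon, in m/s^2."""
    return c**4 / (4.0 * G * mass_kg)

def hawking_temperature(mass_kg):
    """Hawking temperature T_H = hbar kappa / (2 pi c k_B), in kelvin."""
    return hbar * surface_gravity(mass_kg) / (2.0 * math.pi * c * k_B)

# A solar-mass black hole is far colder (~6e-8 K) than the CMB (~2.7 K),
# so today it absorbs more radiation than it Hawking-radiates.
print(f"T_H(1 M_sun) = {hawking_temperature(M_SUN):.2e} K")
```

Note that $T_H \propto 1/M$: lighter black holes are hotter, which is why evaporation is only relevant for very light (e.g. primordial) black holes.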

The theoretical discovery of the Hawking radiation had a tremendous impact, and the natural idea was that black holes may be systems obeying the laws of thermodynamics. However, for this to be the case, a quantity is required that plays the role of entropy for a black hole. Moreover, such a quantity needs to be non-decreasing, to respect the second law of thermodynamics. Hawking proved a theorem which states that the area of the event horizon of an isolated stationary black hole can never decrease with time as long as the matter stress-energy tensor satisfies the null energy condition, i.e. if $T_{\mu\nu} k^\mu k^\nu \geq 0$ for any null $k^\mu$ [101]. From this theorem, it seems that the
