Self-Testing and Device-Independent Quantum Random Number Generation

Academic year: 2021



Université libre de Bruxelles
Faculté des Sciences

Self-Testing and Device-Independent Quantum Random Number Generation

with Nonmaximally Entangled States

Cédric Bamps

Thesis presented in fulfilment of the requirements for the degree of Doctor of Science.

Supervisor: Stefano Pironio
Co-supervisor: Serge Massar

Jury:

Pierre Gaspard (president)
Remigiusz Augusiak
Omar Fawzi
Raúl García-Patrón Sánchez
Yanbao Zhang


Acknowledgments

It has been a pleasure for me to work at the Laboratoire d’Information Quantique. Quantum information theory was introduced to me by my now-supervisor Stefano Pironio in 2012 in a course that I took during my master’s studies. I was quickly charmed by this topic, as it manages to touch both on our understanding of the foundations of nature, and on the more down-to-earth and practical considerations of computer science and cryptography. I am grateful to Stefano for letting me work under his supervision for the last four years or so, and for regularly exercising in our discussions his excellent ability to make almost any aspect of our research topic seem simple enough to grasp effortlessly. I am also grateful to Serge Massar, my co-supervisor and research group leader, whose unique enthusiasm for all kinds of knowledge—scientific and beyond—has made a great impression on me. Of course, I also wish to thank my PhD student and postdoc colleagues from the group for guiding me on this journey and for lighting up my working days; in order of appearance in my personal timeline: Piotr Antonik, Olmo Nieto Silleras, Erik Woodhead, Jonathan Silman, Manas Patra, Damián Pitalúa-García, Tom Van Himbeeck, and Ravishankar Ramanathan.

For their precious support, especially through the stressful last stretches of my completion of this thesis, I want to thank my parents and, in random order,¹ my dear friends and colleagues Tom Van Himbeeck, Erik Woodhead, Gaetan Friart, Piotr Antonik, David Houssart, Chaïmae El Aisati, Sylvie Vande Velde, Olmo Nieto Silleras, Jonathan Silman, Ravishankar Ramanathan, Christos Siopis, and Samara Hussain.

It is impossible for me not to give credit to the numerous authors of the free/libre software that I have had the pleasure of using in my work, which ranges from operating systems to scientific and office tools. Their benevolent hard work has eased much of my own. In particular, it seems appropriate to mention the wonderful LaTeX ecosystem, whose software has lately been spending an almost comical amount of CPU time on my computers, both in the preparation of the present dissertation and of my defense.

Finally, I thank the members of my jury for dedicating their time to reading and evaluating my thesis.

This thesis was financed thanks to a Research Fellowship granted by the Fonds de la Recherche Scientifique (F.R.S.-FNRS), as well as a contribution from my supervisor’s BB2B grant during my first year.

1. Shuffled using shuf (GNU Core Utilities) with a Quantis-USB quantum random number generator as a randomness source—unfortunately, despite my best efforts, device-independent quantum random number generators were not yet commonly available at the time of writing this.


Contents

Résumé
Abstract

1 Introduction
  1.1 Random number generators
  1.2 Hardware RNGs, classical and quantum
  1.3 The device-independent approach
  1.4 Personal contribution

2 Technical introduction
  2.1 Nonlocality
    2.1.1 Setting
    2.1.2 Local hidden variable models
    2.1.3 Quantum behaviors: beyond locality
    2.1.4 No-signaling behaviors
    2.1.5 Geometry of the behavior sets
  2.2 Bell inequalities
    2.2.1 CHSH inequality
    2.2.2 Tilted-CHSH inequalities
  2.3 NPA hierarchy and semidefinite programming
    2.3.1 NPA hierarchy of relaxations
    2.3.2 Listing and solving the linear constraints
    2.3.3 Optimization over quantum behaviors
    2.3.4 Semidefinite and conic programming
  2.4 Nonlocality and randomness
    2.4.1 Guessing probability
    2.4.2 Guessing probability and tilted-CHSH inequalities
  2.5 Randomness generation
    2.5.1 Security
    2.5.2 Min-entropy and randomness extractors

3 Self-testing partially entangled qubit pairs
  3.1 Goals and preliminaries
  3.2 SOS decompositions for tilted-CHSH inequalities
  3.3 Application to self-testing
  3.4 Discussion
  3.A Robust self-test
    3.A.1 Introduction
    3.A.2 Application of our SOS decompositions to robust self-testing
  3.B Vertex SOS decompositions for CHSH
  3.C Additional SOS for CHSH
  3.D Shortcomings in previous results
    3.D.1 CHSH self-test
    3.D.2 Partially entangled state self-test
  3.E Optimality of O(√ε) self-testing bounds

4 Randomness generation with sublinear shared quantum resources
  4.1 DIRNG protocol based on the tilted-CHSH expression
    4.1.1 Motivation
    4.1.2 Tilted-CHSH game
    4.1.3 Protocol and assumptions
  4.2 Security of the protocol
    4.2.1 Entropy accumulation
    4.2.2 Soundness of the protocol
    4.2.3 Random bits from any partially entangled two-qubit state
    4.2.4 Sublinear entanglement consumption
  4.3 Using diluted Bell states
  4.4 Discussion
  4.A Robust min-entropy bound from self-testing
  4.B Proof of Lemma 4.3

5 Randomness generation from several Bell estimators
  5.1 Introduction
  5.2 Behaviors and Bell expressions
  5.3 A general procedure for DIRNG against classical side information
  5.4 Estimation
  5.5 Bounding single-round randomness
  5.6 Summary of the protocol and security
  5.7 Discussion
    5.7.1 Bounding randomness for all inputs with one Bell expression (X_r = X, t = 1)
    5.7.2 Bounding randomness from a subset of all inputs (X_r ⊆ X)
    5.7.3 Bounding randomness from several Bell expressions (t ≥ 1)
  5.8 Conclusion and open questions
  5.A Dual conic program for RB functions

6 Conclusion and outlook

Bibliography


Résumé

The generation of sequences of random numbers, that is, of sequences that are unpredictable and devoid of any structure, has numerous applications in information technologies. One of the most sensitive is cryptography, whose modern practice relies on secret keys that must be precisely unpredictable from the point of view of potential adversaries. This type of application requires randomness generators of high security.

This thesis falls within the device-independent approach to quantum random number generation (DIRNG, for Device-Independent Random Number Generation). These methods exploit the fundamentally unpredictable nature of the measurement of quantum systems. In particular, the term "device-independent" means that the security of these methods does not appeal to a particular theoretical model of the device itself, which is treated as a black box. This approach thus differs from more traditional methods whose security rests on a precise theoretical model of the device and can therefore be compromised by a hardware malfunction or the intervention of an adversary.

Our contributions are the following. We first prove a family of robust self-testing criteria for a class of quantum systems involving partially entangled pairs of two-level systems (qubits). This particularly powerful form of inference makes it possible to certify that the contents of a quantum black box conform to one of these systems, solely on the basis of macroscopically observable statistical properties of the box.

This result leads us to introduce, and to prove the security of, a randomness generation method based on these partially entangled black boxes. The interest of this method lies in its low entanglement cost, which reduces the use of quantum resources (entanglement or quantum communication) compared to existing DIRNG methods.

We also present a randomness generation method based on an original statistical estimation of the correlations of the black boxes. Unlike existing DIRNG methods, which summarize all the observed measurements into a single quantity (the violation of a single Bell inequality), our method exploits a complete (and therefore multidimensional) description of the correlations of the black boxes, which allows it to certify a greater quantity of randomness for the same number of measurements. We then illustrate this method numerically on a system of partially entangled qubits.


Abstract

The generation of random number sequences, that is, of unpredictable sequences free from any structure, has found numerous applications in the field of information technologies. One of the most sensitive applications is cryptography, whose modern practice makes use of secret keys that must indeed be unpredictable for any potential adversary. This type of application demands highly secure randomness generators.

This thesis contributes to the device-independent approach to quantum random number generation (DIRNG, for Device-Independent Random Number Generation). Those methods of randomness generation exploit the fundamental unpredictability of the measurement of quantum systems. In particular, the security of device-independent methods does not appeal to a specific model of the device itself, which is treated as a black box. This approach therefore stands in contrast to more traditional methods whose security rests on a precise theoretical model of the device, which may lead to vulnerabilities caused by hardware malfunctions or tampering by an adversary.

Our contributions are the following. We first introduce a family of robust self-testing criteria for a class of quantum systems that involve partially entangled qubit pairs. This powerful form of inference allows us to certify that the contents of a quantum black box conform to one of those systems, on the sole basis of macroscopically observable statistical properties of the black box.

That result leads us to introduce and prove the security of a protocol for randomness generation based on such partially entangled black boxes. The advantage of this method resides in its low shared entanglement cost, which allows us to reduce the use of quantum resources (both entanglement and quantum communication) compared to existing DIRNG protocols.

We also present a protocol for randomness generation based on an original estimation of the black-box correlations. Contrary to existing DIRNG methods, which summarize the accumulated measurement data into a single quantity—the violation of a unique Bell inequality—our method exploits a complete, multidimensional description of the black-box correlations that allows it to certify more randomness from the same number of measurements.

We illustrate our results on a numerical simulation of the protocol using partially entangled states.


1 Introduction

1.1 Random number generators

In the 1920s, the need for large volumes of random digits urged Leonard H.C. Tippett, an English statistician, to publish the first table of random numbers [1]. This table contained 41 600 random decimal digits; its purpose was to provide a convenient substitute for the tedious manual generation of the random digits used in statistical research. Soon enough, the table was found to be insufficient in volume, and further tables were produced using more sophisticated methods, in order both to improve the "quality" of the random numbers and to facilitate their production in larger quantities [2]. One of the last and most well-known examples, published by the RAND Corporation in 1955 (both in print and punched card form), contained a million random digits in addition to a hundred thousand normal variates [3].

In today’s information age, where computers have become an essential tool in research and communications, it would be unthinkable to obtain random numbers from such sources.

Monte-Carlo methods, which are used in many branches of science, are big consumers of random inputs, and today's applications use quantities of random numbers that far exceed the million digits provided by RAND—which barely amount to about 62 700 double precision floating point variates!¹ With the evolution of our needs for random numbers, we have thus moved on from manual dice-rolling and urn-drawing to fast and automated technologies: random number generators (RNG).
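As an aside, the footnote's arithmetic, and the [1, 2[ sampling trick it describes, can be checked directly. The snippet below is an illustrative sketch, not tied to any particular library's implementation:

```python
import math
import random
import struct

# 10^6 decimal digits carry log2(10) * 10^6 ≈ 3.32 * 10^6 bits of information.
bits = math.log2(10) * 10**6
variates = bits / 53          # ≈ 62 700 double-precision variates at 53 bits each

# The [1, 2[ trick: fix sign and exponent (0x3FF), draw the 52 explicit
# mantissa bits at random; the result is uniform on [1, 2[, and subtracting 1
# yields a uniformly spaced variate in [0, 1[.
random.seed(0)
mantissa = random.getrandbits(52)
x = struct.unpack("<d", struct.pack("<Q", (0x3FF << 52) | mantissa))[0]
u = x - 1.0

assert 1.0 <= x < 2.0 and 0.0 <= u < 1.0
```

(The 52 explicit mantissa bits, together with the implicit leading bit, make up the 53 significant bits of an IEEE 754 double.)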

Our concern, however, will not be the increasing volume of random numbers used in various applications, but rather the need for increasing security. Indeed, the use of RNG goes beyond Monte-Carlo-type experiments, in which random numbers are used to solve computational problems. In that type of application, what matters is that the large batches of random numbers used in the experiment are well-distributed; in fact, what is commonly admitted as the defining feature of “randomness”—a character of unpredictability—is often seen as undesirable in Monte-Carlo methods [5]. Instead, deterministic algorithms called pseudorandom number generators (PRNG) are typically used, which ensures that Monte-Carlo experiments are perfectly reproducible.
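The reproducibility afforded by PRNGs is easy to demonstrate: seeding a deterministic generator identically reproduces the whole experiment. A minimal Monte-Carlo sketch (the function name is mine):

```python
import random

def mc_pi(n: int, seed: int) -> float:
    """Crude Monte-Carlo estimate of pi from n uniform points in the unit square."""
    rng = random.Random(seed)  # deterministic PRNG: same seed, same stream
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * inside / n

# Identical seeds make the "experiment" perfectly reproducible.
assert mc_pi(10_000, seed=42) == mc_pi(10_000, seed=42)
```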

1. 10^6 digits amount to log₂(10) × 10^6 ≈ 3.32 × 10^6 random bits. Uniform floating point variates in software random number generators are usually output in the [0, 1[ interval, though they are commonly sampled in the [1, 2[ interval before subtracting 1. See for instance NumPy's implementation of numpy.random.random_sample or Julia's implementation of Base.Random.rand [4]. The reason for this is that, contrary to the [0, 1[ interval, the IEEE 754 floating point representation discretizes the [1, 2[ interval in uniformly spaced values. In particular, floating point numbers in double precision in [1, 2[ are defined by 53 bits out of 64, the 11 remaining ones being fixed. Hence our estimation of log₂(10) × 10^6 / 53 ≈ 62 700 uniform floating point variates in double precision.

In more sensitive areas of application of random numbers, security—that is, complete unpredictability for any involved party, including potential adversaries—is paramount. The most critical example is cryptography, which is a collection of processes that enable secure communication over a public channel. Cryptographic processes include the following classes of protocols [6]:

Encryption consists in dissimulating the contents of a message so that only the sender and the recipient are able to decipher it, protecting it from eavesdroppers.

Authentication is the certification of the origin and integrity of a message, preventing tampering or impersonation by an active adversary.

Key negotiation (also known as key distribution) establishes a shared secret key between two parties over a public channel.

Modern cryptographic protocols follow deterministic algorithms that are known to the enemy. Rather than basing their security on intricacies in the process itself as was the case with historical cryptographic methods (an approach now commonly named “security through obscurity”), modern methods are made secure by integrating random data in such a way that, even knowing the steps of a protocol and the messages communicated through the public channel, the adversary cannot guess the actors’ internal state and therefore has a limited capacity to attack the protocol. For instance, a symmetric encryption algorithm is meant to protect a message m using a secret key k. To communicate the message securely over a public channel, it is processed with an encryption map ( m, k ) → c to compute the ciphertext c, which can then be communicated over the public channel to its recipient and decrypted using a decryption map ( c, k ) → m. An eavesdropper only ever sees c, and if this protocol for encrypted communication is secure, she has no hope of learning the message m other than somehow guessing the key k . This approach of fully basing the reliability of cryptographic methods on the security of the secrets is called Kerckhoffs’s principle [6]. This principle encourages the development of performant cryptographic methods that are fault-tolerant in the sense that, ideally, their only mode of failure should be the use of insecure secrets.
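As an illustration of the (m, k) → c and (c, k) → m picture, here is a minimal sketch of one symmetric scheme, the one-time pad, where encryption and decryption are the same XOR map (function and variable names are mine):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with the corresponding key byte.

    Because (m XOR k) XOR k = m, the same map both encrypts and decrypts.
    The key must be as long as the message and used only once."""
    if len(key) < len(data):
        raise ValueError("one-time pad requires a key at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = bytes([0x7F, 0x03, 0xA4, 0x19, 0x55, 0xE2, 0x08, 0xB1,
             0x6C, 0x2D, 0x90, 0x47, 0xDA, 0x33])  # ideally drawn from a secure RNG
ciphertext = xor_cipher(message, key)              # what the eavesdropper sees
assert xor_cipher(ciphertext, key) == message      # recipient recovers m from (c, k)
```

For a uniformly random, single-use key this scheme is information-theoretically secure; with a guessable key, the eavesdropper simply replays the decryption map, which is exactly why the key generation must be unpredictable.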

We now understand the importance of properly choosing an encryption key k: the more unpredictable (i.e., random), the better. The same applies to authentication and key negotiation, which also make use of random secrets. Generally, if the internal state of a cryptographic protocol's actors is easily guessable by an adversary, then the protocol can do nothing to prevent the adversary from, e.g., tampering with an authentication method, deciphering an encrypted message, or guessing the product of a key negotiation. Thus, it is clear that secure cryptographic protocols call for secure random number generation. The importance of secure random number generation in cryptography has been demonstrated in practice: in 2012, a global survey of SSH and TLS (HTTPS) servers done by Heninger et al. uncovered thousands of cryptographic secrets that were fatally weakened by the use of RNGs of poor quality [7]. As a result, the authentication mechanisms protected by those secrets could be defeated easily.

Let us thus consider the foundations on which cryptographically secure RNGs are built.

Modern secure RNGs for cryptography usually combine an entropy source (that is, a source of unpredictability)² with algorithmic post-processing that serves to extract random bits from the source's raw output. Given that computers are inherently digital and deterministic, "true" entropy is generally scarce to them; they must accumulate it from external sources of noise. For instance, the Linux operating system extracts entropy from small fluctuations in the timing of hardware events [8]. Since those sources are themselves relatively scarce and often of questionable quality [7, 8], it is desirable to offload the task of random number generation onto a specialized device called a hardware RNG.

2. Here, we use the term "entropy" to abstractly refer to a quantitative measure of randomness or unpredictability. The source's entropy is usually measured by heuristic methods as a concrete value which determines the number of uniformly random and independent bits that may be processed from it by the RNG device.

1.2 Hardware RNGs, classical and quantum

Hardware RNGs are purposefully built around noisy physical processes, such as amplified thermal noise in electronic circuits. Today's CPUs even host integrated hardware RNGs, which means that they are now a common occurrence [9]. Though they constitute a great improvement over the previous sources of entropy we mentioned, hardware RNGs present the difficulty of being somewhat inscrutable due to their complexity. This makes it difficult to ensure that the underlying physical process functions correctly throughout the use of the generator. Special care must therefore be taken to ensure that their output remains random and secure.

We will now consider classical hardware RNGs and their shortcomings, and see how quantum mechanics allows us to solve some of them.

In essence, classical phenomena are deterministic: strictly speaking, they are unable to create uncertainty. This is not to say that deterministic phenomena are unreliable sources of unpredictability: algorithmic pseudorandom generators still reign supreme in most of today's practical uses of cryptography, including very sensitive financial or military applications.

Some phenomena of classical physics may nevertheless seem like decent alternatives to pseudorandom number generation. These phenomena typically involve the expression of microscopic degrees of freedom into observable but seemingly unpredictable macroscopic effects. They include chaotic processes (e.g., the double pendulum), thermal noise in electronics, turbulent flow (e.g., in atmospheric dynamics), instabilities (e.g., in lava lamps), or other stochastic phenomena such as Brownian motion. Despite the determinism of the laws of motion that govern their time evolution, our subjective perception of such systems displays much uncertainty.

Still, because a classical system behaves deterministically, its future state cannot be any more random than its initial state. As such, it could be said that using turbulent fluid dynamics or a chaotic system such as a double pendulum as a source of entropy in a RNG still effectively makes up a pseudorandom number generator. Of course, compared to algorithmic PRNGs, RNGs based on such physical systems still present the advantage of having a considerably larger space of configurations, which makes the task of reliably predicting their future outputs near impossible. Nevertheless, given that such systems create no true randomness, relying instead on the amplification of small-scale variations into large-scale uncertainty, classical hardware RNGs cannot be considered to be secure against all types of adversary. In particular, they cannot achieve information-theoretic security, for which adversaries are assumed to be all-knowing (except of course for the necessary secret inputs of the cryptographic method at hand) and in possession of unbounded computational power. Indeed, such an adversary would be able to compute the evolution in time of the deterministic RNG based on the knowledge of its initial state, which would allow her to predict the RNG's exact outcome.
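A classic example of the algorithmic post-processing mentioned at the start of this section is von Neumann's debiasing procedure, which turns independent but biased raw bits into unbiased ones; it reshapes unpredictability but cannot create it. A minimal sketch, assuming i.i.d. raw bits (the function name is mine):

```python
def von_neumann_extract(raw_bits):
    """Von Neumann debiasing: read the raw stream in pairs, emit the first
    bit of each unequal pair, and discard equal pairs.

    For independent bits with fixed bias p, the pairs (0,1) and (1,0) are
    equally likely, so the output bits are unbiased (at the price of
    discarding, on average, more than half of the input)."""
    out = []
    for b1, b2 in zip(raw_bits[0::2], raw_bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# Pairs: (0,1) -> 0, (1,0) -> 1, (1,1) and (0,0) discarded.
assert von_neumann_extract([0, 1, 1, 0, 1, 1, 0, 0]) == [0, 1]
```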

Fortunately, in real-world applications of RNGs, adversaries are of course never so powerful. Hardware and pseudorandom RNGs are therefore often considered secure enough, as we saw with the widespread use of PRNGs in sensitive applications despite their determinism.

However, even if we accept that the physical processes of a classical hardware RNG cannot be fully understood by adversaries, this does not necessarily imply that the RNG is secure: there remains a problem of trust. To illustrate this, let us take an extreme example: when we use an off-the-shelf hardware RNG, how do we make sure that it is not simply outputting a properly seeded pseudorandom sequence—therefore being completely predictable to a well-informed adversary? If we look at the output of the generator rather than the process that is responsible for this output, true hardware RNGs are indistinguishable from such compromised ones. It therefore appears that we either need to trust the provider of the RNG device or examine the device in detail.

More realistically, even if we trust that our RNG is correctly implemented, that alone does not make it secure. Indeed, one difficulty of generating randomness from physical processes is to evaluate exactly how much randomness is produced. Ideally, the process should be described by a precise theoretical model that would for example account for self-correlation in the output of the process at different times. On the basis of such a model, we could then estimate the amount of randomness that can be extracted from the process.³ But reality can deviate from these models, in which case the process may generate less randomness than estimated, biasing the generator's outcome. Undetected biases can then be exploited by an adversary to guess parts of the generator's output. As the survey of Heninger et al. [7] showed, the use of a biased RNG can lead to catastrophic consequences, without necessarily raising any alarms since the output of the RNG could still look completely random to the user.
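The gap between model and reality can be made quantitative with the notion of min-entropy, developed in Chapter 2: for a source of independent bits with bias p, each bit carries −log₂ max(p, 1 − p) bits of extractable randomness. A sketch of how an overly optimistic model inflates the randomness estimate (the parameter values are illustrative):

```python
import math

def min_entropy_per_bit(p: float) -> float:
    """Min-entropy of a single bit with Pr[bit = 1] = p:
    -log2 of the most likely outcome's probability."""
    return -math.log2(max(p, 1.0 - p))

n = 1_000_000                          # raw bits produced by the device
assumed = min_entropy_per_bit(0.5)     # model: perfectly unbiased -> 1 bit each
actual = min_entropy_per_bit(0.6)      # reality: a 60/40 bias crept in

# The model certifies n random bits, but only about 0.74 n are actually
# there; an extractor tuned to the model would emit partially guessable output.
assert assumed == 1.0
assert 0.73 < actual < 0.74
```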

As an example of how an adversary could stealthily bias the physical process of a hardware RNG, there has been a case study on the possibility of undetectably altering dopant levels in the transistors of the hardware RNG of Intel CPUs to lower its entropy output without triggering the built-in entropy checks [9]. In cases where the context allows, an adversary may also potentially bias a pristine generator’s output in her favor by controlling parts of its environment, for example by frequency injection in smart cards [10]. But even excluding any explicit tampering by an adversary, it is extremely difficult to predict and protect against all possible accidental failure modes of a hardware RNG, especially if this failure can appear gradually instead of suddenly.

We have thus highlighted two issues with the hardware RNGs considered so far. The first is that classical processes create no randomness and therefore cannot lead to information-theoretic security. The second is that assessing the randomness produced by a RNG device is difficult, given that it may deviate in many ways from the theoretical model it is expected to follow. To solve at least the first issue, we may turn to quantum mechanics as the physical source of randomness.

3. Note however that the common practice does not go that far. A RNG device is usually deemed secure after having passed a number of statistical tests to detect deviations from a uniform distribution. Of course, good PRNGs are perfectly capable of passing the same tests.

Quantum mechanics has been the best theoretical framework so far to describe natural phenomena occurring at small scales (of time, space or energy). One of the defining features of quantum theory is its appeal to nondeterministic measurement outcomes. When an observable quantity is measured on a quantum system whose state is not an eigenstate of the observable, the outcome of that measurement is uncertain. Unlike in classical mechanics, the uncertainty is not explained in full by a lack of knowledge of the system's anterior configuration. It is in fact impossible to predict the outcome perfectly: Born's rule in quantum mechanics postulates that the measurement outcome is an event of pure chance, which obeys some prescribed probability distribution that depends on the system and the measurement.⁴
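Born's rule is simple to state operationally: for a qubit in state α|0⟩ + β|1⟩ measured in the computational basis, outcome 0 occurs with probability |α|². A toy simulation of this prescription, with a seeded PRNG standing in (somewhat ironically) for nature's randomness; the names are mine:

```python
import math
import random

def born_measure(alpha: float, beta: float, rng: random.Random) -> int:
    """Simulate one computational-basis measurement of alpha|0> + beta|1>.
    Born's rule: outcome 0 with probability |alpha|^2, outcome 1 otherwise."""
    assert abs(alpha ** 2 + beta ** 2 - 1.0) < 1e-9  # state must be normalized
    return 0 if rng.random() < alpha ** 2 else 1

rng = random.Random(1)
alpha, beta = math.sqrt(0.3), math.sqrt(0.7)
outcomes = [born_measure(alpha, beta, rng) for _ in range(100_000)]
freq0 = outcomes.count(0) / len(outcomes)
assert abs(freq0 - 0.3) < 0.02  # empirical frequency tracks |alpha|^2
```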

The inherent indeterminism of quantum measurements is a very attractive feature for hardware random number generation. Indeed, according to quantum mechanics, the stochastic part of the theoretical model for a quantum RNG is fundamentally random, unlike in classical RNGs where it is only the consequence of a lack of knowledge. This means that quantum RNGs can be information-theoretically secure. Furthermore, a precise assessment can be made on the quality of the randomness produced in the theoretical model. In contrast, pseudorandom generators only create an illusion of randomness, and the randomness of classical hardware generators is at best difficult to quantify objectively [11].

Even better, quantum RNGs can remain quite simple in design: the core randomness-generating process can simply consist of a photon source, a beam splitter and two optical detectors. For example, ID Quantique's Quantis generator fits into a small form factor that can be integrated onto a PCIe card or a compact USB device [12].

On this basis, quantum RNGs appear to be good substitutes for classical hardware RNGs when high security is needed. (In demanding applications where high throughput matters more, current quantum RNGs may not be fast enough and algorithmic PRNGs may be preferred, but this is not a fundamental limitation: future quantum RNGs might well reach comparable speeds [11].)

However, quantum RNGs such as Quantis suffer from a problem which we already pointed out in classical hardware RNGs: they may become unreliable when they unexpectedly deviate from their intended design. Indeed, as we pointed out, the security of a hardware RNG typically holds only according to a theoretical model of the device, which the physical implementation is assumed to follow. We call such a level of security device-dependent: if a device departs from its model, for example because it suffered component failure or tampering by an adversary, it cannot be guaranteed to produce a secure output anymore. This is a problem because hardware RNGs are generally treated as black boxes, which we assume to be secure only because we put trust both in the manufacturer and in the reliability of the components. To protect against accidental failures of the device, practical implementations of quantum RNGs like Quantis test their own outputs by comparing a number of their statistical properties against their theoretical model. However, this does not change the fact that the generator functions as a black box. Besides, as we saw with classical and pseudorandom generators, such heuristic tests do not guarantee the randomness of the generation process: they only verify that its output looks right.

4. We will leave for later the question of whether this uncertainty is only a shortcoming of a theory that fails to account for all the causes that would in fact determine a unique outcome. Let us thus accept for now that this indeterminism is an inherent feature of quantum systems.

In summary, while quantum mechanics can bring information-theoretic security to random number generation in theory, practical implementations of device-dependent quantum RNGs still retain some of the limitations of other hardware RNGs. In effect, those RNGs are black boxes, and the trust we decide to put in them can only rest on fallible checks and assumptions on the device functioning according to specification. Given that quantum RNGs typically target sensitive applications where high security is expected [12], it is fair to expect better security guarantees than such black-box devices can offer.

1.3 The device-independent approach

If we adopt the point of view of considering a RNG device as a black box, how can we hope to obtain more than device-dependent security from it? From what we have said so far, it seems like a black-box RNG is doomed to be untrustworthy. And indeed, it is: if a generator functions opaquely, we cannot distinguish it from an obviously insecure device that takes its outputs from a built-in algorithmic PRNG whose seed is known to an adversary. It does not matter whether a black-box RNG uses quantum processes: seen as a black box, there is nothing visibly quantum about its output that proves it to be trustworthy.

We can reach a solution to this issue by building our RNG out of two separate devices (or more) that are mutually isolated, and considering devices that take inputs instead of being pure sources. The motivation for this approach is as follows. If we use deterministic devices in such a task of distributed random number generation, their outputs will depend on their respective current state and provided inputs, and not on the inputs of other devices. Indeed, the devices are limited by their inability to communicate, which means that device A cannot know device B's inputs and vice versa. This results in a constraint on the correlation between the two devices' outputs, which is called Bell locality. A celebrated result of John Bell from 1964 [13] showed that quantum mechanics, thanks to the characteristically quantum phenomenon of entanglement, enables physically isolated devices to be more strongly correlated than allowed by classical physics, reaching beyond the constraint of locality to produce so-called nonlocal correlations. (We will provide a more detailed explanation of this in Chapter 2.)
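The constraint of Bell locality on deterministic devices can be checked by brute force in the simplest scenario, the CHSH setting detailed in Chapter 2: with binary inputs x, y and ±1-valued outputs a(x), b(y), no pair of non-communicating deterministic devices can push the CHSH combination of correlators above 2, whereas entangled quantum devices can reach 2√2. A small enumeration sketch (function name is mine):

```python
import math
from itertools import product

def chsh_value(a0, a1, b0, b1):
    """CHSH combination E(0,0) + E(0,1) + E(1,0) - E(1,1) for deterministic
    devices: device A outputs a_x on input x, device B outputs b_y on input y."""
    return a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1

# Enumerate all 16 deterministic local strategies.
local_max = max(chsh_value(*s) for s in product([+1, -1], repeat=4))
assert local_max == 2                  # the local (Bell) bound
assert 2 * math.sqrt(2) > local_max    # quantum devices can reach 2*sqrt(2)
```

Shared randomness can only mix deterministic strategies, so the bound of 2 holds for all local models; the quantum value 2√2 is Tsirelson's bound.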

As a result, nonlocal devices distinguish themselves from deterministic devices in a way that can be witnessed from their outputs alone, i.e., without looking inside the black box. Thus, whereas the truly random character of quantum measurements in a single black box was not distinguishable from the pseudorandomness of a deterministic RNG, splitting the device into multiple parts makes this distinction possible. Nonlocality therefore enables the use of black boxes for RNG, without requiring any trust in the integrity of the devices or even in their provider. The only assumptions that are made concern the laws of nature (“quantum mechanics is a correct description of quantum phenomena”), the initial absence of correlation between the devices and the inputs they receive, and the physical separation of the devices (which can be enforced from the outside). As such, this approach of using nonlocality to certify nondeterminism is called device-independent (DI).


Of course, just knowing that the devices produce some amount of randomness in their output is not enough in practice; it remains to design a precise protocol to produce quantifiable randomness out of nonlocal black boxes.

The insight of using nonlocality for device-independent cryptographic protocols originally came from Ekert [14] and later Mayers and Yao [15] in the context of quantum key distribution (QKD, or in other words, key negotiation using quantum devices).

QKD protocols such as BB84 [16, 17] or E91 [14] convert quantum resources to a secret key shared by the two distant actors of the protocol, which can in turn be used to exchange encrypted messages securely. While the security of the BB84 protocol is unavoidably dependent on some element of trust in the implementation of the devices [18], Ekert’s E91 protocol already incorporated notions of device-independent self-checking, though more in the form of a heuristic check than in a rigorous proof of security [14]. A first proper DIQKD protocol was later introduced by Barrett et al. [19]. More practical protocols followed, at first proven secure only against constrained adversarial models (see for example [20–22]), and only recently against unlimited quantum adversaries [23–25].

A special case of device-independent randomness generation (DIRNG) is device-independent randomness expansion, where protocols are designed to produce more randomness than they consume. The first proposal for such a device-independent randomness expansion protocol was made by Colbeck and Kent [26, 27]. We should also mention the practical protocol of Pironio et al. [28], which was the first device-independent protocol to be implemented experimentally (thanks in particular to its tolerance to memory effects in the devices, which allows the sequential use of a single pair of devices), and also the first to introduce a superlinear expansion rate. Further works have aimed to improve the security and efficiency of DIRNG protocols—we will come back to this later.

Aside from DIQKD and DIRNG, the device-independent approach also includes self-testing, which will be the topic of Chapter 3. In short, self-testing consists in assessing the state (and possibly the measurement operators) of a quantum system device-independently. That is, some macroscopically observable properties of black-box devices may be used to certify that a given quantum system underlies the black-box behavior. For example, if a pair of black-box quantum devices display maximal violation of the CHSH inequality (see Chapter 2), it can be shown that the quantum system being measured is equivalent to a pair of maximally entangled qubits up to a local change of basis [29, 30]. It is also possible to prove robust self-testing statements, which provide approximate certificates depending for instance on how much a Bell inequality is violated [23, 31, 32].

Since self-testing essentially “opens the black box”,⁵ it may serve as the basis for other device-independent protocols. For example, Miller and Shi [34] developed a DIRNG protocol out of their previous self-testing results [35]. Self-testing may also be used to verify full quantum circuits [31, 36, 37], which also finds applications in the study of interactive proof systems [36, 38, 39].

Let us also mention that the device-independent approach finds applications in other cryptographic tasks, for example bit commitment [40] or conference key agreement, an N-partite generalization of key negotiation [41].

⁵ In reference to the pre-print title of [33], “Opening the black box: physical characterization from nonlocal correlations”.

1.4 Personal contribution

It is in the context of device-independent random number generation that this thesis endeavors to make a contribution. Our focus will be placed on extending the current state of the art in DIRNG toward more diverse ways of evaluating nonlocality. More specifically, we will focus our attention on the use of nonmaximally entangled states, with the explicit goal of producing more randomness out of a given amount of shared entanglement.

In Chapter 2, we will introduce elements of the framework of DIRNG used in the remainder of the thesis. This includes a technical introduction to Bell nonlocality, the CHSH and tilted- CHSH Bell inequalities, the NPA hierarchy of relaxations for the quantum set of behaviors, the link between nonlocality and randomness, and the security of randomness generation protocols.

In Chapter 3, we will show that nonmaximally entangled qubit pairs can be robustly self-tested using the tilted-CHSH expressions. As we explained above, self-testing is one of the building blocks of the device-independent approach. This specific result will be used to develop the protocol of Chapter 4. Chapter 3 is based on our published paper [42].

In Chapter 4, we will introduce a DIRNG protocol for two black-box devices that exploits a tilted-CHSH expression rather than the commonly-used CHSH expression. This will allow the protocol to leverage entanglement (seen as a resource) in a more efficient way: in contrast with the CHSH expression, an optimal violation of a tilted-CHSH expression calls for partially (instead of maximally) entangled qubit pairs. As a result, more randomness is produced per unit of entanglement. We will show that, compared to other DIRNG protocols and for the same amount of output randomness, this improvement in the consumption of entanglement also implies that our protocol requires less communication of quantum information in the preparation of the two black-box devices. Chapter 4 is based on our submitted paper [43].

In Chapter 5, we will introduce another DIRNG protocol that bases its analysis of randomness production on a fuller picture of the black boxes’ nonlocality, with the aim of making better use of the inherent randomness present in nonlocal outputs. Instead of taking the typical approach of measuring nonlocality with a single Bell expression chosen in advance of the protocol, our method will make use of multiple Bell expressions so as to better capture the nonlocal character of the observed black-box behavior, leading to tighter estimates of randomness. We will illustrate our methods numerically on a noisy nonmaximally entangled system of two qubits. Chapter 5 is based on our submitted paper [44].


2 Technical introduction

2.1 Nonlocality

We start this technical introduction with the concept of nonlocality, in the sense proposed in the celebrated work of John Bell [13, 45]. We will then show the core importance of this concept in the security of device-independent quantum cryptographic protocols. Throughout the chapter we will also introduce many important concepts and tools that we will use in later chapters.

2.1.1 Setting

Let us consider a so-called Bell scenario involving two parties called Alice and Bob. (The following discussion generalizes straightforwardly to any number of parties.) Each party interacts with a box through a classical interface, which allows them to choose one of several measurement settings (also termed inputs), upon which they will each observe a measurement outcome (also termed output). This is illustrated in Figure 2.1. The boxes are often called devices, and are sometimes also collectively referred to as a single multi-part device. The sets of inputs and outputs of each box are commonly finite, though there exist generalizations to infinite sets.

The measurement outcome of a physical device will generally not seem deterministic to Alice and Bob. They will therefore characterize the behavior of their boxes through a collection of probabilities, written

p(ab|xy) = Pr[A = a, B = b | X = x, Y = y] ,    (2.1)

where A and B are random variables denoting Alice and Bob’s respective outcomes and X and Y their respective inputs. Throughout this thesis we will mostly use the shorter (albeit less explicit) left-hand side of (2.1) to denote behavior probabilities.

Bell scenarios are typically considered under a few core assumptions. The primary one, called the no-signaling assumption, is that Alice and Bob’s boxes cannot be used to transmit messages from one party to the other. Formally speaking, this means that there cannot be any causal relation from one box’s input to the other’s output. Physically, this can be (and, experimentally, has been) enforced by making sure that the local measurement events happen in spacelike-separated regions of spacetime.¹ Without this assumption, the boxes’ behavior

¹ We should note that what constitutes a “measurement event” in quantum mechanics is a thorny issue which we only mention in passing. Suffice it to say that in our case, the measurement starts when the classical operator (be it a human or a computer) provides the input to their box, and ends when they acknowledge the output. But again, what constitutes the boundary between the quantum box and the “classical” operator is debatable. See e.g. [46] for a brief discussion of this.



Figure 2.1: Illustration of a bipartite Bell scenario: inputs x and y are fed into their respective boxes A and B, yielding outputs a and b. The boxes are physically separated, as represented by the vertical dashed line, preventing them from communicating during the measurement. The boxes’ behavior may be correlated (represented by the wavy dotted line) through preexisting shared random variables.

could be arbitrarily correlated, the only constraints being normalization and nonnegativity of the probability weights p ( ab | xy ) .

A second assumption is that Alice and Bob can freely choose their measurement settings, that is, that their choice is made independently of the devices. This is impossible to enforce physically, seeing that the measurement event and the choice of measurement setting will always have common events in their past light cones that may have influenced both of them.²

How do we characterize the source of Alice and Bob’s uncertainty in their measurement outcomes? In the following sections, we consider several possibilities.

2.1.2 Local hidden variable models

One possible kind of model underlying the behavior p ( ab | xy ) is a deterministic local hidden variable model. In these models, the behavior of Alice and Bob’s device is fully explained by two elements: the locally provided input x or y, and a local hidden variable (LHV) Λ, a random variable unknown to Alice and Bob, which may introduce some correlation between their devices. The behavior can therefore be decomposed as

p(ab|xy) = Σ_λ q_λ d(a|x, λ) d(b|y, λ) ,    (2.2)

where q_λ = Pr[Λ = λ] characterizes the probability distribution of the LHV. Here we have written Pr[A = a | X = x, Λ = λ] = d(a|x, λ) using the letter d to denote determinism, i.e., d(a|x, λ), d(b|y, λ) ∈ {0, 1}. Given the local and deterministic character of classical physics, this kind of behavior is sometimes also called “classical”. In fact, all local behaviors can be simulated within classical physics—using two isolated classical computers playing the roles of Alice and Bob, for example.

² There exist relaxed versions of this assumption that allow almost arbitrary common influences of the inputs and devices and lead to a different concept of so-called measurement dependent locality [47].


Both Bell scenario assumptions we mentioned earlier appear in this decomposition. Indeed, the assumption of no-signaling is naturally satisfied by the requirement of local determinism (i.e., that the outcome depends deterministically on local variables). The assumption of independence of the inputs from the device is manifest in the fact that Λ is distributed according to weights q_λ independently of X and Y, rather than q_{λ|x,y}.

Thinking beyond deterministic LHV models, we may like to allow probabilistic local outcomes by allowing A and B to be proper random variables correlated with Λ and their respective inputs. However, the resulting local nondeterminism can be explained away simply by extending Λ with a second appropriately distributed local hidden variable Λ′. This second variable plays the role of a shared look-up table: each box consults its own local copy of the table to decide (deterministically) which outcome to provide for any given input and value of Λ. The apparent randomness of A and B given X, Y and Λ is then simply a consequence of the randomness of the look-up table Λ′; for instance

Pr[A = a | X = x, Λ = λ] = Σ_{λ′} Pr[A = a | X = x, Λ = λ, Λ′ = λ′] Pr[Λ′ = λ′] ,

where Pr[A = a | X = x, Λ = λ, Λ′ = λ′] ∈ {0, 1} depending on whether a matches the designated value from the look-up table λ′ for input x and the first LHV’s value λ.

Without loss of generality, we may therefore think of any LHV model (deterministic or not) as a deterministic process whose entire random character rests on an unknown local hidden variable Λ. By convention, we will nevertheless sometimes characterize the most general local behavior as a locally probabilistic one and write it as

p(ab|xy) = Σ_λ q_λ p(a|x, λ) p(b|y, λ) .    (2.3)
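As a quick numerical illustration, a local behavior of the form (2.3) can be generated from arbitrary (here randomly chosen) weights q_λ and local response distributions, and checked to be normalized and no-signaling. The dimensions and distributions below are illustrative choices, not anything fixed by the text; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 2 inputs and 2 outputs per party, 8 hidden-variable values.
n_lambda = 8
q = rng.dirichlet(np.ones(n_lambda))                 # weights q_lambda
pA = rng.dirichlet(np.ones(2), size=(2, n_lambda))   # pA[x, lam, a] = p(a|x, lam)
pB = rng.dirichlet(np.ones(2), size=(2, n_lambda))   # pB[y, lam, b] = p(b|y, lam)

# Behavior p(ab|xy) = sum_lambda q_lambda p(a|x,lambda) p(b|y,lambda), eq. (2.3).
p = np.einsum('l,xla,ylb->abxy', q, pA, pB)

# Normalization: sum_{ab} p(ab|xy) = 1 for each (x, y).
assert np.allclose(p.sum(axis=(0, 1)), 1.0)

# No-signaling: Alice's marginal p(a|x) does not depend on y (and symmetrically).
pa = p.sum(axis=1)                                   # shape (a, x, y)
assert np.allclose(pa[:, :, 0], pa[:, :, 1])
```

The no-signaling property holds here by construction, since each factor in (2.3) depends only on local data.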

2.1.3 Quantum behaviors: beyond locality

As we will now see, local hidden variable models, while they seem intuitively motivated, are not sufficient to describe the full range of possible physical realizations of a Bell scenario.

Indeed, in our quantum-mechanical understanding of nature, there exist some systems that escape any description in terms of local hidden variables. This phenomenon, called nonlocality, constitutes a wide area of research [45] and has been illustrated in numerous experiments progressively attempting to close any loopholes by which LHV models could potentially still reproduce the observed results [48–53].

Let us then consider how to model the behavior of a quantum Bell device. In a quantum realization of a Bell scenario, Alice and Bob’s boxes are generally described by a mixed joint quantum state ρ AB and the statistics of measurement outcomes are described by POVM theory [54]. Since Alice and Bob’s devices are physically separated, we will assume a tensor product structure, where the POVM describing the outcomes of a joint measurement setting ( x, y ) is a tensor product of local POVMs { M a | x } a and { N b | y } b . We may thus write

p(ab|xy) = Tr[ρ_AB M_{a|x} ⊗ N_{b|y}] .    (2.4)

In any given Bell scenario, we will write as Q the set of behaviors p(ab|xy) that can be achieved using quantum systems, while L will describe the set of behaviors that admit a LHV model.


Local quantum behaviors

While quantum behaviors may be nonlocal, there obviously exist quantum behaviors that admit local hidden variable models, since any such classical model can itself be described in the quantum formalism (i.e., L ⊆ Q ). The key to nonlocality in quantum mechanics is a combination of entanglement and incompatible measurements. If one of these two properties is missing, we will see that the quantum behavior admits a local model.

Let us first consider a separable state ρ_AB = Σ_λ q_λ ρ_A^λ ⊗ ρ_B^λ. Equation (2.4) then becomes

p(ab|xy) = Σ_λ q_λ Tr[ρ_A^λ M_{a|x}] Tr[ρ_B^λ N_{b|y}] ,    (2.5)

which we immediately recognize as a local distribution, with λ acting as the local hidden variable. Therefore, the state ρ_AB must be entangled in order to display nonlocality. In other words, assuming the validity of quantum mechanics, nonlocality is a certificate of entanglement.

To illustrate the importance of measurement incompatibility, let us consider projective measurements for Alice and write M_{a|x} = Π_{a|x} (a more general argument can be given for jointly measurable POVMs). If Alice’s projective measurements are compatible, there exists a basis of her local Hilbert space in which all Π_{a|x} are diagonal. Writing in this basis Π_{a|x} = Σ_{λ∈π(a|x)} |λ⟩⟨λ|, where π(a|x) is the subset of indices labeling the support of Π_{a|x}, we find

p(ab|xy) = Σ_{λ∈π(a|x)} Tr[ρ_AB |λ⟩⟨λ| ⊗ N_{b|y}]    (2.6)
         = Σ_λ d(a|x, λ) Tr[ρ_AB |λ⟩⟨λ| ⊗ N_{b|y}]    (2.7)
         = Σ_λ q_λ d(a|x, λ) Tr[ρ_B^λ N_{b|y}] ,    (2.8)

with d(a|x, λ) = 1 if λ ∈ π(a|x) and 0 otherwise, and the weights q_λ and normalized reduced density matrices ρ_B^λ defined such that q_λ ρ_B^λ = Tr_A[ρ_AB |λ⟩⟨λ| ⊗ I_B]. Equation (2.8) provides a LHV model for the behavior p(ab|xy).
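The equality between (2.4) and the LHV decomposition (2.8) is easy to check numerically when Alice’s projectors are all diagonal in a common basis. The state, dimensions, and Bob’s measurements below are arbitrary illustrative choices; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 3, 2

# Arbitrary mixed state rho_AB (illustrative choice).
G = rng.normal(size=(dA*dB, dA*dB)) + 1j * rng.normal(size=(dA*dB, dA*dB))
rho = G @ G.conj().T
rho /= np.trace(rho)

# Compatible projective measurements for Alice: all Pi_{a|x} diagonal in the
# same basis.  pi[(a, x)] lists the basis indices in the support of Pi_{a|x}.
pi = {(0, 0): [0], (1, 0): [1, 2], (0, 1): [0, 1], (1, 1): [2]}
Pi = {k: np.diag([1.0 if i in v else 0.0 for i in range(dA)])
      for k, v in pi.items()}

# Two projective measurements for Bob (sigma_x and sigma_z eigenbases).
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
zero = np.array([[1.0, 0.0], [0.0, 0.0]])
N = {(0, 0): plus, (1, 0): np.eye(2) - plus,
     (0, 1): zero, (1, 1): np.eye(2) - zero}

def prob(a, x, b, y):                       # direct computation, eq. (2.4)
    return np.trace(rho @ np.kron(Pi[(a, x)], N[(b, y)])).real

def lhv(a, x, b, y):                        # decomposition (2.8), lambda = basis index
    total = 0.0
    for lam in range(dA):
        if lam in pi[(a, x)]:               # d(a|x, lambda) = 1
            proj = np.zeros((dA, dA)); proj[lam, lam] = 1.0
            total += np.trace(rho @ np.kron(proj, N[(b, y)])).real
    return total

for a, x, b, y in np.ndindex(2, 2, 2, 2):
    assert abs(prob(a, x, b, y) - lhv(a, x, b, y)) < 1e-12
```

Entanglement in ρ_AB is irrelevant here: compatibility of Alice’s measurements alone already forces the behavior to be local.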

The CHSH game

Let us now consider one of the canonical examples of quantum nonlocality by introducing the CHSH nonlocal game, named after Clauser, Horne, Shimony and Holt [55] who introduced the related Bell inequality of the same name. Nonlocal games are specific computational tasks formulated in a Bell scenario, where a referee provides individual random inputs to Alice and Bob and expects them to produce specific outputs. Alice and Bob try to come up with a strategy that maximizes their probability of giving the right answer to the referee’s question.

Two obvious constraints are that Alice and Bob are not allowed to communicate during the game, and that they only learn their inputs when the game starts. Hence, their strategy must be determined in advance, the difficulty being that Alice must decide on her answer without knowing what input Bob received, and vice versa.


We can fully characterize any strategy by the joint conditional probability distributions of Alice and Bob’s outputs given their received inputs, p(ab|xy). Given the assumptions (that is, no communication and no advance knowledge of the inputs), we may understand p(ab|xy) as the behavior of a device in a Bell scenario; then, rather than making conscious strategic decisions, Alice and Bob’s strategy is simply to feed their respective inputs into their parts of the device and provide the referee with their measurement outcomes.

In the CHSH game, the inputs x, y ∈ { 0, 1 } are chosen uniformly at random by the referee, and Alice and Bob must provide outputs a, b ∈ { 0, 1 } such that a ⊕ b = xy in order to win the game.

We first consider a local deterministic strategy as modeled by equation (2.2). Given a value of Λ = λ and (X, Y) = (x, y), the outputs a and b are predetermined: we may understand a = a_{x,λ} and b = b_{y,λ} as components of a fixed lookup table that fully characterizes the strategy.

The probability that Alice and Bob succeed at the game for a given value of Λ is

Pr[success | Λ = λ] = Σ_{xy} (1/4) I[a_{x,λ} ⊕ b_{y,λ} = xy] ,    (2.9)

where I is the indicator function. If we attempt to satisfy a_{x,λ} ⊕ b_{y,λ} = xy for all (x, y), we run into trouble. Indeed, taking the modulo-2 sum of each side of that relation over all four values of the pair (x, y), we find the contradiction 0 = 1. Hence, the best we can hope for is for three out of the four relations to be satisfied. One possibility to achieve this is to take a_{x,λ} = b_{y,λ} = 0 for all (x, y). This shows that the most successful local strategy for the CHSH game reaches a success probability of 3/4.
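The 3/4 bound can also be confirmed by brute force: there are only 16 deterministic lookup tables (a_0, a_1, b_0, b_1), and none of them wins more than three of the four input pairs. A small sketch:

```python
from itertools import product

# Enumerate all 16 deterministic strategies: a(x) and b(y) are lookup tables.
best = 0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    a, b = [a0, a1], [b0, b1]
    # Count the input pairs (x, y) for which a_x XOR b_y = x AND y.
    wins = sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1))
    best = max(best, wins)

print(best / 4)  # -> 0.75: at most 3 of the 4 relations can be satisfied
```

Since the local set is the convex hull of these deterministic strategies, 3/4 bounds the success probability of every local behavior.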

Quantum mechanics allows us to reach past this local limit. In the optimal quantum strategy, Alice and Bob share a Bell state³ of two qubits, |ϕ+⟩_AB = (|0⟩_A |0⟩_B + |1⟩_A |1⟩_B)/√2. Their measurements are characterized by the following dichotomic observables, i.e., observables with eigenvalues in {±1}:

A_0 = σ_z ,    A_1 = σ_x ,    (2.10)
B_0 = (σ_z + σ_x)/√2 ,    B_1 = (σ_z − σ_x)/√2 ,    (2.11)

with the understanding that A_x = Π_{0|x} − Π_{1|x} and B_y = Π_{0|y} − Π_{1|y}, i.e., that an observed eigenvalue of +1 or −1 corresponds respectively to outcomes 0 and 1.⁴ Writing the density operator as

|ϕ+⟩⟨ϕ+| = (1/4)(I ⊗ I + σ_x ⊗ σ_x − σ_y ⊗ σ_y + σ_z ⊗ σ_z) ,    (2.12)

we immediately find that the marginal expectation values vanish (⟨A_x⟩ = ⟨B_y⟩ = 0) and the joint outputs are correlated:⁵

⟨A_x B_y⟩ = (−1)^{xy} / √2 .    (2.13)

³ The four Bell states are |ϕ±⟩ = (|00⟩ ± |11⟩)/√2 and |ψ±⟩ = (|01⟩ ± |10⟩)/√2. They form a basis of the Hilbert space C² ⊗ C² and share the property of being maximally entangled.

⁴ We will resolve this inconsistency of outputs vs. observable eigenvalues in Section 2.2.

⁵ For the sake of brevity, we will often implicitly write local operators such as A_x ⊗ I and I ⊗ B_y as A_x and B_y, when it is clear on which part of the Hilbert space the operators act.


Translating correlators and marginal expectations to probabilities using the relation

p(ab|xy) = ⟨ [(I + (−1)^a A_x)/2] ⊗ [(I + (−1)^b B_y)/2] ⟩    (2.14)
         = (1/4) [1 + (−1)^a ⟨A_x⟩ + (−1)^b ⟨B_y⟩ + (−1)^{a⊕b} ⟨A_x B_y⟩] ,    (2.15)

we find that the success probability for the CHSH game is (2 + √2)/4 > 3/4. We will later show that this value is in fact the maximal success probability achievable by quantum devices.
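As a numerical check, the success probability (2 + √2)/4 ≈ 0.854 follows directly from the state (2.12), the observables (2.10)–(2.11), and the outcome projectors appearing in (2.14); for instance, in Python:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)              # |phi+> = (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)

A = [sz, sx]                                           # eq. (2.10)
B = [(sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)]   # eq. (2.11)

success = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                Ma = (I2 + (-1)**a * A[x]) / 2         # projector onto outcome a
                Nb = (I2 + (-1)**b * B[y]) / 2
                p = np.trace(rho @ np.kron(Ma, Nb)).real
                if (a ^ b) == (x & y):                 # winning condition
                    success += 0.25 * p                # uniform inputs: p(xy) = 1/4

print(success)  # (2 + sqrt(2))/4, about 0.8536
assert abs(success - (2 + np.sqrt(2)) / 4) < 1e-12
```

The marginals vanish and ⟨A_x B_y⟩ = (−1)^{xy}/√2, so each input pair is won with probability (1 + 1/√2)/2, matching (2.15).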

A note on purity

The state held by Alice and Bob is generally speaking a reduced state on their joint subsystem AB, derived from a larger, pure state. We may however assume that they hold a pure state simply by including a purifying system E in the description (E standing for “environment”). In some cases, we may further consider the purifying system to be part of either party’s subsystem (or both); we can then write their state without loss of generality as a pure state |ψ⟩_AB.

Similarly, the outcomes of a measurement process are generally characterized by a POVM, though again we may consider without loss of generality just the subset of projective measurements if we properly extend the subsystems A and B using Naimark’s theorem [54].

When we make no assumption on the dimension of the Hilbert space, we will therefore often consider without loss of generality that the underlying quantum realization of a device’s behavior results from projective measurements on a pure state.

Different flavors of the quantum set

So far, we have implicitly considered a restricted set of quantum behaviors. There is indeed a subtle distinction to be made between two different ways one can formalize a joint measurement in a Bell scenario [29, 56]. We previously assumed that the global Hilbert space had a tensor product structure, with identifiable components associated to each party, which allows us to think of each party’s measurement as a localized action on a global state. A more general formulation rests instead on the minimal requirement that the parties can make their measurements simultaneously. Formally, given projective measurements {Π_{a|x}} and {Π_{b|y}}, which now act indiscriminately on the entire Hilbert space, we assume the commutation relation

[Π_{a|x}, Π_{b|y}] = 0 .    (2.16)

This is obviously a weaker requirement than a tensor product structure, which automatically implies commutation. In finite-dimensional Hilbert spaces, it can be shown that the two assumptions lead to the same set of behaviors [57].
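As a trivial numerical illustration of why the tensor product structure implies (2.16): operators of the form A ⊗ I and I ⊗ B always commute, regardless of the local dimensions (chosen arbitrarily below):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_herm(d):
    """Random Hermitian matrix of dimension d (illustrative choice)."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

A = np.kron(rand_herm(3), np.eye(2))   # acts nontrivially on Alice's factor only
B = np.kron(np.eye(3), rand_herm(2))   # acts nontrivially on Bob's factor only

# Tensor product structure automatically yields the commutation relation (2.16).
assert np.allclose(A @ B, B @ A)
```

The converse is the subtle direction: commutation alone does not single out a tensor product decomposition in general.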

Let us call Q′ the set of quantum behaviors defined from the commutation assumption (2.16) rather than a tensor product structure as in (2.4).⁶ The two quantum sets satisfy the

⁶ Note that we use the notation of [45], rather than [56] where the meanings of Q′ and Q are interchanged.


inclusion relation Q ⊆ Q′. While it is now known that Q ≠ Q′ [58], a more operationally relevant question is whether Q̄, the closure of Q, is equal to Q′. Whether Q̄ = Q′ is still not known [59]; in other words, the existence of a finite gap between the boundaries of Q and Q′ is not ruled out. We will briefly come back to this problem in Section 2.3.1.

Not all results concerning quantum behaviors require a Hilbert space structured as a tensor product of subspaces. Nevertheless, physical situations are still often modeled as such a tensor product, which makes the set Q the most common expression of the set of quantum behaviors. We will often follow this practice for convenience, as it helps us explicitly distinguish which parts of the global state each party has access to, which is of particular importance in adversarial Bell scenarios. As noted earlier in footnote 5 though, our notation will remain somewhat flexible, as we will sometimes write tensor products to emphasize the separation of subsystems, and at other times drop the tensor products and assume commutation in order to lighten the notation.

2.1.4 No-signaling behaviors

Beyond quantum mechanics, the most general behavior in a Bell scenario is constrained only by the no-signaling constraints. In the bipartite case, this simply means that the marginal distributions are independent from the other side’s measurement setting: for all a, b, x, y,

Σ_{b′} p(ab′|xy) = p(a|x) ,    (2.17a)
Σ_{a′} p(a′b|xy) = p(b|y) .    (2.17b)

Without these constraints, one party’s choice of measurement setting could have an observable effect on the other party’s local behavior, which would allow transmission of information over spacelike intervals. Quantum behaviors in Q and Q′ are readily seen to be no-signaling.

There exist no-signaling behaviors that are incompatible with quantum mechanics. The canonical example of this is the PR box, named after Popescu and Rohrlich [30], whose probabilities are

p(ab|xy) = 1/2 if a ⊕ b = xy, and 0 otherwise.    (2.18)

The box admits marginals p(a|x) = p(b|y) = 1/2, satisfying the no-signaling constraints (2.17), and it allows Alice and Bob to win the CHSH game with probability 1, surpassing the quantum upper bound of (2 + √2)/4.
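A quick numerical check confirms both properties of the PR box: uniform marginals (hence no-signaling) and a CHSH winning probability of 1. A sketch:

```python
import numpy as np

# PR-box behavior, eq. (2.18), stored as p[a, b, x, y].
p = np.zeros((2, 2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):
                    p[a, b, x, y] = 0.5

# Marginals are uniform, so the no-signaling conditions (2.17) hold trivially.
pa = p.sum(axis=1)   # p(a|xy), shape (a, x, y)
pb = p.sum(axis=0)   # p(b|xy), shape (b, x, y)
assert np.allclose(pa, 0.5) and np.allclose(pb, 0.5)

# The CHSH game (uniform inputs) is won with certainty.
win = sum(0.25 * p[a, b, x, y]
          for x in (0, 1) for y in (0, 1)
          for a in (0, 1) for b in (0, 1) if (a ^ b) == (x & y))
print(win)  # -> 1.0
```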

2.1.5 Geometry of the behavior sets

In a given Bell scenario, the set of quantum behaviors Q is related to the no-signaling set N and the local set L by the inclusion relation L ⊆ Q ⊆ N. The no-signaling set is defined by a finite set of affine equalities and inequalities, namely, the no-signaling conditions (2.17), along with the natural requirements of normalization and nonnegativity of the behavior probabilities:

p(ab|xy) ≥ 0    ∀ a, b, x, y ,    (2.19)
Σ_{ab} p(ab|xy) = 1    ∀ x, y .    (2.20)

Behaviors in the local set satisfy the same constraints as the no-signaling set, but they are further required to admit a local hidden variable model (2.3).

Both the local and no-signaling sets are bounded convex polytopes. Indeed, the no-signaling set is delimited by a finite set of hyperplanes, and each probability p ( ab | xy ) is obviously constrained to the [ 0, 1 ] interval. As for the local set, it is easier to understand it as the convex hull of a finite set of points: each local behavior can indeed be decomposed as a convex combination of local deterministic behaviors (2.2), which constitute the vertices of the local set since they are themselves not decomposable. The number of vertices is a finite function of the number of parties, inputs and outputs in the Bell scenario, hence L is also a bounded convex polytope.

The quantum set, whose boundary lies between those of L and N, is however generally not a polytope, due to its nonlinear structure. Indeed, a behavior is quantum if and only if there exist a Hilbert space H, a (pure) state ρ and (projective) local measurements M_{a|x} ⊗ N_{b|y} such that p(ab|xy) can be written as in equation (2.4) as the probability of a measurement outcome expressed in terms of the state and measurement operators. It is still easy to show that this set is convex: given two quantum behaviors p and p′ realized respectively by (H, ρ, M_{a|x} ⊗ N_{b|y}) and (H′, ρ′, M′_{a|x} ⊗ N′_{b|y}), the convex sum of behaviors qp + (1 − q)p′ is realized by (H ⊕ H′, qρ ⊕ (1 − q)ρ′, [M_{a|x} ⊗ N_{b|y}] ⊕ [M′_{a|x} ⊗ N′_{b|y}]):

Tr[(qρ ⊕ (1 − q)ρ′)([M_{a|x} ⊗ N_{b|y}] ⊕ [M′_{a|x} ⊗ N′_{b|y}])]
    = q Tr[ρ (M_{a|x} ⊗ N_{b|y})] + (1 − q) Tr[ρ′ (M′_{a|x} ⊗ N′_{b|y})]    (2.21)
    = q p(ab|xy) + (1 − q) p′(ab|xy) .    (2.22)

There is no known fully general closed-form characterization of the shape of the boundary of the quantum set, although there exists a hierarchy of characterizations that converge to the quantum set from the outside, as we will see in Section 2.3. We will later characterize the boundary of a projection of the quantum set on the two-dimensional plane whose axes correspond to p(a|0) and the CHSH success probability in the bipartite 2-input 2-output scenario (see Figure 2.2 on page 34). We should also mention the characterization of the projection of the quantum set in the 4-dimensional space of correlators discovered by Tsirelson [29] and given an alternative formulation by Masanes [60], which we will visualize in a two-dimensional projection in Figure 5.6 (page 127).

2.2 Bell inequalities

In the previous section, we introduced the CHSH nonlocal game. Its formulation rested on a predicate function: the game is either won or lost depending on the observed inputs and outputs. Formally, we can write the predicate function as V(ab|xy) ∈ {0, 1}, and the success probability of the game is then

Pr[success] = Σ_{abxy} p(xy) p(ab|xy) V(ab|xy) ,    (2.23)

where p(xy) is the joint input distribution specified by the game.

More generally, we could allow V to output a score in R, and (2.23) would then represent the average score rather than a probability. Letting f_{abxy} = p(xy) V(ab|xy), we may write this expected score as

f[p] = Σ_{abxy} f_{abxy} p(ab|xy) .    (2.24)

This linear functional of the behavior p ≡ p ( ab | xy ) is called a Bell expression. The associated Bell inequality is the statement of the Bell expression’s upper (or lower) bound over all local behaviors.

In quantum realizations, we define the Bell operator for a Bell expression as the observable

I = Σ_{abxy} f_{abxy} M_{a|x} ⊗ N_{b|y} ,    (2.25)

such that f[p] = Tr[ρ_AB I] = ⟨I⟩.

2.2.1 CHSH inequality

Let us consider the Bell expression derived from the CHSH game. By convention, we will substitute the predicate function I[a ⊕ b = xy] with the scoring function

V(ab|xy) = 4 × (−1)^{a⊕b⊕xy} = 8 I[a ⊕ b = xy] − 4 .    (2.26)

The resulting CHSH Bell expression is then

I_chsh[p] = Σ_{abxy} (−1)^{a⊕b⊕xy} p(ab|xy) = 8 Pr[success] − 4 .    (2.27)

The affine equivalence with the nonlocal game directly tells us that the CHSH inequality is I_chsh[p] ≤ 2, while the quantum system we presented in Section 2.1.3 reaches I_chsh[p] = 2√2, a violation of the CHSH inequality.
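The local bound I_chsh ≤ 2 can again be verified by enumerating the deterministic vertices of L, over which the linear expression (2.27) attains its maximum; for instance:

```python
from itertools import product

# For a deterministic strategy (a_0, a_1, b_0, b_1), eq. (2.27) reduces to
# sum over (x, y) of (-1)^((a_x XOR b_y) XOR xy).
best = max(
    sum((-1) ** ((a[x] ^ b[y]) ^ (x & y)) for x in (0, 1) for y in (0, 1))
    for a in product([0, 1], repeat=2)
    for b in product([0, 1], repeat=2)
)
print(best)  # -> 2, the local bound of the CHSH inequality
```

This matches the affine relation I_chsh = 8 Pr[success] − 4 applied to the 3/4 local bound on the game.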

This latter value of 2 √ 2 is known as Tsirelson’s bound. As it turns out, no quantum Bell device can reach a higher CHSH value than this. There are several ways to prove this, one of the simplest being the following one given by Khalfin and Tsirelson [61]. If we consider (without loss of generality) projective measurements and introduce the local observables A x = Π 0 | x − Π 1 | x and B y = Π 0 | y − Π 1 | y on Alice and Bob’s subsystems respectively, we find that the CHSH operator takes the form

\[ I_{\mathrm{chsh}} = A_0 B_0 + A_0 B_1 + A_1 B_0 - A_1 B_1. \tag{2.28} \]


Using the property that the observables A_x and B_y have their eigenvalues in {±1} and that Alice's observables commute with Bob's, we can square the CHSH operator to find

\[ I_{\mathrm{chsh}}^2 = 4\mathbb{1} - A_0 A_1 B_0 B_1 - A_1 A_0 B_1 B_0 + A_0 A_1 B_1 B_0 + A_1 A_0 B_0 B_1 \tag{2.29} \]
\[ \phantom{I_{\mathrm{chsh}}^2} = 4\mathbb{1} - [A_0, A_1][B_0, B_1]. \tag{2.30} \]

Since both commutators in the last term are differences of two operators whose norms are upper-bounded by 1, each commutator has norm at most 2, and we find that ‖I²_chsh‖ ≤ 8. Hence, ‖I_chsh‖ ≤ 2√2, which implies I_chsh[p] ≤ 2√2 for all quantum systems. In Chapter 3, we will show a different proof of Tsirelson's bound that similarly uses the algebraic properties of the measurement operators to show that 2√2·𝟙 − I_chsh ⪰ 0.
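The algebraic identity (2.30) and the resulting norm bound are easy to test numerically for randomly generated ±1-valued observables (a sketch assuming numpy; `random_observable` is a helper defined here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_observable(d):
    """Random Hermitian matrix with eigenvalues in {±1} (binary observable)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(G)                    # random unitary
    signs = np.diag(rng.choice([-1.0, 1.0], size=d))
    return Q @ signs @ Q.conj().T

d = 3
A0, A1 = random_observable(d), random_observable(d)
B0, B1 = random_observable(d), random_observable(d)

bell = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))

# Identity (2.30): I² = 4·1 − [A0, A1][B0, B1], with the commutators
# acting on Alice's and Bob's subsystems respectively.
comm = lambda P, Q: P @ Q - Q @ P
rhs = 4 * np.eye(d * d) - np.kron(comm(A0, A1), comm(B0, B1))
print(np.allclose(bell @ bell, rhs))          # True

# The operator norm never exceeds Tsirelson's bound 2√2.
print(np.linalg.norm(bell, 2) <= 2 * np.sqrt(2) + 1e-9)  # True
```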

For ease of notation, the CHSH inequality is often formulated in a scenario where the outputs a and b take values in {+1, −1} instead of {0, 1}. This allows us to directly identify the measurement outcomes with the eigenvalues of the measurement operators: we can then say that given measurement settings x and y, the outcomes a and b directly result from the joint measurement of the (commuting) observables A_x and B_y. (In contrast, labeling the outputs in {0, 1} is more natural in the nonlocal game formulation of the CHSH inequality, where the winning condition can be written as the Boolean expression a ⊕ b = xy instead of ab = (−1)^{xy}.)

As such, we may easily connect the quantum formulation of the CHSH operator to a generic formulation in terms of expected values,

\[ I_{\mathrm{chsh}}[p] = E(AB|00) + E(AB|01) + E(AB|10) - E(AB|11), \tag{2.31} \]

where E(AB|xy) is short for E(AB | X = x, Y = y).

Strictly speaking, of the quantum (2.28) and generic (2.31) forms, only the latter extends to no-signaling correlations (not all of which can be realized within quantum mechanics). However, we will sometimes define Bell expressions in terms of the simpler quantum notation, with the understanding that it directly translates to a generic no-signaling form like (2.31).

Likewise, we will also sometimes borrow the quantum notation for expectation values: instead of I_chsh[p] we may choose to write ⟨I_chsh⟩.

2.2.2 Tilted-CHSH inequalities

In the same scenario as before, with measurement outcomes in {±1}, the CHSH expression can be modified by adding a single marginal term. The resulting family of tilted-CHSH expressions I_β was introduced in [62]. These expressions are defined by the family of Bell operators

\[ I_\beta = \beta A_0 + A_0 B_0 + A_0 B_1 + A_1 B_0 - A_1 B_1, \tag{2.32} \]

where β ∈ [0, 2[ is called the tilting parameter and β = 0 recovers the CHSH operator.⁷ The tightest local upper bound for the tilted-CHSH expression is I_β[p] ≤ 2 + β, which can be saturated using the subset of the deterministic strategies that saturate the CHSH inequality and assign a_0 = 1, for instance a_0 = a_1 = b_0 = b_1 = 1.

⁷ This is in fact only a subset of the family of expressions introduced in [62]; the subfamily we consider here is written I¹_β in the original notation.
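The local bound 2 + β can be confirmed by enumerating all 16 deterministic strategies (an illustrative sketch of my own, not from the thesis):

```python
from itertools import product

def tilted_chsh(beta, a, b):
    """I_β = β·a0 + a0·b0 + a0·b1 + a1·b0 − a1·b1 for deterministic
    outcomes a = (a0, a1), b = (b0, b1) in {±1}."""
    (a0, a1), (b0, b1) = a, b
    return beta * a0 + a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1

for beta in (0.0, 0.5, 1.0, 1.5):
    best = max(tilted_chsh(beta, a, b)
               for a in product((-1, 1), repeat=2)
               for b in product((-1, 1), repeat=2))
    assert best == 2 + beta  # local bound, reached e.g. at a0=a1=b0=b1=1
print("I_beta <= 2 + beta holds over all 16 deterministic strategies")
```

Note that the maximizing strategies all have a_0 = 1, since the marginal term βA_0 penalizes a_0 = −1.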

