
We demonstrated a 24-hour relativistic bit commitment with standard computing power in a realistic and potentially useful scenario. Our implementation could in practice be extended to a large range of spatial configurations and longer commitment times. Limitations arise in scenarios with short distances between the two agents of Bob, which are the most demanding in terms of resources, e.g. a large amount of data, fast data processing and very low timing uncertainty. Scenarios with larger L, such as case 2 of Table 1.1, require far fewer resources and allow more flexibility in the choice of the time constraint τ and the time margin τ_M. Indeed, case 2 could correspond to a scenario with one agent of Bob in the center of the US and the other in the center of China. With τ and τ_M extended to 20 ms and 1 ms respectively, the agents of Alice could be located within a large region of these two countries. Following this example, we can imagine networks formed by several interconnected nodes allowing commitments from parties all over the world.

⁴ The transfer of data from HDD to FPGA was done with a PCIe Gen1 x1 link.
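The interplay between the agent separation L and the timing parameters τ and τ_M can be sketched numerically. The snippet below is a minimal illustration, assuming the simplified constraint that a round of duration τ plus the margin τ_M must fit within the one-way light travel time L/c (an illustrative form, not the exact bound of the protocol); the 11,000 km separation is a hypothetical figure for the US–China scenario of case 2.

```python
# Sketch: checking a simplified relativistic timing constraint for a
# chosen agent separation. The form tau + tau_M < L/c is an assumption
# made for illustration, not the protocol's exact security bound.
C = 299_792_458.0  # speed of light in vacuum, m/s


def light_travel_time(L_m: float) -> float:
    """One-way light travel time (s) over a straight-line distance L."""
    return L_m / C


def timing_ok(L_m: float, tau: float, tau_margin: float) -> bool:
    """True if a round of duration tau plus its margin fits within L/c."""
    return tau + tau_margin < light_travel_time(L_m)


# Hypothetical separation of ~11,000 km, with tau = 20 ms, tau_M = 1 ms.
L = 11_000e3
print(light_travel_time(L) * 1e3)  # ~36.7 ms available per round
print(timing_ok(L, 20e-3, 1e-3))   # True: 21 ms < 36.7 ms
```

With a short separation of 100 km the available window shrinks to ~0.33 ms, which is why short-distance configurations are the most demanding ones.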

On the downside, the commitment of a single bit requires a large amount of random data, e.g. 162 GB in our case. This could present a practical issue for applications which require many committed bits. A string commitment which uses the same resources as its bit version would be compelling. The extension to string commitment was investigated in [50, 54], but required different binding assumptions.

The security of the protocol relies on the perfect synchronization of the exchanges between the agents of the different parties. In our experiment, the communication between agents Bi and Ai was made over a 1 m dedicated optical fibre. In more realistic scenarios, however, these exchanges will be made through classical communication networks, which are prone to failures such as delays in packet delivery.

If one agent Ai fails, even once, to reply within the time constraint τ, the whole protocol is aborted. To overcome this issue, K. Chakraborty et al. [55] proposed a modified version of the multi-round relativistic protocol, with three agents per party.

Their protocol was proven robust to losses and delays of classical communication networks.

Finally, both the bit and string multi-round protocols were proven secure against classical attacks. Security against quantum attacks was proven in the single-round case, but remains an open question for the multi-round case [30, 49, 50].

Chapter 2

Measurement-Device-Independent Entanglement Witness

Quantum entanglement plays a key role in many of the most interesting applications of quantum communication and quantum information [2]. Developing ways to characterize the generation and distribution of entanglement is therefore crucial. How can we be sure that entanglement was indeed generated? How can we detect entanglement and quantify it? These questions can be challenging to answer, and various methods have been proposed to tackle this problem. The choice of method depends on the system to be characterised and on the desired properties of the entanglement verification scheme, e.g. ease of implementation, efficiency, robustness against noise, etc. An important property, which has often been overlooked in past experiments, is the sensitivity to measurement errors and its effect on the conclusion drawn about the entanglement. In this chapter, I present methods to certify entanglement which are robust to measurement errors, and I demonstrate the practical implementation of a measurement-device-independent entanglement witness (MDI-EW) able to certify the entanglement of all entangled states.

2.1 Resource-efficient MDI-EW

There are three standard approaches to characterize entanglement: quantum state tomography (QST) [56, 57], entanglement witnesses [58], and Bell inequalities [9, 59]. I now briefly describe each approach and how it is affected by measurement imperfections.

Quantum state tomography techniques enable us to fully characterise the state of a quantum system. In QST, multiple copies of the unknown state are measured in a set of tomographically complete bases. The density matrix of the state is then reconstructed from the acquired data. The degree of entanglement can be computed directly from the density matrix. The drawback of QST is that experimental errors often lead to the reconstruction of a non-physical density matrix [60].

Several correcting techniques have been developed to obtain a physical state which matches the observed data as closely as possible [56, 61, 62], but they can lead to an overestimation of the degree of entanglement [63].
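The failure mode above can be made concrete for a single qubit. The sketch below reconstructs ρ = (I + ⟨x⟩σx + ⟨y⟩σy + ⟨z⟩σz)/2 by linear inversion from noisy Pauli expectation values (the numbers are illustrative, not experimental data) and then applies a crude correction, in the spirit of the cited techniques, that clips negative eigenvalues and renormalises.

```python
import numpy as np

# Single-qubit illustration: linear-inversion tomography can yield a
# non-physical (negative) density matrix when the measured Pauli
# expectation values are noisy. Values below are made up for the demo.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)


def linear_inversion(ex, ey, ez):
    """rho = (I + <x> sx + <y> sy + <z> sz) / 2."""
    return 0.5 * (I2 + ex * sx + ey * sy + ez * sz)


# Noisy estimates whose Bloch vector has length > 1: not positive.
rho = linear_inversion(0.75, 0.40, 0.65)
print(np.linalg.eigvalsh(rho))  # smallest eigenvalue is negative


def project_to_physical(rho):
    """Clip negative eigenvalues and renormalise (a crude correction)."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0, None)
    vals /= vals.sum()
    return (vecs * vals) @ vecs.conj().T


rho_phys = project_to_physical(rho)
print(np.linalg.eigvalsh(rho_phys))  # eigenvalues now in [0, 1]
```

The corrected state is physical, but — as the references warn — such post-processing can bias estimated quantities like the degree of entanglement.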

Entanglement witnesses are the most frequently used tools for the certification of entanglement. The distinction between entangled and separable states is usually made as follows: for any entangled state ρ, there exists a Hermitian operator W, called an entanglement witness, such that tr[Wρ] < 0, while tr[Wσ] ≥ 0 for all separable states σ. To determine the value tr[Wρ] experimentally, the operator W is decomposed as a linear combination of product Hermitian operators, which are then estimated independently from local measurements on the subsystems of ρ. Here too, measurement errors can lead to an inexact estimate of tr[Wρ], and possibly to the wrong conclusion about the presence of entanglement [64, 65]. Moreover, entanglement witnesses are not quantitative.
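As a concrete illustration of the sign rule tr[Wρ] < 0 versus tr[Wσ] ≥ 0, the sketch below evaluates the standard two-qubit witness W = I/2 − |Φ⁺⟩⟨Φ⁺| (a textbook example, not the witness used in this work) on an entangled and a separable state.

```python
import numpy as np

# Textbook witness W = I/2 - |Phi+><Phi+|: tr[W sigma] >= 0 for every
# separable sigma, while states close to |Phi+> give tr[W rho] < 0.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
W = 0.5 * np.eye(4) - np.outer(phi_plus, phi_plus.conj())


def witness_value(rho):
    """tr[W rho]; a negative value certifies entanglement."""
    return np.real(np.trace(W @ rho))


rho_ent = np.outer(phi_plus, phi_plus.conj())  # maximally entangled state
ket01 = np.zeros(4, dtype=complex)
ket01[1] = 1
rho_sep = np.outer(ket01, ket01.conj())        # separable product state |01>

print(witness_value(rho_ent))  # -0.5: entanglement detected
print(witness_value(rho_sep))  #  0.5: no detection, as required
```

In an experiment W would be decomposed into local Pauli products and each term estimated separately, which is exactly where measurement errors enter.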

Bell inequalities can only be violated by measurements performed on non-local states [66]. Hence, a violation of a Bell inequality faithfully certifies the presence of entanglement, regardless of the actual measurements performed and of possible errors on them. On the downside, this method cannot detect all entangled states. Indeed, all non-local states are entangled, but certain entangled states generate correlations which do not violate any Bell inequality [66, 67]. For example, a Werner state of the form

ρ = λ|Ψ⁺⟩⟨Ψ⁺| + (1 − λ) I/4,    (2.1)

where |Ψ⁺⟩ is a maximally entangled state and I/4 is white noise, is entangled for visibilities λ higher than 1/3. However, the correlations it generates only violate the CHSH inequality [68] for λ > 1/√2.
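The two thresholds quoted for the Werner state can be verified numerically. The sketch below uses the Peres–Horodecki (PPT) criterion, which for two qubits detects entanglement exactly, and the Werner state's maximal CHSH value 2√2·λ (a known closed form for this family).

```python
import numpy as np

# Thresholds for the Werner state rho = lam*|Psi+><Psi+| + (1-lam)*I/4:
# entangled for lam > 1/3, CHSH-violating only for lam > 1/sqrt(2).
psi_plus = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)


def werner(lam):
    return lam * np.outer(psi_plus, psi_plus.conj()) + (1 - lam) * np.eye(4) / 4


def is_entangled_ppt(rho):
    """Peres-Horodecki criterion: for two qubits, rho is entangled iff
    its partial transpose has a negative eigenvalue."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(pt)) < -1e-12


def chsh_max(lam):
    """Maximal CHSH value of the Werner state: 2*sqrt(2)*lam."""
    return 2 * np.sqrt(2) * lam


print(is_entangled_ppt(werner(0.5)))  # True: entangled (0.5 > 1/3)...
print(chsh_max(0.5) > 2)              # False: ...but no CHSH violation
print(chsh_max(0.8) > 2)              # True: lam > 1/sqrt(2) violates
```

The gap between the two thresholds (1/3 < λ ≤ 1/√2) is precisely the regime of entangled states that Bell tests miss, motivating the MDI approach of the next section.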

In 2012, F. Buscemi [69] proposed a solution which overcomes the sensitivity to measurement errors while detecting all entangled states. He introduced Bell-like tests where the classical inputs, which specify the measurements to be performed, are replaced by trusted quantum inputs. Interestingly, he showed that in such tests, called semi-quantum non-local games, all entangled states can be distinguished from separable ones. This result immediately led to the derivation of measurement-device-independent entanglement witnesses (MDI-EW) [19], which were shown to be tolerant to losses and robust against measurement errors. While these MDI-EWs relax the requirement on measurement accuracy, they demand the extra generation of trusted quantum inputs, thereby adding experimental challenges.

Following semi-quantum non-local scenarios, a natural way to implement an MDI-EW for the detection of bipartite photonic entanglement is shown in Figure 2.1 (top).

The quantum inputs are heralded single photons prepared by trusted Linear Optic Circuits (LOC). Alice, and similarly Bob, mixes the quantum input state with her part of the shared entangled state ρAB on a 50:50 beam-splitter to perform a Bell state measurement (BSM). The quantum input photons and the photons of the entangled state must be indistinguishable in all degrees of freedom, apart from the polarization, at each BSM. Thus, this implementation is very demanding