
In the document A Course in Cryptography (Page 33-38)

Computational Hardness

2.1 Efficient Computation and Efficient Adversaries

We start by formalizing what it means for an algorithm to compute a function.

Definition 21.1 (Algorithm). An algorithm is a deterministic Turing machine whose input and output are strings over the alphabet Σ = {0, 1}.

Definition 21.2 (Running time). An algorithm A is said to run in time T(n) if for all x ∈ {0, 1}*, A(x) halts within T(|x|) steps. A runs in polynomial time if there exists a constant c such that A runs in time T(n) = n^c.

Definition 21.3 (Deterministic Computation). An algorithm A is said to compute a function f : {0, 1}* → {0, 1}* if for all x ∈ {0, 1}*, A, on input x, outputs f(x).

We say that an algorithm is efficient if it runs in polynomial time. One may argue about the choice of polynomial time as a cutoff for efficiency, and indeed if the polynomial involved is large, computation may not be efficient in practice. There are, however, strong arguments to use the polynomial-time definition of efficiency:


1. This definition is independent of the representation of the algorithm (whether it is given as a Turing machine, a C program, etc.) because converting from one representation to another only affects the running time by a polynomial factor.

2. This definition is also closed under composition, which may simplify reasoning in certain proofs.

3. Our experience suggests that polynomial-time algorithms turn out to be efficient; i.e. polynomial almost always means “cubic time or better.”

4. Our experience indicates that “natural” functions that are not known to be computable in polynomial time require much more time to compute, so the separation we propose seems well-founded.

Note that our treatment of computation is an asymptotic one. In practice, concrete running time needs to be considered carefully, as do other hidden factors such as the size of the description of A. Thus, when porting theory to practice, one needs to set parameters carefully.

2.1.1 Some computationally “hard” problems

Many commonly encountered functions are computable by efficient algorithms. However, there are also functions which are known or believed to be hard.

Halting: The famous Halting problem is an example of an uncomputable problem: Given a description of a Turing machine M, determine whether or not M halts when run on the empty input.

Time-hierarchy: The Time Hierarchy Theorem from complexity theory states that there exist languages that are decidable in time O(t(n)) but cannot be decided in time o(t(n)/ log t(n)). A corollary of this theorem is that there are functions f : {0, 1}* → {0, 1} that are computable in exponential time but not computable in polynomial time.


Satisfiability: The notorious SAT problem is to determine if a given Boolean formula has a satisfying assignment. SAT is conjectured not to be solvable in polynomial time; this is the famous conjecture that P ≠ NP. See Appendix B for definitions of P and NP.
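To make the conjectured hardness concrete, the only obvious algorithm for SAT tries every assignment. The sketch below (my own illustration; the CNF encoding by signed integers is an assumption, not the book's formalism) makes the exponential 2^n search explicit:

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Decide satisfiability of a CNF formula by trying all 2^n assignments.

    `clauses` is a list of clauses; each clause is a list of literals, where
    literal i > 0 means variable i and i < 0 means its negation.
    Running time is O(2^n * |formula|): exponential in the number of variables.
    """
    for assignment in product([False, True], repeat=num_vars):
        # A clause is satisfied if at least one of its literals is true.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True  # found a satisfying assignment
    return False

# (x1 or not x2) and (x2 or x3): satisfiable, e.g. by x1 = x2 = x3 = True
print(brute_force_sat(3, [[1, -2], [2, 3]]))   # True
# x1 and not x1: unsatisfiable
print(brute_force_sat(1, [[1], [-1]]))         # False
```

The P ≠ NP conjecture asserts, in effect, that no algorithm does fundamentally better than this exhaustive search for all inputs, up to polynomial savings.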

2.1.2 Randomized Computation

A natural extension of deterministic computation is to allow an algorithm to have access to a source of random coin tosses.

Allowing this extra freedom is certainly plausible (as it is conceivable to generate such random coins in practice), and it is believed to enable more efficient algorithms for computing certain tasks. Moreover, it will be necessary for the security of the schemes that we present later. For example, as we discussed in chapter one, Kerckhoff’s principle states that all algorithms in a scheme should be public. Thus, if the private key generation algorithm Gen did not use random coins in its computation, then Eve would be able to compute the same key that Alice and Bob compute. To allow for this extra resource, we therefore extend the above definitions of computation as follows.

Definition 23.4 (Randomized (PPT) Algorithm). A randomized algorithm, also called a probabilistic polynomial-time Turing machine and abbreviated PPT, is a Turing machine equipped with an extra random tape. Each bit of the random tape is uniformly and independently chosen.

Equivalently, a randomized algorithm is a Turing Machine that has access to a coin-tossing oracle that outputs a truly random bit on demand.

To define efficiency we must clarify the concept of running time for a randomized algorithm. A subtlety arises because the running time of a randomized algorithm may depend on the particular random tape chosen for an execution. We take a conservative approach and define the running time as the upper bound over all possible random tapes.

Definition 23.5 (Running time). A randomized Turing machine A runs in time T(n) if for all x ∈ {0, 1}*, and for every random tape, A(x) halts within T(|x|) steps. A runs in polynomial time (or is an efficient randomized algorithm) if there exists a constant c such that A runs in time T(n) = n^c.

Finally, we must also extend our notion of computation to randomized algorithms. In particular, once an algorithm has a random tape, its output becomes a distribution over some set. In the case of deterministic computation, the output is a singleton set, and this is what we require here as well.

Definition 24.6. A randomized algorithm A computes a function f : {0, 1}* → {0, 1}* if for all x ∈ {0, 1}*, A, on input x, outputs f(x) with probability 1. The probability is taken over the random tape of A.

Notice that with randomized algorithms, we do not tolerate algorithms that on rare occasion make errors. Formally, this requirement may be too strong, because some of the algorithms that we use in practice (e.g., primality testing) do err with negligible probability. In the rest of the book, however, we ignore this rare case and assume that a randomized algorithm always works correctly.

On a side note, it is worthwhile to note that a polynomial-time randomized algorithm A that computes a function with probability 1/2 + 1/poly(n) can be used to obtain another polynomial-time randomized machine A' that computes the function with probability 1 − 2^(−n). (A' simply takes multiple runs of A and finally outputs the most frequent output of A. The Chernoff bound (see Appendix A) can then be used to analyze the probability with which such a “majority” rule works.)
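This amplification argument can be simulated. In the sketch below (a toy model of my own; the biased `weak_algorithm` computing a parity stands in for A), the amplified machine runs the weak one many times and outputs the majority answer, exactly as the Chernoff-bound argument prescribes:

```python
import random
from collections import Counter

def weak_algorithm(x, p=0.75):
    """Stand-in for A: outputs the correct answer f(x) with probability p
    (here f(x) is the parity of x's bits) and a wrong answer otherwise."""
    correct = sum(x) % 2
    return correct if random.random() < p else 1 - correct

def amplified(x, runs=101):
    """Stand-in for A': run the weak algorithm `runs` times independently
    and output the majority answer. By the Chernoff bound, the error
    probability decreases exponentially in `runs`."""
    answers = Counter(weak_algorithm(x) for _ in range(runs))
    return answers.most_common(1)[0][0]

x = [1, 0, 1, 1]      # parity of x is 1
print(amplified(x))   # overwhelmingly likely to print 1
```

With p = 0.75 and 101 runs, the Chernoff bound gives an error probability of at most exp(−2 · 101 · (1/4)^2), i.e. on the order of 10^(−6); increasing `runs` linearly drives the error down exponentially, which is how the 1 − 2^(−n) bound is reached.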

Polynomial-time randomized algorithms will be the principal model of efficient computation considered in this course. We will refer to this class of algorithms as probabilistic polynomial-time Turing machines (p.p.t.) or efficient randomized algorithms, interchangeably.

Given the above notation we can define the notion of an efficient encryption scheme:


Definition 24.7 (Efficient Private-key Encryption). A triplet of algorithms (Gen, Enc, Dec) is called an efficient private-key encryption scheme if the following holds:

1. k ← Gen(1^n) is a p.p.t. algorithm such that for every n ∈ N, it samples a key k.

2. c ← Enc_k(m) is a p.p.t. algorithm that, given k and m ∈ {0, 1}^n, produces a ciphertext c.

3. m ← Dec_k(c) is a p.p.t. algorithm that, given a ciphertext c and a key k, produces a message m ∈ {0, 1}^n ∪ {⊥}.

4. For all n ∈ N, m ∈ {0, 1}^n,

Pr[k ← Gen(1^n) : Dec_k(Enc_k(m)) = m] = 1
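For concreteness, the one-time pad from the first chapter satisfies this definition; below is a minimal Python sketch (the bit-list encoding and the simulation of random coins with Python's PRNG are my own illustration, not the book's formalism):

```python
import random

def Gen(n):
    """Gen(1^n): sample a uniformly random n-bit key."""
    return [random.getrandbits(1) for _ in range(n)]

def Enc(k, m):
    """Enc_k(m): one-time-pad encryption of an n-bit message, bitwise XOR."""
    assert len(k) == len(m)
    return [ki ^ mi for ki, mi in zip(k, m)]

def Dec(k, c):
    """Dec_k(c): XOR with the key again, so Dec_k(Enc_k(m)) = m always."""
    assert len(k) == len(c)
    return [ki ^ ci for ki, ci in zip(k, c)]

# The correctness condition of Definition 24.7 holds with probability 1
# over the choice of key:
n = 8
k = Gen(n)
m = [1, 0, 1, 1, 0, 0, 1, 0]
print(Dec(k, Enc(k, m)) == m)   # True, for every key Gen may sample
```

Note that all three algorithms run in time polynomial in n, as the definition requires, and that Gen is essential as a source of random coins: a deterministic Gen would let Eve recompute the key.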

Notice that the Gen algorithm is given the special input 1^n, called the security parameter, which represents the string consisting of n copies of 1, e.g., 1^4 = 1111. This security parameter is used to instantiate the “security” of the scheme; larger parameters correspond to more secure schemes. The security parameter also establishes the running time of Gen, and therefore the maximum size of k, and thus the running times of Enc and Dec as well.

Stating that these three algorithms are “polynomial-time” is always with respect to the size of their respective inputs.

In the rest of this book, when discussing encryption schemes we always refer to efficient encryption schemes. As a departure from our notation in the first chapter, here we no longer refer to a message space M or a key space K, because we assume that both are bit strings. In particular, on security parameter 1^n, our definition requires a scheme to handle n-bit messages. It is also possible, and perhaps simpler, to define an encryption scheme that only works on a single-bit message space M = {0, 1} for every security parameter.

2.1.3 Efficient Adversaries

When modeling adversaries, we use a more relaxed notion of efficient computation. In particular, instead of requiring the adversary to be a machine with a constant-sized description, we allow the size of the adversary’s program to increase (polynomially) with the input length, i.e., we allow the adversary to be non-uniform. As before, we still allow the adversary to use random coins and require that the adversary’s running time is bounded by a polynomial. The primary motivation for using non-uniformity to model the adversary is to simplify definitions and proofs.

Definition 26.8 (Non-Uniform PPT). A non-uniform probabilistic polynomial-time machine (abbreviated n.u. p.p.t.) A is a sequence of probabilistic machines A = {A_1, A_2, . . .} for which there exists a polynomial d such that the description size |A_i| and the running time of A_i are both less than d(i). We write A(x) to denote the distribution obtained by running A_{|x|}(x).

Alternatively, a non-uniform p.p.t. machine can also be defined as a uniform p.p.t. machine A that receives an advice string for each input length. In the rest of this text, any adversarial algorithm A will implicitly be a non-uniform p.p.t.
