
Private-Key Encryption and Pseudorandomness

3.1 A Computational Approach to Cryptography

In the previous two chapters we have studied what can be called classical cryptography. We began with a brief look at some historical ciphers, with a focus on how they can be broken and what can be learned from these attacks. In Chapter 2, we then proceeded to present cryptographic schemes that can be mathematically proven secure (with respect to some particular definition of security), even when the adversary has unlimited computational power. Such schemes are called information-theoretically secure, or perfectly secure, because their security is due to the fact that the adversary simply does not have enough “information”[1] to succeed in its attack, regardless of the adversary’s computational power. In particular, as we have discussed, the ciphertext in a perfectly-secret encryption scheme does not contain any information about the plaintext (assuming the key is unknown).

[1] The term “information” has a rigorous, mathematical meaning. However, we use it here in an informal manner.


Information-theoretic security stands in stark contrast to computational security, which is the aim of most modern cryptographic constructions. Restricting ourselves to the case of private-key encryption (though everything we say applies more generally), modern encryption schemes have the property that they can be broken given enough time and computation, and so they do not satisfy Definition 2.1. Nevertheless, under certain assumptions, the amount of computation needed to break these encryption schemes would take more than many lifetimes to carry out, even using the fastest available supercomputers.

For all practical purposes, this level of security suffices.

Computational security is weaker than information-theoretic security. It also currently[2] relies on assumptions, whereas no assumptions are needed to achieve the latter (as we have seen in the case of encryption). Even granting the fact that computational security suffices for all practical purposes, why do we give up on the idea of achieving perfect security? The results of Section 2.3 give one reason why modern cryptography has taken this route. In that section, we showed that perfectly-secret encryption schemes suffer from a severe lower bound on the key length; namely, the key must be as long as the combined length of all messages ever encrypted using that key. Similar negative results hold for other cryptographic tasks when information-theoretic security is required. Thus, despite its mathematical appeal, it is necessary to compromise on perfect security in order to obtain practical cryptographic schemes. We stress that although we cannot obtain perfect security, this does not mean that we do away with the rigorous mathematical approach; definitions and proofs are still essential, and it is only that we now consider weaker (but still meaningful) definitions of security.

3.1.1 The Basic Idea of Computational Security

Kerckhoffs is best known for his principle that cryptographic designs should be made public. However, he actually spelled out six principles, the following of which is very relevant to our discussion here:

A [cipher] must be practically, if not mathematically, indecipherable.

Although he could not have stated it in this way at the time, this principle of Kerckhoffs essentially says that it is not necessary to use a perfectly-secret encryption scheme; instead, it suffices to use a scheme that cannot be broken in “reasonable time” with any “reasonable probability of success” (in Kerckhoffs’ language, a scheme that is “practically indecipherable”). In more concrete terms, it suffices to use an encryption scheme that can (in theory) be broken, but that cannot be broken with probability better than 10^-30 in 200 years using the fastest available supercomputer.

[2] In theory, it is possible that these assumptions might one day be removed (though this will require, in particular, proving that P ≠ NP). Unfortunately, however, our current state of knowledge requires us to make assumptions in order to prove computational security of any cryptographic construction.

In this section we present a framework for making formal statements about cryptographic schemes that are “practically unbreakable”.

The computational approach incorporates two relaxations of the notion of perfect security:

1. Security is only preserved against efficient adversaries, and

2. Adversaries can potentially succeed with some very small probability (small enough so that we are not concerned that it will ever really happen).

To obtain a meaningful theory, we need to precisely define what is meant by the above. There are two common approaches for doing so: the concrete approach and the asymptotic approach. We explain these now.

The concrete approach. The concrete approach quantifies the security of a given cryptographic scheme by bounding the maximum success probability of any adversary running for at most some specified amount of time. That is, let t, ε be positive constants with ε ≤ 1. Then, roughly speaking:

A scheme is (t, ε)-secure if every adversary running for time at most t succeeds in breaking the scheme with probability at most ε.

(Of course, the above serves only as a general template, and for the above statement to make sense we need to define exactly what it means to “break” the scheme.) As an example, one might want to use a scheme with the guarantee that no adversary running for at most 200 years using the fastest available supercomputer can succeed in breaking the scheme with probability better than 10^-30. Or, it may be more convenient to measure running time in terms of CPU cycles, and to use a scheme such that no adversary running for at most 2^80 cycles can break the scheme with probability better than 2^-64.

It is instructive to get a feel for values of t, ε that are typical of modern cryptographic schemes.

Example 3.1

Modern private-key encryption schemes are generally assumed to give almost optimal security in the following sense: when the key has length n, an adversary running in time t (measured in, say, computer cycles) can succeed in breaking the scheme with probability t/2^n. (We will see later why this is indeed optimal.) Computation on the order of t = 2^60 is barely in reach today.

Running on a 1GHz computer, 2^60 CPU cycles require 2^60/10^9 seconds, or about 35 years. Using many supercomputers in parallel may bring this down to a few years.

A typical value for the key length, however, might be n = 128. The difference between 2^60 and 2^128 is a multiplicative factor of 2^68, which is a number containing about 21 decimal digits. To get a feeling for how big this is, note that according to physicists’ estimates the number of seconds since the big bang is on the order of 2^58.

An event that occurs once every hundred years can be roughly estimated to occur with probability 2^-30 in any given second. Something that occurs with probability 2^-60 in any given second is 2^30 times less likely, and might be expected to occur roughly once every 100 billion years. ♦

The concrete approach can be useful in practice, since concrete guarantees of the above type are typically what users of a cryptographic scheme are ultimately interested in. However, one must be careful in interpreting concrete security guarantees. As one example, if it is claimed that no adversary running for 5 years can break a given scheme with probability better than ε, we still must ask: what type of computing power (e.g., desktop PC, supercomputer, network of hundreds of computers) does this assume? Does this take into account future advances in computing power (which, by Moore’s Law, roughly doubles every 18 months)? Does this assume “off-the-shelf” algorithms will be used or dedicated software optimized for the attack? Furthermore, such a guarantee says little about the success probability of an adversary running for 2 years (other than the fact that it can be at most ε) and says nothing about the success probability of an adversary running for 10 years.
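Concrete estimates of this sort are easy to double-check mechanically. The short script below (our own sketch; the constants and names are illustrative, not from the text) reproduces the arithmetic of Example 3.1:

```python
# Sanity-checking the arithmetic behind Example 3.1.
CYCLES_PER_SECOND = 10**9            # a 1GHz machine
SECONDS_PER_YEAR = 365 * 24 * 3600

# Time to execute 2^60 cycles on a 1GHz machine:
years = 2**60 / CYCLES_PER_SECOND / SECONDS_PER_YEAR
print(f"2^60 cycles at 1GHz ~ {years:.1f} years")    # ~36.6; the text rounds to 35

# The gap between feasible computation (2^60) and a 128-bit key space:
print(f"2^68 has {len(str(2**68))} decimal digits")  # 21 digits

# Once per hundred years ~ probability 2^-30 per second, so probability
# 2^-60 per second ~ once every 2^30 hundred-year periods:
print(f"~ once every {2**30 * 100 / 10**9:.0f} billion years")  # ~107
```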

From a theoretical standpoint, the concrete security approach is disadvantageous since schemes can be (t, ε)-secure but never just secure. More to the point, for what ranges of t, ε should we say that a (t, ε)-secure scheme is “secure”? There is no clear answer to this, as a security guarantee that may suffice for the average user may not suffice when encrypting classified government documents.

The asymptotic approach. The asymptotic approach is the one we will take in this book. This approach, rooted in complexity theory, views the running time of the adversary as well as its success probability as functions of some parameter rather than as concrete numbers. Specifically, a cryptographic scheme will incorporate a security parameter, which is an integer n. When honest parties initialize the scheme (e.g., when they generate keys), they choose some value n for the security parameter; this value is assumed to be known to any adversary attacking the scheme. The running time of the adversary (and the running time of the honest parties), as well as the adversary’s success probability, are all viewed as functions of n. Then:

1. We equate the notion of “efficient algorithms” with (probabilistic) algorithms running in time polynomial in n, meaning that for some constants a, c the algorithm runs in time a·n^c on security parameter n. We require that honest parties run in polynomial time, and will only be concerned with achieving security against polynomial-time adversaries. We stress that the adversary, though required to run in polynomial time, may be much more powerful (and run much longer) than the honest parties.

2. We equate the notion of “small probability of success” with success probabilities smaller than any inverse polynomial in n, meaning that for every constant c the adversary’s success probability is smaller than n^-c for large enough values of n (see Definition 3.5). A function that grows slower than any inverse polynomial is called negligible (see the sketch following this list).
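To make the comparison concrete, here is a minimal numerical sketch (our own illustration, not part of the formal development) that finds, for a few constants c, the point beyond which 2^-n stays below n^-c:

```python
# For any constant c, the inverse polynomial n^-c eventually exceeds 2^-n;
# this is why f(n) = 2^-n is negligible. We search for the smallest N such
# that 2^-n < n^-c for all n >= N (equivalently, n^c < 2^n).

def crossover(c: int, limit: int = 1000) -> int:
    last_bad = 0
    for n in range(1, limit):
        if n**c >= 2**n:    # exact integer test: here 2^-n is still >= n^-c
            last_bad = n
    return last_bad + 1

for c in (2, 5, 10):
    print(f"2^-n < n^-{c} for all n >= {crossover(c)}")
# Output: n >= 5 for c = 2, n >= 23 for c = 5, n >= 59 for c = 10.
```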

We sometimes use ppt to stand for probabilistic, polynomial-time. A definition of asymptotic security thus takes the following general form:

A scheme is secure if every ppt adversary succeeds in breaking the scheme with only negligible probability.

Although very clean from a theoretical point of view (since we can actually speak of a scheme being secure or not), it is important to understand that the asymptotic approach only “guarantees security” for large enough values of n, as the following example should make clear.

Example 3.2

Say we have a scheme that is secure. Then it may be the case that an adversary running for n^3 minutes can succeed in “breaking the scheme” with probability 2^40·2^-n (which is a negligible function of n). When n ≤ 40, this means that an adversary running for 40^3 minutes (about 6 weeks) can break the scheme with probability 1, so such values of n are not going to be very useful in practice. Even for n = 50, an adversary running for 50^3 minutes (about 3 months) can break the scheme with probability roughly 1/1000, which may not be acceptable. On the other hand, when n = 500, an adversary running for more than 200 years breaks the scheme only with probability roughly 2^-460. ♦
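The figures above are easy to reproduce; a few lines (our own check, with the time conversions spelled out) confirm them:

```python
# Checking Example 3.2: an adversary running n^3 minutes breaks the
# scheme with probability 2^40 * 2^-n = 2^(40-n).
MINUTES_PER_WEEK = 60 * 24 * 7

for n in (40, 50, 500):
    weeks = n**3 / MINUTES_PER_WEEK
    print(f"n = {n:3d}: {weeks:8.1f} weeks of attack time, "
          f"success probability 2^{40 - n}")
# n =  40:      6.3 weeks (~6 weeks),    2^0 = 1
# n =  50:     12.4 weeks (~3 months),   2^-10 ~ 1/1000
# n = 500:  12400.8 weeks (>200 years),  2^-460
```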

As indicated by the previous example, we can view a larger security parameter as providing a “greater” level of security. For the most part, the security parameter determines the length of the key used by the honest parties, and we thus have the familiar concept that the longer the key, the higher the security.

The ability to “increase security” by taking a larger value for the security parameter has important practical ramifications, since it enables honest parties to defend against increases in computing power as well as algorithmic advances. The following gives a sense for how this might play out in practice.

Example 3.3

Let us see the effect that the availability of faster computers might have on security in practice. Say we have a cryptographic scheme where honest parties are required to run for 10^6·n^2 cycles, and for which an adversary running for 10^8·n^4 cycles can succeed in “breaking” the scheme with probability 2^20·2^-n. (The numbers in this example are designed to make calculations easier, and are not meant to correspond to any existing cryptographic scheme.)

Say all parties are using a 1GHz computer (that executes 10^9 cycles per second) and n = 50. Then honest parties run for 10^6·2500 cycles, or 2.5 seconds, and an adversary running for 10^8·(50)^4 cycles, or roughly 1 week, can break the scheme with probability only 2^-30.

Now say a 16GHz computer becomes available, and all parties upgrade.

Honest parties can increase n to 100 (which requires generating a fresh key) and still improve their running time to 0.625 seconds (i.e., 10^6·100^2 cycles at 16·10^9 cycles/second). In contrast, to achieve success probability 2^-80 the adversary must now run for 10^8·100^4 = 10^16 cycles; this is 10^7 seconds, or more than 16 weeks, on the old 1GHz machine, and close to a week even on the new one. The effect of a faster computer has been to make the adversary’s job harder. ♦

The asymptotic approach has the advantage of not depending on any specific assumptions regarding, e.g., the type of computer an adversary will use (this is a consequence of the Church-Turing thesis from complexity theory, which basically states that the relative speeds of all sufficiently-powerful computing devices are polynomially related). On the other hand, as the above examples demonstrate, it is necessary in practice to understand exactly what level of concrete security is implied by a particular asymptotically-secure scheme. This is because the honest parties must pick a concrete value of n to use, and so cannot rely on assurances of what happens “for large enough values of n”. The task of determining the value of the security parameter to use is complex and depends on the scheme in question as well as other considerations. Fortunately, it is usually relatively easy to translate a guarantee of asymptotic security into a concrete security guarantee.

Example 3.4

A typical proof of security for a cryptographic scheme might show that any adversary running in time p(n) succeeds with probability at most p(n)/2^n. This implies that the scheme is (asymptotically) secure, since for any polynomial p, the function p(n)/2^n is eventually smaller than any inverse polynomial in n. Moreover, it immediately gives a concrete security result for any desired value of n; e.g., the scheme with n fixed to 50 is (50^2, 50^2/2^50)-secure (note that for this to be meaningful we need to know the units of time with respect to which p is being measured). ♦
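Such a translation can be made mechanical. The sketch below (our own illustration; the bound p(n)/2^n is the one from Example 3.4, while the function name is ours) evaluates the concrete guarantee for chosen values of t and n:

```python
# Turning the asymptotic bound of Example 3.4 into a concrete one: an
# adversary running in time t succeeds with probability at most t/2^n.
from fractions import Fraction

def concrete_bound(t: int, n: int) -> Fraction:
    """Success-probability bound for a time-t adversary at security parameter n."""
    return Fraction(t, 2**n)

# The (50^2, 50^2/2^50)-security claimed in the text:
bound = concrete_bound(50**2, 50)
print(bound, "~", float(bound))   # 2500/2^50, about 2.2e-12
```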

From here on, we use the asymptotic approach only. Nevertheless, as the above example shows, all the results in this book can be cast as concrete security results as well.

A technical remark. As we have mentioned, we view the running time of the adversary and the honest parties as a function of n. To be consistent with the standard convention in algorithms and complexity theory, where the running time of an algorithm is measured as a function of the length of its input, we will thus provide the adversary and the honest parties with the security parameter in unary as 1^n (i.e., a string of n 1’s) when necessary.

Necessity of the Relaxations

As we have seen, computational security introduces two relaxations of the notion of perfect security: first, security is guaranteed only against efficient (i.e., polynomial-time) adversaries; second, a small (i.e., negligible) probability of success is allowed. Both of these relaxations are essential for achieving practical cryptographic schemes, and in particular for bypassing the negative results for perfectly-secret encryption. Let us see why, somewhat informally.

Assume we have an encryption scheme where the size of the key space K is much smaller than the size of the message space M (which, as we saw in the previous chapter, means that the scheme cannot be perfectly secret). There are two attacks, lying at opposite extremes, that apply regardless of how the encryption scheme is constructed:

• Given a ciphertext c, the adversary can decrypt c using all keys k ∈ K. This gives a list of all possible messages to which c can possibly correspond. Since this list cannot contain all of M (because |K| < |M|), this leaks some information about the message that was encrypted.

Moreover, say the adversary carries out a known-plaintext attack and learns that ciphertexts c_1, ..., c_ℓ correspond to the messages m_1, ..., m_ℓ, respectively. The adversary can again try decrypting each of these ciphertexts with all possible keys until it finds a key k for which Dec_k(c_i) = m_i for all i. This key will be unique with high[3] probability, in which case the adversary has found the key that the honest parties are using (and so subsequent usage of this key will be insecure).

This type of attack is known as brute-force search and allows the adversary to succeed with probability essentially 1 in time linear in |K|.

• Consider again the case where the adversary learns that ciphertexts c_1, ..., c_ℓ correspond to the messages m_1, ..., m_ℓ. The adversary can guess a key k ∈ K at random and check to see whether Dec_k(c_i) = m_i for all i. If so, we again expect that with high probability k is the key that the honest parties are using.

Here the adversary runs in polynomial time and succeeds with non-zero (though very small) probability, roughly 1/|K|. (Both attacks are sketched in code after this list.)
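To make the two extremes concrete, here is a minimal sketch. Everything in it is illustrative: `decrypt` stands in for the scheme’s Dec algorithm, and keys are assumed to be integers; no real scheme is implied.

```python
import random
from typing import Callable, Iterable, Optional

Pairs = list[tuple[bytes, bytes]]   # known (ciphertext, message) pairs

def brute_force(decrypt: Callable[[int, bytes], bytes],
                key_space: Iterable[int], pairs: Pairs) -> Optional[int]:
    """First extreme: try every key. Time linear in |K|, success probability
    essentially 1 (with high probability the unique consistent key is the
    one the honest parties are using)."""
    for k in key_space:
        if all(decrypt(k, c) == m for (c, m) in pairs):
            return k
    return None

def guess_once(decrypt: Callable[[int, bytes], bytes],
               key_space: list, pairs: Pairs) -> Optional[int]:
    """Second extreme: guess a single key at random. Polynomial time,
    success probability roughly 1/|K|."""
    k = random.choice(key_space)
    if all(decrypt(k, c) == m for (c, m) in pairs):
        return k
    return None
```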

It follows that if we wish to encrypt many messages using a single short key, security can only be achieved if we limit the running time of the adversary (so that the adversary does not have time to carry out a brute-force search) and also allow a very small probability of success (without considering it a break).

An immediate consequence of the above attacks is that asymptotic security is not possible if the key space is fixed; rather, the key space must depend on n.

[3] Technically speaking, this need not be true; if it is not, however, then the scheme can be broken using a modification of this attack.

That is, a private-key encryption scheme will now be associated with a sequence {K_n} such that the key is chosen from K_n when the security parameter is n. The above attacks imply that if we want polynomial-time adversaries to achieve only negligible success probability, then the size of K_n must grow super-polynomially in the security parameter n. Otherwise, brute-force search could be carried out in polynomial time, and randomly guessing the key would succeed with non-negligible probability.

3.1.2 Efficient Algorithms and Negligible Success

In the previous section we have outlined the asymptotic security approach that we will be taking in this book. Students who have not had significant prior exposure to algorithms or complexity theory may not be comfortable with the notions of “polynomial-time algorithms”, “probabilistic (or randomized) algorithms”, or “negligible probabilities”, and often find the asymptotic approach confusing. In this section we revisit the asymptotic approach in more detail, and slightly more formally. Students who are already comfortable with what was described in the previous section are welcome to skip ahead to Section 3.1.3 and refer back here as needed.

Efficient Computation

We have defined efficient computation as that which can be carried out in probabilistic polynomial time (sometimes abbreviated ppt). An algorithm A is said to run in polynomial time if there exists a polynomial p(·) such that, for every input x ∈ {0,1}*, the computation of A(x) terminates within at most p(|x|) steps (here, |x| denotes the length of the string x). A probabilistic algorithm is one that has the capability of “tossing coins”; this is a metaphorical way of saying that the algorithm has access to a source of randomness that yields unbiased random bits that are each independently equal to 1 with probability 1/2 and to 0 with probability 1/2. Equivalently, we can view a randomized algorithm as one which is given, in addition to its input, a uniformly-distributed bit-string of “adequate length” on a special random tape. When considering a probabilistic polynomial-time algorithm with running time p and an input of length n, a random string of length p(n) will certainly be adequate, as the algorithm can only use p(n) random bits within the allotted time.
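The “random tape” view is easy to picture in code. In the toy sketch below (entirely our own illustration; the computation performed is arbitrary), the algorithm proper is a deterministic function of its input and a tape of coin tosses, and the tape it receives has length p(n) = n:

```python
import secrets

def algorithm(x: str, tape: str) -> bool:
    """A deterministic function of (input, random tape): a stand-in for any
    computation that consumes at most len(tape) coin tosses. Here it just
    checks whether the input agrees with the tape in some position."""
    return any(xi == ri for xi, ri in zip(x, tape))

def run(x: str) -> bool:
    # Running time bound p(n) = n, so a tape of n unbiased bits suffices.
    tape = "".join(secrets.choice("01") for _ in range(len(x)))
    return algorithm(x, tape)

print(run("0110"))   # output depends on the coin tosses
```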

Those familiar with complexity theory or algorithms will recognize that the idea of equating efficient computation with (probabilistic) polynomial-time computation is not unique to cryptography. The primary advantage of working with (probabilistic) polynomial-time algorithms is that this gives a class of computation that is closed under composition: an efficient algorithm that invokes other efficient algorithms as subroutines (polynomially many times) is itself efficient.

