
7.5 Exploring Active Adversaries

Until now we have focused mostly on passive adversaries. But what happens if the adversaries are active?

This gives rise to various stronger-than-semantic notions of security such as non-malleability [71], security against chosen ciphertext attack, and plaintext awareness [23, 21]. See [21] for a classification of these notions and discussion of relations among them.

In particular, we consider security against chosen ciphertext attack. In this model, we assume that our adversary has temporary access to the decoding equipment, and can use it to decrypt some ciphertexts that it chooses. Afterwards, the adversary sees the ciphertext it wants to decrypt without any further access to the decoding equipment. Notice that this is different from simply being able to generate pairs of messages and ciphertexts, as the adversary was always capable of doing that by simply encrypting messages of its choice. In this case, the adversary gets to choose the ciphertext and gets the corresponding message from the decoding equipment.
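
To make the access pattern concrete, here is a minimal sketch of this two-phase setting. It is not taken from the notes; the names (cca1_experiment, Dec, the adversary callables) are illustrative.

```python
# A minimal sketch (not from the notes) of the two-phase chosen-ciphertext
# setting described above; all names are illustrative.
from typing import Callable

def cca1_experiment(adversary_phase1: Callable,
                    adversary_phase2: Callable,
                    Dec: Callable[[bytes], bytes],
                    challenge_ciphertext: bytes):
    # Phase 1: the adversary has temporary access to the decoding equipment
    # and may ask it to decrypt ciphertexts of its choice.
    state = adversary_phase1(Dec)
    # Phase 2: the challenge ciphertext arrives; the oracle is no longer available.
    return adversary_phase2(state, challenge_ciphertext)
```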

We saw in previous sections that such an adversary could completely break Rabin’s scheme. It is not known whether any of the other schemes discussed for PKC are secure in the presence of this adversary. However, attempts to provably defend against such an adversary have been made.

One idea is to put checks into the decoding equipment so that it will not decrypt ciphertexts unless it has evidence that someone knew the message (i.e., that the ciphertext was not just generated without knowledge of what the message being encoded was). We might think that a simple way to do this would be to require two distinct encodings of the same message, as it is unlikely that an adversary could find two separate encodings of the same message without knowing the message itself. Thus a ciphertext would be (α1, α2) where α1, α2 are chosen randomly from the encryptions of m.

Unfortunately, this doesn't work because if the decoding equipment fails to decrypt the ciphertext, the adversary would still gain some knowledge, i.e., that α1 and α2 do not encrypt the same message. For example, in the probabilistic encryption scheme proposed last lecture, an adversary may wish to learn the hard-core bit Bi(y) for some unknown y, where it has fi(y). Given decoding equipment with the protection described above, the adversary could still discover this bit as follows (a toy instantiation is sketched after the steps):

(1) Pick m ∈ M(1^l), the message space, and let b be the last bit of m.

(2) Pick α1 ∈ E(i, m) randomly and independently.


(3) Recall that α1 = (fi(x1), fi(x2), . . . , fi(xl)), with xj chosen randomly from Di for j = 1, 2, . . . , l. Let α2 = (fi(x1), . . . , fi(x_{l-1}), fi(y)).

(4) Use the decoding equipment on c = (α1, α2). If it answers m, then Bi(y) = b. If it doesn't decrypt c, then Bi(y) = b̄, the complement of b.
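
Here is a small runnable sketch of steps (1)–(4). It is not from the notes: it instantiates the trapdoor permutation fi with toy RSA (far too small to be secure) and the hard-core bit Bi with the least-significant bit of the preimage; the helper names (encrypt, guarded_decrypt, ...) are illustrative.

```python
import random

# Toy RSA parameters (far too small to be secure; illustration only).
P, Q = 1009, 1013
N = P * Q
E_PUB = 17
D_PRIV = pow(E_PUB, -1, (P - 1) * (Q - 1))   # the trapdoor

def f(x):
    """The trapdoor permutation f_i, in the public (easy) direction."""
    return pow(x, E_PUB, N)

def f_inv(y):
    """Inversion, available only to the decoding equipment."""
    return pow(y, D_PRIV, N)

def B(x):
    """A hard-core bit B_i: here, the least-significant bit of the preimage."""
    return x & 1

def encrypt(bits):
    """Encrypt each bit b as f(x) for a random x with B(x) = b."""
    ct = []
    for b in bits:
        x = random.randrange(2, N - 1)
        if B(x) != b:
            x ^= 1                    # flip the low bit so that B(x) = b
        ct.append(f(x))
    return tuple(ct)

def decrypt(ct):
    return [B(f_inv(y)) for y in ct]

def guarded_decrypt(alpha1, alpha2):
    """Decoding equipment with the (flawed) two-encodings check: it decrypts
    only if alpha1 and alpha2 encode the same message, else it refuses."""
    m1, m2 = decrypt(alpha1), decrypt(alpha2)
    return m1 if m1 == m2 else None   # None models a refusal

# The adversary holds f(y) for an unknown y and wants the bit B(y).
y_secret = random.randrange(2, N - 1)
f_of_y = f(y_secret)                              # all the adversary sees

m = [random.randrange(2) for _ in range(8)]       # step (1): pick m; b = last bit of m
b = m[-1]
alpha1 = encrypt(m)                               # step (2)
alpha2 = alpha1[:-1] + (f_of_y,)                  # step (3): splice in f(y)
answer = guarded_decrypt(alpha1, alpha2)          # step (4): one oracle query
guess = b if answer == m else 1 - b               # a refusal reveals the complement

assert guess == B(y_secret)
print("recovered hard-core bit B(y):", guess)
```

Either way the decoding equipment behaves, the adversary learns the hard-core bit, which is exactly the kind of leakage the check was meant to prevent.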

What is done instead uses the notion of Non-Interactive Zero-Knowledge Proofs (NIZK) [41, 146]. The idea is that anyone can check a NIZK to see that it is correct, but no knowledge can be extracted from it about what is being proved, except that it is correct. Shamir and Lapidot have shown that if trapdoor functions exist, then NIZKs exist. Then a ciphertext will consist of three parts: two distinct encodings α1, α2 of the message, and a NIZK that α1 and α2 encrypt the same message. The decoding equipment will simply refuse to decrypt any ciphertext with an invalid NIZK, and this refusal to decrypt will not give the adversary any new knowledge, since it already knew that the proof was invalid.

The practical importance of chosen ciphertext attack is illustrated in the recent attack of Bleichenbacher on the RSA PKCS #1 encryption standard, which has received a lot of attention. Bleichenbacher [34] shows how to break the scheme under a chosen ciphertext attack. One should note that the OAEP scheme discussed in Section 7.4.6 above is immune to such attacks.

8 Message Authentication

A message authentication scheme enables parties in possession of a shared secret key to achieve the goal of data integrity. This is the second main goal of private-key cryptography.

8.1 Introduction

8.1.1 The problem

Suppose you receive a communication that purports to come from a certain entity, call it S. Here S might be one of many different types of entities: for example a person, or a corporation, or a network address.

You may know that it is S that purports to send this communication for several reasons. For example, S's identifier could be attached to the communication. The identifier here is a public identity that is known to belong to S: for example, if S is a person or corporation, typically just the name of the person or corporation; if a network address, the address itself. Or, it may be that from the context in which the communication is taking place you are expecting the communication to be from a certain known entity S.

In many such settings, security requires that the receiver have confidence that the communicated data does originate with the claimed sender. This is necessary to implement access control, so that services and information are provided to the intended parties. The risk is that an attacker will "impersonate" S. It will send a communication with S's identity attached, so that the receiver is led to believe the communication is from S. This can have various undesirable consequences. Examples of the damage caused abound; here are a few that one might consider.

An on-line stock broker S replies to a quote request by sending the value of a certain stock, but an adversary modifies the transmission, changing the dollar value of the quote. The person who requested the quote receives incorrect information and could be led to take a financially detrimental action. This applies to any data being obtained from a database: its value lies in its authenticity as vouched for by the database service provider. Or consider S needing to send data of only two kinds, say "buy" and "sell", or "fire" and "don't fire". This might be encoded in a single bit, and if an adversary flips this bit, the wrong action is taken. Or consider electronic banking. S sends its bank a message asking that $200 be transferred from her account to A's account. A might play the role of adversary and change the sum to $2,000.

In fact the authenticity of data transmitted across a network can be even more important to security than privacy of the data when it comes to enabling network applications and commerce.

This ability to send data purporting to be from a source that it is not requires an active attack on the part of the adversary. That is, the adversary must have the means to modify transmitted communications or introduce new ones. These abilities depend on the setting. It may be hard to introduce data into a dedicated phone line, but not on a network like the Internet. It would be advisable to assume adversaries do have such abilities.

The authentication problem is very different from the encryption problem. We are not worried about secrecy of the data; let the data be in the clear. We are worried about the adversary modifying it.

8.1.2 Encryption does not provide data integrity

We know how to encrypt data so as to provide privacy. Something often suggested (and done) is to encrypt to provide data integrity, as follows. Fix a symmetric encryption scheme SE = (K, E, D), and let parties S and B share a key K for this scheme. When S wants to send a message M to B, she encrypts it, transferring a ciphertext C generated via C ←R E_K(M). B decrypts it, recovering D_K(C).

The argument that this provides data integrity is as follows. Suppose S transmits, as in the above example, a message M to its bank B asking that $200 be transferred from S's account to A's account. A wants to change the $200 to $2,000. If M is sent in the clear, A can easily modify it. But if M is encrypted so that ciphertext C is sent, how is A to modify C so as to make B recover the modified message M′? It does not know the key K, so cannot encrypt the modified message M′. The privacy of the message appears to make tampering difficult.

This argument is fallacious. To see the flaws let’s first look at a counter-example and then the issues.

Consider, say, the randomized CTR scheme, using some block cipher F, say RC6. We proved in the chapter on symmetric encryption that this was a secure encryption scheme assuming RC6 is a pseudorandom function.

For simplicity say that the message M above is a single 128-bit block, containing account information for the parties involved, plus a field for the dollar amount. To be concrete, the last 16 bits of the 128-bit block hold the dollar amount encoded as a 16-bit binary number. (So the amount must be at most $65,535.) Thus, the last 16 bits of M are 0000000011001000, the binary representation of the integer 200. We assume that A is aware that the dollar amount in this electronic check is $200; this information is not secret. Now recall that under randomized CTR encryption the ciphertext transmitted by S has the form C = ⟨r⟩ ∥ y where y = F_K(⟨r+1⟩) ⊕ M. A's attack is as follows. It gets C = ⟨r⟩ ∥ y and sets y′ = y ⊕ 0^112 ∥ 0000011100011000, where the 16-bit mask is the XOR of the binary representations of 200 and 2000. It sets C′ = ⟨r⟩ ∥ y′ and forwards C′ to B. B will decrypt this, so that it recovers the message F_K(⟨r+1⟩) ⊕ y′. Denoting it by M′, its value is

M′ = F_K(⟨r+1⟩) ⊕ y′
   = F_K(⟨r+1⟩) ⊕ y ⊕ 0^112 ∥ 0000011100011000
   = M ⊕ 0^112 ∥ 0000011100011000
   = M_prefix ∥ 0000011111010000

where M_prefix is the first 112 bits of the original message M. Notice that the last 16 bits of M′ are the binary representation of the integer 2000, while the first 112 bits of M′ are equal to those of M. So the end result is that the bank B will be misled into executing the transaction that S requested except that the dollar amount has been changed from 200 to 2000.
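
For concreteness, here is a minimal runnable sketch of this attack. It is not taken from the notes: it substitutes HMAC-SHA256 (truncated to one block) for the block cipher F, and the names (ctr_encrypt, mask, account_info, ...) are illustrative.

```python
import hmac, hashlib, os, struct

KEY = os.urandom(16)

def F(counter: int) -> bytes:
    """Stand-in PRF F_K: HMAC-SHA256 of the counter, truncated to a 16-byte block.
    (The text uses a block cipher such as RC6; any PRF will do for illustration.)"""
    return hmac.new(KEY, counter.to_bytes(16, "big"), hashlib.sha256).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def ctr_encrypt(m: bytes):
    """Randomized CTR for a single 128-bit block: C = <r> || F_K(<r+1>) XOR M."""
    r = int.from_bytes(os.urandom(15), "big")
    return r, xor(F(r + 1), m)

def ctr_decrypt(c) -> bytes:
    r, y = c
    return xor(F(r + 1), y)

# Sender S: 112 bits of account information, last 16 bits = dollar amount 200.
account_info = os.urandom(14)
M = account_info + struct.pack(">H", 200)
C = ctr_encrypt(M)

# Adversary A: XOR into the amount field exactly the mask 200 XOR 2000
# (the 16-bit string 0000011100011000 from the text).
mask = bytes(14) + struct.pack(">H", 200 ^ 2000)
r, y = C
C_forged = (r, xor(y, mask))

# The bank B decrypts the forged ciphertext and sees the amount 2000.
M_forged = ctr_decrypt(C_forged)
assert M_forged[:14] == account_info
assert struct.unpack(">H", M_forged[14:])[0] == 2000
print("amount seen by the bank:", struct.unpack(">H", M_forged[14:])[0])
```

Note that the adversary never touches the key; it only XORs a known mask into the ciphertext.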

There are many possible reactions to this counter-example, some sound and some unsound. Let’s take a look at them.

What you should conclude from this is that encryption does not provide data integrity. With hindsight, it is pretty clear. The fact that data is encrypted need not prevent an adversary from being able to make the receiver recover data different from that which the sender had intended, for many reasons. First, the data, or some part of it, might not be private at all. For example, above, some information about M was known to A: as the recipient of the money, A can be assumed to know that the amount will be $200, a sum probably agreed upon beforehand. However, even when the data is not known a priori, an adversary can make the receiver recover something incorrect. For example, with the randomized CTR scheme, an adversary can effectively flip a bit in the message M. Even if it does not know the value of the original bit, damage can be caused by flipping it to the opposite value. Another possibility is for the adversary to simply transmit some string C. In many encryption schemes, including CTR and CBC encryption, C will decrypt to something, call it M. The adversary may have no idea what M will be, but we should still view it as wrong that the receiver accepts M as being sent by S when in fact it wasn't.

Now here is another possible reaction to the above counter-example: CTR mode encryption is bad, since it permits the above attack. So one should not use this mode. Let’s use CBC instead; there you can’t flip message bits by flipping ciphertext bits.

This is an unsound reaction to the counter-example. Nonetheless it is not only often voiced, but even printed.

Why is it unsound? Because the point is not the specific attack on CTR, but rather to recognize the disparity in goals. There is simply no reason to expect encryption to provide integrity. Encryption was not designed to solve the integrity problem. The way to address this problem is to first pin down precisely what the problem is, and then seek a solution. Nonetheless there are many existing systems, and places in the literature, where encryption and authentication are confused, and where the former is assumed to provide the latter.

It turns out that CBC encryption can also be attacked from the integrity point of view, again leading to claims in some places that it is not a good encryption mechanism. Faulting an encryption scheme for not providing authenticity is like faulting a screwdriver because you could not cut vegetables with it. There is no reason to expect a tool to solve a problem it was not designed to solve.

It is sometimes suggested that one should "encrypt with redundancy" to provide data integrity. That is, the sender S pads the data with some known, fixed string, for example 128 bits of zeros, before encrypting it. The receiver decrypts the ciphertext and checks whether the decrypted string ends with 128 zeros. If not, the receiver rejects the transmission as unauthentic; else it outputs the rest of the string as the actual data.

This too can fail in general; for example it is easy to see that with CTR mode encryption, an attack just like the above applies. It can be attacked under CBC encryption too.
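
To see concretely why the redundancy check does not help under CTR, here is a short continuation of the earlier sketch (again not from the notes; the two-block CTR layout and the reused names F, xor, account_info, mask are assumptions of that sketch).

```python
# Continuing the sketch above: "encrypt with redundancy" under the same toy
# randomized CTR scheme.  S appends 128 zero bits before encrypting; the
# adversary's mask only touches the amount field, so the check still passes.
M_red = account_info + struct.pack(">H", 200) + bytes(16)     # message || 0^128

r2 = int.from_bytes(os.urandom(15), "big")
y2 = xor(F(r2 + 1), M_red[:16]) + xor(F(r2 + 2), M_red[16:])  # two CTR blocks

y2_forged = xor(y2[:16], mask) + y2[16:]          # flip only the amount bits

plain = xor(F(r2 + 1), y2_forged[:16]) + xor(F(r2 + 2), y2_forged[16:])
assert plain.endswith(bytes(16))                  # redundancy check passes
assert struct.unpack(">H", plain[14:16])[0] == 2000
```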

Good cryptographic design is goal oriented. One must first understand and formalize the goal. Only then does one have the basis on which to design and evaluate potential solutions. Accordingly, our next step will be to come up with a definition of message authentication schemes and their security.
