

The Whole is Less than the Sum of its Parts:
Constructing More Efficient Lattice-Based AKEs⋆

Rafael del Pino^{1,2,3}, Vadim Lyubashevsky^{4}, and David Pointcheval^{3,2,1}

1 INRIA, Paris
2 École Normale Supérieure, Paris
3 CNRS
4 IBM Research Zurich

Abstract. Authenticated Key Exchange (AKE) is the backbone of internet security protocols such as TLS and IKE. A recent announcement by standardization bodies calling for a shift to quantum-resilient crypto has resulted in several AKE proposals from the research community. Because AKE can be generically constructed by combining a digital signature scheme with public key encryption (or a KEM), most of these proposals focused on optimizing the known KEMs and left the authentication part to the generic combination with digital signatures.

In this paper, we show that by simultaneously considering the secrecy and authenticity requirements of an AKE, we can construct a scheme that is more secure and has smaller communication complexity than a scheme created by a generic combination of a KEM with a signature scheme.

Our improvement uses particular properties of lattice-based encryption and signature schemes and consists of two parts – the first part increases security, whereas the second reduces communication complexity.

We first observe that parameters for lattice-based encryption schemes are always set so as to avoid decryption errors, since many observations by the adversary of such failures usually lead to the adversary recovering the secret key. But since one of the requirements of an AKE is that it be forward-secure, the public key must change every time. The intuition is therefore that one can set the parameters of the scheme so as to not care about decryption errors and everything should still remain secure.

We show that this naive solution is not quite correct, but the intuition can be made to work by a small change in the scheme. Our new AKE, which now remains secure in case of decryption errors, fails to create a shared key with probability around 2^{-30}, but adds enough security that we are able to instantiate a KEM based on the NTRU assumption with rings of smaller dimension.

Our second improvement is showing that certain hash-and-sign lattice signatures can be used in "message-recovery" mode. In this mode, the signature size is doubled, but this longer signature is enough to recover an even longer message – thus the signature is longer but the message does not need to be sent. This is advantageous when signing relatively long messages, such as the public keys and ciphertexts generated by a lattice-based KEM. We show how this technique reduces the communication complexity of the generic construction of our AKE by around 20%. Using a lattice-based signature in message-recovery mode is quite generic (i.e. it does not depend on the structure of the message), and so it may be used in AKE constructions that use a different KEM, or even simply as a way to reduce the transmission length of a message and its digital signature.

⋆ Supported by the European Horizon 2020 ICT Project SAFEcrypto (H2020/2014-2020 Grant Agreement ICT-644729 – SAFECrypto), the French FUI Project FUI AAP 17 – CRYPTOCOMP, and the SNSF ERC Transfer Grant CRETP2-166734 – FELICITY.


1 Introduction

Lattice-based cryptography has matured to the point that it is seen as a viable replacement to number-theoretic cryptography. There are very efficient public-key encryption schemes (and thus Key Encapsulation Mechanisms) based on the NTRU [HPS98, HPS+15] and Ring-LWE problems [LPR10, DXL12, LPR13b, Pei14], as well as digital signature schemes that are also based on NTRU [DDLL13, HPS+14, DLP14] and Ring-LWE [Lyu12, GLP12].

Once we have practical protocols for digital signatures and public key encryption / key encapsulation, it is clear that one can construct an authenticated key exchange (AKE) scheme, and even a forward-secure one which guarantees the key privacy after long-term authentication means are compromised. A rough outline for a simple construction is described in Figure 1, which uses a generic key encapsulation scheme and a digital signature scheme. The simple idea is that Party 1 picks an encapsulation/decapsulation key pair (K_e, K_d), sends the encapsulation key in an authenticated way to Party 2, which in turn uses it to encapsulate a random seed k, in an authenticated message. Only Party 1 is then able to decapsulate the seed k, which is derived into a session key sk. Authentication means are the signing keys, and their compromise or the compromise of any future or past encryption/decryption keys does not have any impact on the privacy of the session encrypted under key sk.

Party 1                                       Party 2

Generate (K_d, K_e)
σ_1 = Sig(K_e)
                    ------ K_e, σ_1 ------>
                                              Ver(K_e, σ_1)
                                              (c, k) = Enc(K_e)
                                              σ_2 = Sig(c)
                    <------ c, σ_2 -------
Ver(c, σ_2)
k' = Dec(K_d, c)

sk' = H(View, k')                             sk = H(View, k)

Fig. 1. A Generic AKE Construction from a KEM and a Digital Signature
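To make the composition concrete, the following is a minimal executable sketch of the Figure 1 flow in Python. The KEM and signature here are insecure toy stand-ins of our own (an HMAC plays the role of Sig/Ver, and the "KEM" merely hashes a random seed); only the message flow and the derivation sk = H(View, k) mirror the construction above.

import hashlib, hmac, os

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()

# Toy stand-ins for the generic primitives (NOT the paper's lattice schemes).
sig = lambda key, m: hmac.new(key, m, hashlib.sha256).digest()
ver = lambda key, m, s: hmac.compare_digest(sig(key, m), s)

def kem_keygen():                    # (K_d, K_e) <- KEMKeyGen
    Kd = os.urandom(32)
    return Kd, H(b"pk", Kd)

def kem_enc(Ke):                     # (c, k) = Enc(K_e)
    c = os.urandom(32)
    return c, H(Ke, c)

def kem_dec(Kd, c):                  # k' = Dec(K_d, c)
    return H(H(b"pk", Kd), c)

key1, key2 = os.urandom(32), os.urandom(32)   # long-term signing keys

# Party 1, first flow: K_e, sigma_1
Kd, Ke = kem_keygen()
sigma1 = sig(key1, Ke)

# Party 2: verify, encapsulate, sign, and derive sk = H(View, k)
assert ver(key1, Ke, sigma1)
c, k = kem_enc(Ke)
sigma2 = sig(key2, c)
view = Ke + sigma1 + c + sigma2
sk = H(view, k)

# Party 1, second flow: verify, decapsulate, and derive sk' = H(View, k')
assert ver(key2, c, sigma2)
sk_prime = H(view, kem_dec(Kd, c))
assert sk == sk_prime                # both sides hold the same session key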

1.1 Recent Work

There has been a lot of recent work that deals with proposing optimizations of lattice-based KEMs. The works of [DXL12, Pei14, BCNS15, ADPS16] gave constructions (and instantiations) of a KEM derived from the Ring-LWE encryption scheme [LPR13a], while the work of [HPS+15] optimized the parameters for a particular version of the NTRU KEM. All these papers left the authentication part of the AKE to known signature schemes and the generic composition in Figure 1. The work of Zhang et al. [ZZD+15] adapted the (H)MQV [LMQ+03, Kra05] discrete log-based AKE protocol to the Ring-LWE problem. But it seems that this approach leads to schemes that have larger communication complexity (for similar security levels) than the approach in Figure 1. Thus, it currently appears that the most efficient way of constructing lattice-based AKE schemes is a generic composition of a KEM with a digital signature scheme.

1.2 Our Contributions

In our work, we propose two enhancements to the generic AKE construction – allowing decapsulation errors, which increases security, and using digital signatures with message recovery, which decreases the communication.

Handling Decapsulation Errors. The security of lattice-based encryption / encapsulation schemes relies on the hardness of solving linear equations in the presence of noise. The larger the noise is (with respect to the field that we are working over), the harder it is to recover the solution. On the other hand, if the noise is too large, decryption may fail. These decryption failures are not just an inconvenience – their detection usually results in the adversary being able to recover the secret key (c.f. [HNP+03]). For this reason, stand-alone encryption schemes use parameters such that decryption failures occur with only a negligible probability.

In a forward-secure AKE, however, where the encryption keys are ephemeral, there is intuitively no danger of decryption failures (which will result in the users not agreeing on a shared key) since the users will restart and a fresh public key will be used in the next key-agreement attempt. The cost of a restart is an increase in the expected run-time and communication complexity of the scheme. For example, if one run of the protocol uses T of some resource and has a failure probability of ε, then the expected amount of this resource the complete protocol will require until it completes successfully is T/(1 − ε). For values of ε that are small, this is very close to T.
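For completeness, the T/(1 − ε) figure is simply the mean of a geometric number of attempts:

    E[cost] = T · Σ_{i≥1} i · ε^{i−1} · (1 − ε) = T/(1 − ε) ≈ T · (1 + ε) for small ε,

so with ε = 2^{-30}, the overhead over a failure-free run is negligible.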

A natural idea to construct such an AKE is to take a KEM that may have decapsulation failures and plug it into the prototype in Figure 1. This solution, however, is not necessarily secure. Consider an encapsulation scheme where invalidly formed ciphertexts immediately lead to the recovery of the decapsulated key.¹ The Adversary's attack would then involve intercepting the ciphertext sent by Party 2 and recovering the key k' that will be the one decapsulated by Party 1 in the event of a decapsulation error (which occurs with non-negligible probability). The Adversary and Party 1 now share a session key. While a KEM in which malformed ciphertexts can be opened by the Adversary to the decapsulated key may appear to be contrived, it does show that the protocol in Figure 1 cannot be proven to be secure when instantiated with a generic scheme with decryption failures.

Our first contribution (Section 5) is a construction of a forward-secure AKE that remains secure even when instantiated with a KEM that leads to decapsulation failures. In particular, we prove that our scheme is secure as long as recovering the encapsulated key is a hard problem (regardless of what happens during decapsulation) – so essentially all we need for security is for the KEM to be a one-way function. The modification of the scheme is not particularly complicated – Party 2 simply needs to apply a hash function to k and include it in his message (see the informal description in Figure 2 and the formal one in Figure 3) – but the proof contains several subtleties.

In order to instantiate the AKE, we show that a KEM based on NTRU (Section 3) very naturally fits into the requirements of our generic construction. We give a quick description of it, and leave the full details to Section 3. The decapsulation key is a polynomial g with small coefficients (chosen according to Table 3) in the ring Z_q[x]/⟨x^n + 1⟩ for q = 12289 and n = 512 or 1024. The encapsulation key is h = f/g, where f is another polynomial with small coefficients.

The encapsulation procedure works by picking two polynomials r and e with small coefficients and computing the ciphertext/shared key pair (2hr + e, e mod 2). To decapsulate the ciphertext c = 2hr + e using the decapsulation key g, one computes

    ((cg mod q) mod 2)/g = ((2fr + ge mod q) mod 2)/g = e mod 2,

¹ It is simple to construct such a scheme. Suppose we have an encapsulation scheme (without decapsulation errors) with encapsulation procedure Enc and a decapsulation procedure where Dec(K_d, 0) = 0. We modify it to a scheme where the encapsulation procedure Enc' runs Enc to obtain (c, k) and outputs it with probability 1 − ε. With probability ε, it outputs (0, k). Notice that this new scheme is still secure (i.e. one-way) because k is still hard to recover (and actually information-theoretically hard to recover when (0, k) is the output), but with probability ε, the decapsulated key is the constant Dec(K_d, 0) = 0.


Party 1                                       Party 2

Generate (K_d, K_e)
σ_1 = Sig(K_e)
                    ------ K_e, σ_1 ------>
                                              Ver(K_e, σ_1)
                                              (c, k) = Enc(K_e)
                                              a = H_2(K_e, σ_1, c, k)
                                              σ_2 = Sig(c, a)
                    <----- c, a, σ_2 ------
Ver(c, a, σ_2)
k' = Dec(K_d, c)
if a ≠ H_2(K_e, σ_1, c, k') then RESTART

sk' = H_1(View, k')                           sk = H_1(View, k)

Fig. 2. Informal AKE Construction from Encapsulation and Digital Signatures with Decapsulation Errors
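The added check can be sketched as follows in Python; H_1 and H_2 are modeled as domain-separated hashes, the dec function is a parameter (any KEM decapsulation), and a RESTART is signaled by returning None. This is a sketch of the mechanism only, not of the full protocol.

import hashlib

H1 = lambda *parts: hashlib.sha256(b"H1|" + b"|".join(parts)).digest()
H2 = lambda *parts: hashlib.sha256(b"H2|" + b"|".join(parts)).digest()

def party2_confirmation(Ke, sigma1, c, k):
    # Party 2 commits to the encapsulated key k; a is signed together with c.
    return H2(Ke, sigma1, c, k)

def party1_finish(Ke, sigma1, c, a, Kd, view, dec):
    k_prime = dec(Kd, c)             # may differ from k on a decapsulation error
    if a != H2(Ke, sigma1, c, k_prime):
        return None                  # RESTART with a fresh (K_d, K_e)
    return H1(view, k_prime)         # sk' = H_1(View, k')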

which is the shared key. Note that for the above equality to hold, it is crucial that (2fr + ge mod q) mod 2 = (2fr + ge) mod 2, which happens exactly when the coefficients of f, r, e, g are small enough that a reduction modulo q does not take place. If a reduction does take place, then we will end up with a decapsulation error.

Because in our construction decapsulation errors are no longer a security risk, we can set the parameters such that these failures occur a non-negligible number of times – for example with probability 2^{-30}. If a failure does occur, then the protocol can be safely restarted. We believe that 2^{-30} is a low-enough failure probability that some external, for example, networking error may have a higher probability of occurring. Table 3 shows the security gained when we instantiate our scheme such that it has decapsulation error of 2^{-30} vs. 2^{-128}. We discuss the security of our proposals later in this section.

Signatures in Message Recovery Mode. Just as for signatures based on standard number-theoretic assumptions, lattice-based signatures come in two flavors – Fiat-Shamir and hash-and-sign.² The improvement we present in this paper is only for hash-and-sign signature schemes – in particular for the specific parameters of the scheme presented in [DLP14]. We now give a brief description of that scheme, which combines the pre-image sampling algorithm from [GPV08] with a particular instantiation of an NTRU lattice [HHGP+03].

The public key is a polynomial h ∈ Z_q[x]/⟨x^n + 1⟩ which is equal to f/g, where the coefficients of f and g are somewhat small.³

The secret key, which is a basis of a particular lattice induced by h, allows the signer to find polynomials s_1, s_2 with small coefficients such that hs_1 + s_2 = H(m), where m is the message and H is a hash function modeled as a random oracle that maps {0,1}* to random elements in Z_q[x]/⟨x^n + 1⟩. The signature of a message m is (s_1, s_2), and the verification procedure checks that s_1, s_2 have small coefficients and that hs_1 + s_2 = H(m). Note that because s_2 is completely determined by s_1 and m, there is no reason to send it. Thus the signature can be just s_1, m. For ease of exposition of our improvement, assume that the bit-length of m is a little less than the bit-length of elements in Z_q[x]/⟨x^n + 1⟩ (e.g. n log q − 256 bits). Then rather than sending s_1, m as the signature, we will send s'_1, s'_2 that satisfy the equation hs'_1 + s'_2 = t, where t is the polynomial in Z_q[x]/⟨x^n + 1⟩ whose first coefficients are m + F(H'(m)) and whose last 256/⌊log q⌋ coefficients are H'(m). Here H' is a random oracle mapping {0,1}* to {0,1}^256 (thus its output fits into 256/⌊log q⌋ coefficients) and F is another random oracle whose output range is n log q − 256 bits.

² There are also lattice signature schemes that do not use random oracles, but those are much less practical.
³ The distribution of f and g is different from the way the secret key is constructed for the KEM. In particular, we do not want f and g to be too small. Full details are provided in [DLP14].


                              One-way authenticated KE    Two-way authenticated KE
Dimension n                   512        1024             512        1024
Modulus                       12289      12289            12289      12289
First flow size (bits)        ≈7200      ≈14400           ≈9800      ≈19300
Second flow size (bits)       ≈10300     ≈19600           ≈10300     ≈19600
Signing key size (bits)       ≈2100      ≈3700            ≈2100      ≈3700
Verification key size (bits)  7168       14336            7168       14336

Table 1. Parameter sizes and communication length for our AKE. We consider the version where both parties authenticate themselves and the version in which only the server is authenticated.

Notice that if we send s'_1 and s'_2 as the signature, the message m can be recovered by first computing t = hs'_1 + s'_2. Then from t, we can recover H'(m), then F(H'(m)), and finally m.

More details (in particular how one would handle messages that are longer than n log q bits) and a full security proof are provided in Section 4.2.

The main advantage of the message recovery signature scheme is that instead of sending the message m, one can send the shorter element s_2. Note that if the message we are signing is short, then our technique of sending s_2 instead of a message is counterproductive and should not be used.⁴ The public key and the ciphertext of a KEM, however, are polynomials that are pseudorandom over Z_q[x]/⟨x^n + 1⟩, and so require n log q bits to represent, and their signatures would benefit from the message-recovery technique. The efficacy of the message-recovery technique clearly does not depend on anything except the message size, and so it may also be appropriate to use in combination with other KEMs. In Table 2, we illustrate the savings of this technique when working over the ring Z_q[x]/⟨x^1024 + 1⟩. When combining our signature scheme with our KEM, or with the Ring-LWE based KEM in [ADPS16], the savings are about 20%. Note that our complete scheme has lower total communication complexity due to the fact that our NTRU KEM is a little bit more compact than the Ring-LWE one.

1.3 Putting Everything Together

Table 1 shows the communication complexity of our full AKE when instantiated with n = 512 and n = 1024 (the security of these choices will be discussed later) for the case of two-sided authentication and for the case when only the second party needs to be authenticated (as is often the case in TLS). In addition, Section 6 describes a modification of our protocol in which the identities of the parties are hidden from a passive adversary (this is sometimes a desirable property and is an option in the IKE protocol). This anonymity property is impossible to achieve in a 2-round scheme,⁵ and so a third round is required. Additionally, since maintaining anonymity requires the splitting of the key/signature pair usually sent in the first round, we cannot use the message-recovery technique there (but it can still be used when signing the ciphertext in the second flow); thus the total communication complexity will be somewhat larger than in our 2-round scheme.

⁴ We point out that this is in contrast to using message-recovery mode in other hash-and-sign signatures, such as RSA. In those cases, the signature size does not increase in message-recovery mode, and so this mode is always advantageous to use.
⁵ The intuition is that the player who moves first has to send his signed message in the clear because there is no encryption key (public or private) available to him at the start of the protocol. Therefore a passive adversary can simply perform a verification procedure with that player's public verification key to see if he is indeed the sender.


                              New Hope [ADPS16]            Our Scheme
Hash-and-Sign                 naive    message-recovery    naive    message-recovery
First flow size (bits)        ≈24000   ≈19600              ≈23300   ≈18900
Second flow size (bits)       ≈25800   ≈21400              ≈23600   ≈19200
Total communication (bits)    ≈49800   ≈41000              ≈46900   ≈38100

Table 2. Comparison of our paper with [ADPS16]. For the comparison to be meaningful, we consider the AKE obtained by adding a hash-and-sign signature (either without or with message recovery) to the scheme in [ADPS16]. We illustrate the savings of message-recovery mode by presenting the naive generic AKE construction and one that uses the digital signature in message-recovery mode.

The comparison of our AKE with n = 1024 to the one in [ADPS16] is given in Table 2. Because [ADPS16] only proposed a KEM, we combine it with the digital signature scheme that we use for our KEM in this paper. One can see that our AKE is slightly shorter than the one from [ADPS16] for essentially the same security level. Also, the message-recovery technique reduces the communication lengths of both AKEs by the same amount.

1.4 Computational Efficiency

We will now discuss the efficiency of our AKE. The KEM part of our scheme requires generation of small polynomials and arithmetic operations in the ring Z_q[x]/⟨x^n + 1⟩. Rather than generating polynomials with each coefficient being independently chosen from some distribution, we follow the original NTRU way of generating such polynomials by prescribing exactly how many of each coefficient the polynomial should have. Such polynomials can be created using n random swaps within an integer array. We remark that while such an algorithm would be vulnerable to timing attacks, the permutation can be done in constant time using a sorting network, as in e.g. [BCLvV16]; this incurs a small overhead, resulting in a complexity of O(n log² n). Addition and subtraction similarly require O(n) integer operations. Multiplication and division can be done in quasi-linear time by employing the Number Theoretic Transform (i.e. the FFT over a finite field), which has very efficient implementations (e.g. [LN16]). The prime q = 12289 was chosen such that x^n + 1 (for n = 512 and n = 1024) splits into n linear terms and is therefore amenable to the Number Theoretic Transform. We point out that this is the same ring that was used in [DDLL13, ADPS16], and those works produced schemes that were faster than number-theoretic schemes of comparable security parameters. The KEM part of our scheme is therefore very efficient.

Lattice-based signatures that use the hash-and-sign approach use a technique known as lattice pre-image sampling [GPV08, Pei10, MP12], and this generally results in digital signatures that are longer and much less efficient to compute than those generated using the Fiat-Shamir approach [Lyu12, GLP12, DDLL13]. But there has been a lot of recent work on trying to optimally (and securely) instantiate hash-and-sign signatures over polynomial rings. In [DLP14], it was shown that hash-and-sign signatures over carefully-constructed NTRU lattices that are very similar to those in [HHGP+03] may be instantiated in a way such that they have signature sizes that are somewhat smaller than the most compact Fiat-Shamir NTRU-based signature [DDLL13]. Then it was shown in [LP15] that for many polynomial rings, the GPV sampling algorithm [GPV08] that is used to produce the compact signatures in [DLP14] can actually be run in O(n²) time (and in O(n) space) rather than the O(n³) time required for general lattices. And very recently, it was further shown that pre-image sampling can be done in quasi-linear time over the rings Z_q[x]/⟨x^n + 1⟩ using ideas from the FFT procedure [DP15].

Asymptotically, therefore, hash-and-sign signatures are as efficient as the Fiat-Shamir ones. The caveat is that the lattice pre-image sampling requires the intermediate storage of a vector of high-precision approximations to real numbers.⁶ This requires roughly 300-700K bits of storage and so may not be a suitable option for constrained devices. One should therefore consider the situation in which the AKE is used before deciding whether using hash-and-sign signatures (and thus benefiting from their message-recovery mode) is appropriate. The most common scenario in which it would be appropriate is TLS, where only the server (which is usually not a device with strong computational constraints) is being authenticated. On the other hand, if mutual authentication is required and one of the devices has resource limitations, then the hash-and-sign approach may not be appropriate and one may choose to forego the savings in the communication complexity.

1.5 Security

Obtaining the exact computational hardness of lattice-based schemes is an extremely difficult problem. In order to put our work into context and obtain an "apples-to-apples" comparison, we will use some security estimates from the recent work in [ADPS16]. The most efficient attacks against the KEM and signature scheme which comprise lattice-based AKEs are lattice reduction attacks.⁷ The lattice attacks fall into two categories – sieving and enumeration (we discuss these in more detail in Section 3.2).

The attacks based on sieving are asymptotically more efficient, but have a very big downside in that the space complexity is essentially equivalent to the time complexity. Moreover, all known approaches to sieving have lower bounds on the required space complexity, and it would be a huge breakthrough if an algorithm were discovered that required less space. The attacks based on enumeration are less efficient time-wise, but do not require a lot of space, and thus are the attacks that are preferred in practice. We analyze our schemes with respect to both attacks following the methodology in [ADPS16].⁸

All the schemes that we propose have complexity against enumeration attacks (with quantum speedup) larger than 128 bits (see Tables 3 and 4). Furthermore, all our schemes, with the exception of the digital signature component of the AKE for n = 512, have security larger than 128 bits against sieving attacks as well. While the signature scheme for n = 512 appears to have less than 128 bits of security, we point out that the sieving complexity we state follows that from [ADPS16], which uses the asymptotic complexity while leaving out the lower-order terms. Those lower-order terms are in fact quite significant in practice, and put the security of the signature scheme above 128 bits. The analysis in [ADPS16] was purposefully done to be extremely optimistic in favor of the attacker, and for that reason, that paper only proposes parameters for n = 1024. But they admit that there is no currently-known attack that exceeds 2^{128} for n = 512.

⁶ It was shown in [LW15] that one can do pre-image sampling without high-precision arithmetic, but the resulting vector (and thus the signature size) ends up being larger than when using sampling procedures such as [GPV08].
⁷ There are also combinatorial attacks (e.g. [How07]), but the dimensions considered in this paper are too high for them to be effective.
⁸ The paper [ADPS16] also discussed a "distinguishing" attack, but such an attack does not seem to be relevant in our case because the security in our AKE is based on the 1-wayness of the KEM – thus on a search, rather than a decision, problem.

There have recently been attacks on NTRU encryption schemes where the modulus is much larger than the size of the secret polynomials f, g [ABD16, CJL16]. But those attacks do not apply to schemes such as ours, where the size of these polynomials is not too far from the range in which their quotient will be uniformly random in the ring [SS11].

1.6 Our Recommendations for Lattice-Based AKE

We presented two approaches for improving the generic construction of an AKE scheme from lattice assumptions. The first approach introduces a very small chance of protocol failure, but increases security. The second approach reduces the communication size of the flows in the AKE assuming that the public key and ciphertexts of the KEM are “large enough”.

If one were to make a recommendation for parameter sizes to be used today, one should probably err on the side of caution and recommend that one use n = 1024. In this sense, we agree with the decision to only propose parameters for n = 1024 in the Ring-LWE based KEM of [ADPS16]. On the other hand, there are currently no attacks that make our scheme with n = 512 less than 128-bit secure. Nor are there really algorithmic directions that look promising enough to make a significant impact on this parameter range. Thus the main reason for recommending n = 1024 is to guard against completely surprising novel attacks – in other words, to guard against “unknown unknowns.” But we believe that if cryptanalysis over the next several years intensifies and still does not reveal any novel attack ideas, it is definitely worth re-examining the possibility that the parameters for n = 512 are indeed secure enough.

As in [ADPS16], when we set n = 1024, the security against even the most optimistic attacks is way above the 128-bit threshold. In this case, we see little sense in using KEM parameters that increase security at the expense of having a 2^{-30} chance of decapsulation errors (i.e. those in column III of Table 3). Thus if one were to set n = 1024, then we recommend using the parameters in column IV of Table 3 along with those in column II of Table 4 for the signature. And one should use the signature in message-recovery mode. The communication size of the resulting AKE is in Table 2.

If one were to use n = 512, then one could use the KEM parameters in column I of Table 3. In this case, an increase of 13 bits (compared to column II) in the security against sieving attacks may be worthwhile. One could then also use the parameters from column I of Table 4 for the signature. Note that the complexity against sieving attacks is not quite 128 bits, but we remind the reader that this does not take into account the lower-order terms of the sieving attack, which are significant. Also, such an attack requires over 2^{85} space, which makes it impossible to mount. Because the enumeration complexity is still higher than 128 bits, and authentication is not as critical as secrecy,⁹ we believe that using the parameters in column I gives a good trade-off between security and efficiency in all but the most sensitive applications.

As mentioned in Section 1.4, the hash-and-sign signatures that we propose may not be suitable if the device doing the signing is extremely limited in computational resources. In this case, we would recommend combining our KEM with a Fiat-Shamir signature such as BLISS (perhaps adapted to n = 1024) [DDLL13]. And if one does not want to perform any high-precision arithmetic or Gaussian sampling, then one can combine our KEM with the Fiat-Shamir scheme in [GLP12], which only requires sampling from the uniform distribution. Also, if one uses the AKE in a way that is not completely forward-secure, i.e. the KEM public key does not change with each interaction, then our security reduction in case of decapsulation errors no longer holds. In this case, one should use our KEM with the parameters in columns II and IV of Table 3.

⁹ If an attack on the signature scheme were discovered, the scheme could be changed, whereas an attack on the KEM would reveal all previous secret communication.

We also mention that the modulus of the rings we use in the paper – 12289 – was chosen for efficiency purposes. It is the smallest integer q such that the polynomial x^n + 1, for n = 512 or 1024, can be factored into linear factors with integer coefficients modulo q. Such a factorization, combined with the fact that n is a power of 2, allows one to perform multiplication and division operations in quasi-linear time using the Number Theoretic Transform. If one did not care about optimizing the efficiency of the AKE, then we could have chosen a smaller q and then performed multiplication using other techniques (e.g. Karatsuba multiplication). The advantage of choosing a smaller modulus is that the ciphertext, which is of length ≈ n log q, would be shorter. Of course, if we use our schemes with a different modulus, we would have to change the distribution of all the variables in order to maintain similar security and decapsulation error. So while one cannot choose this modulus to be too small, it seems that something on the order of 2^{11} would be possible. Compared to using the current modulus of 12289, this could result in a savings of approximately 3n bits in the length of the KEM public key and ciphertext. Whether this is something worth doing would depend on the scenario in which the AKE is employed. The main reason that we only use 12289 in this paper is to obtain a fair comparison to [ADPS16], whose scheme also used this modulus.

1.7 Paper Organization

In Section 2, we state the formal security definitions of a digital signature, a KEM, and an AKE. In Section 3, we describe our KEM and discuss its security for the concrete parameters in Table 3. In Section 4, we present the construction and security proof for using a lattice-based hash-and-sign signature in message-recovery mode. In Section 5, we then present a generic construction of an AKE scheme from a KEM and a digital signature scheme, where the KEM may result in decapsulation errors. In Section 6, we also discuss a protocol in which the identity of the participants is hidden from a passive eavesdropper.

1.8 Acknowledgements

We thank Léo Ducas for very helpful discussions related to lattice reduction algorithms and to [ADPS16].

2 Cryptographic Preliminaries

In this section, we describe the formalisms and security models of authenticated key exchange and its underlying building blocks.

2.1 Digital Signatures

A digital signature scheme Σ consists of 3 (randomized) algorithms – SigKeyGen, Sig, and Ver. The key generation algorithm, SigKeyGen, takes as input a string whose length is the security parameter, and outputs a secret signing key K_s and a public verification key K_v. The signing algorithm Sig takes as input a signing key K_s and a message m, and outputs a signature σ.

In order to get a unified notation for both "signature with appendix", where the signature is appended to the unmodified message, and "signature with message-recovery", where the message is embedded in the signature itself, we assume that the signature σ contains (either explicitly or implicitly) the message. The verification algorithm Ver takes the verification key K_v and a signature σ, and outputs either a message m or ⊥ (for failure, or invalid signature).


The correctness of the digital signature scheme is satisfied if for all (K_s, K_v) pairs output by the key generation algorithm, and all messages m in the message space, we have Ver(K_v, Sig(K_s, m)) = m. On the other hand, the security of a digital signature scheme is usually defined via the standard existential unforgeability against adaptive chosen-message attacks [GMR88]: upon getting the public verification key K_v, the adversary is allowed to query a signing oracle and then attempts to produce a signature of a new message.

In our generic AKE construction, a slightly stronger security notion will be required, the so-called strong unforgeability (a.k.a. non-malleability) against adaptive chosen-message attacks [SPMLS02, ADR02]: upon getting the public verification key K_v, the adversary is allowed to query a signing oracle and then attempts to produce a new signature (possibly on an already signed message).

The quality of an adversary A is measured by its success probability, denoted Succ^{suf-cma}_Σ(A), in forging a new valid signature, and we denote by Succ^{suf-cma}_Σ(t) the best success probability an adversary can get within time t.

2.2 Key Encapsulation Mechanism (KEM)

A key encapsulation mechanism KEM [Sho01] is defined via 3 (randomized) algorithms – KEMKeyGen, Enc, and Dec. The key generation algorithm, KEMKeyGen, takes as input a string whose length is the security parameter, and outputs a secret decapsulation key K_d and a public encapsulation key K_e. The encapsulation algorithm Enc takes as input the public key K_e and outputs an ordered pair (c, k), where c is a ciphertext that encapsulates the key k. The decapsulation algorithm Dec takes as input the secret key K_d and c, and outputs a key k'.

The key encapsulation mechanism is correct if for all (K_d, K_e) key pairs produced by KEMKeyGen and (c, k) produced by Enc(K_e), we have Dec(K_d, c) = k. But this may not always be the case: we then say that KEM is ε-correct (or is an ε-KEM) if

    Pr_{K_d, K_e, c, k}[Dec(K_d, c) = k] ≥ ε,

where all the input variables follow the distributions (K_d, K_e) ← KEMKeyGen(1^λ) and (c, k) ← Enc(K_e). We will say that such a key encapsulation c, that can be decapsulated to the expected key k, is valid, or by abuse of notation, the pair (c, k) will be said to be valid.

On the other hand, the security of a key encapsulation mechanism is usually defined via the classical indistinguishability of the encapsulated key k: Upon getting a pair (c, k) where k is either the actual key encapsulated in c or a random key, independent from the key encapsulated in c, the adversary attempts to guess whether this is the actual or a random key.

However, in our application, a much weaker security notion will be enough, the so-called one-wayness property: upon getting the encapsulation c of the key k, the adversary has to output k. The quality of an adversary A is measured by its success probability in outputting k:

    Succ^{ow}_{KEM}(A) = Pr[A(K_e, c) = k | (K_d, K_e) ← KEMKeyGen(1^λ), (c, k) ← Enc(K_e)].

As above, we denote by Succ^{ow}_{KEM}(t) the best success probability an adversary can get within time t.

We stress that in the above definition with possible invalid ciphertexts (a.k.a. decryption failures), we only consider the hardness of recovering the encapsulated key, and do not say anything about the decapsulated key (if they are not the same), as exploited in the counter-example in the previous section.

An additional property is the checkability of the encapsulated key, when for (K_e, c) and a candidate k, it is easy to check whether k is the actual encapsulated key. It is a classical result that from a one-way KEM, one gets an indistinguishable KEM by simply using H(k) as the new encapsulated key, when H is modeled as a random oracle [BR93]. Checkability allows for a tight reduction between the two security notions in the random oracle model.
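A two-line sketch of this transform, where enc is any one-way encapsulation returning (c, k) (a parameter of our own choosing, not a fixed API):

import hashlib

def enc_ind(enc, Ke):
    # Derive the actual session key as H(k); in the random-oracle model this
    # upgrades one-wayness to indistinguishability.
    c, k = enc(Ke)
    return c, hashlib.sha256(b"derive|" + k).digest()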

2.3 Authenticated Key Exchange (AKE)

An authenticated key exchange AKE [BR94,BCK98] is an interactive protocol between 2 entities that hold authentication means (a common secret key, a common password, or private keys for a public-key digital signature scheme) and that eventually agree on a common random ephemeral session key sk.

The correctness of an authenticated key exchange protocol is satisfied if, when the two players interact with their intended partners (the players with whom they want to agree on a session key), they end up terminating the protocol in possession of the same session key. But as in the above KEM, this may not always be the case: we then say that AKE is ε-correct (or is an ε-AKE) if honest players agree on the same session key with probability greater than ε. In case of failure, we run the protocol again: it just needs not to fail too often, and perhaps even a 10% failure rate would not be a major issue in practice.

On the other hand, the security of an authenticated key exchange protocol means that no adversary should be able to distinguish the actual session key agreed on by an honest player from a random bitstring. Let us thus define the security game more formally. An honest player P has several instances P_i that can run concurrent executions of the protocol, and the adversary has access to the following oracles to interact with the various instances of the different honest players:

– Send(P_i, m) causes the message m to be sent to the i-th instance P_i of the player P, and the output is the message that P_i generates from such a message m. A special message "Start" is used to ask the instance P_i to initiate a new execution of the protocol. This models the fact that the adversary controls the network and any communication routed through it. In particular, he can forward, replay, block, or alter the messages;
– Reveal(P_i): if P_i has terminated and holds a session key, this key is output. This models the later misuse of the session key, which might be leaked if used with a weak cryptographic system;
– Test(P_i): if P_i is fresh, one flips a bit b: if b = 1, the session key sk held by P_i is sent back, otherwise a truly random key is sent. However, if P_i is not fresh, the actual value of sk (possibly ⊥, if not yet defined) is returned. This query can be asked once only, and it characterizes the randomness of fresh session keys.

The adversary eventually has to guess the bit b. We stress that we are using the find-then-guess flavor of indistinguishability [BDJR97] only, where a unique session key can be tested. But it is well-known that, using hybrids, this security notion implies the real-or-random definition, where the adversary could test many session keys, which thus guarantees randomness and independence of session keys that are fresh. The restriction to fresh keys, or fresh instances, is there to avoid trivial attacks: an instance P_i is fresh if

– P_i holds a key;
– neither P_i nor its partner has been asked a Reveal-query.

The latter restriction thus excludes situations where the adversary has already asked for that key and therefore already knows it. For this case, the notion of partnership is also important, and is defined by matching sessions: two instances are partners if their views of the protocol are the same.


Basically, two partners should terminate with the same session key with reasonable probability (ε-correctness) or abort. On the other hand, for all the fresh instances/keys, the adversary should not be able to distinguish the actual session keys from random bitstrings. Hence the adversary A outputs its guess b' for the bit b involved in the Test-query. The quality of the adversary A is measured by its advantage in guessing b:

    Adv^{ind}_{AKE}(A) = | Pr[b' = 1 | b = 1] − Pr[b' = 1 | b = 0] |.

As above, we denote by Adv^{ind}_{AKE}(t) the best advantage an adversary can get within time t.

2.4 Forward-Secure AKE

An enhanced security notion for AKE and ε-AKE is forward secrecy, which means that the above indistinguishability of the actual session key from a random bitstring should hold for previous sessions, even after the corruption of the long-term authentication means. In the security game, the adversary has access to an additional oracle:

– Corrupt(P) causes the long-term authentication means of the player P to be given to the adversary.

Then, the forward-secure freshness property is slightly different: an instance P_i is fs-fresh if

– P_i holds a key;
– neither P_i nor its partner has been asked a Reveal-query;
– neither P_i nor its partner has been asked a Corrupt-query before terminating the AKE protocol execution.

The quality of the adversary A is then measured by the advantage

    Adv^{fs-ind}_{AKE}(A) = | Pr[b' = 1 | b = 1] − Pr[b' = 1 | b = 0] |.

As above, we denote by Adv^{fs-ind}_{AKE}(t) the best advantage an adversary can get against the forward-secure indistinguishability security game within time t.

2.5 Identity Hiding AKE

In the context of the “post-specified peer” model [CK02], in which the two parties do not know the identity of their partner before exchanging keys but learn it during the protocol, it can be natural to require identity protection from the key exchange protocol. This is an identity hiding notion, where parties should only learn each other’s identity if they are interacting with their intended partner. The anonymity of the key exchange can be defined against either passive or active attackers and requires that an adversary should not be able, after either observing an exchange or interacting with an honest player, to distinguish the identity of an honest player in the protocol from any other identity. We specify the security game in a more formal way in Section 6.

3 KEM from NTRU

In this section, we instantiate a KEM based on the hardness of the NTRU problem.

The NTRU problem deals with finding short solutions to polynomial equations over certain rings. Some of the more "popular" rings to use are Z[x]/⟨x^n − 1⟩ where n is a prime integer, and Z[x]/⟨x^n + 1⟩ where n is a power of 2. Starting from the seminal work of [HPS98] where the NTRU problem was first defined, there have been many different flavors of the problem, mostly differing in the underlying distributions from which the keys and randomness are generated.

Elements in Z[x]/⟨x^n ± 1⟩ are represented as polynomials of degree at most n − 1, and reduction modulo an odd q maps the coefficients into the range [−(q − 1)/2, (q − 1)/2]. We also define the norm of an element in Z[x]/⟨x^n ± 1⟩ to simply be the ℓ_2-norm of the vector formed by its coefficients. In all variants of the NTRU problem, there are some subsets D_e, D_f of Z[x]/⟨x^n ± 1⟩ that consist of polynomials with coefficients of small norms. Furthermore, the polynomials from the set D_f are also invertible in both Z_q[x]/⟨x^n ± 1⟩ and Z_2[x]/⟨x^n ± 1⟩.

The NTRU trap-door function intuitively rests on two assumptions. The first is that when one is given a polynomial h = f/g mod q where f, g ← D_f, it is hard to recover f and g. The second assumption is that when one is given an h generated as above and t = 2hr + e mod q where r, e ← D_e, it is hard to recover e mod 2.¹⁰ When one has the trap-door g, however, one can recover e by first computing gt mod q = 2fr + ge. If the modulus q is large enough (i.e. 2fr + ge in Z[x]/⟨x^n ± 1⟩ is equal to 2fr + ge in Z_q[x]/⟨x^n ± 1⟩), then the preceding is equal to ge in the ring Z_2[x]/⟨x^n ± 1⟩. Since g has an inverse in Z_2[x]/⟨x^n ± 1⟩, one can then divide by g to recover e mod 2.

Definition 1. For some ring R = Z_q[x]/⟨x^n ± 1⟩ and subsets of the ring D_f and D_e, generate polynomials f, g ← D_f and e, r ← D_e. Define h = f/g mod q and t = 2hr + e mod q. The NTRU(R, D_f, D_e) problem asks to find e mod 2 when given h and t.

The distributions D_f and D_e will depend on the security and failure probability of our scheme. We direct the reader to Table 3 and Section 3.1. We now present a simple KEM whose one-wayness (see Section 2.2) is directly based on the NTRU(R, D_f, D_e) problem in Definition 1, and whose correctness is based on the discussion preceding the definition.

KEMKeyGen {
    f, g ←$ D_f, h ← f/g mod q
    Return (K_d, K_e) = (g, h)
}

Enc(h) {
    r, e ←$ D_e, t ← 2hr + e mod q
    Return (c, k) = (t, e mod 2)
}

Dec(g, t) {
    Return k = ((gt mod q) mod 2)/g mod 2
}
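The following toy-scale Python sketch instantiates the three algorithms over Z_q[x]/⟨x^8 + 1⟩; the real scheme uses n = 512 or 1024 and the distributions of Table 3, for which the uniform coefficients in [−2, 2] below are a simplistic stand-in. The inverse modulo q uses a naive O(n²) negacyclic number-theoretic transform, and the inverse modulo 2 is found by brute force, which only works at this toy size (at n = 8 with such tiny coefficients, no decapsulation failure can occur).

import random
from itertools import product

n, q = 8, 12289                   # toy dimension; the paper uses 512/1024
psi = next(x for x in range(2, q) if pow(x, n, q) == q - 1)  # 2n-th root of unity

def ntt(a):    # evaluate a at the odd powers psi^(2i+1); O(n^2) but simple
    return [sum(c * pow(psi, (2 * i + 1) * j, q) for j, c in enumerate(a)) % q
            for i in range(n)]

def intt(v):   # interpolate the evaluations back to coefficients
    ninv = pow(n, -1, q)
    return [ninv * sum(vi * pow(psi, -(2 * i + 1) * j, q)
                       for i, vi in enumerate(v)) % q for j in range(n)]

def mul_q(a, b):   # product in Z_q[x]/<x^n + 1>
    return intt([x * y % q for x, y in zip(ntt(a), ntt(b))])

def mul_2(a, b):   # product in Z_2[x]/<x^n + 1> (x^n = -1 = 1 mod 2)
    c = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            c[i + j] += a[i] * b[j]
    return [(c[i] + c[i + n]) % 2 for i in range(n)]

def inv_2(g):      # brute-force inverse in Z_2[x]/<x^n + 1>; toy n only
    one = [1] + [0] * (n - 1)
    g2 = [x % 2 for x in g]
    for cand in product((0, 1), repeat=n):
        if mul_2(g2, list(cand)) == one:
            return list(cand)
    return None

small = lambda: [random.randint(-2, 2) for _ in range(n)]   # stand-in for D_f, D_e

# KEMKeyGen: h = f/g mod q, with g invertible mod q and mod 2
while True:
    f, g = small(), small()
    gq, g2inv = ntt(g), inv_2(g)
    if all(gq) and g2inv:
        break
h = intt([fi * pow(gi, -1, q) % q for fi, gi in zip(ntt(f), gq)])

# Enc: c = t = 2hr + e mod q, shared key k = e mod 2
r, e = small(), small()
t = [(2 * x + y) % q for x, y in zip(mul_q(h, r), e)]

# Dec: center gt mod q, reduce mod 2, then divide by g in Z_2[x]/<x^n + 1>
gt = [((x + q // 2) % q) - q // 2 for x in mul_q(g, t)]     # = 2fr + ge over Z
k = mul_2([x % 2 for x in gt], g2inv)
assert k == [x % 2 for x in e]    # decapsulation recovers e mod 2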

3.1 KEM Parameters

Table 3 contains our proposed parameter choices for the KEM. To explain the table, we will use the first column as a running example. The polynomial ring considered in this instantiation is Z_12289[x]/⟨x^512 + 1⟩, and the secret key and randomness parameters f, g, e, and r are chosen as random permutations of degree-512 polynomials that have 1 coefficient set to ±12 (each), 1 to ±11, 3 to ±10, etc., and 48 set to 0. The norm of the secret key and error elements (f, g) and (r, e) in this instance is approximately 93.21.

Note that for security, one should use a distribution that produces the largest vectors possible while not resulting in too many decryption failures. The most appropriate such distribution is the normal distribution (either the discrete normal or a rounded continuous).¹¹

¹⁰ This is somewhat different from the standard NTRU assumption in that we are going to allow the coefficients of e to be larger than 2, but only require e mod 2 to be recovered. This is actually more related to an NTRU encryption scheme that was first introduced in [SS11] where the message was hidden in the lower-order bits of the error. One could then think of our KEM as an encryption of a random message. But since the message itself is random, its randomness contributes to the noise, making it larger.
¹¹ The paper [ADPS16] proposes to use the binomial distribution, which is a good approximation of the normal distribution and is not too difficult to generate. It should be pointed out that the distribution does not really affect the security of the scheme – of main importance is the norm of the generated vectors.


                                    I          II         III        IV
Polynomial                          x^512+1    x^512+1    x^1024+1   x^1024+1
Modulus                             12289      12289      12289      12289
±12 coeff.                          1          0          0          0
±11 coeff.                          1          0          1          0
±10 coeff.                          3          0          2          0
±9 coeff.                           5          1          4          0
±8 coeff.                           8          2          8          1
±7 coeff.                           12         4          15         3
±6 coeff.                           17         9          26         9
±5 coeff.                           24         17         42         22
±4 coeff.                           31         28         61         46
±3 coeff.                           38         41         81         80
±2 coeff.                           44         55         100        118
±1 coeff.                           48         65         113        150
0 coeff.                            48         68         118        166
σ                                   4.151      2.991      3.467      2.510
sk norm                             ≈93.21     ≈67.17     ≈110.42    ≈79.54
bits (pk and ciphertext)            7168       7168       14336      14336
bits (sk)                           2560       2560       5120       5120
failure prob.                       ≈2^{-30}   ≈2^{-128}  ≈2^{-30}   ≈2^{-128}
block size                          487        438        1026       939
sieving complexity (log #ops)       >128       >115       >269       >246
sieving space (log bits)            >114       >104       >227       >209
enumeration complexity (log #ops)   >185       >157       >645       >503

Table 3. Parameters for the NTRU KEM


Such an operation, though, might be somewhat more costly than simply creating a random permutation of a fixed vector. We thus fix the coefficients of our polynomials to be as close to a discrete Gaussian as possible. Concretely, the number of coefficients set to an integer k is the probability that a discrete Gaussian of parameter σ outputs k, multiplied by the degree of the polynomial; e.g. 512 · Pr[D_{4.151} = 12] ≈ 1, so we fix the number of coefficients with value 12 to one in our first parameter set. Public keys and ciphertexts are polynomials in Z_12289[x]/⟨x^512 + 1⟩ and thus need 512 · ⌈log 12289⌉ = 7168 bits of storage memory. On the other hand, since the coefficients of the secret key are no larger than 12, they can be stored in 5 bits, resulting in a secret key of size 512 · 5 = 2560 bits. To evaluate the failure probability of decapsulation, we model the polynomial as having Gaussian coefficients; using an error analysis similar to the one of Section 5.4.1 in [MR08], we obtain the following probability of failure:

    Pr[ ‖2fr + ge‖_∞ > (q − 1)/2 ] = 1 − erf( (q − 1) / (√(40n) · σ²) ),

where erf is the Gauss error function. Though we use permutations of fixed polynomials rather than Gaussians, experiments show that the error rate is close to the expected one.
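The following short Python check re-derives both numbers for column I under one assumption of ours: a union bound over the n coefficients to go from the per-coefficient expression above to the overall failure rate (the paper's exact accounting may differ, but this reproduces the ≈ 2^{-30} figure of Table 3).

import math

n, q, sigma = 512, 12289, 4.151    # parameter set I of Table 3

# Fixed coefficient counts: round(n * Pr[D_sigma = k]) for each value k.
z = sum(math.exp(-j * j / (2 * sigma ** 2)) for j in range(-50, 51))
counts = {k: round(n * math.exp(-k * k / (2 * sigma ** 2)) / z) for k in range(13)}
print(counts)   # close to column I of Table 3; e.g. counts[12] == 1

# Per-coefficient failure 1 - erf((q-1)/(sqrt(40n) * sigma^2)), then a union
# bound over the n coefficients.
x = (q - 1) / (math.sqrt(40 * n) * sigma ** 2)
print(math.log2(n * math.erfc(x)))   # approximately -30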

3.2 Security of the KEM

The most effective attacks against the NTRU problem involve lattice reduction, and this is the reason why cryptographic constructions based on NTRU are usually referred to as lattice-based schemes. Consider an attack that tries to find the key e from the public key h and the ciphertext t = 2hr + e mod q. We know that 2hr + e − t = 0 mod q. If we define the set

    Λ = { (y_1, y_2, z) : y_i ∈ Z[x]/⟨x^n ± 1⟩, z ∈ Z and 2hy_1 + y_2 − zt = 0 mod q },

then it is not hard to see that it is an additive group over Z^d with d = 2n + 1, and thus a lattice. Also notice that the tuple v = (r, e, 1), if seen as a vector over Z^d, is in fact a "short vector" in this lattice. Note that we could have also tried to find the secret key g from the public key h, but that problem is strictly harder (as a lattice problem) than the one we consider here.

The best known approach to solving this problem is through the use of the BKZ algorithm. This algorithm reduces a basis of the lattice by iteratively solving the shortest vector problem (SVP) in sublattices of fixed dimension β. To assess the hardness of finding the key e, we need to compute the necessary block size β and evaluate the hardness of solving SVP in dimension β. We follow the analysis of [ADPS16].¹² This attack tries to recover the vector v = (r, e, 1) ∈ Λ by solving a unique-SVP problem in this lattice, i.e. we exploit the fact that v is constructed in such a way that all the other linearly-independent vectors in Λ will be much larger.

We model the behavior of BKZ using the geometric series assumption, i.e. the basis output by BKZ is such that ‖b*_i‖ = δ^{d−2i−1} · Vol(Λ)^{1/d}, where (b*_i)_{1≤i≤d} is the Gram-Schmidt orthogonalization of the basis and δ = ((πβ)^{1/β} · β/(2πe))^{1/(2(β−1))}. By construction of the BKZ algorithm, we know that ‖b*_{d−β+1}‖ will be the norm of the shortest vector in the lattice obtained by projecting Λ onto the last β vectors of the Gram-Schmidt basis. Thus if the norm of the projection of v is smaller than the expected norm of ‖b*_{d−β+1}‖, this vector will be detected when using BKZ. The expected norm of the projection of v is ‖v‖ · √(β/d), and using the fact that Λ has volume q^n, we obtain the following condition on the block size β:

    ‖v‖ · √(β/d) ≤ δ^{2β−d−3} · q^{n/d}

¹² It is worth noting that the attack defined as the dual attack in that paper does not apply: this attack creates a distinguisher (in our case, to know whether t is well-formed or completely random), but it is not clear how such a distinguisher can be used to break the one-wayness of our KEM.


We obtain the block sizes given in Table 3 by solving this equation (we in fact consider a slightly more general equation which relates to the one given in [ADPS16]).
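As an illustration, the simplified condition can be solved numerically as below; since, as noted, the paper uses a slightly more general equation, this sketch lands in the same ballpark as, but not exactly on, the block sizes of Table 3 (assumed inputs: parameter set I, ‖v‖ ≈ 93.21).

import math

n, q, v_norm = 512, 12289, 93.21   # parameter set I; ||v|| ~ ||(r, e, 1)||
d = 2 * n + 1

def delta(beta):                   # root Hermite factor under the GSA
    return ((math.pi * beta) ** (1 / beta) * beta /
            (2 * math.pi * math.e)) ** (1 / (2 * (beta - 1)))

beta = next(b for b in range(100, d)
            if v_norm * math.sqrt(b / d)
            <= delta(b) ** (2 * b - d - 3) * q ** (n / d))
print("block size:", beta)                            # Table 3 column I reports 487
print("sieving: 2^%.0f ops" % (0.292 * beta))         # classical [Laa15, BDGL16]
print("quantum sieving: 2^%.0f ops" % (0.262 * beta)) # with Grover [LMvdP15]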

We now consider the time needed to solve the SVP problem in dimension β. It is worth noting that the BKZ algorithm in fact needs to make a large number of calls to its SVP-solving oracle, which implies that the complexity of running BKZ can be much larger than the complexity of solving SVP in dimension β. There exist two main approaches to solving SVP:

– Enumeration: The shortest vector of the lattice is found by enumerating all the vectors in a sphere of a given radius. This approach does not need large storage space, as it proceeds by backtracking down a search tree. It is however (asymptotically) slower than sieving, as it runs in time 2^{O(n²)}. The complexity of the enumeration algorithm depends on the quality of the input basis; as such, the best known algorithm for enumeration relies on recursively applying BKZ basis reduction as a subroutine of enumeration and pruned enumeration as a subroutine of BKZ reduction. While this algorithm remains super-exponential (and thus highly impractical in high dimensions such as the ones of our parameter sets III and IV), it has rather small constants in the exponents and benefits from a quadratic speedup in the quantum setting by applying the results of [Mon15]. We extrapolate the results of [Che13] to the block sizes given in Tables 3 and 4 and halve the result to obtain our (quantum) enumeration complexity.
– Sieving: The shortest vector of the lattice is found by sampling an exponential number of lattice vectors and combining them to create increasingly shorter vectors. All known sieving algorithms need a list of at least (4/3)^{β/2+o(β)} ≈ 2^{0.2075β+o(β)} vectors to be successful (improving the space complexity is an open problem which seems rather unrealistic, as this lower bound is already the result of a rather optimistic heuristic assumption). The best known algorithm has complexity 2^{0.292β} [Laa15, BDGL16], which can be reduced to 2^{0.262β} on a quantum computer by using Grover's algorithm [LMvdP15]. We use this complexity to obtain the parameters in Tables 3 and 4.

For extremely high-security situations we would advocate the use of our third or fourth parameter set.¹³ But due to the high storage requirement that is intrinsic to the sieving attack, we believe that our first set is also secure against today's best attacks and those in the foreseeable future.

Notice that by raising the decapsulation error probability from 2^{-128} to 2^{-30} (between the second and first sets of parameters), we were able to gain the extra security necessary for the 128-bit security level. But since parameter set 4 already has extremely high security, we do not think that it makes too much sense to increase its security further by increasing the chance of decapsulation errors.

¹³ This is the same parameter set used in [ADPS16] and we achieve essentially the same security as in that paper.

4 Digital Signatures from NTRU

After the KEM, the second component of our AKE is a digital signature scheme. As for number-theoretic schemes, there are two ways to construct (efficient) lattice-based signature schemes. The first approach is via the Fiat-Shamir transform of a sigma protocol. The currently most efficient such protocol is BLISS, which was proposed in [DDLL13]. The second approach, hash-and-sign, was proposed by Gentry et al. [GPV08], and its most efficient instantiation is based on the hardness of finding short vectors in NTRU lattices [DLP14].

4.1 Hash-and-sign and Message Recovery

In this section we show how to adapt the hash-and-sign scheme from [DLP14] to create a signature scheme with message recovery. What this means is that instead of sending a signature and a message, one can simply send a larger signature which then allows for the entire message to be recovered. We first briefly outline the scheme from [DLP14]. In the scheme below, the distribution D_f is some distribution from which secret keys are drawn, and the distribution D_s is the distribution of signatures. The goal of the signer is to produce polynomials according to the distribution D_s, conditioned on the message that he is signing. He is able to do that using the fact that he knows the secret NTRU keys f and g.

SigKeyGen {
    f, g ←$ D_f, h ← f/g mod q
    Return (K_s, K_v) = ((f, g), h)
}

Sig((f, g), m) {
    t ← H(m)
    s_1, s_2 ←$ D_s such that hs_1 + s_2 = t mod q
    Return σ = (s_1, m)
}

Ver(h, σ = (s_1, m)) {
    t ← H(m)
    s_2 ← t − hs_1 mod q
    if ‖(s_1, s_2)‖ < B then accept else reject
}

We now give a brief intuition about the correctness of the scheme. The correctness relies on the fact that the polynomials s_1 and s_2 are drawn according to a discrete normal distribution D_s with a small standard deviation by using the trapdoor f, g (this can be done by using e.g. [GPV08, LP15]), so for an appropriate positive value B, the probability that ‖(s_1, s_2)‖ < B is overwhelming. The condition hs_1 + s_2 = t comes directly from the way s_1 and s_2 are obtained during the sampling.

We point out that as described above, the scheme needs to be stateful in order to be secure. In particular, it needs to output the same signature for the same m, and therefore store the signatures that were output. There are two simple ways to remove this requirement. The first way is for the signer to use a pseudo-random function on the message m (with an additional secret key) in order to derive the randomness that he will use to produce the signature. The second way is for the signer to pick a random string r and compute t ← H(m, r) instead of H(m), and then send r along with the signature. This way, the signer is almost certainly assured that he will never sign the same (m, r) pair twice.

We use the security analysis of Section 3.2 to revise the security of [DLP14] and also extend the parameters to include the ring Z_12289[x]/⟨x^1024 + 1⟩. The security of the scheme is based on the fact that it is hard to recover f and g from h, and that forging a signature implies finding short polynomials s_1, s_2 such that hs_1 + s_2 = 0 mod q (see [DLP14] for more formal security statements).

Based on the way that the parameters are set, recovering f and g is harder than the corresponding problem for the KEM, and so we focus on the problem of forging signatures. As in Section 3.2, this corresponds to solving an SVP instance. However, since the polynomials s_1 and s_2 are much larger than the polynomials of our KEM (here the coefficients of s_1 and s_2 have standard deviation 1.17·√q, which is ≈ 50 times larger than the ones used for the KEM), the corresponding problem is no longer unique-SVP, but rather an approximate-SVP one. To solve this problem we compute the γ factor of the associated lattice, as done in [DLP14] (see Table 4). To solve SVP using the BKZ algorithm, one needs the vector b_1 output by BKZ to be the shortest vector of the lattice, which corresponds to the condition δ ≤ γ where


                                             I          II
Polynomial                                   x^512+1    x^1024+1
Modulus                                      12289      12289
Signing key size (bits)                      ≈2100      ≈3700
Verification key size (bits)                 6956       13912
Message size (bits)                          6956       13912
Hash-and-sign size (bits)                    ≈11600     ≈23300
Message-recovery hash-and-sign size (bits)   ≈9600      ≈18900
Gamma factor                                 1.0041     1.0022
Block size                                   388        906
Sieving complexity (log #operations)         >102       >237
Sieving space (log bits)                     >85        >216
Enumeration complexity (log #operations)     >130       >520

Table 4. Signature parameters for n = 512 and 1024, q = 12289, and message m ∈ Z_q[x]/⟨x^n + 1⟩

δ = ((πβ)^{1/β} · β/(2πe))^{1/(2(β−1))}. From this equation we obtain the block size β, and the analysis of Section 3.2 gives the parameters of Table 4.

4.2 Signature with Message Recovery

Instead of sending the signature σ = (s_1, m) and then letting the verification algorithm recover s_2, it may sometimes be useful to send the signature as s_1, s_2 and let the verifier somehow recover m. This may be advantageous because s_1 and s_2 are drawn according to small Gaussians, and may be compressed (see Section 4.3), while m can be any polynomial in Z_q[x]/⟨x^n + 1⟩ and so cannot be encoded in less than n log q bits. In this scenario, a better solution would thus be to modify t so that sending s_1 and s_2 would allow the verifier to recover m. Our signature with message recovery can be used to recover messages of up to n log q − 256 bits; the scheme we define here can be used for messages m = (m_1 ‖ m_2) of arbitrary size, but the second part of the message, m_2, will not benefit from message recovery and thus needs to be output as part of the signature.

Sig((f, g), m = (m_1 ‖ m_2)) {
    t = (m_1 + F(H'(m)) mod q ‖ H'(m))
    s_1, s_2 ←$ D_s such that hs_1 + s_2 = t mod q
    Return σ = (s_1, s_2, m_2)
}

Ver(h, σ = (s_1, s_2, m_2)) {
    (t_1 ‖ t_2) ← hs_1 + s_2 mod q
    m_1 ← t_1 − F(t_2) mod q
    if ‖(s_1, s_2)‖ < B and H'(m_1 ‖ m_2) = t_2 then accept else reject
}
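The bit-level encoding and recovery can be sketched as follows. Treating m_1 directly as a vector of n − 20 coefficients mod q and packing H'(m) at ⌊log q⌋ = 13 bits per coefficient is one concrete choice of ours, not necessarily the paper's exact encoding, and the serialization used for hashing is likewise simplistic.

import hashlib

n, q = 1024, 12289
LOGQ = 13                          # floor(log q)
TAIL = -(-256 // LOGQ)             # ceil(256/13) = 20 coefficients for H'(m)

def Hp(m1, m2):                    # H' : {0,1}* -> {0,1}^256
    return hashlib.sha3_256(repr(m1).encode() + b"|" + m2).digest()

def F(h):                          # F : {0,1}^256 -> Z_q^(n - TAIL)
    out, ctr = [], 0
    while len(out) < n - TAIL:
        v = int.from_bytes(
            hashlib.shake_256(h + ctr.to_bytes(4, "big")).digest(2), "big")
        if v < q:
            out.append(v)
        ctr += 1
    return out

def pack(h):                       # 256-bit digest -> TAIL coefficients
    v = int.from_bytes(h, "big")
    return [(v >> (LOGQ * i)) & ((1 << LOGQ) - 1) for i in range(TAIL)]

def unpack(coeffs):
    v = sum(c << (LOGQ * i) for i, c in enumerate(coeffs))
    return v.to_bytes(32, "big")

def encode(m1, m2):                # t = (m_1 + F(H'(m)) mod q || H'(m))
    hm = Hp(m1, m2)
    return [(a + b) % q for a, b in zip(m1, F(hm))] + pack(hm)

def recover(t, m2):                # applied to t = h*s_1 + s_2 mod q, as in Ver
    t1, t2 = t[:n - TAIL], t[n - TAIL:]
    hm = unpack(t2)
    m1 = [(a - b) % q for a, b in zip(t1, F(hm))]
    return m1 if Hp(m1, m2) == hm else None

m1 = list(range(n - TAIL))         # demo round-trip
assert recover(encode(m1, b"tail"), b"tail") == m1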

While the hash function H mapped to a random element of Z_q[x]/⟨x^n + 1⟩ ≃ Z_q^n in the previous scheme, now we want (m_1 + F(H'(m)) mod q ‖ H'(m)) to be a random element of
