

In the document The DART-Europe E-theses Portal (Page 121-124)

Verifiable Polynomial Evaluation

4.6 Performance Analysis

This section evaluates the theoretical performance of our verifiable polynomial evaluation scheme. We study the storage overhead induced by our protocol as well as its communication and computation complexities. We show that, when adopting the amortized model defined by Gennaro et al. [90], our protocol meets the efficiency requirement defined in Section 3.4.

4.6.1 Storage

Data owner O is required to store and publish the public key (b0, r1, r0) ∈ G1 × G2². Server S, however, keeps the d+1 coefficients ai ∈ Fp of polynomial A and the d−1 encodings qi ∈ G2. Table 4.1 lists the storage complexity of our protocol.
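To make these orders of magnitude concrete, the storage costs can be sketched as follows; the element sizes FP_BITS, G1_BITS and G2_BITS are purely illustrative assumptions (actual sizes depend on the chosen pairing-friendly curve):

```python
# Illustrative storage estimate; the bit sizes below are hypothetical
# element sizes, not those of an actual curve.
FP_BITS = 256
G1_BITS = 256
G2_BITS = 512

def owner_storage_bits() -> int:
    """Data owner O stores (b0, r1, r0) in G1 x G2^2: constant size."""
    return 1 * G1_BITS + 2 * G2_BITS

def server_storage_bits(d: int) -> int:
    """Server S stores d+1 coefficients in Fp and d-1 encodings in G2."""
    return (d + 1) * FP_BITS + (d - 1) * G2_BITS
```

With these (assumed) sizes, a degree-500000 polynomial costs the server roughly 48 MB, while the owner keeps 1280 bits regardless of d.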

4.6.2 Communication

In terms of communication complexity, our verifiable polynomial evaluation solution requires constant bandwidth. Indeed, at the end of the execution of algorithm ProbGen, querier Q sends to cloud server S the encoding σx and the verification key VKx, requiring O(1) space.

Similarly, server S returns the encoding σy = (y, π), which amounts to O(1) space. Table 4.1 sums up the bandwidth complexity.

4.6.3 Computation

Algorithm Setup first generates a random coefficient b0 ∈ Fp to construct polynomial B, and conducts a Euclidean division of polynomial A by polynomial B. The latter operation consists of d multiplications and additions, where d is the degree of polynomial A. Once the Euclidean division is performed, algorithm Setup performs one exponentiation in G1 to derive b0, and d+1 exponentiations in G2 to compute r0, r1 and the qi. Although computationally expensive, algorithm Setup is executed only once by the client. Besides, its computational cost is amortized over the large number of verifications that third-party verifiers can carry out.
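The division step of Setup can be sketched as follows, assuming (as ProbGen's computation of x² suggests) a divisor of the form B(X) = X² + b0; the toy prime and the function name are illustrative only:

```python
P = 2**61 - 1  # toy prime; a real instantiation uses the pairing group order

def setup_division(a, b0):
    """Divide A(X) = sum a[i] X^i by B(X) = X^2 + b0 over F_P.
    Returns (q, (r0, r1)) with A = B*Q + r1*X + r0 (mod P)."""
    rem = [c % P for c in a]           # low-to-high coefficients of A
    q = [0] * max(len(a) - 2, 0)       # Q has degree d-2
    for i in range(len(rem) - 1, 1, -1):
        c = rem[i]                     # leading coefficient to eliminate
        q[i - 2] = c                   # contributes c * X^(i-2) to Q
        rem[i] = 0
        rem[i - 2] = (rem[i - 2] - c * b0) % P  # subtract c*b0*X^(i-2)
    return q, (rem[0], rem[1] if len(rem) > 1 else 0)
```

Each loop iteration performs one multiplication and one subtraction in Fp, in line with the O(d) division cost stated above.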

On the other hand, ProbGen computes the verification key VKx = (VK(x,B), VK(x,R)), which demands a constant number of operations that does not depend on the degree of polynomial A. More precisely, ProbGen consists of computing x² in Fp, performing one exponentiation and one multiplication in G1 to get VK(x,B) = g^B(x), and running one exponentiation and one multiplication in G2 to obtain VK(x,R) = h^R(x).

Furthermore, algorithm Compute runs in two steps: (i) the evaluation of polynomial A at point x, which requires at most d additions and multiplications in Fp if the server uses Horner's rule; and (ii) the generation of the proof π, which involves d−3 multiplications in Fp, and d−1 exponentiations and d−2 multiplications in G2.
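The two steps of Compute can be sketched in the exponent domain. This is a simplification: the real server assembles the proof from the G2 encodings qi, whereas here we only track the exponent Q(x); the toy prime stands in for the group order:

```python
P = 2**61 - 1  # toy prime standing in for the group order

def horner(coeffs, x):
    """Evaluate a polynomial (low-to-high coefficients) at x:
    at most d multiplications and d additions in F_P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def compute(a_coeffs, q_coeffs, x):
    y = horner(a_coeffs, x)   # step (i): y = A(x)
    qx = horner(q_coeffs, x)  # step (ii), simplified: exponent Q(x) of pi = h^Q(x)
    return y, qx
```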

Finally, the work at verifier V only consists of one exponentiation and one division in G2 and the computation of 2 bilinear pairings (indeed, we can rephrase Equation 4.1 as e(g, h^y / VK(x,R)) = e(VK(x,B), π)).
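In the exponent, this pairing equation reduces to y − R(x) ≡ B(x)·Q(x) (mod p). A toy numeric check, under the assumption B(X) = X² + b0 and with purely illustrative values:

```python
# Both sides of e(g, h^y / VK_(x,R)) = e(VK_(x,B), pi) equal
# e(g,h)^(B(x)Q(x)); we check the exponent identity y - R(x) = B(x)*Q(x).
p = 101                        # tiny prime, illustration only
b0 = 7                         # B(X) = X^2 + 7
# A(X) = X^3 + 2X + 5 = (X^2 + 7) * X + (-5X + 5), so Q(X) = X, R(X) = -5X + 5
r0, r1 = 5, (-5) % p

x = 13
y = (x**3 + 2 * x + 5) % p     # honest evaluation y = A(x)
lhs = (y - (r1 * x + r0)) % p  # exponent of h^y / VK_(x,R)
rhs = ((x * x + b0) * x) % p   # exponent of e(VK_(x,B), pi): B(x) * Q(x)
assert lhs == rhs              # verification accepts
```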

Summary. The reader may refer to Table 4.1 for a summary of the computational performance of our protocol. We can conclude from the above that our solution meets the requirement on efficiency. Indeed, as Table 4.1 shows, the combined costs of algorithms ProbGen and Verify are negligible compared to the complexity of evaluating the polynomial. As a matter of fact, the times required to compute ProbGen and Verify are constant and independent of the degree of the outsourced polynomial. Moreover, the asymptotic cost of Compute is kept linear in d, which is substantially the same as evaluating the polynomial. In other words, the complexity of generating the proof of computation does not influence the overall asymptotic complexity of Compute. The complexity of algorithm Setup is admittedly linear in the degree of the outsourced polynomial; however, it is amortized over an unlimited number of efficient verifications. Furthermore, our protocol is efficient in terms of communication complexity, and also in terms of storage for the data owner.

Table 4.1: Costs of our Verifiable Polynomial Evaluation scheme. |G| refers to the size (in bits) of elements in set G.

Storage
  Data owner    O(1)   1·|G1| + 2·|G2| bits
  Server        O(d)   (d+1)·|Fp| + (d−1)·|G2| bits

Communication
  Outbound      O(1)   1·|Fp| + 2·|G1| bits
  Inbound       O(1)   1·|Fp| + 1·|G2| bits

Operations                 Setup   ProbGen   Compute   Verify
  PRNG                     1       -         -         -
  Additions in Fp          d       -         d         -
  Multiplications in Fp    d       1         2d−3      -
  Multiplications in G1    -       1         -         -
  Multiplications in G2    -       1         d−2       1
  Exponentiations in G1    1       1         -         -
  Exponentiations in G2    d+1     1         d−1       1
  Pairings                 -       -         -         2

4.6.4 Comparison with Related Work

We compare our solution with two relevant existing techniques for verifiable polynomial evaluation. Fiore and Gennaro [85] devise Algebraic Pseudo-Random Functions (aPRF), also used by Zhang and Safavi-Naini [193], to develop publicly verifiable solutions. Compared to these two solutions, our protocol induces the same amount of computational cost but offers the additional property of public delegatability. Another solution for public verification considers signatures of correct computation [141], and uses polynomial commitments [108] to construct these signatures. Besides public verifiability, this solution implements public delegatability. However, the construction in [141] relies on the d-SBDH assumption, whereas our solution is secure under a weaker assumption, namely ⌊d/2⌋-SDH. It is worth mentioning that our protocol can be changed to rely on the ⌊d/δ⌋-SDH assumption, where δ is the degree of the divisor polynomial; our scheme can therefore accommodate higher-degree polynomials. Table 4.2 compares our verifiable polynomial evaluation solution with the work described by Fiore and Gennaro [85] and Papamanthou et al. [141].

4.6.5 Experimental Results

We developed a prototype of our verifiable polynomial evaluation scheme in Python, using the Charm-Crypto library^90, which implements cryptographic primitives such as elliptic curves and bilinear pairings. We experimented on a machine with the following characteristics: Intel Core i5-2500 processor; CPU @ 3.80 GHz clock speed; 64-bit OS; 16 GB of RAM. All reported times are computed as the average of the times measured over a total of 20 executions of our protocol.

Figure 4.2 depicts the time (reported in Table 4.3) needed by each of the four algorithms of our protocol as a function of the degree of the outsourced polynomial. As expected, the costs induced by algorithms Setup and Compute grow linearly with the degree of the polynomial.

We highlight the fact that, in accordance with the theoretical cost analysis we conducted at the beginning of Section 4.6, algorithms ProbGen and Verify generate light costs, independent of the degree of the outsourced polynomial. We also compute in the last column of Table 4.3 the breakeven point from which the expensive cost of Setup is amortized over multiple verifications. To interpret these values, we introduce the following criterion, called outsourceability.

^90 Charm-Crypto library, http://jhuisi.github.io/charm/ [Accessed: February 3, 2016].

Table 4.2: Comparison with related work

                     Setup                    ProbGen       Compute         Verify        Hardness      Public
                                                                                          Assumptions   Delegatability
Fiore and Gennaro    1 pairing                1 pairing     d+1 exp in G1   1 pairing     co-CDH        No
[85]                 2(d+1) exp in G1         1 exp in G1                   1 exp in GT   DLin
                     1 exp in GT
Papamanthou et al.   Polynomial preparation   2 exp in G1   d+1 exp in G1   2 pairings    d-SBDH        Yes
[141]                2d+1 exp in G1
Our scheme           d+1 exp in G2            1 exp in G1   d−1 exp in G2   2 pairings    ⌊d/2⌋-SDH     Yes
                     1 exp in G1              1 exp in G2                   1 exp in G2

Table 4.3: Average times of our protocol and amortization

d        Setup (s)   ProbGen (s)   Compute (s)   Verify (s)   ComputeLocal (s)   Amortization
5        0.011       0.0031        0.005         0.0032       2.174×10⁻⁵         ×
50       0.103       0.0030        0.070         0.0031       1.245×10⁻⁴         ×
500      0.728       0.0029        0.723         0.0031       0.001              ×
5000     7.205       0.0030        7.22          0.0031       0.012              1195
50000    72.58       0.0036        72.22         0.0032       0.127              602
500000   796.9       0.0043        724.9         0.0032       1.324              606

Definition 10 (Outsourceability - computation). The outsourceability criterion for a verifiable computation scheme is determined by a parameter x ≥ 0, according to which the time to pre-process the function f to be outsourced is amortized over x verifications of results returned by a remote server. Stated differently, x is such that:

t_Setup + x · (t_ProbGen + t_Verify) ≤ x · t_ComputeLocal

where t_algo is the time required to execute algorithm algo. Hence, x is defined by the relation:

x = t_Setup / (t_ComputeLocal − (t_ProbGen + t_Verify))
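As an illustration, the breakeven point can be computed directly from the measured times; we use the d = 500000 row of Table 4.3 (since the reported times are rounded, recomputing other rows may give values slightly different from the table):

```python
import math

def breakeven(t_setup, t_probgen, t_verify, t_local):
    """Smallest integer x with t_setup + x*(t_probgen + t_verify) <= x*t_local."""
    return math.ceil(t_setup / (t_local - (t_probgen + t_verify)))

x = breakeven(796.9, 0.0043, 0.0032, 1.324)  # d = 500000 row of Table 4.3
```

This recovers the value x = 606 reported in the last column of Table 4.3.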

Table 4.3 shows that for degrees d < 5000, outsourcing the evaluation of the polynomial is not an interesting strategy, because it would be more costly for the data owner to outsource the polynomial and verify the correctness of the results than to evaluate it locally. However, for polynomials with larger degrees d ≥ 5000, outsourcing is a winning strategy. Namely, if we consider the case where d = 500000, the data owner is better off outsourcing the polynomial to the cloud if at least x = 606 polynomial evaluations are requested from the server (naturally, for the same polynomial). Besides, it is worth noticing that we ran our benchmarks on a machine with 16 GB of RAM. Modern smartphones^91 have between 1 and 4 GB of RAM, and the latest laptops^92 have no more than 8 GB of RAM.

^91 Gareth Beavis, "Best Phone 2016: The 10 Best Smartphones We've Tested", TechRadar, January 25, 2016, http://tiny.cc/w4ft8x [Accessed: February 3, 2016].

^92 Joel Santo Domingo, Laarni Almendrala Ragaza, "The 10 Best Laptops of 2016", PC Magazine, January

Figure 4.2: Experimental measurements

Therefore, we can extrapolate the outsourceability analysis of our verifiable polynomial evaluation solution to the real world. Evaluating a polynomial of large degree may take considerably longer on smartphones and laptops. Hence, even for polynomials with degree below 5000, the best strategy for users of these devices would be to outsource these polynomials to the cloud, in order to save computation resources.
