
Quantum algorithms for the log-determinant

In the document Quantum algorithms for machine learning (Page 105-110)

7.3.1 Previous work

Some years ago, another quantum algorithm for the log-determinant was put forward [161]. This algorithm leverages quantum computers to sample from the uniform superposition of singular values of a matrix. Then, once a number $m$ of samples $\tilde\sigma_i$ is collected, the log-determinant is estimated as:

$$\overline{\log\det(A)} = \frac{n}{m}\sum_{i=1}^{m}\log\tilde\sigma_i.$$

As the runtime of the algorithm was not thoroughly analyzed in [160], an analysis appeared in [161].

Lemma 75 (Sampling-based algorithm for log-determinant [161]). Let $A \in \mathbb{R}^{n\times n}$ be an SPD matrix for which we have quantum access. Then, there is a quantum algorithm that estimates $\log\det(A)$ such that $|\overline{\log\det(A)} - \log\det(A)| < \epsilon n$ with high probability in time

$$\widetilde{O}(\mu(A)\kappa(A)/\epsilon^{3}). \qquad (7.9)$$

Proof. The idea of this algorithm is to estimate the log-determinant as $\log\det(A) = n\,\mathbb{E}[\log\sigma_i]$, where $\mathbb{E}[\log\sigma_i] = \frac{1}{n}\sum_{i=1}^{n}\log\sigma_i$. For this, we want to sample from the uniform distribution over the singular values of $A$. Recall that for a symmetric matrix $A$, its eigenvalue decomposition is $A = \sum_j \sigma_j |u_j\rangle\langle u_j|$. We start the algorithm from the maximally mixed state $\rho_0 = \frac{1}{n}\sum_j |j\rangle\langle j|$, which also equals $\frac{1}{n}\sum_j |u_j\rangle\langle u_j|$, as a totally mixed state is totally mixed in any orthogonal basis. By using SVE (or, equivalently, Hamiltonian simulation and QPE), we can obtain $\rho_1 = \frac{1}{n}\sum_j |u_j\rangle\langle u_j| \otimes |\tilde\sigma_j\rangle\langle\tilde\sigma_j|$, where $|\sigma_j - \tilde\sigma_j| \le \epsilon_1$ for all $j$. As a consequence, it is simple to check, using the Taylor expansion of the logarithm function, that for each singular value $|\log\sigma_j - \log\tilde\sigma_j| \le \epsilon_1/\sigma_j$. By lemma 20, the complexity to generate $\rho_1$ is $\widetilde{O}(\mu(A)/\epsilon_1)$. By measuring the second register of $\rho_1$ we sample from the uniform distribution over the singular values, and we can therefore approximate $\mathbb{E}[\log\tilde\sigma_j]$. The error of the quantum procedure in estimating $\mathbb{E}[\log\tilde\sigma_j]$ is

$$\bigl|\mathbb{E}[\log\tilde\sigma_j] - \mathbb{E}[\log\sigma_j]\bigr| \le \max_j \frac{\epsilon_1}{\sigma_j} \le \epsilon_1\kappa(A).$$

To approximate $\mathbb{E}[\log\tilde\sigma_j]$ to precision $\epsilon_2$, Hoeffding's inequality (i.e. lemma 2) tells us that it suffices to take $\widetilde{O}(1/\epsilon_2^2)$ measurements of $\rho_1$. By picking $\epsilon_1 = \epsilon/2\kappa(A)$ and $\epsilon_2 = \epsilon/2$, the error in estimating $\log\det(A)$ is bounded by $\epsilon n$. The complexity is $\widetilde{O}(\kappa(A)\mu(A)/\epsilon^3)$.
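As a classical sanity check, the sampling estimator above can be simulated directly: draw singular values uniformly, perturb each by the SVE noise $\epsilon_1$, and rescale the empirical mean of the logarithms by $n$. This is a minimal numerical sketch (the toy spectrum and the generous sample-count constant are assumptions, not part of the lemma):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: a random SPD matrix with ||A|| <= 1.
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = rng.uniform(0.1, 1.0, size=n)        # singular values in roughly [1/kappa, 1]
A = Q @ np.diag(sigma) @ Q.T

exact = np.sum(np.log(sigma))                # log det(A)

# Classical analogue of measuring the second register of rho_1: a singular
# value chosen uniformly at random, up to additive SVE noise eps_1.
eps = 0.05
eps1 = eps / (2 * np.linalg.cond(A))         # eps_1 = eps / 2*kappa(A)
eps2 = eps / 2
m = int(8 / eps2**2)                         # O(1/eps_2^2) samples; constant chosen generously
samples = rng.choice(sigma, size=m)
noisy = samples + rng.uniform(-eps1, eps1, size=m)

estimate = n * np.mean(np.log(noisy))        # log det(A) ~ n * E[log sigma~]
assert abs(estimate - exact) < eps * n       # additive eps*n guarantee of lemma 75
```

The quantum algorithm differs only in how a uniform sample $\tilde\sigma_j$ is produced; the post-processing is exactly this rescaled empirical mean.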

Remark: If we do not use SVE and assume that $A$ is $s$-sparse and Hermitian (as in the original formulation of this algorithm), then we can recover the singular values of $A$ by applying phase estimation to a unitary that performs Hamiltonian simulation on $A$. The complexity to implement $e^{-iAt}$ to precision $\epsilon_0$ is $O(st\|A\|_{\max} + \log 1/\epsilon_0)$ [104, 103], and we perform a quantum phase estimation with error $\epsilon_1$. Thus the complexity to obtain $\rho_1$ is $\widetilde{O}(s\|A\|_{\max}/\epsilon_1)$. In this setting we recover the result of [161], as the runtime becomes $\widetilde{O}(s\|A\|_{\max}\kappa(A)/\epsilon^3)$.

Lemma 76 (Variance of Zhao's estimator). Let $A \in \mathbb{R}^{n\times n}$ be such that $\|A\| \le 1$. Let $\overline{\log\det(A)}$ be the estimate of $\log\det(A)$ obtained with the algorithm described in lemma 75 with error parameter $\epsilon$. Then:

$$\mathrm{Var}\bigl(\overline{\log\det(A)}\bigr) = \frac{\epsilon^2\,(n\log\kappa(A))^2}{12}.$$

Proof. The Bienaymé formula says that for random variables $X_i$, $\mathrm{Var}(\sum_i X_i) = \sum_i \mathrm{Var}(X_i)$ under the assumption that the random variables are independent. From this, as we have chosen $m = O(\epsilon^{-2})$ samples, we derive that:

$$\mathrm{Var}\bigl(\overline{\log\det(A)}\bigr) = \mathrm{Var}\Bigl(\frac{n}{m}\sum_{i=1}^{m}\log\tilde\sigma_i\Bigr) = \frac{n^2}{m}\,\mathrm{Var}(\log\tilde\sigma_i) = \epsilon^2 n^2\,\mathrm{Var}(\log\tilde\sigma_i).$$

As we sample the $\sigma_i$ uniformly, we can equivalently assume that we sample $\log\sigma_i$ uniformly, and the variance of the uniform distribution on an interval $[a, b]$ is $\frac{1}{12}(b-a)^2$. Therefore, for the random variable under consideration, which takes values in $[\log\sigma_n, \log\sigma_1]$, the variance is $\frac{1}{12}\bigl(\log(\sigma_1/\sigma_n)\bigr)^2 = \frac{1}{12}(\log\kappa(A))^2$.
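The variance formula can be verified by Monte Carlo under the same simplifying assumption used in the proof, namely that $\log\sigma$ is uniform on $[\log(1/\kappa), 0]$. A short numerical sketch (the parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

n, kappa, eps = 100, 50.0, 0.1
m = int(1 / eps**2)                       # m = O(eps^-2) samples per estimate

# Under the proof's assumption, log(sigma) is uniform on [log(1/kappa), 0]
# (singular values in [1/kappa, 1]), so Var(log sigma) = (log kappa)^2 / 12.
trials = 20000
log_sigma = rng.uniform(np.log(1 / kappa), 0.0, size=(trials, m))
estimates = n * log_sigma.mean(axis=1)    # one Zhao-style estimate per trial

predicted = (eps * n * np.log(kappa)) ** 2 / 12
empirical = estimates.var()
assert abs(empirical - predicted) / predicted < 0.05
```

The empirical variance over many independent runs of the estimator matches $\epsilon^2(n\log\kappa(A))^2/12$ to within sampling noise.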

7.3.2 SVE-based quantum algorithm for the log-determinant

We start our series of quantum algorithms for the log-determinant with a fairly direct approach to the problem, i.e. using a quantum algorithm for singular value estimation.

Algorithm 8 SVE-based quantum algorithm for the log-determinant

Require: Quantum access to an SPD matrix $A \in \mathbb{R}^{n\times n}$ such that $\|A\| \le 1$, and $\epsilon \in (0,1)$.

Ensure: An estimate $\overline{\log\det(A)}$ such that $|\overline{\log\det(A)} - \log\det(A)| \le \epsilon n$.

1: Prepare the state $|A\rangle = \frac{1}{\|A\|_F}\sum_{j=1}^{n}\sigma_j|u_j\rangle|u_j\rangle$.

2: Perform SVE by theorem 20 up to precision $\epsilon_1 = \epsilon/(\kappa(A)\log\kappa(A))$ and apply controlled rotations to prepare

$$\frac{1}{\|A\|_F}\sum_{j=1}^{n}\sigma_j|u_j\rangle|u_j\rangle|\tilde\sigma_j\rangle\Bigl(\frac{C\sqrt{-\log\tilde\sigma_j}}{\tilde\sigma_j}|0\rangle + \beta_j|1\rangle\Bigr), \qquad C = \frac{1}{\kappa(A)\sqrt{\log\kappa(A)}},$$

where $\beta_j$ ensures normalization.

3: Apply amplitude estimation to estimate the probability of $|0\rangle$ to precision $\epsilon_2 = \epsilon/(\kappa^2(A)\log\kappa(A))$. Set the result as $P$.

4: Return $\overline{\log\det(A)} = -\kappa^2(A)(\log\kappa(A))\,\|A\|_F^2\, P$.
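The rescaling in step 4 can be checked classically. Assuming the controlled rotation carries amplitude $C\sqrt{-\log\tilde\sigma_j}/\tilde\sigma_j$ with $C = 1/(\kappa\sqrt{\log\kappa})$ (the normalization consistent with the inversion in step 4), the ideal success probability recovers $\log\det(A)$ exactly. A minimal sketch on a toy spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy spectrum with ||A|| <= 1; sigma_j in [1/kappa, 1].
n = 50
sigma = rng.uniform(0.2, 1.0, size=n)
sigma[0] = 1.0                       # fix sigma_max = 1 so kappa = 1/sigma_min
kappa = sigma.max() / sigma.min()
frob2 = np.sum(sigma**2)             # ||A||_F^2

# Squared amplitude of |0> after the assumed controlled rotation; the
# normalization C = 1/(kappa*sqrt(log kappa)) keeps every amplitude <= 1.
C = 1 / (kappa * np.sqrt(np.log(kappa)))
amp2 = C**2 * (-np.log(sigma)) / sigma**2
assert np.all(amp2 <= 1.0 + 1e-9)

# Ideal (noiseless) probability of measuring |0>: each branch j of the
# state carries weight sigma_j^2 / ||A||_F^2.
P = np.sum((sigma**2 / frob2) * amp2)

# Step 4 inverts the rescaling exactly in the ideal case.
estimate = -kappa**2 * np.log(kappa) * frob2 * P
assert np.isclose(estimate, np.sum(np.log(sigma)))
```

In the algorithm, $P$ is only known to precision $\epsilon_2$ through amplitude estimation, which is where the error analysis below comes in.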

Theorem 77 (SVE-based log-determinant). Let $A \in \mathbb{R}^{n\times n}$ be an SPD matrix such that $\|A\| \le 1$, for which we have quantum access as in Definition 8. Then, Algorithm 8 estimates $\log\det(A)$ such that $|\overline{\log\det(A)} - \log\det(A)| < \epsilon|\log\det(A)|$ in time

$$\widetilde{O}\bigl(\mu(A)\kappa^3(A)(\log\kappa(A))^2/\epsilon^2\bigr). \qquad (7.13)$$

Proof. We can rewrite the quantum state encoding the representation of $A$ (which we can create with quantum access to $A$) as follows:

$$|A\rangle = \frac{1}{\|A\|_F}\sum_{j=1}^{n}\sigma_j|u_j\rangle|u_j\rangle. \qquad (7.14)$$

Starting from the state $|A\rangle$, we can apply SVE by theorem 20 to $A$ up to precision $\epsilon_1$, and prepare the state $\frac{1}{\|A\|_F}\sum_{j=1}^{n}\sigma_j|u_j\rangle|u_j\rangle|\tilde\sigma_j\rangle$. Since we assume that $\|A\| \le 1$, using controlled operations we can prepare

$$\frac{1}{\|A\|_F}\sum_{j=1}^{n}\sigma_j|u_j\rangle|u_j\rangle|\tilde\sigma_j\rangle\Bigl(\frac{C\sqrt{-\log\tilde\sigma_j}}{\tilde\sigma_j}|0\rangle + \beta_j|1\rangle\Bigr), \qquad (7.15)$$

where $C = 1/(\kappa(A)\sqrt{\log\kappa(A)})$ and $\beta_j$ ensures normalization.

(a) Error analysis. We choose $\epsilon_1$ such that $\kappa(A)\epsilon_1$ is small. Note that the probability of measuring $|0\rangle$ is

$$P = \frac{C^2}{\|A\|_F^2}\sum_{j=1}^{n}\sigma_j^2\,\frac{-\log\tilde\sigma_j}{\tilde\sigma_j^2}.$$

Denote by $P'$ the $\epsilon_2$-approximation of $P$ obtained by amplitude estimation; then $\|A\|_F^2 P'/C^2$ is an $(n + 2|\log\det(A)|)(\kappa(A)\epsilon_1 + O(\kappa^2(A)\epsilon_1^2)) + \epsilon_2\|A\|_F^2/C^2$ approximation of $|\log\det(A)|$.

(b) Runtime analysis. The runtime of the algorithm comes from the use of SVE by theorem 20 and from amplitude estimation on the state in equation (7.15). The complexity to obtain the state (7.15) is $\widetilde{O}(\mu(A)/\epsilon_1)$. The complexity of amplitude estimation (i.e. theorem 18) is $\widetilde{O}(\mu(A)/\epsilon_1\epsilon_2) = \widetilde{O}(\mu(A)\kappa^3(A)(\log\kappa(A))^2/\epsilon^2)$.

We now use the hypothesis that $\|A\| < 1$ (say $\|A\| \le 1/2$), and observe that $|\log\det(A)| \ge |n\log\sigma_1| = n\log(1/\sigma_1)$. Thus, we can improve the precision in the estimate of the error, and run Algorithm 8 with precision $\epsilon\log(1/\sigma_1)$. By running our algorithm with this better precision we can ensure the error is now $\epsilon n\log(1/\sigma_1) \le \epsilon|\log\det(A)|$. Hence the new complexity becomes $\widetilde{O}\bigl(\mu(A)\kappa^3(A)(\log\kappa(A))^2/(\epsilon\log(1/\sigma_1))^2\bigr)$, but we can now guarantee a relative error with respect to $\log\det(A)$. As the matrix is sub-normalized, i.e. $\|A\| < 1$, this increases the runtime only by a constant factor.

7.3.3 SVT-based algorithm for the log-determinant

To compute an estimate of the log-determinant with a better dependence on the parameters that govern the runtime, we explore the framework of singular value transformation, which is described in section 2.5.4. When we have quantum access to $A$, i.e. when using the quantum data structures of theorem 8, we can obtain a $(\mu(A), \log n, \epsilon_1)$-block-encoding of $A$, as described in lemma 50 of [70]. More precisely, the error $\epsilon_1$ comes from the truncation performed on the entries $a_{ij}$ while creating the quantum-accessible data structure (i.e. Definition 8). In the algorithms presented in this thesis we implicitly considered this error to be 0. In the next algorithm we make it explicit, in order to have a more general statement.
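The effect of entry truncation on the spectrum is easy to bound classically: rounding each entry to $b$ bits perturbs $A$ by a matrix $E$ with $\|E\| \le \|E\|_F \le n\,2^{-b}$, and by Weyl's inequality every singular value moves by at most $\|E\|$. A minimal sketch (matrix size and bit count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Truncating the entries a_ij to b fractional bits perturbs A by E with
# |E_ij| <= 2^-b, hence ||E|| <= ||E||_F <= n * 2^-b; by Weyl's inequality
# every singular value moves by at most that much.
n, b = 64, 20
M = rng.standard_normal((n, n))
A = (M + M.T) / (2 * n)                      # symmetric, ||A|| < 1
A_trunc = np.round(A * 2**b) / 2**b          # keep b fractional bits

shift = np.abs(np.linalg.svd(A, compute_uv=False)
               - np.linalg.svd(A_trunc, compute_uv=False))
assert shift.max() <= n * 2.0**-b
```

This is the classical counterpart of the block-encoding error $\epsilon_1$ that the next theorem tracks explicitly.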

Theorem 78. Let $\epsilon > 0$. Assume to have a $(\mu(A), a, \epsilon_1)$-block-encoding of an SPD matrix $A \in \mathbb{R}^{n\times n}$ with $\|A\| < 1$, with $\epsilon_1 \le \epsilon/\bigl(\kappa(A)^2\log^2(\kappa(A)\mu(A))\,\mu(A)\bigr)$. There is a quantum algorithm

Algorithm 9 Log-determinant estimation using singular value transformation

Require: A $(\mu(A), a, \epsilon_1)$-block-encoding of an SPD matrix $A \in \mathbb{R}^{n\times n}$ with $\|A\| \le 1$, with $\epsilon_1$ as in theorem 78, and $\epsilon \in (0,1)$.

Ensure: An estimate $\overline{\log\det(A)}$ such that $|\overline{\log\det(A)} - \log\det(A)| \le \epsilon|\log\det(A)|$.

1: Set $\epsilon_2 = \epsilon$ and $\epsilon_3 = \epsilon/(8\log(2\kappa(A)\mu(A)))$.

2: Construct the polynomial $\tilde S(x)$ of lemma 3 with accuracy $\epsilon_3$.

3: Prepare the maximally entangled state $|\psi_0\rangle = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}|j\rangle|j\rangle$.

4: Construct the block-encoding of $\tilde S(A/\mu(A))$ to precision $4\kappa(A)\sqrt{\epsilon_1/\mu(A)}$ and apply it to $|\psi_0\rangle$ to create the state $|\psi_1\rangle$.

5: Apply the inner product estimation subroutine, i.e. lemma 30, with precision $\epsilon_2$ to estimate $\langle\psi_0|\psi_1\rangle$. Denote the result as $P$.

6: Return $\overline{\log\det(A)} = 2nP\log(2\kappa(A)\mu(A)) + n\log\mu(A)$.

that returns the estimate $\overline{\log\det(A)}$ such that $|\overline{\log\det(A)} - \log\det(A)| \le \epsilon|\log\det(A)|$ in time $\widetilde{O}(\mu(A)\kappa(A)/\epsilon)$.
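The post-processing of Algorithm 9 can be checked classically in the ideal (noiseless) case. Assuming $\tilde S$ acts on the spectrum as $\log(x)/(2\log(2\kappa(A)\mu(A)))$, the normalization used in the proof, the quantity $P = \frac{1}{n}\mathrm{Tr}[\tilde S(A/\mu(A))]$ inverts exactly to $\log\det(A)$. A minimal sketch (the toy spectrum and the value of $\mu(A)$ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy SPD spectrum with ||A|| <= 1 and an assumed normalization mu(A) >= 1.
n = 40
sigma = rng.uniform(0.1, 1.0, size=n)
mu = 3.0
kappa = sigma.max() / sigma.min()

# Ideal log-polynomial: eigenvalues of A/mu lie in [1/(kappa*mu), 1], and
# the factor 2*log(2*kappa*mu) keeps |S(x)| <= 1/2 on that interval.
def S(x):
    return np.log(x) / (2 * np.log(2 * kappa * mu))

assert np.all(np.abs(S(sigma / mu)) <= 0.5)

# P = <psi_0|psi_1> = (1/n) * Tr[S(A/mu)] for the maximally entangled psi_0.
P = np.mean(S(sigma / mu))

# Step 6: undo the normalization and the mu rescaling.
estimate = 2 * n * P * np.log(2 * kappa * mu) + n * np.log(mu)
assert np.isclose(estimate, np.sum(np.log(sigma)))
```

In the algorithm, $P$ carries the three error contributions analyzed in the proof below; the identity here isolates the exact rescaling.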

Proof. We define as $\tilde S$ the polynomial approximation of the logarithm function (which we reported in lemma 3) with error $\epsilon_3$. Combined with lemma 25 - which allows us to create a block-encoding of a matrix to whose singular values a polynomial function has been applied - we can find a $(1,\, a+2,\, 4\kappa(A)\sqrt{\epsilon_1/\mu(A)}\log(1/\epsilon))$-block-encoding of $\tilde S(A/\mu(A))$ with circuit complexity $O(\kappa(A)\mu(A)\,\mathrm{poly}\log(1/\epsilon_3)) = \widetilde{O}(\kappa(A)\mu(A))$. Remark that the polynomial approximation induces an error in the estimate of the log-determinant. We can bound this error by analyzing the approximation of the singular values of the matrix $\tilde S(A/\mu(A))$ as:

$$\Bigl|\tilde S(\sigma_j/\mu(A)) - \frac{\log(\sigma_j/\mu(A))}{2\log(2\kappa(A)\mu(A))}\Bigr| \le \epsilon_3 \quad \text{for all } j. \qquad (7.18)$$

From the states defined in Algorithm 9, it is simple to note that:

$$P = \langle\psi_0|\psi_1\rangle = \frac{1}{n}\,\mathrm{Tr}\bigl[\tilde S(A/\mu(A))\bigr].$$

Let $\bar P$ be an $\epsilon_2$-approximation of $P$ obtained by the inner product estimation subroutine (i.e. lemma 30). We set $\overline{\log\det(A)} = 2n\bar P\log(2\kappa(A)\mu(A)) + n\log\mu(A)$ as an estimate of $\log\det(A)$, where we use the polynomial approximation of the logarithm of lemma 3 with error $\epsilon_3$.

The error is given by three contributions: the error in the block-encoding of the polynomial approximation, i.e. theorem 25, which is $4\kappa(A)\sqrt{\epsilon_1/\mu(A)}$ and affects the creation of $|\psi_1\rangle$; the error $\epsilon_2$ in the inner product estimation subroutine, which gives $|\bar P - P| \le \epsilon_2$; and the error induced by the polynomial approximation on the trace, i.e. Equation (7.18). These errors sum up additively, and thus we can use the triangle inequality to bound them, i.e. choose $\epsilon_2$, $\epsilon_3$ such that the whole error is bounded by $\epsilon n$. The cost to create a block-encoding of $\tilde S(A/\mu(A))$ is $O(\mu(A)\kappa(A)\log(1/\epsilon_3))$. The total cost of the algorithm to get an error of $\epsilon n$ is $O(\mu(A)\kappa(A)\log(1/\epsilon_3)/\epsilon_2) = \widetilde{O}(\mu(A)\kappa(A)/\epsilon)$.

We now use the hypothesis that $\|A\| < 1$ (say $\|A\| \le 1/2$), and observe that $|\log\det(A)| \ge |n\log\sigma_1| = n\log(1/\sigma_1)$. Thus, we can improve the precision in the estimate of the error, and run Algorithm 9 with precision $\epsilon\log(1/\sigma_1)$. By running our algorithm with this better precision we can ensure the error is now $\epsilon n\log(1/\sigma_1) \le \epsilon|\log\det(A)|$. Hence the new complexity becomes $\widetilde{O}\bigl(\mu(A)\kappa(A)/(\epsilon\log(1/\sigma_1))\bigr)$, but we can now guarantee a relative error with respect to $\log\det(A)$. As the matrix is sub-normalized, i.e. $\|A\| < 1$, this increases the runtime only by a constant factor.
