
HAL Id: hal-01504257

https://hal.archives-ouvertes.fr/hal-01504257

Preprint submitted on 9 Apr 2017


Constructing confidence sets for the matrix completion problem

Alexandra Carpentier, Olga Klopp, Matthias Löffler

To cite this version:

Alexandra Carpentier, Olga Klopp, Matthias Löffler. Constructing confidence sets for the matrix completion problem. 2017. hal-01504257


Constructing confidence sets for the matrix completion problem

April 9, 2017

Alexandra Carpentier, Universität Potsdam¹

Olga Klopp, University Paris Nanterre²

Matthias Löffler, University of Cambridge³

Abstract

In the present note we consider the problem of constructing honest and adaptive confidence sets for the matrix completion problem. For the Bernoulli model with known variance of the noise we provide a realizable method for constructing confidence sets that adapt to the unknown rank of the true matrix.

Keywords: low rank recovery, confidence sets, adaptivity, matrix completion.

1 Introduction

In recent years, there has been considerable interest in statistical inference for high-dimensional matrices. One particular problem is matrix completion, where one observes only a small number $n \ll m_1 m_2$ of the entries of a high-dimensional $m_1 \times m_2$ matrix $M_0$ of unknown rank $r$; the aim is to infer the missing entries. The problem of matrix completion comes up in many areas, including collaborative filtering, multi-class learning in data analysis, system identification in control, global positioning from partial distance information, and computer vision, to name a few. For instance, in computer vision this problem arises because many pixels may be missing in digital images. In collaborative filtering, one wants to make automatic predictions about the preferences of a user by collecting information from many users. So we have a data matrix where rows are users and columns are items. For each user we have a partial list of their preferences, and we would like to predict the missing ones in order to recommend items that they may be interested in.

In general, recovery of a matrix from a small number of observed entries is impossible; but if the unknown matrix has low rank, then accurate and even exact recovery is possible. In the noiseless setting, [6, 4, 3] established the following remarkable result: assuming that $M_0$ satisfies a low coherence condition, it can be recovered exactly by constrained nuclear norm minimization, with high probability, from only $n \gtrsim r(m_1 \vee m_2)\log^2(m_1 \vee m_2)$ entries observed uniformly at random.

¹ Institut für Mathematik, carpentier@maths.uni-potsdam.de

² MODAL'X, kloppolga@math.cnrs.fr

³ Statistical Laboratory, Centre for Mathematical Sciences, m.loffler@statslab.cam.ac.uk


What makes low-rank matrices special is that they depend on a number of free parameters that is much smaller than the total number of entries. Taking the singular value decomposition of a matrix $A \in \mathbb{R}^{m_1 \times m_2}$ of rank $r$, it is easy to see that $A$ depends on $(m_1 + m_2)r - r^2$ free parameters. This number of free parameters gives a lower bound on the number of observations needed to complete the matrix.
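To spell the count out (a standard argument, not given explicitly in the note): writing the rank-$r$ SVD as $A = U\Sigma V^T$, the factor $U \in \mathbb{R}^{m_1 \times r}$ carries $m_1 r$ entries minus $r(r+1)/2$ orthonormality constraints, $V \in \mathbb{R}^{m_2 \times r}$ likewise, and $\Sigma$ carries $r$ singular values, so that
$$\left(m_1 r - \tfrac{r(r+1)}{2}\right) + \left(m_2 r - \tfrac{r(r+1)}{2}\right) + r = (m_1 + m_2)r - r^2.$$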

A situation common in applications corresponds to the noisy setting, in which the few available entries are corrupted by noise. Noisy matrix completion has been extensively studied recently (e.g., [12, 15, 2, 8]). Here we observe a relatively small number of entries of a data matrix
$$Y = M_0 + E$$
where $M_0 = (M_{ij}) \in \mathbb{R}^{m_1 \times m_2}$ is the unknown matrix of interest and $E = (\varepsilon_{ij}) \in \mathbb{R}^{m_1 \times m_2}$ is a matrix of random errors. In applications it is important to be able to tell from the observations how well the recovery procedure has worked or, in the sequential sampling setting, to give data-driven stopping rules that guarantee the recovery of the matrix $M_0$ at a given precision. This fundamental statistical question was recently studied in [7], where two statistical models for matrix completion are considered: the trace regression model and the Bernoulli model (for details see Section 1.1). In particular, the authors of [7] show that in the case of unknown noise variance, the information-theoretic structure of these two models is fundamentally different.

In the trace regression model, even if only an upper bound for the variance of the noise is known, an honest and rank-adaptive Frobenius-norm confidence set whose diameter scales with the minimax optimal estimation rate exists. In the Bernoulli model, however, such sets do not exist.

Another major difference is that, in the case of known variance of the noise, [7] provides a realizable method for constructing confidence sets for the trace regression model, whereas for the Bernoulli model only the existence of adaptive and honest confidence sets is demonstrated. The proof uses the duality between the problem of testing the rank of a matrix and the existence of honest and adaptive confidence sets. In particular, the construction in [7] is based on infimum test statistics which cannot be computed in polynomial time for the matrix completion problem. The present note aims to close this gap and provides a realizable method for constructing confidence sets for the Bernoulli model.

1.1 Notation, assumptions and some basic results

We assume that each entry of $Y$ is observed independently of the other entries with probability $p = n/(m_1 m_2)$. More precisely, if $n \le m_1 m_2$ is given and the $B_{ij}$ are i.i.d. Bernoulli random variables of parameter $p$, independent of the $\varepsilon_{ij}$'s, we observe
$$Y_{ij} = B_{ij}\left( M_{ij} + \varepsilon_{ij} \right), \qquad 1 \le i \le m_1,\ 1 \le j \le m_2. \qquad (1)$$
This model for the matrix completion problem is usually called the Bernoulli model. Another model often considered in the matrix completion literature is the trace regression model (e.g., [12, 15, 2, 10]). Let $k_0 = \mathrm{rank}(M_0)$.
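For illustration, here is a minimal simulation of the sampling scheme (1); the dimensions, rank and noise law are arbitrary choices for the example (uniform noise on $[-U, U]$ is centered and bounded, matching Assumption 1 below, with $\sigma^2 = U^2/3$).

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, r, n = 200, 150, 3, 6000        # illustrative sizes with n << m1*m2
p = n / (m1 * m2)                        # per-entry observation probability

# Rank-r target matrix M0, rescaled so that ||M0||_inf <= a = 1.
M0 = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))
M0 /= np.abs(M0).max()

U = 0.5                                  # almost-sure bound on the noise
E = rng.uniform(-U, U, size=(m1, m2))    # centered, |eps_ij| <= U, sigma^2 = U^2/3
B = rng.binomial(1, p, size=(m1, m2))    # i.i.d. Bernoulli(p) sampling mask

Y = B * (M0 + E)                         # observed data, model (1)
```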

In many of the most cited applications of the matrix completion problem, such as recommendation systems or the problem of global positioning from local distances, the noise is bounded but not necessarily identically distributed. This is the assumption we adopt in the present paper. More precisely, we assume that the noise variables are independent, homoscedastic, bounded and centered:

Assumption 1. For any $(i,j) \in [m_1] \times [m_2]$ we assume that $\mathbb{E}(\varepsilon_{ij}) = 0$, $\mathbb{E}(\varepsilon_{ij}^2) = \sigma^2$, and that there exists a constant $U > 0$ such that
$$\max_{ij} |\varepsilon_{ij}| \le U.$$


Let $m = \min(m_1, m_2)$ and $d = m_1 + m_2$. For any $l \in \mathbb{N}$ we set $[l] = \{1, \dots, l\}$. For any integer $0 \le k \le m$ and any $a > 0$, we define the parameter space of rank-$k$ matrices with entries bounded by $a$ in absolute value as
$$\mathcal{A}(k, a) = \left\{ M \in \mathbb{R}^{m_1 \times m_2} :\ \mathrm{rank}(M) \le k,\ \|M\|_\infty \le a \right\}. \qquad (2)$$
For constants $\beta \in (0,1)$ and $c = c(\sigma, a) > 0$ we have that

$$\inf_{\widehat M} \sup_{M_0 \in \mathcal{A}(k,a)} \mathbb{P}_{M_0}\left( \frac{\|\widehat M - M_0\|_2^2}{m_1 m_2} > c\,\frac{kd}{n} \right) \ge \beta,$$
where the infimum is over all estimators $\widehat M$ of $M_0$ (see, e.g., [11]). It has also been shown in [11] that an iterative soft thresholding estimator $\widehat M$ satisfies, with $\mathbb{P}_{M_0}$-probability at least $1 - 8/d$,
$$\frac{\|\widehat M - M_0\|_2^2}{m_1 m_2} \le C(a + \sigma)^2\,\frac{kd}{n} \quad \text{and} \quad \|M_0 - \widehat M\|_\infty \le 2a \qquad (3)$$
for a constant $C > 0$. These lower and upper bounds imply that, for the Frobenius loss, the minimax risk for recovering a matrix $M_0 \in \mathcal{A}(k_0, a)$ is of order
$$\sqrt{\frac{(\sigma + a)^2\, k_0\, d\, m_1 m_2}{n}}.$$

For $k \in [m]$ we set
$$r_k = C(\sigma + a)^2\,\frac{dk}{n},$$
where $C$ is the numerical constant in (3).

Let $A, B$ be matrices in $\mathbb{R}^{m_1 \times m_2}$. We define the matrix scalar product as $\langle A, B \rangle = \mathrm{tr}(A^T B)$. The trace norm of a matrix $A = (a_{ij})$ is defined as $\|A\|_1 := \sum_j \sigma_j(A)$, the operator norm as $\|A\| := \sigma_1(A)$, and the Frobenius norm as $\|A\|_2^2 := \sum_i \sigma_i^2(A) = \sum_{i,j} a_{ij}^2$, where $(\sigma_j(A))_j$ are the singular values of $A$ ordered decreasingly. Finally, $\|A\|_\infty = \max_{i,j} |a_{ij}|$ denotes the largest absolute value of the entries of $A$.
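All four norms are immediate to evaluate numerically; a small numpy check (not part of the paper) of the identity $\sum_i \sigma_i^2(A) = \sum_{i,j} a_{ij}^2$:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)            # any test matrix
s = np.linalg.svd(A, compute_uv=False)      # singular values, decreasing order

trace_norm = s.sum()                        # sum of singular values
op_norm = s[0]                              # largest singular value
frob_norm = np.sqrt((s ** 2).sum())         # Frobenius norm via singular values
sup_norm = np.abs(A).max()                  # largest entry in absolute value

assert np.isclose(frob_norm, np.linalg.norm(A))  # sum_i sigma_i^2 = sum_ij a_ij^2
```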

In what follows, we use the symbols $C, c$ for generic positive constants, which are independent of $n, m_1, m_2$ and may take different values at different places. We write $a \vee b = \max(a, b)$.

We use the following definition of honest and adaptive confidence sets:

Definition 1. Let $\alpha, \alpha' > 0$ be given. A set $C_n = C_n((Y_{ij}, B_{ij}), \alpha) \subset \mathcal{A}(m, a)$ is an honest confidence set at level $\alpha$ for the model $\mathcal{A}(m, a)$ if
$$\liminf_n \inf_{M \in \mathcal{A}(m,a)} \mathbb{P}^n_M\left( M \in C_n \right) \ge 1 - \alpha.$$
Furthermore, we say that $C_n$ is adaptive for the sub-model $\mathcal{A}(k, a)$ at level $\alpha'$ if there exists a constant $C = C(\alpha, \alpha') > 0$ such that
$$\sup_{M \in \mathcal{A}(k,a)} \mathbb{P}^n_M\left( \frac{\|C_n\|_2^2}{m_1 m_2} > C\,r_k \right) \le \alpha'$$
while still retaining
$$\sup_{M \in \mathcal{A}(m,a)} \mathbb{P}^n_M\left( \frac{\|C_n\|_2^2}{m_1 m_2} > C\,r_m \right) \le \alpha',$$
where $\|C_n\|_2$ denotes the Frobenius-norm diameter of $C_n$.


2 A non-asymptotic confidence set for the matrix completion problem

Let $\widehat M$ be an estimator of $M_0$ based on the observations $(Y_{ij}, B_{ij})$ from the Bernoulli model (1) such that $\|\widehat M\|_\infty \le a$. Assume that, for some $\beta > 0$, $\widehat M$ satisfies the following risk bound:
$$\sup_{M_0 \in \mathcal{A}(k_0, a)} \mathbb{P}\left( \frac{\|\widehat M - M_0\|_2^2}{m_1 m_2} \le C(\sigma + a)^2\,\frac{k_0 d}{n} \right) \ge 1 - \beta. \qquad (4)$$

We can take, for example, the thresholding estimator considered in [11], which attains (4) with $\beta = 8/d$. Our construction is based on Lepski's method. We denote by $\widehat M_k$ the projection of $\widehat M$ onto the set $\mathcal{A}(k, a)$ of matrices of rank at most $k$ with sup-norm bounded by $a$:
$$\widehat M_k \in \arg\min_{A \in \mathcal{A}(k,a)} \|\widehat M - A\|_2.$$
Set
$$S = \left\{ k :\ \frac{\|\widehat M - \widehat M_k\|_2^2}{m_1 m_2} \le r_k \right\} \quad \text{and} \quad \hat k = \min\{k \in S\}.$$
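A possible implementation sketch of this selection rule follows; it is only a heuristic stand-in, since the exact projection onto $\mathcal{A}(k, a)$ is replaced by truncated SVD followed by entrywise clipping to $[-a, a]$, the constant $C$ in $r_k$ is set to $1$ for illustration, and the criterion uses the squared Frobenius norm normalized by $m_1 m_2$, matching the normalization of (4). The helper names are hypothetical.

```python
import numpy as np

def project_rank_k(M_hat, k, a):
    """Heuristic projection onto A(k, a): keep the top-k singular directions,
    then clip entries to [-a, a] (not the exact constrained projection)."""
    Usvd, s, Vt = np.linalg.svd(M_hat, full_matrices=False)
    Mk = (Usvd[:, :k] * s[:k]) @ Vt[:k]
    return np.clip(Mk, -a, a)

def lepski_rank(M_hat, a, sigma, n, C=1.0):
    """Smallest k whose rank-k projection is r_k-close to M_hat in
    normalized squared Frobenius norm; returns (k_hat, M_tilde)."""
    m1, m2 = M_hat.shape
    d = m1 + m2
    for k in range(1, min(m1, m2) + 1):
        r_k = C * (sigma + a) ** 2 * d * k / n
        Mk = project_rank_k(M_hat, k, a)
        if np.mean((M_hat - Mk) ** 2) <= r_k:
            return k, Mk
    return min(m1, m2), M_hat
```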

We will use $\widetilde M = \widehat M_{\hat k}$ to center our confidence set, together with the residual sum of squares statistic $\hat r_n$:
$$\hat r_n = \frac{1}{n} \sum_{ij} \left( Y_{ij} - B_{ij}\widetilde M_{ij} \right)^2 - \sigma^2. \qquad (5)$$

Given $\alpha > 0$, let
$$\bar z = \frac{p}{256}\,\|M - \widetilde M\|_2^2 + z\,(U C^*)^2\, d\hat k \qquad \text{and} \qquad \xi_{\alpha,U} = 2U^2\sqrt{\log(\alpha^{-1})} + \frac{4U^2\log(\alpha^{-1})}{3\sqrt n}.$$
Here $z$ is a sufficiently large numerical constant to be chosen later on, and $C^* \ge 2$ is the universal constant from Corollary 3.12 in [1].

We define the confidence set as follows:
$$C_n = \left\{ M \in \mathbb{R}^{m_1 \times m_2} :\ \frac{\|M - \widetilde M\|_2^2}{m_1 m_2} \le 128\left( \hat r_n + \frac{a^2 z d\hat k + \bar z}{n} \right) + \frac{\xi_{\alpha,U}}{\sqrt n} \right\}. \qquad (6)$$
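In code, the data-dependent part of the bound in (6) is cheap to evaluate. The sketch below omits the term $\bar z$, which depends on the candidate matrix $M$ itself and can be moved to the left-hand side when testing membership; the constant $z$ and the centering estimator are placeholders, not calibrated values.

```python
import numpy as np

def conf_radius(Y, B, M_tilde, k_hat, n, sigma, a, U, alpha, z=6240.0):
    """M-independent part of the right-hand side of (6); the z_bar term
    (which involves ||M - M_tilde||_2^2 for the candidate M) is omitted."""
    m1, m2 = Y.shape
    d = m1 + m2
    r_hat = np.sum((Y - B * M_tilde) ** 2) / n - sigma ** 2   # statistic (5)
    xi = (2 * U**2 * np.sqrt(np.log(1.0 / alpha))
          + 4 * U**2 * np.log(1.0 / alpha) / (3 * np.sqrt(n)))
    return 128 * (r_hat + a**2 * z * d * k_hat / n) + xi / np.sqrt(n)

# Membership test for a candidate M (up to the z_bar correction):
# M_in_Cn = np.mean((M - M_tilde) ** 2) <= conf_radius(Y, B, M_tilde, k_hat,
#                                                      n, sigma, a, U, alpha)
```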

Theorem 1. Let $\alpha > 0$, $d > 16$, and suppose that $\widehat M$ attains the risk bound (4) with probability at least $1 - \beta$. Let $C_n$ be given by (6). Assume that $\|M_0\|_\infty \le a$ and that Assumption 1 is satisfied. Then, for every $n \ge m\log(d)$, we have
$$\mathbb{P}_{M_0}\left( M_0 \in C_n \right) \ge 1 - \alpha - \exp(-cd). \qquad (7)$$

Moreover, with probability at least $1 - \beta - \exp(-cd)$,
$$\frac{\|C_n\|_2^2}{m_1 m_2} \le C(\sigma + a)^2\,\frac{d k_0}{n}. \qquad (8)$$

Theorem 1 implies that $C_n$ is an honest and adaptive confidence set:

Corollary 1. Let $\alpha > 0$, $d > 16$, and suppose that $\widehat M$ attains the risk bound (4) with probability at least $1 - \beta$. Let $C_n$ be given by (6) and assume that Assumption 1 is satisfied. Then, for $n \ge m\log(d)$, $C_n$ is an $(\alpha + \exp(-cd))$-honest confidence set for the model $\mathcal{A}(m, a)$ and adapts to every sub-model $\mathcal{A}(k, a)$, $1 \le k \le m$, at level $\beta + \exp(-cd)$.


Proof of Theorem 1. We consider the following sets:
$$\mathcal{C}(k, a) = \left\{ A \in \mathbb{R}^{m_1 \times m_2} :\ \|A\|_\infty \le a,\ \|M_0 - A\|_2^2 \ge \frac{256(a \vee U)^2 zd}{p} \text{ and } \mathrm{rank}(A) \le k \right\}$$
and write
$$\mathcal{C} = \bigcup_{k=1}^{m} \mathcal{C}(k, a). \qquad (9)$$
When $\|M_0 - \widetilde M\|_2^2 \le \frac{256(a \vee U)^2 zd}{p}$ we have that $M_0 \in C_n$. So we only need to consider the case $\|M_0 - \widetilde M\|_2^2 \ge \frac{256(a \vee U)^2 zd}{p}$; in this case $\widetilde M \in \mathcal{C}$. We introduce the observation operator $\mathcal{X}$ defined by
$$\mathcal{X} : \mathbb{R}^{m_1 \times m_2} \to \mathbb{R}^{m_1 \times m_2}, \qquad \mathcal{X}(A) = (B_{ij} a_{ij})_{ij},$$
and set $\|A\|^2_{L_2(\Pi)} = \mathbb{E}\,\|\mathcal{X}(A)\|_2^2 = p\,\|A\|_2^2$. We can decompose
$$\hat r_n = n^{-1}\,\|\mathcal{X}(\widetilde M - M_0)\|_2^2 + 2n^{-1}\,\langle \mathcal{X}(E), M_0 - \widetilde M \rangle + n^{-1}\,\|\mathcal{X}(E)\|_2^2 - \sigma^2.$$
Then we can bound the probability $\mathbb{P}_{M_0}(M_0 \notin C_n)$ by the sum of the following probabilities:

$$\mathrm{I} := \mathbb{P}_{M_0}\left( \frac{\|\widetilde M - M_0\|^2_{L_2(\Pi)}}{128} > \|\mathcal{X}(\widetilde M - M_0)\|_2^2 + z a^2 d\hat k \right),$$
$$\mathrm{II} := \mathbb{P}_{M_0}\left( -2\langle \mathcal{X}(E), M_0 - \widetilde M \rangle > \bar z \right), \qquad \mathrm{III} := \mathbb{P}_{M_0}\left( -\|\mathcal{X}(E)\|_2^2 + n\sigma^2 > \sqrt n\,\xi_{\alpha,U} \right).$$
By Lemma 1, the first probability is bounded by $8\exp(-4d)$ for $z \ge (27C^*)^2$. For the second term we use Lemma 7, which implies that $\mathrm{II} \le \exp(-cd)$ for $z \ge 6240$. Finally, for the third term, Bernstein's inequality implies
$$\mathbb{P}\left( -\|\mathcal{X}(E)\|_2^2 + n\sigma^2 > t \right) \le \exp\left( -\frac{t^2}{2nU^4 + \frac{2}{3}U^2 t} \right).$$
Taking $t = 2U^2\sqrt{n\log(\alpha^{-1})} + \frac{4}{3}U^2\log(\alpha^{-1})$ we get that $\mathrm{III} \le \alpha$ by the definition of $\xi_{\alpha,U}$. This completes the proof of (7).

To prove (8), using Lemma 1 and Lemma 7 we can bound the squared Frobenius-norm diameter of our confidence set $C_n$ defined in (6) as follows:
$$\frac{\|C_n\|_2^2}{m_1 m_2} \lesssim \frac{\|\widetilde M - M_0\|_2^2}{m_1 m_2} + r_{\hat k} + \frac{\xi_{\alpha,U}}{\sqrt n}.$$

This bound holds on an event of probability at least $1 - \exp(-cd)$. Now we restrict to the event where $\widehat M$ attains the risk bound in (4), which happens with probability at least $1 - \beta$. On this event, $M_0 \in \mathcal{A}(k_0, a)$ implies $\|\widehat M - \widehat M_{k_0}\|_2^2 / (m_1 m_2) \le r_{k_0}$. So $k_0 \in S$ and $\hat k \le k_0$. Now the triangle inequality and $r_{\hat k} \le r_{k_0}$ imply that, on the intersection of those two events,
$$\|\widetilde M - M_0\|_2^2 \lesssim m_1 m_2\left( r_{k_0} + r_{\hat k} \right) \lesssim m_1 m_2\, r_{k_0}.$$
This, together with the definition of $\xi_{\alpha,U}$ and the condition $n \le m_1 m_2$, completes the proof of (8).


A Technical Lemmas

Lemma 1. With probability larger than $1 - 8\exp(-4d)$ we have that
$$\sup_{A \in \mathcal{C}} \frac{\left|\, \|\mathcal{X}(M_0 - A)\|_2 - \|M_0 - A\|_{L_2(\Pi)} \right| - \frac{7}{8}\,\|M_0 - A\|_{L_2(\Pi)}}{a\sqrt{\mathrm{rank}(A)\,d}} \le 27\,C^*,$$
where $C^*$ is a universal numerical constant and $\mathcal{C}$ is defined in (9).

Proof. We have that
$$\mathbb{P}\left( \sup_{A \in \mathcal{C}} \frac{\left|\, \|\mathcal{X}(M_0 - A)\|_2 - \|M_0 - A\|_{L_2(\Pi)} \right| - \frac{7}{8}\,\|M_0 - A\|_{L_2(\Pi)}}{a\sqrt{\mathrm{rank}(A)\,d}} \ge 27\,C^* \right)$$
$$\le \sum_{k=1}^{k_0} \underbrace{\mathbb{P}\left( \sup_{A \in \mathcal{C}(k,a)} \left|\, \|\mathcal{X}(M_0 - A)\|_2 - \|M_0 - A\|_{L_2(\Pi)} \right| - \frac{7}{8}\,\|M_0 - A\|_{L_2(\Pi)} \ge 27\,C^* a\sqrt{kd} \right)}_{\mathrm{I}}. \qquad (10)$$

In order to upper bound I, we use a standard peeling argument. Let $\alpha = 7/6$ and $\nu^2 = \frac{470 a^2 zd}{p}$. For $l \in \mathbb{N}$ set
$$S_l = \left\{ A \in \mathcal{C}(k, a) :\ \alpha^l \nu \le \|A - M_0\|_2 \le \alpha^{l+1}\nu \right\}.$$
Then

$$\mathrm{I} \le \sum_{l=1}^{\infty} \mathbb{P}\left( \sup_{A \in S_l} \left|\, \|\mathcal{X}(M_0 - A)\|_2 - \|M_0 - A\|_{L_2(\Pi)} \right| \ge 27\,C^* a\sqrt{kd} + \frac{7}{8}\,\alpha^l a\sqrt{470\,zd} \right)$$
$$\le \sum_{l=1}^{\infty} \underbrace{\mathbb{P}\left( \sup_{A \in \mathcal{C}(k,a,\alpha^{l+1}\nu)} \left|\, \|\mathcal{X}(M_0 - A)\|_2 - \|M_0 - A\|_{L_2(\Pi)} \right| \ge 27\,C^* a\sqrt{kd} + \frac{7}{8}\,\alpha^l a\sqrt{470\,zd} \right)}_{\mathrm{II}}$$
where $\mathcal{C}(k, a, T) = \{ A \in \mathcal{C}(k, a) :\ \|M_0 - A\|_2 \le T \}$. The following lemma gives an upper bound on II:

Lemma 2. Consider the set of matrices $\mathcal{C}(k, a, T) = \{ A \in \mathcal{C}(k, a) :\ \|M_0 - A\|_2 \le T \}$ and set
$$Z_T = \sup_{A \in \mathcal{C}(k,a,T)} \left|\, \|\mathcal{X}(M_0 - A)\|_2 - \|M_0 - A\|_{L_2(\Pi)} \right|.$$
Then we have that
$$\mathbb{P}\left( Z_T \ge \frac{3}{4}\sqrt p\,T + 27\,C^* a\sqrt{kd} \right) \le 4 e^{-c_1 p T^2 / a^2}$$
with $c_1 \ge (512)^{-1}$.

Lemma 2 implies that $\mathrm{II} \le 8\exp(-c_1 p\,\alpha^{2l}\nu^2/a^2)$, and we obtain
$$\mathrm{I} \le 8\sum_{l=1}^{\infty} \exp\left( -c_1 p\,\alpha^{2l+2}\nu^2/a^2 \right) \le 8\sum_{l=1}^{\infty} \exp\left( -2c_1 p\,\nu^2 \log(\alpha)\, l\,/a^2 \right),$$
where we used $e^x \ge x$. We finally compute, for $\nu^2 = 188 a^2 z d\, p^{-1}$,
$$\mathrm{I} \le \frac{8\exp\left( -2c_1 p\,\nu^2\log(\alpha)/a^2 \right)}{1 - \exp\left( -2c_1 p\,\nu^2\log(\alpha)/a^2 \right)} \le 16\exp\left( -376\, c_1 z d\log(7/6) \right) \le \exp(-5d),$$
where we take $z \ge (27C^*)^2$. Using (10) and $d \ge \log(m)$ we get the statement of Lemma 1.

Proof of Lemma 2. This proof is close to the proof of Theorem 1 in [15]. We start by applying a discretization argument. Let $\{G_\delta^1, \dots, G_\delta^{N(\delta)}\}$ be a $\delta$-covering of $\mathcal{C}(k, a, T)$ given by Lemma 3. Then, for any $A \in \mathcal{C}(k, a, T)$ there exist an index $i \in \{1, \dots, N(\delta)\}$ and a matrix $\Delta$ with $\|\Delta\|_2 \le \delta$ such that $A = G_\delta^i + \Delta$. Using the triangle inequality we have
$$\left|\, \|M_0 - A\|_{L_2(\Pi)} - \|\mathcal{X}(M_0 - A)\|_2 \right| \le \left|\, \|\mathcal{X}(M_0 - G_\delta^i)\|_2 - \|M_0 - G_\delta^i\|_{L_2(\Pi)} \right| + \|\mathcal{X}(\Delta)\|_2 + \sqrt p\,\delta.$$

Lemma 3 implies that $\Delta \in \mathcal{D}_\delta(2k, 2a, 2T)$, where
$$\mathcal{D}_\delta(k, a, T) = \left\{ A \in \mathbb{R}^{m_1 \times m_2} :\ \|A\|_\infty \le a,\ \|A\|_2 \le \delta \text{ and } \|A\|_1 \le \sqrt k\,T \right\}.$$
Then,

$$Z_T \le \max_{i=1,\dots,N(\delta)} \left|\, \|\mathcal{X}(M_0 - G_\delta^i)\|_2 - \|M_0 - G_\delta^i\|_{L_2(\Pi)} \right| + \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \|\mathcal{X}(\Delta)\|_2 + \sqrt p\,\delta.$$
Now we take $\delta = T/8$ and use Lemma 6 and Lemma 5 to get
$$Z_T \le \sqrt p\,\delta + 8\,C^* a\sqrt{kd} + 19\,a C^* \sqrt{2kd} + \sqrt p\,T/2 + \sqrt p\,\delta \le 27\,C^* a\sqrt{kd} + \frac{6\sqrt p\,T}{8}$$
with probability at least $1 - 8\exp\left( -\frac{pT^2}{512\,a^2} \right)$.

Lemma 3. Let $\delta = T/8$. There exists a set of matrices $\{G_\delta^1, \dots, G_\delta^{N_\delta}\}$ with $N_\delta \le \left( \frac{18T}{\delta} \right)^{2(d+1)k}$ such that

(i) for any $A \in \mathcal{C}(k, a, T)$ there exists a $G_\delta^A \in \{G_\delta^1, \dots, G_\delta^{N_\delta}\}$ satisfying $\|A - G_\delta^A\|_2 \le \delta$ and $(A - G_\delta^A) \in \mathcal{D}_\delta(2k, 2a, 2T)$;

(ii) moreover, $\|G_\delta^j - M_0\|_\infty \le 2a$ and $\|G_\delta^j - M_0\|_2 \le 2T$ for any $j = 1, \dots, N_\delta$.

Proof. We use the following result (see Lemma 3.1 in [5] and Lemma A.2 in [17]):

Lemma 4. Let $\mathcal{S}(k, T) = \{ A \in \mathbb{R}^{m_1 \times m_2} :\ \mathrm{rank}(A) \le k \text{ and } \|A\|_2 \le T \}$. Then there exists an $\epsilon$-net $\bar{\mathcal{S}}(k, T)$ for the Frobenius norm obeying
$$\left| \bar{\mathcal{S}}(k, T) \right| \le (9T/\epsilon)^{(m_1 + m_2 + 1)k}.$$

Let $\mathcal{S}_{M_0}(k, T) = \{ A \in \mathbb{R}^{m_1 \times m_2} :\ \mathrm{rank}(A) \le k \text{ and } \|A - M_0\|_2 \le T \}$ and take an $X_0 \in \mathcal{C}(k, a, T)$. We have that $\mathcal{S}_{M_0}(k, T) - X_0 \subset \mathcal{S}(2k, 2T)$. Let $\bar{\mathcal{S}}(2k, 2T)$ be a $\delta$-net given by Lemma 4. Then, for any $A \in \mathcal{S}_{M_0}(k, T)$ there exists a $\bar G_\delta^A \in \bar{\mathcal{S}}(2k, 2T)$ such that $\|A - X_0 - \bar G_\delta^A\|_2 \le \delta$. Let $G_\delta^j = \Pi(\bar G_\delta^j) + X_0$ for $j = 1, \dots, |\bar{\mathcal{S}}(2k, 2T)|$, where $\Pi$ is the Frobenius-norm projection onto the set
$$\mathcal{D}(2k, 2a, 2T) = \left\{ A \in \mathbb{R}^{m_1 \times m_2} :\ \|A\|_\infty \le 2a \text{ and } \|A\|_1 \le 2\sqrt{2k}\,T \right\}.$$


Note that, as $\mathcal{D}(2k, 2a, 2T)$ is convex and closed, $\Pi$ is non-expansive in the Frobenius norm. For any $A \in \mathcal{C}(k, a, T) \subset \mathcal{S}_{M_0}(k, T)$ we have that $A - X_0 \in \mathcal{D}(2k, 2a, 2T)$, which implies
$$\|A - X_0 - \Pi(\bar G_\delta^A)\|_2 = \|\Pi(A - X_0) - \Pi(\bar G_\delta^A)\|_2 \le \|A - X_0 - \bar G_\delta^A\|_2 \le \delta,$$
and we have that $(A - \Pi(\bar G_\delta^A) - X_0) \in \mathcal{D}_\delta(2k, 2a, 2T)$, which completes the proof of part (i) of Lemma 3.

To prove (ii), note that by the definition of $\Pi$ we have
$$\|G_\delta^j - M_0\|_\infty = \|\Pi(\bar G_\delta^j) + X_0 - M_0\|_\infty \le 2a \qquad \text{and} \qquad \|G_\delta^j - M_0\|_2 \le 2T.$$

Lemma 5. Let $\delta = T/8$ and assume that $n \ge m\log(m)$. Then, with probability at least $1 - 4\exp\left( -\frac{pT^2}{512\,a^2} \right)$,
$$\sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \|\mathcal{X}(\Delta)\|_2 \le 19\,a C^* \sqrt{2kd} + \sqrt p\,T/2.$$

Proof. Let $X_T = \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \|\mathcal{X}(\Delta)\|_2$. We use the following concentration inequality of Talagrand:

Theorem 2. Suppose that $f : [-1,1]^N \to \mathbb{R}$ is a convex Lipschitz function with Lipschitz constant $L$. Let $\Xi_1, \dots, \Xi_N$ be independent random variables taking values in $[-1,1]$, and let $Z := f(\Xi_1, \dots, \Xi_N)$. Then for any $t \ge 0$,
$$\mathbb{P}\left( |Z - \mathbb{E}(Z)| \ge 16L + t \right) \le 4 e^{-t^2/2L^2}.$$

For a proof see [16] and [8]. Let $f(x_{11}, \dots, x_{m_1 m_2}) := \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \sqrt{\sum_{(i,j)} x_{ij}^2 \Delta_{ij}^2}$. It is easy to see that $f$ is a Lipschitz function with Lipschitz constant $L = 2a$. Indeed,
$$\left| f(x_{11}, \dots, x_{m_1 m_2}) - f(z_{11}, \dots, z_{m_1 m_2}) \right| = \left| \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \sqrt{\sum_{(i,j)} x_{ij}^2 \Delta_{ij}^2} - \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \sqrt{\sum_{(i,j)} z_{ij}^2 \Delta_{ij}^2} \right|$$
$$\le \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \sqrt{\sum_{(i,j)} (x_{ij} - z_{ij})^2 \Delta_{ij}^2} \le 2a\,\|x - z\|_2,$$
where $x = (x_{11}, \dots, x_{m_1 m_2})$ and $z = (z_{11}, \dots, z_{m_1 m_2})$. Now Theorem 2 implies
$$\mathbb{P}\left( X_T \ge \mathbb{E}(X_T) + 32a + t \right) \le 4\exp\left( -\frac{t^2}{8a^2} \right). \qquad (11)$$

Next we bound the expectation $\mathbb{E}(X_T)$. Applying Jensen's inequality, a symmetrization argument and the Ledoux-Talagrand contraction inequality (see, e.g., [13]), we get
$$\left( \mathbb{E}(X_T) \right)^2 \le \mathbb{E}\left( \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \sum_{(i,j)} B_{ij}\,\Delta_{ij}^2 \right) \le \mathbb{E}\left( \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \left| \sum_{(i,j)} \left( B_{ij}\,\Delta_{ij}^2 - \mathbb{E}\,B_{ij}\,\Delta_{ij}^2 \right) \right| \right) + p\delta^2$$
$$\le 2\,\mathbb{E}\left( \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \left| \sum_{(i,j)} \epsilon_{ij} B_{ij}\,\Delta_{ij}^2 \right| \right) + p\delta^2 \le 8a\,\mathbb{E}\left( \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \left| \sum_{(i,j)} \epsilon_{ij} B_{ij}\,\Delta_{ij} \right| \right) + p\delta^2$$
$$= 8a\,\mathbb{E}\left( \sup_{\Delta \in \mathcal{D}_\delta(2k,2a,2T)} \left| \langle \Sigma_R, \Delta \rangle \right| \right) + p\delta^2 \le 16a\sqrt{2k}\,T\,\mathbb{E}\left( \|\Sigma_R\| \right) + p\delta^2$$


where $\{\epsilon_{ij}\}$ is an i.i.d. Rademacher sequence, $\Sigma_R = \sum_{(i,j)} B_{ij}\epsilon_{ij} X_{ij}$ with $X_{ij} = e_i(m_1)e_j^T(m_2)$, and the $e_k(l)$ are the canonical basis vectors of $\mathbb{R}^l$. Lemma 4 in [11] and $n \ge m\log(m)$ imply that
$$\mathbb{E}\,\|\Sigma_R\| \le C^*\sqrt{pd} \qquad (12)$$

where $C^* \ge 2$ is a universal numerical constant. Using (12), $\sqrt{x + y} \le \sqrt x + \sqrt y$, $2xy \le x^2 + y^2$ and $\delta = T/8$, we compute
$$\mathbb{E}(X_T) \le 4\left( a C^* \sqrt{2kpd}\;T \right)^{1/2} + \sqrt p\,\delta \le 16\,a C^* \sqrt{2kd} + 3\sqrt p\,T/8.$$
Taking $t = \sqrt p\,T/8$ in (11) we get the statement of Lemma 5.

Lemma 6. Let $\delta = T/8$, $d > 16$, and let $(G_\delta^1, \dots, G_\delta^{N(\delta)})$ be the collection of matrices given by Lemma 3. We have that
$$\max_{j=1,\dots,N(\delta)} \left|\, \|\mathcal{X}(M_0 - G_\delta^j)\|_2 - \|M_0 - G_\delta^j\|_{L_2(\Pi)} \right| \le \sqrt p\,\delta + 8\,C^* a\sqrt{kd}$$
with probability at least $1 - 4\exp\left( -\frac{p\delta^2}{8a^2} \right)$.

Proof. For any fixed $A \in \mathbb{R}^{m_1 \times m_2}$ satisfying $\|A\|_\infty \le 2a$ we have that
$$\|\mathcal{X}(A)\|_2 = \sqrt{\sum_{ij} B_{ij} A_{ij}^2} = \sup_{\|u\|_2 = 1} \sum_{ij} u_{ij} B_{ij} A_{ij}.$$
Then we can apply Theorem 2 with $f(x_{11}, \dots, x_{m_1 m_2}) := \sup_{\|u\|_2 = 1} \sum_{ij} u_{ij} x_{ij}$ to get
$$\mathbb{P}\left( \left|\, \|\mathcal{X}(A)\|_2 - \mathbb{E}\,\|\mathcal{X}(A)\|_2 \right| > t + 32a \right) \le 4\exp\left( -\frac{t^2}{8a^2} \right). \qquad (13)$$

On the other hand, let $Z = \sup_{\|u\|_2 = 1} \sum_{ij} u_{ij} B_{ij} A_{ij}$. Applying Corollary 4.8 from [14] we get that
$$\mathrm{Var}\,Z = \|A\|^2_{L_2(\Pi)} - \left( \mathbb{E}\,\|\mathcal{X}(A)\|_2 \right)^2 \le 16^2 a^2,$$
which together with (13) implies
$$\mathbb{P}\left( \left|\, \|\mathcal{X}(A)\|_2 - \|A\|_{L_2(\Pi)} \right| > t + 48a \right) \le 4\exp\left( -\frac{t^2}{8a^2} \right). \qquad (14)$$
Now Lemma 6 follows from Lemma 3, (14) with $t = \sqrt p\,\delta + 5C^* a\sqrt{kd}$, and the union bound.

Lemma 7. We have that
$$\sup_{A \in \mathcal{C}} \frac{\left| \langle \mathcal{X}(E), A - M_0 \rangle \right| - \frac{1}{256}\,\|M_0 - A\|^2_{L_2(\Pi)}}{d\,\mathrm{rank}(A)} \le 6240\,(U C^*)^2$$
with probability larger than $1 - \exp(-cd)$, where $c \ge 0.003$.


Proof. Following the lines of the proof of Lemma 1 with $\alpha = \sqrt{65/64}$ and $\nu^2 = \frac{252(a \vee U)^2 zd}{p}$, we get
$$\mathbb{P}\left( \sup_{A \in \mathcal{C}} \frac{\left| \langle \mathcal{X}(E), A - M_0 \rangle \right| - \frac{1}{256}\,\|M_0 - A\|^2_{L_2(\Pi)}}{d\,\mathrm{rank}(A)} \ge 6240\,(U C^*)^2 \right)$$
$$\le \sum_{k=1}^{k_0} \sum_{l=1}^{\infty} \mathbb{P}\left( \sup_{A \in \mathcal{C}(k,a,\alpha^{l+1}\nu)} \left| \langle \mathcal{X}(E), A - M_0 \rangle \right| \ge 6240\,(U C^*)^2 dk + \frac{p\,\alpha^{2l}\nu^2}{256} \right)$$
$$\le 4\sum_{k=1}^{k_0} \sum_{l=1}^{\infty} \exp\left( -\frac{p\,\alpha^{2l+2}\nu^2}{c_2\,(a \vee U)^2} \right) \le \exp(-cd),$$
where we use the following lemma:

Lemma 8. Consider the set of matrices $\mathcal{C}(k, a, T) = \{ A \in \mathcal{C}(k, a) :\ \|M_0 - A\|_2 \le T \}$ and set
$$\widetilde Z_T = \sup_{A \in \mathcal{C}(k,a,T)} \left| \langle \mathcal{X}(E), A - M_0 \rangle \right|.$$
We have that
$$\mathbb{P}\left( \widetilde Z_T \ge 6240\,(C^* U)^2 dk + pT^2/260 \right) \le 4\exp\left( -\frac{pT^2}{c_2\,(a \vee U)^2} \right)$$
with $c_2 \le 12\,(1560)^2$.

Proof of Lemma 8. Fix an $X_0 \in \mathcal{C}(k, a, T)$. For any $A \in \mathcal{C}(k, a, T)$ we set $\Delta = A - X_0$, so that $\mathrm{rank}(\Delta) \le 2k$ and $\|\Delta\|_2 \le 2T$. Then, using $|\langle \mathcal{X}(E), A - M_0 \rangle| \le |\langle \mathcal{X}(E), X_0 - M_0 \rangle| + |\langle \mathcal{X}(E), \Delta \rangle|$, we get
$$\widetilde Z_T \le \left| \langle \mathcal{X}(E), X_0 - M_0 \rangle \right| + \sup_{\Delta \in \mathcal{T}(2k,2a,2T)} \left| \langle \mathcal{X}(E), \Delta \rangle \right|$$
where
$$\mathcal{T}(2k, 2a, 2T) = \left\{ A \in \mathbb{R}^{m_1 \times m_2} :\ \|A\|_\infty \le 2a,\ \|A\|_2 \le 2T \text{ and } \mathrm{rank}(A) \le 2k \right\}.$$
Bernstein's inequality and $\|X_0 - M_0\|_2 \le T$ imply that
$$\mathbb{P}\left( \left| \langle \mathcal{X}(E), X_0 - M_0 \rangle \right| > t \right) \le 2\exp\left( -\frac{t^2}{2\sigma^2 pT^2 + \frac{4}{3}\,U a t} \right).$$

Taking $t = pT^2/520$ we get
$$\mathbb{P}\left( \left| \langle \mathcal{X}(E), X_0 - M_0 \rangle \right| > pT^2/520 \right) \le 2\exp\left( -\frac{pT^2}{c_2\,(a \vee U)^2} \right). \qquad (15)$$

On the other hand, Lemma 9 implies that, with probability at least $1 - 2\exp\left( -\frac{pT^2}{c_3\,(a \vee U)^2} \right)$,
$$\sup_{\Delta \in \mathcal{T}(2k,2a,2T)} \left| \langle \mathcal{X}(E), \Delta \rangle \right| \le 6240\,(C^* U)^2 kd + pT^2/520,$$
which together with (15) implies the statement of Lemma 8.


Lemma 9. Assume that $n \ge m\log(m)$. We have that, with probability at least $1 - 2\exp\left( -\frac{pT^2}{c_3\,(a \vee U)^2} \right)$,
$$\sup_{\Delta \in \mathcal{T}(2k,2a,2T)} \left| \langle \mathcal{X}(E), \Delta \rangle \right| \le 6240\,(C^* U)^2 kd + pT^2/520,$$
where $c_3$ is a numerical constant.

Proof. Let $\widetilde X_T = \sup_{\Delta \in \mathcal{T}(2k,2a,2T)} |\langle \mathcal{X}(E), \Delta \rangle| = \sup_{\Delta \in \mathcal{T}(2k,2a,2T)} \langle \mathcal{X}(E), \Delta \rangle$ (the class $\mathcal{T}(2k, 2a, 2T)$ is symmetric). First we bound the expectation $\mathbb{E}(\widetilde X_T)$:
$$\mathbb{E}(\widetilde X_T) \le \mathbb{E}\left( \sup_{\Delta \in \mathcal{T}(2k,2a,2T)} \left| \sum_{(i,j)} \varepsilon_{ij} B_{ij}\,\Delta_{ij} \right| \right) = \mathbb{E}\left( \sup_{\Delta \in \mathcal{T}(2k,2a,2T)} \left| \langle \Sigma, \Delta \rangle \right| \right) \le 2\sqrt{2k}\,T\,\mathbb{E}\left( \|\Sigma\| \right),$$
where $\Sigma = \sum_{(i,j)} B_{ij}\varepsilon_{ij} X_{ij}$ with $X_{ij} = e_i(m_1)e_j^T(m_2)$, and the $e_k(l)$ are the canonical basis vectors of $\mathbb{R}^l$. Using $n \ge m\log(m)$, Lemma 4 in [11] and Corollary 3.3 in [1] imply that
$$\mathbb{E}\,\|\Sigma\| \le C^* U \sqrt{pd}, \qquad (16)$$
where $C^* \ge 2$ is a universal numerical constant. Using (16) we get
$$\mathbb{E}(\widetilde X_T) \le 2\,C^* U \sqrt{2kpd}\;T \le 3120\,(C^* U)^2 kd + \frac{pT^2}{1560}. \qquad (17)$$
Now we use Theorem 3.3.16 in [9] (see also Theorem 8.1 in [7]) to obtain
$$\mathbb{P}\left( \widetilde X_T \ge \mathbb{E}(\widetilde X_T) + t \right) \le \exp\left( -\frac{t^2}{4Ua\,\mathbb{E}(\widetilde X_T) + 4\sigma^2 pT^2 + 9Uat} \right) \le \exp\left( -\frac{t^2}{8aU^2 C^* \sqrt{2kpd}\;T + 4\sigma^2 pT^2 + 9Uat} \right). \qquad (18)$$
Taking $t = pT^2/1560 + 2U C^* \sqrt{2kpd}\;T$ in (18), together with (17), we get the statement of Lemma 9.

Acknowledgements

The work of A. Carpentier is supported by the DFG's Emmy Noether grant MuSyAD (CA 1488/1-1). The work of O. Klopp was conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01). The work of M. Löffler was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis, and the European Research Council (ERC) grant No. 647812.

References

[1] Afonso S. Bandeira and Ramon van Handel. Sharp nonasymptotic bounds on the norm of

random matrices with independent entries. Ann. Probab., 44(4):2479–2506, 2016.


[2] T. Tony Cai and Wen-Xin Zhou. Matrix completion via max-norm constrained optimization.

Electron. J. Stat., 10(1):1493–1525, 2016.

[3] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2009.

[4] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.

[5] Emmanuel J. Candès and Yaniv Plan. Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements. CoRR, abs/1001.0339, 2010.

[6] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053–2080, 2010.

[7] A. Carpentier, O. Klopp, M. Löffler, and R. Nickl. Adaptive confidence sets for matrix completion. To appear in Bernoulli, 2017.

[8] Sourav Chatterjee. Matrix estimation by universal singular value thresholding. Ann. Statist., 43(1):177–214, 2015.

[9] E. Giné and R. Nickl. Mathematical Foundations of Infinite-Dimensional Statistical Models.

Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2015.

[10] Olga Klopp. Noisy low-rank matrix completion with general sampling distribution.

Bernoulli, 20(1):282–303, 2014.

[11] Olga Klopp. Matrix completion by singular value thresholding: sharp bounds. Electron. J.

Stat., 9(2):2348–2369, 2015.

[12] V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist., 39(5):2302–2329, 2011.

[13] Vladimir Koltchinskii. Oracle inequalities in empirical risk minimization and sparse recovery problems, volume 2033 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011. Lectures from the 38th Probability Summer School held in Saint-Flour, 2008, École d'Été de Probabilités de Saint-Flour.

[14] M. Ledoux. The concentration of measure phenomenon, volume 89 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001.

[15] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: optimal bounds with noise. Journal of Machine Learning Research, 13:1665–1697, 2012.

[16] Michel Talagrand. A new look at independence. Ann. Probab., 24(1):1–34, 1996.

[17] Yu-Xiang Wang and Huan Xu. Stability of matrix factorization for collaborative filtering.

In Proceedings of the 29th International Conference on Machine Learning, ICML 2012,

Edinburgh, Scotland, UK, June 26 - July 1, 2012, 2012.
