
HAL Id: hal-00811808

https://hal-upec-upem.archives-ouvertes.fr/hal-00811808

Preprint submitted on 11 Apr 2013


Estimating the covariance of random matrices

Pierre Youssef

To cite this version:

Pierre Youssef. Estimating the covariance of random matrices. 2013. hal-00811808


PIERRE YOUSSEF

ABSTRACT. We extend to the matrix setting a recent result of Srivastava-Vershynin [21] about estimating the covariance matrix of a random vector. The result can be interpreted as a quantified version of the law of large numbers for positive semi-definite matrices which satisfy some regularity assumption. Besides giving examples, we discuss the notion of log-concave matrices and give estimates on the smallest and largest eigenvalues of a sum of such matrices.

1. INTRODUCTION

In recent years, interest in matrix-valued random variables has gained momentum. Many of the results dealing with real random variables and random vectors were extended to cover random matrices. Concentration inequalities like Bernstein, Hoeffding and others were obtained in the non-commutative setting ([5], [22], [14]). The methods used were mostly a combination of methods from the real/vector case and some matrix inequalities like the Golden-Thompson inequality (see [8]).

Estimating the covariance matrix of a random vector has gained a lot of interest recently. Given a random vector X in R^n, the question is to estimate Σ = E XX^t. A natural way to do this is to take X_1, .., X_N independent copies of X and try to approximate Σ with the sample covariance matrix Σ_N = (1/N) Σ_i X_i X_i^t. The challenging problem is to find the minimal number of samples needed to estimate Σ. It is known, using a result of Rudelson (see [19]), that for general distributions supported on the sphere of radius √n it suffices to take cn log(n) samples. But for many distributions, a number of samples proportional to n is sufficient. Using standard arguments, one can verify this for Gaussian vectors. It was conjectured by Kannan-Lovasz-Simonovits [11] that the same result holds for log-concave distributions. This problem was solved by Adamczak et al. ([3], [4]). Recently, Srivastava-Vershynin proved in [21] a covariance estimate with a number of samples proportional to n, for a larger class of distributions covering the log-concave case. The method used was different from previous work in this field, and the main idea was to randomize the sparsification theorem of Batson-Spielman-Srivastava [7].
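As a rough numerical illustration (not taken from the paper), the following sketch estimates the covariance of a standard Gaussian vector from N = 10n samples and reports the operator-norm error; the dimension, the factor 10 and the use of NumPy are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                  # dimension (arbitrary)
    N = 10 * n                               # number of samples, proportional to n
    X = rng.standard_normal((N, n))          # rows are independent copies of X; here Sigma = I_n
    Sigma_N = X.T @ X / N                    # sample covariance (1/N) sum_i X_i X_i^t
    err = np.linalg.norm(Sigma_N - np.eye(n), 2)   # operator-norm deviation
    print(f"n = {n}, N = {N}, ||Sigma_N - Sigma|| = {err:.3f}")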

Our aim in this paper is to adapt the work of Srivastava-Vershynin to the matrix setting, replacing the vector X in the covariance matrix problem by an n × m random matrix A and estimating E AA^t by the same techniques. This will be possible since, in the deterministic setting, the sparsification theorem of Batson-Spielman-Srivastava [7] has been extended to a matrix setting by De Carli Silva-Harvey-Sato [9], who precisely proved the following:

Theorem 1.1. Let B_1, . . . , B_m be positive semi-definite matrices of size n × n and arbitrary rank. Set B := Σ_i B_i. For any ε ∈ (0, 1), there is a deterministic algorithm to construct a vector y ∈ R^m with O(n/ε^2) nonzero entries such that y ≥ 0 and

B ⪯ Σ_i y_i B_i ⪯ (1 + ε)B.

For an n × n matrix A, denote by ‖A‖ the operator norm of A seen as an operator on ℓ_2^n. The main idea is to randomize the previous result using the techniques of Srivastava-Vershynin [21]. Our problem can be formulated as follows:

Take B a positive semi-definite random matrix of size n × n. How many independent copies of B are needed to approximate E B? That is, taking B_1, .., B_N independent copies of B, what is the minimal number of samples needed to make ‖(1/N) Σ_i B_i − E B‖ very small?

One can view this as an analogue of the covariance estimation of a random vector by taking for B the matrix AA^t, where A is an n × m random matrix. With some regularity, we will be able to take N proportional to n. However, in the general case this is no longer true. In fact, take B uniformly distributed on {n e_i e_i^t}_{i≤n}, where (e_j) denotes the canonical basis of R^n. It is easy to verify that E B = I_n, and (1/N) Σ_i B_i is a diagonal matrix whose diagonal coefficients are distributed as (n/N)(p_1, .., p_n), where p_i denotes the number of times e_i e_i^t is chosen. This problem is well-studied and it is known (see [12]) that we must take N ≥ cn log(n). This example is essentially due to Aubrun [6]. More generally, if B is a positive semi-definite random matrix such that E B = I_n and Tr(B) ≤ n almost surely, then by Rudelson's inequality in the non-commutative setting (see [15]) it is sufficient to take cn log(n) samples.
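A small simulation (not from the paper) of this example: each sample picks a coordinate uniformly at random, so (1/N) Σ_i B_i is diagonal with entries n·p_i/N, and the operator-norm error stays of order 1 until N reaches the coupon-collector scale n log(n); the parameters below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    for N in (2 * n, int(5 * n * np.log(n))):
        counts = np.bincount(rng.integers(0, n, size=N), minlength=n)   # p_1, .., p_n
        diag = n * counts / N                 # diagonal of (1/N) sum_i B_i
        err = np.max(np.abs(diag - 1.0))      # equals ||(1/N) sum_i B_i - I_n||
        print(f"N = {N:5d}   error = {err:.2f}")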

The method will work properly for a class of matrices satisfying a matrix strong regularity assumption, which we denote by (MSR) and which can be viewed as an analogue of the property (SR) defined in [21].

Definition 1.2 (Property (MSR)). Let B be an n × n positive semi-definite random matrix such that E B = I_n. We will say that B satisfies (MSR) if for some η > 0 we have:

P( ‖PBP‖ > t ) ≤ c / t^{1+η}   ∀ t > c·rank(P) and ∀ P orthogonal projection of R^n.

The main result of this paper is the following:


Theorem 1.3. Let B be an n × n positive semi-definite random matrix verifying E B = I_n and (MSR) for some η > 0. Then for every ε < 1, taking N = C_1(η) n / ε^{2+2/η} we have

E ‖ (1/N) Σ_{i=1}^N B_i − I_n ‖ ≤ ε,

where B_1, .., B_N are independent copies of B. One can take C_1(η) = (64c)^{1+2/η} (1 + 1/η)^{2/η} ∨ 64 (4c)^{1/η} (32 + 32/η)^{1+3/η} ∨ 256 (2c)^{3/2+2/η} (16 + 16/η)^{4/η}.

If X is an isotropic random vector of R^n, put B = XX^t; then ‖PBP‖ = ‖PX‖_2^2. Therefore if X verifies the property (SR) appearing in [21], then B verifies property (MSR). So applying Theorem 1.3 to B = XX^t, we recover the covariance estimation as stated in [21].
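For the identity used here, note that for any orthogonal projection P one has PBP = PXX^tP = (PX)(PX)^t, a rank-one positive semi-definite matrix, so its operator norm equals its trace, which is exactly ‖PX‖_2^2.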

In order to apply our result, besides giving some examples, we investigate the notion of log-concave matrices in relation with the definition of log-concave vectors. Moreover, observing some strong concentration inequalities satisfied by these matrices, we are able, using the ideas developed in the proof of the main theorem, to obtain some results with high probability rather than only in expectation, as is the case in the main result. This will be discussed in the last section of the paper.

The paper is organized as follows: in section 2, we show how to prove Theorem 1.3 using two other results (Theorem 2.1, Theorem 2.2) which we prove respectively in sections 3 and 4 using again two other results (Theorem 3.1, Theorem 4.1) whose proofs are given respectively in sections 5 and 6. In section 7, we give some applications and discuss the notion of log-concave matrices and prove some related results.

2. PROOF OF THEOREM 1.3

We first introduce a regularity assumption on the moments, which we denote by (MWR):

∃ p > 1 such that E ⟨Bx, x⟩^p ≤ C_p   ∀ x ∈ S^{n−1}.

Note that by a simple integration of tails, (MSR) (with P a rank one projection) implies (MWR) with p < 1 + η.
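To spell out this integration of tails: if P = xx^t is a rank-one projection then ‖PBP‖ = ⟨Bx, x⟩, so (MSR) gives P( ⟨Bx, x⟩ > t ) ≤ c/t^{1+η} for t > c, and hence, for any p < 1 + η,

E ⟨Bx, x⟩^p = ∫_0^∞ p t^{p−1} P( ⟨Bx, x⟩ > t ) dt ≤ ∫_0^c p t^{p−1} dt + ∫_c^∞ p c t^{p−2−η} dt = c^p + p c^{p−η} / (1 + η − p),

which is a finite constant C_p depending only on c, η and p.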

The proof of Theorem 1.3 is based on two theorems dealing with the smallest and largest eigenvalues of (1/N) Σ_{i=1}^N B_i.

Theorem 2.1. Let B_i be n × n independent positive semi-definite random matrices verifying E B_i = I_n and (MWR). Let ε < 1. Then for

N ≥ 16 (16 C_p)^{1/(p−1)} n / ε^{(2p−1)/(p−1)}

we get

E λ_min( (1/N) Σ_{i=1}^N B_i ) ≥ 1 − ε.

Theorem 2.2. Let B_i be n × n independent positive semi-definite random matrices verifying E B_i = I_n and (MSR). Then for any N we have

E λ_max( Σ_{i=1}^N B_i ) ≤ C(η)·(n + N).

Moreover, for ε < 1 and N ≥ C_2(η) n / ε^{2+2/η} we have

E λ_max( (1/N) Σ_{i=1}^N B_i ) ≤ 1 + ε.

One can take C_2(η) = 16 c^{1/η} (32 + 32/η)^{1+3/η} ∨ 16√2 (4c)^{3/2+2/η} (8 + 8/η)^{4/η}.

We will give the proofs of these two theorems in sections 3 and 4 respectively. We also need a simple lemma:

Lemma 2.3. Let 1 < r ≤ 2 and let Z_1, ..., Z_N be independent positive random variables with E Z_i = 1 and satisfying (E Z_i^r)^{1/r} ≤ M. Then

E | (1/N) Σ_{i=1}^N Z_i − 1 | ≤ 2M / N^{(r−1)/r}.

Proof. Let (ε_i)_{i≤N} be independent ±1 Bernoulli variables. By symmetrization and Jensen's inequality we can write

E | (1/N) Σ_{i=1}^N Z_i − 1 | ≤ (2/N) E | Σ_{i=1}^N ε_i Z_i | ≤ (2/N) E ( Σ_{i=1}^N Z_i^2 )^{1/2} ≤ (2/N) E ( Σ_{i=1}^N Z_i^r )^{1/r} ≤ (2/N) ( Σ_{i=1}^N E Z_i^r )^{1/r} ≤ 2M / N^{(r−1)/r}.
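As an illustration (not from the paper), here is a quick Monte-Carlo check of Lemma 2.3 with r = 3/2, using Pareto variables normalized to have mean 1; the distribution and the sample sizes are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(4)
    r, alpha, xm = 1.5, 2.0, 0.5                  # Pareto(alpha) on [xm, inf), mean 1
    M = (alpha * xm**r / (alpha - r)) ** (1 / r)  # M = (E Z^r)^{1/r}
    for N in (10**2, 10**3, 10**4):
        Z = xm * (rng.pareto(alpha, size=(200, N)) + 1.0)    # 200 repetitions of Z_1, .., Z_N
        emp = np.mean(np.abs(Z.mean(axis=1) - 1.0))          # estimate of E |(1/N) sum Z_i - 1|
        print(f"N = {N:5d}   E|mean - 1| ~ {emp:.4f}   bound 2M/N^((r-1)/r) = {2 * M / N**((r - 1) / r):.4f}")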

Proof of Theorem 1.3. Take N ≥ c(η) n / ε^{2+2/η} satisfying the conditions of Theorem 2.1 (with p = 1 + η/2) and Theorem 2.2. Note that by the triangle inequality

‖ (1/N) Σ_{i=1}^N B_i − I_n ‖ ≤ ‖ (1/N) Σ_{i=1}^N B_i − (1/n) Tr( (1/N) Σ_{i=1}^N B_i ) I_n ‖ + ‖ (1/n) Tr( (1/N) Σ_{i=1}^N B_i ) I_n − I_n ‖ := α + β.

Observe that α is the largest absolute value of an eigenvalue of (1/N) Σ_{i=1}^N B_i − (1/n) Tr( (1/N) Σ_{i=1}^N B_i ) I_n, so that

α = max[ λ_max( (1/N) Σ_{i=1}^N B_i ) − (1/n) Tr( (1/N) Σ_{i=1}^N B_i ) , (1/n) Tr( (1/N) Σ_{i=1}^N B_i ) − λ_min( (1/N) Σ_{i=1}^N B_i ) ].

Since the two terms in the max are non-negative, one can bound the max by the sum of the two terms. More precisely, we get

α ≤ λ_max( (1/N) Σ_{i=1}^N B_i ) − λ_min( (1/N) Σ_{i=1}^N B_i ),

and by Theorem 2.1 and Theorem 2.2 we deduce that E α ≤ 2ε.

Note that

β = | (1/N) Σ_{i=1}^N Tr(B_i)/n − 1 | = | (1/N) Σ_{i=1}^N Z_i − 1 |,

where Z_i = Tr(B_i)/n. Since B_i satisfies (MWR), taking r = min(2, 1 + η/2) we have

∀ i ≤ N,   ( E Z_i^r )^{1/r} ≤ (1/n) Σ_{j=1}^n ( E ⟨B_i e_j, e_j⟩^r )^{1/r} ≤ c(η).

Therefore the Z_i satisfy the conditions of Lemma 2.3, and we deduce that E β ≤ ε by the choice of N.

As a conclusion,

E ‖ (1/N) Σ_{i=1}^N B_i − I_n ‖ ≤ E α + E β ≤ 3ε.

3. PROOF OF THEOREM 2.1

Given A an n × n positive semi-definite matrix such that all eigenvalues of A are greater than a lower barrier l_A = l, i.e. A ≻ l·I_n, define the corresponding potential function to be

φ_l(A) = Tr (A − l·I_n)^{−1}.
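A minimal numerical helper (not from the paper) for this potential; it simply sums the reciprocals of the shifted eigenvalues. For instance, lower_potential(np.zeros((n, n)), -n/phi) returns phi, which is the normalization used at the start of the proof of Theorem 2.1 below.

    import numpy as np

    def lower_potential(A: np.ndarray, l: float) -> float:
        """phi_l(A) = Tr (A - l I)^{-1}, defined when every eigenvalue of A exceeds l."""
        eigs = np.linalg.eigvalsh(A)
        assert eigs.min() > l, "the barrier l must lie strictly below the spectrum of A"
        return float(np.sum(1.0 / (eigs - l)))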

The proof of Theorem 2.1 is based on the following result which will be proved in section 5:

Theorem 3.1. Let A ≻ l·I_n with φ_l(A) ≤ φ, and let B be a positive semi-definite random matrix satisfying E B = I_n and Property (MWR) with some p > 1. Let ε < 1. If

φ ≤ (1/4) (8 C_p)^{−1/(p−1)} ε^{p/(p−1)},

then there exists a random variable l′ such that

A + B ≻ l′·I_n,   φ_{l′}(A + B) ≤ φ_l(A)   and   E l′ ≥ l + 1 − ε.

Proof of Theorem 2.1. We start with A_0 = 0 and l_0 = −n/φ, so that φ_{l_0}(A_0) = −n/l_0 = φ. Applying Theorem 3.1, one can find l_1 such that

A_1 = A_0 + B_1 ≻ l_1·I_n,   φ_{l_1}(A_1) ≤ φ_{l_0}(A_0)   and   E l_1 ≥ l_0 + 1 − ε.

Now apply Theorem 3.1 once again to find l_2 such that

A_2 = A_1 + B_2 ≻ l_2·I_n,   φ_{l_2}(A_2) ≤ φ_{l_1}(A_1)   and   E l_2 ≥ l_1 + 1 − ε ≥ l_0 + 2(1 − ε).

After N steps, we get E λ_min(A_N) ≥ E l_N ≥ l_0 + N(1 − ε). Therefore,

E λ_min( (1/N) Σ_{i=1}^N B_i ) ≥ 1 − ε − n/(Nφ).

Taking N = n/(εφ), we get

E λ_min( (1/N) Σ_{i=1}^N B_i ) ≥ 1 − 2ε.

4. PROOF OF THEOREM 2.2

Given A an n × n positive semi-definite matrix such that all eigenvalues of A are less than an upper barrier u_A = u, i.e. A ≺ u·I_n, define the corresponding potential function to be

ψ_u(A) = Tr (u·I_n − A)^{−1}.

The proof of Theorem 2.2 is based on the following result which will be proved in section 6:

Theorem 4.1. Let A ≺ u·I_n with ψ_u(A) ≤ ψ, and let B be a positive semi-definite random matrix satisfying E B = I_n and Property (MSR). Let ε < 1. If

ψ ≤ C_3(η) ε^{1+2/η},

then there exists a random variable u′ such that

A + B ≺ u′·I_n,   ψ_{u′}(A + B) ≤ ψ_u(A)   and   E u′ ≤ u + 1 + ε.

One can take C_3(η) = [ 8 (2c)^{1/η} (16 + 16/η)^{1+3/η} ]^{−1} ∧ [ 16 (2c)^{3/2+2/η} (8 + 8/η)^{4/η} ]^{−1}.

Proof of Theorem 2.2. We start with A_0 = 0 and u_0 = n/ψ, so that ψ_{u_0}(A_0) = ψ. Applying Theorem 4.1, one can find u_1 such that

A_1 = A_0 + B_1 ≺ u_1·I_n,   ψ_{u_1}(A_1) ≤ ψ_{u_0}(A_0)   and   E u_1 ≤ u_0 + 1 + ε.

Now apply Theorem 4.1 once again to find u_2 such that

A_2 = A_1 + B_2 ≺ u_2·I_n,   ψ_{u_2}(A_2) ≤ ψ_{u_1}(A_1)   and   E u_2 ≤ u_1 + 1 + ε.

After N steps we get

E λ_max( Σ_{i=1}^N B_i ) ≤ E u_N ≤ u_0 + N(1 + ε).

Taking N ≥ n/(εψ) = c′(η)^{−1} n / ε^{2+2/η}, we deduce that

E λ_max( (1/N) Σ_{i=1}^N B_i ) ≤ 1 + 2ε.

5. PROOF OF THEOREM 3.1

5.1. Notations. We are looking for a random variable l′ of the form l + δ, where δ is a positive random variable playing the role of the shift.

If in addition A ≻ (l + δ)·I_n, we will write

L_δ = A − (l + δ)·I_n ≻ 0,   so that   Tr( B^{1/2} (A − (l + δ)·I_n)^{−1} B^{1/2} ) = ⟨ L_δ^{−1}, B ⟩.

λ_1, .., λ_n will denote the eigenvalues of A and v_1, .., v_n the corresponding eigenvectors. The (v_i)_{i≤n} are also the eigenvectors of L_δ^{−1}, corresponding to the eigenvalues 1/(λ_i − (l + δ)).

5.2. Finding the shift. To find sufficient conditions for such a δ to exist, we need a matrix extension of Lemma 3.4 in [7] which, up to a minor change, is essentially contained in Lemma 20 in [9]; we formulate it here as Lemma 5.2. This method uses the Sherman-Morrison-Woodbury formula:

Lemma 5.1. Let E be an n × n invertible matrix, C a k × k invertible matrix, U an n × k matrix and V a k × n matrix. Then we have:

(E + UCV)^{−1} = E^{−1} − E^{−1} U (C^{−1} + V E^{−1} U)^{−1} V E^{−1}.
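A quick numerical sanity check (not from the paper) of this identity on random matrices; the sizes and the positive-definite shifts are arbitrary choices made only to guarantee invertibility.

    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 6, 3
    M = rng.standard_normal((n, n)); E = M @ M.T + np.eye(n)   # invertible n x n matrix
    R = rng.standard_normal((k, k)); C = R @ R.T + np.eye(k)   # invertible k x k matrix
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((k, n))

    lhs = np.linalg.inv(E + U @ C @ V)
    Einv = np.linalg.inv(E)
    rhs = Einv - Einv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Einv @ U) @ V @ Einv
    print(np.allclose(lhs, rhs))   # expected output: True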

Lemma 5.2. Let A be as above, satisfying A ≻ l·I_n. Suppose that one can find δ > 0 verifying δ ≤ 1/‖L_0^{−1}‖ and

⟨ L_δ^{−2}, B ⟩ / ( φ_{l+δ}(A) − φ_l(A) ) − ‖ B^{1/2} L_δ^{−1} B^{1/2} ‖ ≥ 1.

Then

λ_min(A + B) > l + δ   and   φ_{l+δ}(A + B) ≤ φ_l(A).


Proof. First note that 1/‖L_0^{−1}‖ = λ_min(A) − l, so the first condition on δ implies that λ_min(A) > l + δ.

Now, using the Sherman-Morrison-Woodbury formula with E = L_δ, U = V = B^{1/2}, C = I_n, we get:

φ_{l+δ}(A + B) = Tr (L_δ + B)^{−1} = φ_{l+δ}(A) − Tr( L_δ^{−1} B^{1/2} ( I_n + B^{1/2} L_δ^{−1} B^{1/2} )^{−1} B^{1/2} L_δ^{−1} ) ≤ φ_{l+δ}(A) − ⟨ L_δ^{−2}, B ⟩ / ( 1 + ‖ B^{1/2} L_δ^{−1} B^{1/2} ‖ ).

Rearranging the hypothesis, we get φ_{l+δ}(A + B) ≤ φ_l(A).

Since ‖L_0^{−1}‖ ≤ Tr( L_0^{−1} ) = φ_l(A) and ‖ B^{1/2} L_δ^{−1} B^{1/2} ‖ ≤ ⟨ L_δ^{−1}, B ⟩, in order to satisfy the conditions of Lemma 5.2 we may search for δ satisfying:

(1)   δ ≤ 1/φ_l(A)   and   ⟨ L_δ^{−2}, B ⟩ / ( φ_{l+δ}(A) − φ_l(A) ) − ⟨ L_δ^{−1}, B ⟩ ≥ 1.

For t ≤ 1/φ, let us write:

q_1(t, B) = ⟨ L_t^{−1}, B ⟩ = Tr( B (A − (l + t)·I_n)^{−1} )

and

q_2(t, B) = ⟨ L_t^{−2}, B ⟩ / Tr( L_t^{−2} ) = Tr( B (A − (l + t)·I_n)^{−2} ) / Tr( (A − (l + t)·I_n)^{−2} ).

We have already seen in Lemma 5.2 that if t ≤ 1/φ ≤ 1/‖L_0^{−1}‖ then A ≻ (l + t)·I_n, so the definitions above make sense. Since we have

φ_{l+δ}(A) − φ_l(A) = Tr(A − (l + δ)·I_n)^{−1} − Tr(A − l·I_n)^{−1} = δ Tr( (A − (l + δ)·I_n)^{−1} (A − l·I_n)^{−1} ) ≤ δ Tr(A − (l + δ)·I_n)^{−2},

in order to have (1) it will be sufficient to choose δ satisfying δ ≤ 1/φ and

(2)   (1/δ) q_2(δ, B) − q_1(δ, B) ≥ 1.

q_1 and q_2 can be expressed as follows:

q_1(t, B) = Σ_{i=1}^n ⟨Bv_i, v_i⟩ / (λ_i − l − t)   and   q_2(t, B) = [ Σ_i ⟨Bv_i, v_i⟩ / (λ_i − l − t)^2 ] / [ Σ_i (λ_i − l − t)^{−2} ].

Since φ_l(A) = Σ_{i=1}^n (λ_i − l)^{−1} ≤ φ, so that (λ_i − l)·φ ≥ 1 for all i, we have:

(1 − t·φ)(λ_i − l) = λ_i − l − t·(λ_i − l)·φ ≤ λ_i − l − t ≤ λ_i − l,

and therefore

q_1(t, B) ≤ (1 − t·φ)^{−1} q_1(0, B)   and   (1 − t·φ)^2 q_2(0, B) ≤ q_2(t, B) ≤ (1 − t·φ)^{−2} q_2(0, B).

Lemma 5.3. Let s ∈ (0, 1) and take δ = (1 − s)^3 q_2(0, B) 1_{{q_1(0,B) ≤ s}} 1_{{q_2(0,B) ≤ s/φ}}. Then A + B ≻ (l + δ)·I_n and φ_{l+δ}(A + B) ≤ φ_l(A).

Proof. As stated before, it is sufficient to check that δ ≤ 1/φ and (1/δ) q_2(δ, B) − q_1(δ, B) ≥ 1.

If q_1(0, B) > s or q_2(0, B) > s/φ, then δ = 0 and there is nothing to prove since φ_l(A + B) ≤ φ_l(A).

In the other case, i.e. q_1(0, B) ≤ s and q_2(0, B) ≤ s/φ, we have δ = (1 − s)^3 q_2(0, B). So δ ≤ (1 − s)^3 s/φ ≤ 1/φ and

(1/δ) q_2(δ, B) − q_1(δ, B) = q_2(δ, B) / ( (1 − s)^3 q_2(0, B) ) − q_1(δ, B) ≥ (1 − δφ)^2 q_2(0, B) / ( (1 − s)^3 q_2(0, B) ) − (1 − δφ)^{−1} q_1(0, B) ≥ (1 − s)^2 / (1 − s)^3 − s / (1 − s) = 1.
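The shift of Lemma 5.3 is completely explicit, so one step of the lower-barrier argument can be checked numerically. The sketch below (not from the paper) starts, as in the proof of Theorem 2.1, from A = 0 and l = −n/φ, samples one Wishart-type matrix B with E B = I_n, forms δ, and verifies both conclusions; n, m, s, φ and the law of B are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)
    n, m, s, phi = 30, 60, 0.25, 0.1
    l = -n / phi                                   # starting lower barrier, so phi_l(0) = phi
    A = np.zeros((n, n))
    L0inv = np.linalg.inv(A - l * np.eye(n))       # L_0^{-1} = (A - l I_n)^{-1}

    G = rng.standard_normal((n, m))
    B = G @ G.T / m                                # E B = I_n

    q1 = float(np.trace(B @ L0inv))
    q2 = float(np.trace(B @ L0inv @ L0inv) / np.trace(L0inv @ L0inv))
    delta = (1 - s) ** 3 * q2 if (q1 <= s and q2 <= s / phi) else 0.0

    new_eigs = np.linalg.eigvalsh(A + B - (l + delta) * np.eye(n))
    print(delta > 0, new_eigs.min() > 0, np.sum(1.0 / new_eigs) <= phi)
    # expected: True True True (the barrier moved up and the potential did not increase)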

5.3. Estimating the random shift. Now that we have found δ, we will estimate E δ using the property (MWR). We start by stating some basic facts about q_1 and q_2.

Proposition 5.4. Let, as above, A ≻ l·I_n with φ_l(A) ≤ φ, and let B satisfy (MWR). Then we have the following:

(1) E q_1(0, B) = φ_l(A) ≤ φ and E q_1(0, B)^p ≤ C_p φ^p.
(2) E q_2(0, B) = 1 and E q_2(0, B)^p ≤ C_p.
(3) P( q_1(0, B) > u ) ≤ C_p (φ/u)^p and P( q_2(0, B) > u ) ≤ C_p / u^p.

Proof. Since E B = I_n, we have E q_1(0, B) = φ_l(A) and E q_2(0, B) = 1.

Now, using the triangle inequality and Property (MWR), we get:

( E q_1(0, B)^p )^{1/p} = [ E ( Σ_{i=1}^n ⟨Bv_i, v_i⟩ / (λ_i − l) )^p ]^{1/p} ≤ Σ_{i=1}^n ( E ⟨Bv_i, v_i⟩^p )^{1/p} / (λ_i − l) ≤ Σ_{i=1}^n C_p^{1/p} / (λ_i − l) ≤ C_p^{1/p} φ.

With the same argument we prove that E q_2(0, B)^p ≤ C_p. The third part of the proposition follows by Markov's inequality.

Lemma 5.5. Let δ be as in Lemma 5.3. Then

E δ ≥ (1 − s)^3 ( 1 − 2 C_p (φ/s)^{p−1} ).

Proof. Using the above proposition and Hölder's inequality with 1/p + 1/q = 1, we get:

E δ = E (1 − s)^3 q_2(0, B) 1_{{q_1(0,B) ≤ s}} 1_{{q_2(0,B) ≤ s/φ}} = (1 − s)^3 [ E q_2(0, B) − E q_2(0, B) 1_{{q_1(0,B) > s or q_2(0,B) > s/φ}} ]

≥ (1 − s)^3 ( 1 − ( E q_2(0, B)^p )^{1/p} · P( q_1(0, B) > s or q_2(0, B) > s/φ )^{1/q} )

≥ (1 − s)^3 ( 1 − C_p^{1/p} ( C_p (φ/s)^p + C_p (φ/s)^p )^{1/q} )

≥ (1 − s)^3 ( 1 − 2 C_p (φ/s)^{p−1} ).

Now it remains to make a good choice of s and φ in order to finish the proof of Theorem 3.1. Take l′ = l + δ, the choice of δ being as before with s = ε/4.

As we have seen, we get A + B ≻ l′·I_n and φ_{l′}(A + B) ≤ φ_l(A). Moreover,

E l′ = l + E δ ≥ l + (1 − s)^3 ( 1 − 2 C_p (φ/s)^{p−1} ) ≥ l + 1 − ε,

by the choice of φ. This ends the proof of Theorem 3.1.

6. PROOF OF THEOREM 4.1

6.1. Notations. We are looking for a random variable u′ of the form u + ∆, where ∆ is a positive random variable playing the role of the shift.

We will write U_t = (u + t)·I_n − A, so that Tr( B^{1/2} ((u + t)·I_n − A)^{−1} B^{1/2} ) = ⟨ U_t^{−1}, B ⟩.

As before, λ_1, .., λ_n will denote the eigenvalues of A and v_1, .., v_n the corresponding eigenvectors. The (v_i)_{i≤n} are also the eigenvectors of U_t^{−1}, corresponding to the eigenvalues 1/(u + t − λ_i).

6.2. Finding the shift. To find sufficient conditions for such a ∆ to exist, we need a matrix extension of Lemma 3.3 in [7] which, up to a minor change, is essentially contained in Lemma 19 in [9]. For the sake of completeness, we include the proof.

Lemma 6.1. Let A be as above, satisfying A ≺ u·I_n. Suppose that one can find ∆ > 0 verifying

(3)   ⟨ U_∆^{−2}, B ⟩ / ( ψ_u(A) − ψ_{u+∆}(A) ) + ‖ B^{1/2} U_∆^{−1} B^{1/2} ‖ ≤ 1.

Then

A + B ≺ (u + ∆)·I_n   and   ψ_{u+∆}(A + B) ≤ ψ_u(A).

Proof. Since ⟨ U_∆^{−2}, B ⟩ and ψ_u(A) − ψ_{u+∆}(A) are positive, by (3) we have ‖ B^{1/2} U_∆^{−1} B^{1/2} ‖ < 1 and

⟨ U_∆^{−2}, B ⟩ / ( 1 − ‖ B^{1/2} U_∆^{−1} B^{1/2} ‖ ) ≤ ψ_u(A) − ψ_{u+∆}(A).

First note that ‖ B^{1/2} U_∆^{−1} B^{1/2} ‖ = ‖ U_∆^{−1/2} B U_∆^{−1/2} ‖ < 1, so U_∆^{−1/2} B U_∆^{−1/2} ≺ I_n. Therefore we get B ≺ U_∆, which means that A + B ≺ (u + ∆)·I_n.

Now, using the Sherman-Morrison-Woodbury formula (see Lemma 5.1) with E = U_∆, U = V = B^{1/2}, C = −I_n, we get:

ψ_{u+∆}(A + B) = Tr (U_∆ − B)^{−1} = ψ_{u+∆}(A) + Tr( U_∆^{−1} B^{1/2} ( I_n − B^{1/2} U_∆^{−1} B^{1/2} )^{−1} B^{1/2} U_∆^{−1} ) ≤ ψ_{u+∆}(A) + ⟨ U_∆^{−2}, B ⟩ / ( 1 − ‖ U_∆^{−1/2} B U_∆^{−1/2} ‖ ) ≤ ψ_u(A).

We may now find ∆ satisfying (3). Let us write:

Q_1(t, B) = ‖ B^{1/2} U_t^{−1} B^{1/2} ‖ = ‖ B^{1/2} ((u + t)·I_n − A)^{−1} B^{1/2} ‖

and

Q_2(t, B) = ⟨ U_t^{−2}, B ⟩ / ( ψ_u(A) − ψ_{u+t}(A) ) = Tr( B ((u + t)·I_n − A)^{−2} ) / ( ψ_u(A) − ψ_{u+t}(A) ).

Since Q_1 and Q_2 are both decreasing in t, we work with each separately. Precisely, fix θ ∈ (0, 1) and define ∆_1, ∆_2 as follows:

∆_1 is the smallest positive number such that Q_1(∆_1, B) ≤ θ, and ∆_2 is the smallest positive number such that Q_2(∆_2, B) ≤ 1 − θ.

Now take ∆ = ∆_1 + ∆_2; then Q_1(∆, B) + Q_2(∆, B) ≤ θ + 1 − θ = 1. So this choice of ∆ satisfies (3), and it remains now to estimate ∆_1 and ∆_2 separately.

6.3. Estimating ∆_1. We may write

Q_1(∆_1, B) = ‖ Σ_{i=1}^n B^{1/2} v_i v_i^t B^{1/2} / (u + ∆_1 − λ_i) ‖.

Put ξ_i = B^{1/2} v_i v_i^t B^{1/2}, µ_i = ψ(u − λ_i) and µ = ψ∆_1. Denote by P_S the orthogonal projection onto span(v_i)_{i∈S}; clearly rank(P_S) = |S|. Then we have:

• E ‖ξ_i‖ = 1.

• P( ‖ Σ_{i∈S} ξ_i ‖ > t ) = P( ‖P_S B P_S‖ > t ) ≤ c / t^{1+η}   ∀ t > c|S|.

• Σ_{i=1}^n 1/µ_i = ψ_u(A)/ψ ≤ 1.

• µ is the smallest positive number such that Σ_{i=1}^n ξ_i / (µ_i + µ) ⪯ (θ/ψ)·Id.

We will need an analog of Lemma 3.5 appearing in [21]. We extend this lemma to a matrix setting:

Lemma 6.2. Suppose {ξ_i}_{i≤n} are symmetric positive semi-definite random matrices with E ‖ξ_i‖ = 1 and

P( ‖ Σ_{i∈S} ξ_i ‖ > t ) ≤ c / t^{1+η}   provided t ≥ c|S| = c Σ_{i∈S} E ‖ξ_i‖,

for all subsets S ⊂ [n] and some constants c, η > 0. Consider positive numbers µ_i such that Σ_{i=1}^n 1/µ_i ≤ 1. Let µ be the minimal positive number such that

Σ_{i=1}^n ξ_i / (µ_i + µ) ⪯ K·Id,

for some K ≥ C = 4c. Then E µ ≤ c(η) / K^{1+η}.

We do not reproduce the proof here as it is a direct adaptation of the argument in [21] to the matrix setting. Applying Lemma 6.2 with K = θ/ψ, we get E µ ≤ c(η) (ψ/θ)^{1+η}, so that

(4)   E ∆_1 ≤ c(η) ψ^η / θ^{1+η}.

6.4. Estimating ∆_2. Suppose θ ≤ 1/2. Since

ψ_u(A) − ψ_{u+t}(A) = t · Tr( (u·I_n − A)^{−1} ((u + t)·I_n − A)^{−1} ),

we can write

Q_2(t, B) = [ Σ_i ⟨Bv_i, v_i⟩ / (u + t − λ_i)^2 ] / [ t Σ_i (u + t − λ_i)^{−1} (u − λ_i)^{−1} ] ≤ (1/t) · [ Σ_i ⟨Bv_i, v_i⟩ / ( (u + t − λ_i)(u − λ_i) ) ] / [ Σ_i (u + t − λ_i)^{−1} (u − λ_i)^{−1} ]

(5)   := (1/t) P_2(t, B).

First note that P_2(t, B) can be written as Σ_i α_i(t) ⟨Bv_i, v_i⟩ with Σ_i α_i(t) = 1. Having this in mind, one can easily check that E P_2(t, B) = 1 and

(6)   E P_2(t, B)^{1+η/4} ≤ c(η),

where for the last inequality we used the fact that B satisfies (MWR) with p = 1 + η/4. In order to estimate ∆_2, we will divide it into two parts as follows:

∆_2 = ∆_2 · 1_{{P_2(0,B) ≤ θ/(4ψ)}} + ∆_2 · 1_{{P_2(0,B) > θ/(4ψ)}} := H_1 + H_2.

Let us start by estimating E H_1. Suppose that P_2(0, B) ≤ θ/(4ψ) and denote x = (1 + 4θ) P_2(0, B).

Since ψ_u(A) ≤ ψ, we have (u − λ_i)·ψ ≥ 1 for all i, and therefore u + x − λ_i ≤ (1 + xψ)(u − λ_i). This implies that

P_2(x, B) ≤ (1 + xψ) P_2(0, B).

Now write

Q_2(x, B) ≤ (1/x) P_2(x, B) ≤ ( (1 + xψ)/x ) P_2(0, B) ≤ ( 1 + (1 + 4θ)θ/4 ) / (1 + 4θ) ≤ 1 − θ,

which means that ∆_2 · 1_{{P_2(0,B) ≤ θ/(4ψ)}} ≤ (1 + 4θ) P_2(0, B), and therefore

(7)   E H_1 = E ∆_2 · 1_{{P_2(0,B) ≤ θ/(4ψ)}} ≤ 1 + 4θ.

Now it remains to estimate E H_2. For that we need to prove a moment estimate for ∆_2. First observe that using (6) we have

P{ ∆_2 > t } = P{ Q_2(t, B) > 1 − θ } ≤ P{ P_2(t, B) > t(1 − θ) } ≤ c(η) / t^{1+η/4}.
