As shown in the previous sections, to complete the proof of Theorem 1.2 it only remains to prove Lemma 4.9. In this section we prove Lemma 4.9, i.e., we write the terms in (4.19) as polynomials in F or F_{1/2}·F (up to a negligible error). Since uniformity can easily be checked, we only focus on fixed a, b, z, w:
a, b: 1 ≤ a ≠ b ≤ N,   z: ||z| − 1| ≤ 2ε,   and   w ∈ I_ε.
First we need to write the individual matrix elements of the G's and 𝒢's as polynomials of this type. To do so, we start by deriving some bounds on the G's under the condition
|Re m| ≥ (1/4) N^ε (Nη)^{-1}.  (5.1)
Note: this condition is guaranteed by χ_a > 0, h(t_X) > 0 or h(t_{X^{(a,a)}}) > 0.
5.1 Preliminary lemmas. This subsection summarizes some elementary results from [9] and [10]. Note that all the inequalities in this subsection hold uniformly for bounded z and w. Furthermore, they hold without the condition (5.1).
Recall the definitions of Y^{(U,T)}, G^{(U,T)}, 𝒢^{(U,T)}, 𝐲_i and y_i in Definition 4.1.
Lemma 5.1 (Relations between G, G^{(T,∅)} and G^{(∅,T)}). For i, j ≠ k (i = j is allowed) we have
G^{(k,∅)}_{ij} = G_{ij} − G_{ik}G_{kj}/G_{kk},   G^{(∅,k)}_{ij} = G_{ij} − G_{ik}G_{kj}/G_{kk},  (5.2)
G^{(∅,i)} = G + (Gy_i^*)(y_iG)/(1 − y_iGy_i^*),   G = G^{(∅,i)} − (G^{(∅,i)}y_i^*)(y_iG^{(∅,i)})/(1 + y_iG^{(∅,i)}y_i^*),  (5.3)
and
G^{(i,∅)} = G + (G𝐲_i)(𝐲_i^*G)/(1 − 𝐲_i^*G𝐲_i),   G = G^{(i,∅)} − (G^{(i,∅)}𝐲_i)(𝐲_i^*G^{(i,∅)})/(1 + 𝐲_i^*G^{(i,∅)}𝐲_i).
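Both families of identities in Lemma 5.1 are deterministic facts about resolvents and can be sanity-checked numerically. The sketch below assumes the standard conventions G = (Y^*Y − w)^{-1} with Y = X − z: deleting the k-th column of Y deletes row and column k of Y^*Y (the minor identity of type (5.2)), while deleting the k-th row of Y perturbs Y^*Y by the rank-one matrix y_k^*y_k, which gives the Sherman-Morrison form of type (5.3). The sizes N = 8, k = 3 and the values of w, z are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 8, 3
w = 0.5 + 0.3j          # spectral parameter with Im w > 0
z = 0.9                 # |z| close to 1, as in this section

# Y = X - z, H = Y^* Y, G = (H - w)^{-1}
X = rng.standard_normal((N, N)) / np.sqrt(N)
Y = X - z * np.eye(N)
H = Y.conj().T @ Y
G = np.linalg.inv(H - w * np.eye(N))

# Type (5.2): removing the k-th column of Y removes row and column k of H,
# and the resolvent of the minor satisfies G^{(k)}_{ij} = G_ij - G_ik G_kj / G_kk.
idx = [i for i in range(N) if i != k]
G_minor = np.linalg.inv(H[np.ix_(idx, idx)] - w * np.eye(N - 1))
G_pred = G[np.ix_(idx, idx)] - np.outer(G[idx, k], G[k, idx]) / G[k, k]
err1 = np.max(np.abs(G_minor - G_pred))

# Type (5.3): removing the k-th ROW of Y is the rank-one perturbation
# H - y_k^* y_k, and Sherman-Morrison gives
# G^{(row k)} = G + (G y_k^*)(y_k G) / (1 - y_k G y_k^*).
y = Y[k, :]                             # 1 x N row of Y
G_row = np.linalg.inv(H - np.outer(y.conj(), y) - w * np.eye(N))
G_sm = G + np.outer(G @ y.conj(), y @ G) / (1 - y @ G @ y.conj())
err2 = np.max(np.abs(G_row - G_sm))

print(err1, err2)  # both at machine precision
```

Both identities are exact, so the printed errors are of the order of floating-point round-off.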
Definition 5.2. In the following, E_X denotes integration with respect to the random variable X. For any T ⊂ ⟦1, N⟧, we introduce the notations
𝒵_i^{(T)} := (1 − E_{y_i}) y_i^{(T)} G^{(T,i)} y_i^{(T)*}
and
Z_i^{(T)} := (1 − E_{𝐲_i}) 𝐲_i^{(T)*} 𝒢^{(i,T)} 𝐲_i^{(T)}.
Recall that by our convention 𝐲_i is an N×1 column vector and y_i is a 1×N row vector. For simplicity we write
Z_i = Z_i^{(∅)},  𝒵_i = 𝒵_i^{(∅)}.

Lemma 5.3 (Identities for G, 𝒢, Z and 𝒵). For any T ⊂ ⟦1, N⟧, we have
G^{(∅,T)}_{ii} = −w^{-1}[1 + m_𝒢^{(i,T)} + |z|^2 𝒢^{(i,T)}_{ii} + Z_i^{(T)}]^{-1},  (5.4)
G^{(∅,T)}_{ij} = −w G^{(∅,T)}_{ii} G^{(i,T)}_{jj} (𝐲_i^{(T)*} 𝒢^{(ij,T)} 𝐲_j^{(T)}),   i ≠ j,  (5.5)
where, by definition, G^{(i,T)}_{ii} = 0 if i ∈ T. Similar results hold for 𝒢:
[𝒢^{(T,∅)}_{ii}]^{-1} = −w[1 + m_G^{(T,i)} + |z|^2 G^{(T,i)}_{ii} + 𝒵_i^{(T)}],  (5.6)
𝒢^{(T,∅)}_{ij} = −w 𝒢^{(T,∅)}_{ii} 𝒢^{(T,i)}_{jj} (y_i^{(T)} G^{(T,ij)} y_j^{(T)*}),   i ≠ j.  (5.7)
Definition 5.4 (ζ-high probability events). Define
ϕ := (log N)^{log log N}.  (5.8)
Let ζ > 0. We say that an N-dependent event Ω holds with ζ-high probability if there is a constant C such that
P(Ω^c) ≤ N^C exp(−ϕ^ζ)
for large enough N. Furthermore, we say that Ω(u) holds with ζ-high probability uniformly for u ∈ U_N if there is a uniform constant C such that
max_{u∈U_N} P(Ω^c(u)) ≤ N^C exp(−ϕ^ζ)  (5.9)
for large enough N.
Note: usually we choose ζ to be 1. By this definition, if some event Ω holds with ζ-high probability for some ζ > 0, then Ω holds with probability larger than 1 − N^{−D} for any D > 0.
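The last claim follows from a short computation with the definition (5.8) of ϕ:

```latex
% Why \zeta-high probability beats any polynomial rate:
% since \log\varphi = (\log\log N)^2, for any fixed C, D, \zeta > 0,
\varphi^{\zeta} = \exp\!\big(\zeta(\log\log N)^2\big) \gg (C+D)\log N,
% because \zeta(\log\log N)^2 \gg \log\log N + \log(C+D). Hence
N^{C} e^{-\varphi^{\zeta}} \le \exp\!\big(C\log N - \varphi^{\zeta}\big) \le N^{-D}
% for all large enough N.
```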
Lemma 5.5 (Large deviation estimate). Let X be defined as in Theorem 1.2. For any ζ > 0 there exists Q_ζ > 0 such that for T ⊂ ⟦1, N⟧, |T| ≤ N/2, the following estimates hold with ζ-high probability, uniformly for 1 ≤ i, j ≤ N and |w| + |z| ≤ C:
|𝒵_i^{(T)}| = |(1 − E_{y_i}) y_i^{(T)} G^{(T,i)} y_i^{(T)*}| ≤ ϕ^{Q_ζ/2} √( (Im m_G^{(T,i)} + |z|^2 Im G^{(T,i)}_{ii}) / (Nη) ),  (5.10)
|Z_i^{(T)}| = |(1 − E_{𝐲_i}) 𝐲_i^{(T)*} 𝒢^{(i,T)} 𝐲_i^{(T)}| ≤ ϕ^{Q_ζ/2} √( (Im m_𝒢^{(i,T)} + |z|^2 Im 𝒢^{(i,T)}_{ii}) / (Nη) ).
Furthermore, for i ≠ j, we have
|(1 − E_{y_i y_j}) y_i^{(T)} G^{(T,ij)} y_j^{(T)*}| ≤ ϕ^{Q_ζ/2} √( (Im m_G^{(T,ij)} + |z|^2 Im G^{(T,ij)}_{ii} + |z|^2 Im G^{(T,ij)}_{jj}) / (Nη) ),  (5.11)
|(1 − E_{𝐲_i 𝐲_j}) 𝐲_i^{(T)*} 𝒢^{(ij,T)} 𝐲_j^{(T)}| ≤ ϕ^{Q_ζ/2} √( (Im m_𝒢^{(ij,T)} + |z|^2 Im 𝒢^{(ij,T)}_{ii} + |z|^2 Im 𝒢^{(ij,T)}_{jj}) / (Nη) ),  (5.12)
where
E_{y_i y_j}[ y_i^{(T)} G^{(T,ij)} y_j^{(T)*} ] = |z|^2 G^{(T,ij)}_{ij} + δ_{ij} m_G^{(T,ij)},   E_{𝐲_i 𝐲_j}[ 𝐲_i^{(T)*} 𝒢^{(ij,T)} 𝐲_j^{(T)} ] = |z|^2 𝒢^{(ij,T)}_{ij} + δ_{ij} m_𝒢^{(ij,T)}.  (5.13)
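The expectation identity (5.13) is elementary: writing 𝐲_i = 𝐱_i − z e_i with the entries of 𝐱_i centered, of variance 1/N and independent of the minor Green's function, the linear terms average to zero and E[𝐱_i^*A𝐱_i] = N^{-1}Tr A. A Monte Carlo sketch of the diagonal case i = j, T = ∅, with real entries and real z for simplicity, and with a fixed matrix A standing in for the independent minor Green's function (all sizes are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(1)
N, i, z = 20, 4, 0.9
S = 200_000                               # Monte Carlo samples

# A plays the role of a fixed matrix independent of column i of X
# (in the lemma it is a minor Green's function).
A = rng.standard_normal((N, N))
e_i = np.zeros(N); e_i[i] = 1.0

# y_i = (X - z) e_i: entries of the i-th column of X have mean 0, variance 1/N
Xcols = rng.standard_normal((S, N)) / np.sqrt(N)
Ys = Xcols - z * e_i
vals = np.einsum('si,ij,sj->s', Ys, A, Ys)  # quadratic forms y^T A y

emp = vals.mean()
pred = np.trace(A) / N + z**2 * A[i, i]     # = m_A + |z|^2 A_ii, cf. (5.13)
print(emp, pred)
```

The empirical mean agrees with the prediction up to the Monte Carlo error, which is far smaller than the quantities themselves at this sample size.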
Lemma 5.6. Let X be defined as in Theorem 1.2 and suppose |w| + |z| ≤ C. For any ζ > 0, there exists C_ζ such that if the assumption
η ≥ ϕ^{C_ζ}N^{-1}|w|^{1/2}  (5.14)
holds, then the following estimates hold with ζ-high probability, uniformly for |w| + |z| ≤ C:
max_i |G_{ii}| ≤ 2(log N)|w|^{-1/2},  (5.15)
max_i |w||G_{ii}||𝒢^{(i,∅)}_{ii}| ≤ (log N)^4,  (5.16)
max_{i,j} |G_{ij}| ≤ C(log N)^2|w|^{-1/2},  (5.17)
|m| ≤ 2(log N)|w|^{-1/2}.  (5.18)
5.2 Improved bounds on G's.
The next lemma gives the bounds on G, 𝒢 and m under the condition (5.1). Note: with (4.3), (5.1) implies that for any U, T with |U| + |T| = O(1),
|Re m^{(U,T)}| ≫ (Nη)^{-1}.  (5.19)
Before giving the rigorous proof of the bounds on G and 𝒢, we provide a rough picture of the sizes of these terms under the condition (5.1), w ∈ I_ε and ||z| − 1| ≤ 2ε. We note that the typical size of G^{(U,T)}_{kl} depends heavily on whether k = l and on whether k, l lie in U, T.
(i) If k = l ∉ U ∪ T, the typical size of G^{(U,T)}_{kk}(w, z) is m(w, z) = (1/N) Tr G(w, z).
(ii) If k ≠ l and k, l ∉ U ∪ T, the typical size of G^{(U,T)}_{kl}(w, z) is √(|m|/(Nη)).
(iii) If {k, l} ∩ U ≠ ∅, then G^{(U,T)}_{kl} = 0. This follows from the definition, and it is worth emphasizing:
{k, l} ∩ U ≠ ∅  ⟹  G^{(U,T)}_{kl} = 𝒢^{(T,U)}_{kl} = 0.  (5.20)
(iv) If k = l ∈ T, the typical size of G^{(U,T)}_{kk} is |wm|^{-1}.
(v) If k ≠ l, k ∈ T and l ∉ T, the typical size of G^{(U,T)}_{kl} is (|w^{1/2}m|)^{-1}√(|m|/(Nη)).
(vi) If k ≠ l and k, l ∈ T, the typical size of G^{(U,T)}_{kl} is |wm^2|^{-1}√(|m|/(Nη)).
(vii) With the definition of G^{(U,T)} and 𝒢^{(T,U)} in Def. 4.1, one easily sees that 𝒢^{(T,U)}_{kl} has the same typical size as G^{(U,T)}_{kl} (here the superscript of 𝒢 is (T,U), not (U,T)).
We note: m is bounded by (log N)^C|w|^{-1/2} by (5.18) (no better bound is obtained in this paper), but we believe that it could be much smaller.
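The rough picture in items (i)–(ii) is easy to observe in a direct simulation (an illustration only, not part of the proof; the parameters N = 500, z = 0.9, w = 1 + 0.05i are ad hoc choices well inside the regime where the spectral density is of order one):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
z = 0.9
w = 1.0 + 0.05j                  # eta = Im w = 0.05, so N*eta = 25 >> 1
eta = w.imag

X = rng.standard_normal((N, N)) / np.sqrt(N)
Y = X - z * np.eye(N)
G = np.linalg.inv(Y.conj().T @ Y - w * np.eye(N))

m = np.trace(G) / N
diag = np.abs(np.diag(G))
off = np.abs(G[~np.eye(N, dtype=bool)])

med_diag = np.median(diag)                 # item (i): of size |m|
med_off = np.median(off)                   # item (ii): of size sqrt(|m|/(N eta))
pred_off = np.sqrt(np.abs(m) / (N * eta))
print(med_diag / np.abs(m), med_off / pred_off)
```

Both printed ratios are of order one, matching the predicted typical sizes up to constants.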
Lemma 5.7. Let X be defined as in Theorem 1.2. Let ε be a small enough positive number, ||z^2| − 1| ≤ 2ε and w ∈ I_ε (see the definition in (2.5)), and assume that (5.1) holds, i.e., |Re m(w, z)| ≥ (1/4)N^ε(Nη)^{-1} in Ω = Ω(ε, w, z). Then there exist Ω̃ ⊂ Ω and C > 0 such that Ω̃ holds in Ω with 1-high probability, uniformly for z, w with ||z^2| − 1| ≤ 2ε and w ∈ I_ε (see the definition in (5.9)), and the following bounds hold in Ω̃ for any 1 ≤ i ≠ j ≤ N (here A ∼ B means that there exists C > 0 such that C^{-1}|B| ≤ |A| ≤ C|B|):
|1 + m| ≥ N^{3ε/4}(Nη)^{-1}  (5.21)
|1 + m^{(i,i)}| ≥ N^{ε/4}|Z_i^{(i)}|  (5.22)
G^{(∅,i)}_{ii} = (1 + O(N^{-ε/4})) (−1/w) (1 + m^{(i,i)})^{-1}  (5.23)
|1 + m| ∼ |m|  (5.24)
|G_{ii}| ≤ (log N)^C|m|  (5.25)
|G^{(∅,i)}_{ij}| ≤ (ϕ^C/|w^{1/2}m|) √(|m|/(Nη))  (5.26)
|G^{(∅,j)}_{ii}| ≤ (log N)^C|m|  (5.27)
|G^{(∅,ij)}_{ii}| ≤ C/|wm|  (5.28)
|G_{ij}| ≤ ϕ^C √(|m|/(Nη))  (5.29)
|wG_{ii}|^{-1} ≥ N^{ε/2}|Z_i|  (5.30)
|m^{(i,i)}| ≥ (log N)^{-1}  (5.31)
Furthermore, by the symmetry and the definition of G^{(U,T)} and 𝒢^{(T,U)}, these bounds also hold under the exchange
G^{(U,T)} ↔ 𝒢^{(T,U)},   Z ↔ 𝒵.  (5.32)
Proof of Lemma 5.7: In the following proof we only focus on fixed z, w, i and j, since uniformity can easily be checked.
We choose ζ = 1. Because ϕ ≪ N^ε for any fixed ε > 0 (see (5.8)) and in this lemma w ∈ I_ε, one can easily check that the assumptions of this lemma imply the conditions of Lemma 5.6, i.e.,
w ∈ I_ε  ⟹  (5.14) holds for every C_ζ.  (5.33)
Therefore we can use all the results (with ζ = 1) of Lemma 5.6 in the following proof.
1. We first prove (5.21). The condition (5.1) implies that |N^{-1}Σ_i Re G_{ii}| ≥ (1/4)N^ε(Nη)^{-1}, so there exists i, 1 ≤ i ≤ N, such that |G_{ii}| ≥ (1/4)N^ε(Nη)^{-1}. Together with (5.16), this implies that |𝒢^{(i,∅)}_{ii}| ≤ |w|^{-1}N^{-4ε/5}Nη with 1-high probability in Ω. Inserting this into (5.6) with T = i, and using G^{(i,i)}_{ii} = 0 from (5.20), we have
|1 + m^{(i,i)} + 𝒵_i^{(i)}| ≥ N^{4ε/5}(Nη)^{-1}.  (5.34)
Applying (5.10) to bound 𝒵_i^{(i)} with T = i, using Schwarz's inequality and the fact G^{(i,i)}_{ii} = 0 again, we obtain that
|𝒵_i^{(i)}| ≤ N^{-ε/20} Im m^{(i,i)} + N^{ε/10}(Nη)^{-1}  (5.35)
holds with 1-high probability in Ω. Together with (5.34), it implies that with 1-high probability in Ω,
|1 + m^{(i,i)}| ≥ 2N^{3ε/4}(Nη)^{-1}.
Then replacing m^{(i,i)} by m via (4.3), we obtain (5.21).
2. For (5.22), first using (4.3) and (5.21), we have that for any i, 1 ≤ i ≤ N,
|1 + m^{(i,i)}| ≥ N^{2ε/3}(Nη)^{-1}  (5.36)
holds with 1-high probability in Ω. Together with the Z-version of (5.35),
|Z_i^{(i)}| ≤ N^{-ε/4} Im m^{(i,i)} + N^{ε/3}(Nη)^{-1},
we obtain (5.22).
3. For (5.23): it follows from (5.4) with T = i, (5.20) and (5.22).
4. Now we prove (5.24). Suppose (5.21), (5.23) and (5.10) hold in Ω_0 ⊂ Ω. From our previous results, Ω_0 holds with 1-high probability in Ω. We now prove that (5.24) holds in Ω_0. First we assume that |1 + m| ≤ 3; otherwise (5.24) clearly holds. Together with (5.21), this implies that (Nη)^{-1} ≤ 3N^{-ε/2}. Using (4.3) and |1 + m| ≤ 3, we obtain |1 + m^{(i,i)}| ≤ 4 and |m_G^{(∅,i)}| ≤ 5. With (5.23), the bound |1 + m^{(i,i)}| ≤ 4 implies |G^{(∅,i)}_{ii}| ≥ |5w|^{-1}. The assumption w ∈ I_ε implies |w| ≤ ε (see the definition of I_ε in (2.5)). Then applying (5.10) to Z_i, and using ||z| − 1| ≤ 2ε and the bounds just proved on (Nη)^{-1}, m_G^{(∅,i)} and G^{(∅,i)}_{ii}, we obtain that in Ω_0,
|Z_i| ≤ N^{-ε/3}|G^{(∅,i)}_{ii}|.  (5.37)
Together with |G^{(∅,i)}_{ii}| ≥ |5w|^{-1} and the assumptions ||z| − 1| ≤ 2ε and |w| ≤ ε, we have
| |z|^2G^{(∅,i)}_{ii} + Z_i | ≥ |10w|^{-1}.  (5.38)
Now inserting (5.38) into the identity (5.6) with T = ∅, using |m_G^{(∅,i)}| ≤ 5 and |w| ≤ ε again, we obtain that
G_{ii} = [−w(|z|^2G^{(∅,i)}_{ii} + Z_i)]^{-1} + ε_i,   |ε_i| ≤ 60|w| |w(|z|^2G^{(∅,i)}_{ii} + Z_i)|^{-1}.  (5.39)
Then together with (5.37) and (5.23), in Ω_0 we have
|G_{ii} − |z|^{-2}(1 + m^{(i,i)})| ≤ (O(|w|) + o(1))|1 + m^{(i,i)}|.  (5.40)
Combining (5.21) and (4.3), we have
(1 + m^{(i,i)}) = (1 + o(1))(1 + m).
Inserting this into (5.40), we have
|G_{ii} − |z|^{-2}(1 + m)| ≤ (O(|w|) + o(1))|1 + m|   in Ω_0.  (5.41)
It is easy to extend this result to the following one:
max_i |G_{ii} − |z|^{-2}(1 + m)| ≤ (O(|w|) + o(1))|1 + m|   in Ω̃,  (5.42)
which holds in a set Ω̃ ⊂ Ω such that Ω̃ holds with 1-high probability in Ω. Since m = N^{-1}Σ_i G_{ii}, for small enough ε, with |w| ≤ ε and ||z^2| − 1| ≤ 2ε, (5.42) implies that
(9/10)|1 + m| ≤ |m| ≤ (11/10)|1 + m|   in Ω̃.
This completes the proof of (5.24).
We note: combining (4.3), (5.1), (5.21) and (5.24), we have for any |U|, |T| = O(1),
m^{(U,T)} ∼ m ∼ 1 + m ∼ 1 + m^{(U,T)}.  (5.43)
5. For (5.25): it follows from (5.23) (with 𝒢^{(i,∅)}_{ii} on the l.h.s.), (5.43) and (5.16).
6. For (5.26), first using (5.5), (5.12), (5.13) and (5.20), we obtain that
|G^{(∅,i)}_{ij}| ≤ ϕ^C|w||G^{(∅,i)}_{ii}||G^{(i,i)}_{jj}| √( (Im m_𝒢^{(ij,i)} + |z|^2 Im 𝒢^{(ij,i)}_{jj}) / (Nη) )  (5.44)
holds with 1-high probability in Ω. Applying (5.16) to X^{(i,i)} instead of X, we obtain
|w||G^{(i,i)}_{jj}||𝒢^{(ij,i)}_{jj}| ≤ (log N)^4.  (5.45)
Recall that (5.1) implies (5.19). Applying (5.25) to G^{(i,i)}_{jj}, we have that
|G^{(i,i)}_{jj}| ≤ (log N)^C|m^{(i,i)}|  (5.46)
holds with 1-high probability in Ω. Then inserting (5.45), (5.46), (5.23) and (5.43) into (5.44), with (5.18) we obtain (5.26).
7. For (5.27), from (5.3) we have
G_{ii} = G^{(∅,j)}_{ii} − (G^{(∅,j)}y_j^*)_i (y_jG^{(∅,j)})_i / (1 + y_jG^{(∅,j)}y_j^*).
On the other hand, (5.6) and (5.13) show that (a similar result can be seen in (6.18) of [9])
𝒢_{jj} = −w^{-1}(1 + y_jG^{(∅,j)}y_j^*)^{-1}.
Then
G_{ii} = G^{(∅,j)}_{ii} + w𝒢_{jj}[(G^{(∅,j)}X^T)_{ij} − G^{(∅,j)}_{ij}z^*][(XG^{(∅,j)})_{ji} − G^{(∅,j)}_{ji}z].  (5.47)
Since the X_{jk} (1 ≤ k ≤ N) are independent of G^{(∅,j)}, using the large deviation lemma (e.g. Lemma 6.7 of [9]), as in (3.44) of [10], we have that with 1-high probability,
|(XG^{(∅,j)})_{ji}| + |(G^{(∅,j)}X^T)_{ij}| ≤ ϕ^C √( Im G^{(∅,j)}_{ii} / (Nη) ).  (5.48)
Inserting this bound, (5.25), (5.26) and (5.43) into (5.47), we have
|G_{ii} − G^{(∅,j)}_{ii}| ≤ ϕ^C|w||m| ( Im G^{(∅,j)}_{ii}/(Nη) + 1/(|w||m|Nη) ),
i.e.,
G_{ii} = (1 + O(|w||m|/(Nη)))G^{(∅,j)}_{ii} + O(ϕ^C/(Nη)).
It implies that
G^{(∅,j)}_{ii} = (1 + O(|w||m|/(Nη)))G_{ii} + O(ϕ^C/(Nη)).
Then with (5.15) and (5.18), it implies
|G_{ii} − G^{(∅,j)}_{ii}| ≤ ϕ^C(Nη)^{-1},
and we obtain (5.27).
8. For (5.28), using (5.4) and (5.20), we have
G^{(∅,ij)}_{ii} = −w^{-1}[1 + m_𝒢^{(i,ij)} + Z_i^{(ij)}]^{-1}.
Using (5.10) and (5.20) again, we can bound Z_i^{(ij)} as
|Z_i^{(ij)}| ≤ ϕ^C √( Im m_𝒢^{(i,ij)} / (Nη) ).
Together with (5.43) and (5.21), we obtain (5.28).
9. For (5.29), using (5.5), (5.12) and (5.13), we obtain that
|G_{ij}| ≤ ϕ^C|w||G_{ii}||G^{(i,∅)}_{jj}| √( (Im m_𝒢^{(ij,∅)} + |z|^2 Im 𝒢^{(ij,∅)}_{jj} + |z|^2 Im 𝒢^{(ij,∅)}_{ii}) / (Nη) ) + ϕ^C|wz^2||G_{ii}||G^{(i,∅)}_{jj}||𝒢^{(ij,∅)}_{ij}|.  (5.49)
Furthermore, with (5.7), (5.11), (5.20) and (5.43), we have
|𝒢^{(ij,∅)}_{ij}| ≤ ϕ^C|w||𝒢^{(ij,∅)}_{ii}||𝒢^{(ij,i)}_{jj}| √( Im m^{(ij,ij)} / (Nη) ) ≤ ϕ^C|w||𝒢^{(ij,∅)}_{ii}||𝒢^{(ij,i)}_{jj}| √( |m| / (Nη) ).  (5.50)
These two bounds hold with 1-high probability. As in (5.46), applying (5.23) to 𝒢^{(ij,i)}_{jj}, with (5.43) we have
|𝒢^{(ij,i)}_{jj}| ≤ C|w|^{-1}|m^{(ij,ij)}|^{-1} ≤ C|w|^{-1}|m|^{-1}
with 1-high probability in Ω. With (5.25), (5.28), (4.3) and (5.21), we also have
|G_{ii}| ≤ (log N)^C|m|,   |𝒢^{(ij,∅)}_{ii}| + |𝒢^{(ij,∅)}_{jj}| ≤ C|w|^{-1}|m|^{-1},   |m_𝒢^{(ij,∅)}| ≤ C|m|.
For the G^{(i,∅)}_{jj} in (5.49), as in (5.47) and (5.48), with (5.20) we have
G^{(i,∅)}_{jj} − G^{(i,i)}_{jj} = w𝒢^{(i,∅)}_{ii}(G^{(i,i)}X^T)_{ji}(XG^{(i,i)})_{ij} = O( ϕ^C|w𝒢^{(i,∅)}_{ii}| Im G^{(i,i)}_{jj} (Nη)^{-1} ).  (5.51)
Then applying (5.25) to G^{(i,i)}_{jj} and (5.23) to 𝒢^{(i,∅)}_{ii}, with (5.43) we obtain that
|G^{(i,∅)}_{jj}| ≤ (log N)^C|m|.
Inserting these bounds into (5.49) and (5.50), we obtain (5.29).
10. For (5.30), using (5.10) (with T = ∅), (5.23) and (5.43), we have
|Z_i| ≤ ϕ^C √( (|m| + (|wm|)^{-1}) / (Nη) )  (5.52)
with 1-high probability in Ω. Together with (5.18), we obtain
|Z_i| ≤ ϕ^C √( (|wm|)^{-1} / (Nη) ).  (5.53)
Together with (5.25) and (5.18), we have
|Z_i||wG_{ii}| ≤ ϕ^C √( |w|^{1/2} / (Nη) ).
Then with (2.6), we obtain (5.30).
11. For (5.31), we note that (5.24) implies |m| ≥ (log N)^{-1}. Then with (5.43), we obtain (5.31).
5.3 Polynomialization of Green's functions. In this subsection, using the bounds proved in the last subsection, we write the G's and 𝒢's as polynomials in F and F_{1/2}·F (with negligible error).
We note: in Lemma 3.2 and Lemma 4.9 we assumed X_{ab} = 0, but the bounds proved in Lemma 5.6 and Lemma 5.7 still hold for this type of X; a similar detailed argument was given in Remark 3.8 of [2].
Lemma 5.8. Lemma 5.6 and Lemma 5.7 still hold if one enforces X_{st} = 0 for some fixed 1 ≤ s, t ≤ N.
Note: here s, t are allowed to coincide with the i, j in Lemma 5.6 and Lemma 5.7. For example, from (5.29), we have |G_{st}| ≤ ϕ^C|m|^{1/2}(Nη)^{-1/2}, even if X_{st} = 0.
By the definitions of A_X^{(f)}, P_{1,2,3}(X) and B_{1,2,3}(X), one can see that the values of A_X^{(f)} and P_{1,2,3}(X) would not change if one replaced the G's inside with χ_aG's. Therefore, instead of the G's, we will write χ_aG as a polynomial in F and F_{1/2}·F (with negligible error).
Definition 5.9. For simplicity, we define the notations
α := χ_a|m^{(a,a)}|,   β := χ_a/|wm^{(a,a)}|,   γ := χ_a|w|^{1/2} √( |m^{(a,a)}| / (Nη) ).
We collect some basic properties of these quantities in the following lemma.
Lemma 5.10. Under the assumption of Lemma 3.2, for z, w with ||z^2| − 1| ≤ 2ε and w ∈ I_ε, the bounds
χ_a(log N)^{-1} ≤ α ≤ (log N)^Cβ ≤ (log N)^Cη^{-1},  (5.54)
χ_a(log N)^{-1}N^{-1/2} ≤ γ ≤ N^{-ε/2},  (5.55)
βγ^2 = χ_a(Nη)^{-1},  (5.56)
χ_a(log N)^C(Nη)^{-1} ≤ α ≤ χ_a(log N)^C|w^{-1/2}|  (5.57)
hold with 1-high probability.
Proof of Lemma 5.10: We note that χ_a = 1 implies the condition (5.1). Hence the results in Lemma 5.7 hold with 1-high probability. First, from (5.31) and |w| ≥ η, we obtain the first and third inequalities of (5.54), and the first inequality of (5.55). The second inequality in (5.54) follows from (5.18) and (5.43); it also implies the second inequality of (5.57). Combining the second inequality of (5.54) with (2.6), we obtain the second inequality in (5.55). For (5.56), one can easily check the identity from the definitions of β and γ.
The first inequality of (5.57) follows from (5.21) and (5.43).
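The one-line check of (5.56) mentioned above, using Definition 5.9 and assuming (as the proof uses implicitly) that χ_a takes values in {0,1}, so that χ_a^3 = χ_a:

```latex
\beta\gamma^{2}
= \frac{\chi_a}{|w\,m^{(a,a)}|}\cdot\chi_a^{2}\,|w|\,\frac{|m^{(a,a)}|}{N\eta}
= \chi_a\,(N\eta)^{-1}.
```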
Definition 5.11. Under the assumption of Lemma 3.2, for w ∈ I_ε, ||z| − 1| ≤ 2ε and s, k ≠ a, we define S_{ks} and S̃_{sk} as random variables which are independent of the a-th row and column of X and satisfy
G^{(∅,a)}_{ka} / G^{(∅,a)}_{aa} = Σ^{(a)}_s S_{ks}X_{sa}   and   G^{(∅,a)}_{ak} / G^{(∅,a)}_{aa} = Σ^{(a)}_s X_{sa}S̃_{sk}.
With (5.5), one can obtain their explicit expressions, e.g.,
S_{ks} := z^*wG^{(a,a)}_{kk}G^{(ak,a)}_{ks} − wG^{(a,a)}_{kk} Σ^{(a)}_t G^{(ak,a)}_{st}X_{tk}.
Similarly, we define 𝒮_{ks} and 𝒮̃_{sk} as random variables which are independent of the a-th row and column of X and satisfy
𝒢^{(∅,a)}_{ka} / 𝒢^{(a,∅)}_{aa} = Σ_s 𝒮_{ks}X_{as}   and   𝒢^{(∅,a)}_{ak} / 𝒢^{(a,∅)}_{aa} = Σ_s X_{as}𝒮̃_{sk}.
As one can see, S, S̃, 𝒮 and 𝒮̃ have the same behavior. We collect some basic properties of these quantities in the following lemma.
Lemma 5.12. Assume ||z| − 1| ≤ 2ε, w ∈ I_ε, k ≠ a, and that X satisfies the assumption of Lemma 3.2. For some C > 0, with 1-high probability we have
|χ_aS_{ks}| ≤ χ_aϕ^C(δ_{sk} + γ),  (5.58)
and the same bound holds for S̃, 𝒮 and 𝒮̃. Recalling the definition of the F's in Def. 4.3, for some C > 0 we have
χ_aX_{aa} ∈_n γF,   χ_a(XSX)_{aa} ∈_n γF,  (5.59)
and
χ_a(X^TSS̃X)_{aa} ∈_n Nγ^2F.  (5.60)
Furthermore, (5.58), (5.59) and (5.60) hold uniformly for ||z| − 1| ≤ 2ε, w ∈ I_ε and k, s ≠ a, 1 ≤ k, s ≤ N.
Note: with (5.59), we also have
χ_a(G^{(∅,a)}_{aa})^{-1}(XG^{(∅,a)})_{aa} = χ_a(G^{(∅,a)}_{aa})^{-1} Σ_k X_{ak}G^{(∅,a)}_{ka} = χ_a((XSX)_{aa} + X_{aa}) ∈_n γF.  (5.61)
Proof of Lemma 5.12: Since uniformity is easy to check, we only focus on fixed z, w, s and k.
1. For (5.58), the condition χ_a = 1 implies that we can apply Lemma 5.7 to X^{(a,a)}. Recall that these bounds also hold under the exchange (5.32). Then the bounds (5.25) and (5.26) imply that for s ≠ k,
χ_a|G^{(a,a)}_{kk}| ≤ (log N)^C|m^{(a,a)}|,   χ_a|G^{(ak,a)}_{ks}| ≤ (ϕ^C/|w^{1/2}m^{(a,a)}|) √( |m^{(a,a)}| / (Nη) )  (5.62)
hold with 1-high probability. Similarly, (5.23) and (5.43) imply that for s = k,
χ_a|G^{(ak,a)}_{kk}| ≤ C|wm^{(a,a)}|^{-1}
holds with 1-high probability. Then with the explicit expression for S_{ks} in Def. 5.11, we have that
χ_aS_{ks} = O(δ_{ks} + ϕ^Cγ) − wG^{(a,a)}_{kk} Σ^{(a)}_t G^{(ak,a)}_{st}X_{tk}  (5.63)
holds with 1-high probability. Since the X_{tk} (1 ≤ t ≤ N) are independent of the G^{(ak,a)}_{st}, using the large deviation lemma (e.g. Lemma 6.7 of [9]), as in (3.44) of [10], we have that
|Σ^{(a)}_t G^{(ak,a)}_{st}X_{tk}| ≤ ϕ^C √( Im G^{(ak,a)}_{ss} / (Nη) )  (5.64)
holds with 1-high probability. Applying Lemma 5.7 to X^{(a,a)} again, from (5.27) we have
|G^{(ak,a)}_{ss}| ≤ (log N)^C|m^{(a,a)}| + Cδ_{ks}/|wm^{(a,a)}|
with 1-high probability. Together with the first part of (5.62), (5.63) and (5.64), we obtain
|χ_aS_{ks}| ≤ Cδ_{sk} + ϕ^Cγ + ϕ^C|w^{1/2}m^{(a,a)}|γ  (5.65)
with 1-high probability. Finally, with (5.18) and (5.43), we obtain (5.58).
2. For (5.59), we recall the definition of F in Def. 4.3, especially the two N^{1/2} factors in F. It is easy to see that (5.59) follows from the first inequality of (5.55) and the bounds on S in (5.58).
3. For (5.60), since (5.58) also holds for S̃, with the first inequality of (5.55) we have
|χ_a(SS̃)_{kl}| ≤ ϕ^C(δ_{kl} + γ + Nγ^2) ≤ ϕ^CNγ^2
with 1-high probability. Together with the definition of F, we obtain (5.60).
Now we introduce a method to track and display the dependence of random variables on the indices. First we give a simple example to show the basic idea. Let A_{kl}, 1 ≤ k, l ≤ N, be a family of random variables:
A_{kl} = (G^{(a,a)}_{kk}/|G^{(a,a)}_{kk}|)(G^{(a,a)}_{ll}/|G^{(a,a)}_{ll}|)(XG^{(a,a)}X^T)_{aa},   1 ≤ k, l ≤ N,  (5.66)
where X^T is the transpose of X. By the definition of F and F_0, we can say
A_{kl} ∈ F_0·F_0·F_1 ∈ F.
But the first factor on the r.h.s. of (5.66), i.e. G^{(a,a)}_{kk}/|G^{(a,a)}_{kk}|, depends only on the first index k, the second factor G^{(a,a)}_{ll}/|G^{(a,a)}_{ll}| depends only on the second index l, and the third factor is independent of the indices. Therefore, we prefer to write
A_{kl} ∈ F_0[k]·F_0[l]·F_1[∅].
More precisely, A_{kl} ∈ F_0[k]·F_0[l]·F_1[∅] means that A_{kl} = f_1(k)f_2(l)f_3 with f_1(k) ∈ F_0, f_2(l) ∈ F_0, f_3 ∈ F_1, where f_1(k) depends only on the index k, f_2(l) depends only on the index l, and f_3 does not depend on the indices.
In the general case, to show how a variable depends on the indices, we define the following notation.
Definition 5.13. Let A_I be a family of random variables, where I is a vector of indices not including the index a. We write
A_I ∈ Π_i F_{α_i}[I_i],   F_{α_i} ∈ {F_0, F_{1/2}, F_1, F},
where I_i is a part of I, if and only if there exist f_i(I_i) ∈ F_{α_i} such that A_I = Π_i f_i(I_i) and f_i(I_i) depends only on the indices in I_i.
For the example in (5.66), we write A_{kl} ∈ F_0[k]·F_0[l]·F[∅], where I = (k, l), I_1 = (k), I_2 = (l), I_3 = (∅), α_1 = α_2 = 0 and α_3 = 1.
The following lemma shows that the G's can be written as polynomials in the F's.
Lemma 5.14. For simplicity, we introduce the notation
F_{0,X}[k] := X_{ak}F_0[k] + X_{ka}F_0[k],  (5.67)
i.e.,
f_k ∈ F_{0,X}[k]  ⟺  ∃ g_k, h_k ∈ F_0[k]: f_k = X_{ak}g_k + X_{ka}h_k.
Let w ∈ I_ε and ||z| − 1| ≤ 2ε. Under the assumption of Lemma 3.2, for any D > 0, we have
χ_aG^{(∅,a)}_{aa} ∈_n βF + O_≺(N^{-D})  (5.68)
and
χ_aG_{aa} ∈_n αF + O_≺(N^{-D}).  (5.69)
For any k ≠ a,
χ_a(G^{(∅,a)}_{aa})^{-1}G^{(∅,a)}_{ka} ∈_n γF_{1/2}[k] + F_{0,X}[k] + O_≺(N^{-D})  (5.70)
and
χ_aG_{ak} ∈_n √(α/(Nη)) F_{1/2}[k]·F[∅] + (α + βγ)F[∅]·F_{0,X}[k] + O_≺(N^{-D}).  (5.71)
For any k, l ≠ a,
χ_a(G_{kl} − G^{(a,a)}_{kl}) ∈_n ( (χ_a/(Nη))F_{1/2}[k]F_{1/2}[l] + βγF_{0,X}[k]F_{1/2}[l] + βγF_{0,X}[l]F_{1/2}[k] + βF_{0,X}[k]F_{0,X}[l] ) F[∅] + O_≺(N^{-D}).  (5.72)
Furthermore, (5.68)–(5.72) hold uniformly for ||z| − 1| ≤ 2ε, w ∈ I_ε and 1 ≤ k, l ≠ a ≤ N.
Proof of Lemma 5.14: Because one can easily check uniformity, in the following proof we only focus on fixed w, z, k and l. Recall (4.18) and (5.33); with the assumptions w ∈ I_ε and ||z| − 1| ≤ 2ε, the results in Lemma 5.6 and Lemma 5.7 hold under the assumption of this lemma. Furthermore, these results also hold for X^{(a,a)} (instead of X).
1. We first prove (5.68). Applying Lemma 5.7 to X^{(a,a)}, with (5.25), (5.29) and the first inequality of (5.57), we have
χ_aG^{(a,a)}_{kl} ∈ (δ_{kl}α + |w^{-1/2}|γ)F_0,   and   |w^{-1/2}|γ ≤ α.  (5.73)
Then with
Z_a^{(a)} = (XG^{(a,a)}X^T)_{aa} − m^{(a,a)}  (5.74)
and α := χ_a|m^{(a,a)}|, we have
χ_a(XG^{(a,a)}X^T)_{aa} ∈_n αF   and   χ_aZ_a^{(a)} ∈_n αF.  (5.75)
From (5.6) and (5.20) with i = a, T = a, we have
χ_a𝒢^{(a,∅)}_{aa} = −χ_aw^{-1}(1 + m^{(a,a)} + Z_a^{(a)})^{-1}.
Then with (5.22), for any ε, D > 0 there exists C_{ε,D}, depending on ε and D, such that
χ_a𝒢^{(a,∅)}_{aa} = −χ_aw^{-1} Σ_{k=1}^{C_{ε,D}} (−Z_a^{(a)})^{k−1}(1 + m^{(a,a)})^{−k} + O_≺(N^{-D})
holds with 1-high probability. Hence with (5.43) and χ_aZ_a^{(a)} ∈_n m^{(a,a)}F in (5.75), we obtain that
χ_a𝒢^{(a,∅)}_{aa} ∈_n (wm^{(a,a)})^{-1}F + O_≺(N^{-D}) = βF + O_≺(N^{-D}),  (5.76)
which implies (5.68), using the fact that 𝒢^{(a,∅)}_{aa} and G^{(∅,a)}_{aa} have the same behavior.
2. Now we prove (5.69). From (5.30) and (5.4) with i = a and T = ∅, for any ε, D > 0 there exists C_{ε,D}, depending on ε and D, such that with 1-high probability
χ_aG_{aa} = −χ_aw^{-1} Σ_{k=1}^{C_{ε,D}} (−Z_a)^{k−1}[1 + m_𝒢^{(a,∅)} + |z|^2𝒢^{(a,∅)}_{aa}]^{−k} + O_≺(N^{-D}).  (5.77)
From (5.10) and (5.13), we have
Z_a = −z(X^T𝒢^{(a,∅)})_{aa} − z^*(𝒢^{(a,∅)}X)_{aa} + (X^T𝒢^{(a,∅)}X)_{aa} − m_𝒢^{(a,∅)}.  (5.78)
Now we claim that for any D,
χ_aZ_a ∈_n βF + O_≺(N^{-D})  (5.79)
and
χ_a[1 + m_𝒢^{(a,∅)} + |z|^2𝒢^{(a,∅)}_{aa}]^{-1} ∈_n wαF + O_≺(N^{-D}).  (5.80)
Combining (5.79), (5.80) and (5.77), we obtain (5.69).
2.a We prove (5.79) first. Using the 𝒢-version of (5.61) and (5.76), we can write the first two terms on the r.h.s. of (5.78) as
χ_a(𝒢^{(a,∅)}X)_{aa}, χ_a(X^T𝒢^{(a,∅)})_{aa} ∈_n βγF + O_≺(N^{-D}).  (5.81)
Similarly, for the third term on the r.h.s. of (5.78), using (5.2) we can write it as
(X^T𝒢^{(a,∅)}X)_{aa} = (X^T𝒢^{(a,a)}X)_{aa} + (X^T𝒢^{(a,∅)})_{aa}(𝒢^{(a,∅)}X)_{aa}/𝒢^{(a,∅)}_{aa}.
Using (5.75), (5.61) and (5.76), we obtain
χ_a(X^T𝒢^{(a,∅)}X)_{aa} ∈_n αF + βγ^2F + O_≺(N^{-D}).  (5.82)
For the fourth term on the r.h.s. of (5.78), using (5.2), we have
𝒢^{(a,a)}_{kk} = 𝒢^{(a,∅)}_{kk} − 𝒢^{(a,∅)}_{ka}𝒢^{(a,∅)}_{ak}/𝒢^{(a,∅)}_{aa}.
Together with (5.60), it implies that
m_𝒢^{(a,∅)} = m^{(a,a)} + N^{-1} Σ_k 𝒢^{(a,∅)}_{ka}𝒢^{(a,∅)}_{ak}/𝒢^{(a,∅)}_{aa}.  (5.83)
Now inserting these bounds back into (5.78) and using the relations between α, β and γ in (5.54) and (5.55), we obtain (5.79).
2.b Now we prove (5.80). With (5.83) and
(𝒢^{(a,∅)}_{aa})^{-1} = −w(1 + (XG^{(a,a)}X^T)_{aa}) = −w(1 + m^{(a,a)} + Z_a^{(a)}),
we write
[1 + m_𝒢^{(a,∅)} + |z|^2𝒢^{(a,∅)}_{aa}]^{-1} = (𝒢^{(a,∅)}_{aa})^{-1} / [ (1 + m^{(a,a)})(𝒢^{(a,∅)}_{aa})^{-1} + N^{-1}((X𝒮𝒮̃X^T)_{aa} + 1) + |z|^2 ]  (5.84)
= −w(1 + (XG^{(a,a)}X^T)_{aa}) / [ −w(1 + m^{(a,a)})(1 + m^{(a,a)} + Z_a^{(a)}) + N^{-1}((X𝒮𝒮̃X^T)_{aa} + 1) + |z|^2 ].
We write this denominator as
[ −w(1 + m^{(a,a)})(1 + m^{(a,a)}) + |z|^2 ] + [ −w(1 + m^{(a,a)})Z_a^{(a)} + N^{-1}((X𝒮𝒮̃X^T)_{aa} + 1) ].  (5.85)
With (5.22), (5.43) and (5.18), we can bound the first term in the second bracket as follows:
χ_a|w(1 + m^{(a,a)})Z_a^{(a)}| ≤ N^{-ε/5}
holds with 1-high probability. Together with (5.60) and (5.55), with 1-high probability we can bound the second bracket of (5.85) as
χ_a| −w(1 + m^{(a,a)})Z_a^{(a)} + N^{-1}((X𝒮𝒮̃X^T)_{aa} + 1) | ≤ N^{-ε/6}.  (5.86)
On the other hand, we claim that for some C > 0 the following inequality holds with 1-high probability:
χ_a| −w(1 + m^{(a,a)})(1 + m^{(a,a)}) + |z|^2 | ≥ χ_a(log N)^{-C}.  (5.87)
If (5.87) did not hold, then χ_a = 1 and 1 + m^{(a,a)} = (−|z| + O((log N)^{-C}))w^{-1/2}. With (4.3), (5.21) and ||z| − 1| ≤ 2ε, we obtain
1 + m_𝒢^{(a,∅)} = (−|z| + O((log N)^{-C}))w^{-1/2}.  (5.88)
It follows from 1 + m^{(a,a)} = (−|z| + O((log N)^{-C}))w^{-1/2} and (5.23) that
𝒢^{(a,∅)}_{aa} = (|z|^{-1} + O((log N)^{-C}))w^{-1/2}.  (5.89)
Inserting these into (5.10), with (2.6), we have
|Z_a| = O((log N)^{-C})|w|^{-1/2}.  (5.90)
Now inserting (5.88), (5.89) and (5.90) into (5.4), we obtain |G_{aa}| ≥ (log N)^{C−1}|w|^{-1/2} for any C > 0, which contradicts (5.15). Therefore, (5.87) must hold for some C > 0.
Recall that the denominator on the r.h.s. of (5.84) equals the sum of the l.h.s. of (5.86) and (5.87) (see (5.85)). Then inserting (5.86) and (5.87) into (5.85), we have that for any fixed D there exists C_{ε,D} such that with 1-high probability
χ_a[1 + m_𝒢^{(a,∅)} + |z|^2𝒢^{(a,∅)}_{aa}]^{-1} = −χ_aw(1 + (XG^{(a,a)}X^T)_{aa}) Σ_{k=1}^{C_{ε,D}} [ −(−w(1 + m^{(a,a)})Z_a^{(a)} + N^{-1}((X𝒮𝒮̃X^T)_{aa} + 1)) ]^{k−1} [ −w(1 + m^{(a,a)})(1 + m^{(a,a)}) + |z|^2 ]^{−k} + O_≺(N^{-D}).  (5.91)
For the terms in (5.91), we apply (5.75) to (XG^{(a,a)}X^T)_{aa} and Z_a^{(a)}, apply (5.24) to (1 + m^{(a,a)}), apply (5.60) to (X𝒮𝒮̃X^T)_{aa}, apply (5.55) to γ, and apply (5.87) to the denominator of (5.91); we obtain
χ_a[1 + m_𝒢^{(a,∅)} + |z|^2𝒢^{(a,∅)}_{aa}]^{-1} ∈_n −χ_a(wF + wαF) Σ_{k=1}^{C_{ε,D}} (−wα^2F + γ^2F)^{k−1} + O_≺(N^{-D}).
With the bounds on α and γ in (5.54), (5.55) and (5.57), this implies (5.80). Combining (5.79), (5.80) and (5.77), we obtain (5.69).
3. For (5.70): it clearly follows from Def. 5.11, (5.58) and Def. 5.13.
4. Now we prove (5.71). First, with (5.6) and (5.13), we have
(G_{aa})^{-1} = −w(1 + (YG^{(∅,a)}Y^T)_{aa}).  (5.92)
Applying (5.3) to G_{ak} with i = a, and recalling Y = X − zI, we have
G_{ak} = G^{(∅,a)}_{ak} + wG_{aa}[(G^{(∅,a)}X^T)_{aa} − z^*G^{(∅,a)}_{aa}][(XG^{(∅,a)})_{ak} − zG^{(∅,a)}_{ak}]  (5.93)
= G^{(∅,a)}_{ak} + wG_{aa}G^{(∅,a)}_{aa}|z|^2G^{(∅,a)}_{ak} − zwG^{(∅,a)}_{ak}G_{aa}(G^{(∅,a)}X^T)_{aa} − z^*wG_{aa}G^{(∅,a)}_{aa}(XG^{(∅,a)})_{ak} + wG_{aa}(G^{(∅,a)}X^T)_{aa}(XG^{(∅,a)})_{ak}.
Writing the first term on the r.h.s. as G^{(∅,a)}_{ak}G_{aa}(G_{aa})^{-1} and applying (5.92) to (G_{aa})^{-1}, we can write the first three terms on the r.h.s. of (5.93) as
[ −1 − (XG^{(∅,a)}X^T)_{aa} + z^*(XG^{(∅,a)})_{aa} ] wG_{aa}G^{(∅,a)}_{ak}.
Therefore
G_{ak} = [ −1 − (XG^{(∅,a)}X^T)_{aa} + z^*(XG^{(∅,a)})_{aa} ] wG_{aa}G^{(∅,a)}_{ak} + [ −z^*G^{(∅,a)}_{aa} + (G^{(∅,a)}X^T)_{aa} ] wG_{aa}(XG^{(∅,a)})_{ak}.  (5.94)
Inserting (5.68)–(5.70), (5.81), (5.82), the fact αβ = χ_a, and (5.61) into (5.94), we have
χ_aG_{ak} ∈_n (1 + α + βγ + βγ^2)F[∅]·(γF_{1/2}[k] + F_{0,X}[k]) + (1 + γ)F[∅]·(XG^{(∅,a)})_{ak} + O_≺(N^{-D}).
More precisely, what we used here is the G-version of (5.81) and (5.82), i.e.,
χ_a(XG^{(∅,a)})_{aa} ∈_n βγF   and   χ_a(XG^{(∅,a)}X^T)_{aa} ∈ (α + βγ^2)F.
These follow from (5.81), (5.82) and the symmetry between G and 𝒢.
Next, using (5.57), we have
χ_aG_{ak} ∈_n (α + βγ)F[∅]·(γF_{1/2}[k] + F_{0,X}[k]) + F[∅]·(XG^{(∅,a)})_{ak} + O_≺(N^{-D})  (5.95)
∈_n χ_a√(α/(Nη)) F_{1/2}[k]·F[∅] + (α + βγ)F[∅]·F_{0,X}[k] + χ_a(XG^{(∅,a)})_{ak}F[∅] + O_≺(N^{-D}).
For the (XG^{(∅,a)})_{ak} in (5.95), using (5.2), for k ≠ a we have (note: s can be a)
G^{(a,a)}_{sk} = G^{(∅,a)}_{sk} − G^{(∅,a)}_{sa}G^{(∅,a)}_{ak}/G^{(∅,a)}_{aa}.
Together with (5.61), (5.68) and (5.70), it implies that
χ_a(XG^{(∅,a)})_{ak} = χ_a(XG^{(a,a)})_{ak} + χ_a ((XG^{(∅,a)})_{aa}/G^{(∅,a)}_{aa}) G^{(∅,a)}_{ak}  (5.96)
∈_n χ_a(XG^{(a,a)})_{ak} + βγ^2F[∅]·F_{1/2}[k] + βγF[∅]·F_{0,X}[k] + O_≺(N^{-D}).
It follows from (5.73) (note: |w^{-1/2}|γ = α^{1/2}(Nη)^{-1/2}) that
χ_a(XG^{(a,a)})_{ak} ∈_n √(α/(Nη)) F_{1/2}[k] + αF_0[k]X_{ak} ∈_n √(α/(Nη)) F_{1/2}[k] + αF_{0,X}[k].
Inserting this into (5.96), with Lemma 5.10, we obtain
χ_a(XG^{(∅,a)})_{ak} ∈_n √(α/(Nη)) F_{1/2}[k]·F[∅] + (α + βγ)F[∅]·F_{0,X}[k] + O_≺(N^{-D}).  (5.97)
Together with (5.95), we obtain (5.71).
5. Now we prove (5.72). With (5.97), (5.70) and Lemma 5.10, we have
χ_a[(G^{(∅,a)}X^T)_{ka} − z^*G^{(∅,a)}_{ka}] ∈_n βγF_{1/2}[k]·F[∅] + βF[∅]·F_{0,X}[k] + O_≺(N^{-D}).  (5.98)
Together with (5.3), (5.92), (5.68) and (5.56), we can write G_{kl} as follows:
χ_a(G_{kl} − G^{(∅,a)}_{kl}) = χ_awG_{aa}[(G^{(∅,a)}X^T)_{ka} − z^*G^{(∅,a)}_{ka}][(XG^{(∅,a)})_{al} − zG^{(∅,a)}_{al}]  (5.99)
∈_n β^{-1}F[∅]·(βγF_{1/2}[k]·F[∅] + βF[∅]·F_{0,X}[k])·(βγF_{1/2}[l]·F[∅] + βF[∅]·F_{0,X}[l]) + O_≺(N^{-D})
∈_n (Nη)^{-1}F_{1/2}[k]·F_{1/2}[l]·F[∅] + βγ(F_{0,X}[k]·F_{1/2}[l] + F_{0,X}[l]·F_{1/2}[k])·F[∅] + βF_{0,X}[k]·F_{0,X}[l]·F[∅] + O_≺(N^{-D}).
Furthermore, with (5.2), (5.68), (5.70) and (5.56), we can write G^{(∅,a)}_{kl} as
G^{(∅,a)}_{kl} − G^{(a,a)}_{kl} = G^{(∅,a)}_{ka}G^{(∅,a)}_{al}/G^{(∅,a)}_{aa} = G^{(∅,a)}_{aa} (G^{(∅,a)}_{ka}/G^{(∅,a)}_{aa}) (G^{(∅,a)}_{al}/G^{(∅,a)}_{aa}).
Therefore, together with (5.99), we obtain (5.72).
Next, we write the terms appearing in Lemma 4.9 as polynomials in F, F_{1/2} and F_{1/2}·F (with proper coefficients and negligible error terms).
Proof of Lemma 5.15: 1. For (5.100): using (5.72) and (5.69), we expand χ_a(m − m^{(a,a)}); then with (5.54) and (5.55), we obtain (5.100).
2. For (5.101): it follows from (5.72), F_{1/2}·F_{1/2} ⊂ F and the fact that X_{ab} = 0.
3. With (5.69), (5.68) and (5.81), we obtain (5.102).
4. Now we prove (5.103). With (5.3) and (5.92) again, we write
(YG)_{ak} = −wG_{aa}(YG^{(∅,a)})_{ak} = −wG_{aa}[(XG^{(∅,a)})_{ak} − zG^{(∅,a)}_{ak}].  (5.110)
Then using (5.98) and (5.69), we obtain (5.103).
5. For (5.104), by definition we write (YG^2)_{ab} as a sum of terms each of which has been bounded above. With (5.108) and Lemma 5.10, and then with (5.102), (5.71) and Lemma 5.10 again, we obtain
χ_a(YG)_{aa}G_{ab} ∈_n (χ_a/η)(F_{1/2}[b]·F[∅] + F[b]X_{ba}),
and we obtain (5.104). 6. Then with Lemma 5.10, we obtain (5.105).
7. For (5.106), we write it out term by term. Using (5.72) and Lemma 5.10, we bound χ_aG_{bb}G_{bb}; using (5.69) and Lemma 5.10, we have
χ_aG_{ba}G_{ab} ∈_n (β/η)F + O_≺(N^{-D}),
which completes the proof of (5.106).
8. For (5.107), it follows from
(YG^2Y^T)_{aa} = 𝒢_{aa} + w(𝒢^2)_{aa}
together with (5.69) and (5.105).
Now we are ready to prove Lemma 4.9, which is the key lemma in the proof of our main result.
5.4 Proof of Lemma 4.9. First, with m − m^{(a,a)} = O((Nη)^{-1}) (see (4.3)) and the definition of χ_a, for any fixed D > 0, with 1-high probability we can write h(t_X) as
h(t_X) = χ_ah(t_X) = Σ_{k=0}^{C_{ε,D}} (1/k!) h^{(k)}(t_{X^{(a,a)}}) χ_a ( (Re m − Re m^{(a,a)}) / (N^ε(Nη)^{-1}) )^k + O(N^{-D}),
where the constant C_{ε,D} depends on ε and D, and h^{(k)} is the k-th derivative of h. Using (5.100) and the fact that h is smooth and supported in [1, 2], we obtain
h(t_X) ∈_n F + O_≺(N^{-D})  (5.112)
and
h(t_X) − h(t_{X^{(a,a)}}) ∈_n N^{-ε}1(|t_{X^{(a,a)}}| ≤ 2)F + O_≺(N^{-D}).  (5.113)
Note: 1(|t_{X^{(a,a)}}| ≤ 2) = 1(|Re m^{(a,a)}| ≤ 2N^ε(Nη)^{-1}). Similarly, one can prove
h′(t_X), h′′(t_X), h′′′(t_X) ∈_n 1(|t_{X^{(a,a)}}| ≤ 2)F + O_≺(N^{-D}).  (5.114)
Using (5.112), (5.113) and (5.100), we have
h(t_X)Re m − h(t_{X^{(a,a)}})Re m^{(a,a)} ∈_n h(t_X)Re m^{(a,a)} − h(t_{X^{(a,a)}})Re m^{(a,a)} + (Nη)^{-1}F + O(N^{-D}) ∈_n (Nη)^{-1}F + O_≺(N^{-D}),  (5.115)
which implies (4.19).
For (4.20), recall that B_m(X) is defined as
B_m(X) := (1/(m!(N^{1−ε}η)^{m−1})) [ mh^{(m−1)}(t_X) + h^{(m)}(t_X)t_X ].
Then using (5.112), (5.114) and (5.100), we obtain (4.20).
Similarly, for (4.21): the terms appearing in the definition (3.20) have all been bounded, in (5.104), (5.69), (5.106), (5.107) and (5.101). With a simple calculation, one can obtain (4.21) and complete the proof.
References
[1] Y. Ameur, H. Hedenmalm, and N. Makarov, Fluctuations of eigenvalues of random normal matrices, Duke Math. J. 159 (2011), 31–81.
[2] Y. Ameur and J. Ortega-Cerdà, Beurling-Landau densities of weighted Fekete sets and correlation kernel estimates, preprint arXiv:1110.0284 (2011).
[3] Z. D. Bai, Circular law, Ann. Probab. 25 (1997), no. 1, 494–529.
[4] Z. D. Bai and J. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Mathematics Monograph Series, vol. 2, Science Press, Beijing, 2006.
[5] P. Billingsley, Probability and Measure, 3rd ed., Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons Inc., New York, 1995.
[6] A. Bloemendal, L. Erdős, A. Knowles, H.-T. Yau, and J. Yin, to appear (2013).
[7] P. Bleher and R. Mallison Jr., Zeros of sections of exponential sums, Int. Math. Res. Not. (2006), Art. ID 38937, 49.
[8] A. Borodin and C. D. Sinclair, The Ginibre ensemble of real random matrices and its scaling limits, Comm. Math. Phys. 291 (2009), no. 1, 177–224.
[9] P. Bourgade, H.-T. Yau, and J. Yin, Local circular law for random matrices, preprint arXiv:1206.1449 (2012).
[10] P. Bourgade, H.-T. Yau, and J. Yin, The local circular law II: the edge case, preprint arXiv:1206.3187 (2012).
[11] R. Boyer and W. Goh, On the zero attractor of the Euler polynomials, Adv. in Appl. Math. 38 (2007), no. 1, 97–132.
[12] O. Costin and J. Lebowitz, Gaussian fluctuations in random matrices, Phys. Rev. Lett. 75 (1995), no. 1, 69–72.
[13] E. B. Davies, The functional calculus, J. London Math. Soc. (2) 52 (1995), no. 1, 166–176.
[14] A. Edelman, The probability that a random real Gaussian matrix has k real eigenvalues, related distributions, and the circular law, J. Multivariate Anal. 60 (1997), no. 2, 203–232.
[15] L. Erdős, H.-T. Yau, and J. Yin, Bulk universality for generalized Wigner matrices, to appear in PTRF, preprint arXiv:1001.3453 (2010).
[16] L. Erdős, H.-T. Yau, and J. Yin, Rigidity of eigenvalues of generalized Wigner matrices, to appear in Adv. Math., preprint arXiv:1007.4652 (2010).
[17] P. J. Forrester and T. Nagao, Eigenvalue statistics of the real Ginibre ensemble, Phys. Rev. Lett. 99 (2007).
[18] J. Ginibre, Statistical ensembles of complex, quaternion, and real matrices, J. Mathematical Phys. 6 (1965), 440–449.
[19] V. L. Girko, The circular law, Teor. Veroyatnost. i Primenen. 29 (1984), no. 4, 669–679 (Russian).
[20] F. Götze and A. Tikhomirov, The circular law for random matrices, Ann. Probab. 38 (2010), no. 4, 1444–1491.
[21] T. Kriecherbauer, A. B. J. Kuijlaars, K. D. T.-R. McLaughlin, and P. D. Miller, Locating the zeros of partial sums of e^z with Riemann-Hilbert methods, Integrable Systems and Random Matrices, Contemp. Math., vol. 458, Amer. Math. Soc., Providence, RI, 2008, pp. 183–195.
[22] E. Lukacs, Characteristic Functions, Griffin's Statistical Monographs & Courses, No. 5, Hafner Publishing Co., New York, 1960.
[23] G. Pan and W. Zhou, Circular law, extreme singular values and potential theory, J. Multivariate Anal. 101 (2010), no. 3, 645–656.
[24] N. Pillai and J. Yin, Universality of covariance matrices, preprint arXiv:1110.2501 (2011).
[25] B. Rider and B. Virág, The noise in the circular law and the Gaussian free field, Int. Math. Res. Not. IMRN 2 (2007).
[26] M. Rudelson, Invertibility of random matrices: norm of the inverse, Ann. of Math. 168 (2008), no. 2, 575–600.
[27] M. Rudelson and R. Vershynin, The Littlewood-Offord problem and invertibility of random matrices, Adv. Math. 218 (2008), no. 2, 600–633.
[28] C. D. Sinclair, Averages over Ginibre's ensemble of random real matrices, Int. Math. Res. Not. IMRN 5 (2007).
[29] A. Soshnikov, Gaussian fluctuation for the number of particles in Airy, Bessel, sine, and other determinantal random point fields, J. Statist. Phys. 100 (2000), no. 3-4, 491–522.
[30] T. Tao and V. Vu, Random matrices: the circular law, Commun. Contemp. Math. 10 (2008), no. 2, 261–307.
[31] T. Tao and V. Vu, Random matrices: universality of ESDs and the circular law, Ann. Probab. 38 (2010), no. 5, 2023–2065. With an appendix by Manjunath Krishnapur.
[32] T. Tao and V. Vu, Random matrices: universality of local spectral statistics of non-Hermitian matrices, preprint arXiv:1206.1893 (2012).
[33] P. Wood, Universality and the circular law for sparse random matrices, Ann. Appl. Probab. 22 (2012), no. 3, 1266–1300.