
Crystallization of random matrix orbits

The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters.

Citation: Gorin, Vadim, and Adam W. Marcus. "Crystallization of random matrix orbits." International Mathematics Research Notices 2020, no. 3 (February 2020): 883–913. doi:10.1093/imrn/rny052. ©2020 Author(s).

As Published 10.1093/IMRN/RNY052

Publisher Oxford University Press (OUP)

Version Author's final manuscript

Citable link https://hdl.handle.net/1721.1/124436

Terms of Use: Creative Commons Attribution-NonCommercial-ShareAlike. Detailed terms: http://creativecommons.org/licenses/by-nc-sa/4.0/


arXiv:1706.07393v2 [math.PR] 6 Mar 2018

VADIM GORIN AND ADAM W. MARCUS

Abstract. Three operations on eigenvalues of real/complex/quaternion (corresponding to β = 1, 2, 4) matrices, obtained from cutting out principal corners, adding, and multiplying matrices, can be extrapolated to general values of β > 0 through associated special functions. We show that the β → ∞ limit for these operations leads to the finite free projection, additive convolution, and multiplicative convolution, respectively.

The limit is the most transparent for cutting out the corners, where the joint distribution of the eigenvalues of principal corners of a uniformly-random general β self–adjoint matrix with fixed eigenvalues is known as the β–corners process. We show that as β → ∞ these eigenvalues crystallize on an irregular lattice consisting of the roots of derivatives of a single polynomial. At second order, we observe a version of the discrete Gaussian Free Field (dGFF) put on top of this lattice, which provides a new explanation as to why the (continuous) Gaussian Free Field governs the global asymptotics of random matrix ensembles.

1. Matrix operations at general β

Fix N and consider two N × N self–adjoint matrices AN, BN with either real, complex, or quaternion entries. In this article we are mostly interested in the (real) eigenvalues of these matrices. In particular, we consider three natural matrix operations, each of which has a non-trivial influence on the eigenvalues.

(1) We can cut out the principal top–left k × k corner of the matrix, AN ↦ Ak, where Ak is the projection of AN.

(2) We can add the matrices, (AN, BN) ↦ AN + BN.

(3) We can multiply the matrices, (AN, BN) ↦ ANBN.

In principle, all three operations can be expressed in terms of multiplication, since for small ε, (1 + εAN)(1 + εBN) ≈ 1 + ε(AN + BN), thus reducing addition to multiplication (all three operations can be expressed in terms of addition as well, but in a less obvious way, see (7)). On the other hand, if the eigenvalues of BN are positive, then ANBN and (BN)^{1/2}AN(BN)^{1/2} differ by conjugation, and therefore have the same spectrum. If now BN is the matrix of the projector onto the first k basis vectors, then (BN)^{1/2}AN(BN)^{1/2} is precisely the k × k corner of AN, i.e. the projection Ak. Nevertheless, we will consider all three operations, as this will provide more insights.
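The reduction of corner-cutting to multiplication is easy to check numerically. The following sketch (our illustration, not from the paper) takes a real symmetric matrix (β = 1), multiplies by the projector B onto the first k coordinates, and compares the spectrum of B^{1/2}AB^{1/2} = BAB with the eigenvalues of the k × k corner:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 5, 3

# A real symmetric matrix; its eigenvectors play no role in this identity.
X = rng.standard_normal((N, N))
A = (X + X.T) / 2

# B projects onto the first k basis vectors; B is idempotent, so B^{1/2} = B.
B = np.diag([1.0] * k + [0.0] * (N - k))

# Spectrum of B A B = eigenvalues of the k x k corner of A, padded with N - k zeros.
spec_BAB = np.sort(np.linalg.eigvalsh(B @ A @ B))
spec_corner = np.sort(np.concatenate([np.linalg.eigvalsh(A[:k, :k]), np.zeros(N - k)]))
assert np.allclose(spec_BAB, spec_corner)
```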

For deterministic matrices, the relation between the spectra of AN and Ak is folklore; it is given by simple interlacing conditions (cf. [N] and Definition 1.3). The result of the operation (AN, BN) ↦ AN + BN on the spectrum is the subject of the celebrated Horn’s (ex-)conjecture, and similar results are now known for (AN, BN) ↦ ANBN, see [Fu] for a review.

Our point of view is different, as we consider random AN, BN with invariant distributions, which means that given the eigenvalues of a matrix, its eigenvectors are conditionally


uniform. This is the same as declaring the distribution to be invariant under the action of orthogonal/unitary/symplectic group (depending on the base field) by conjugations — hence, the name.

It suffices to study the case when the eigenvalues of AN and BN are deterministic, since other cases can be obtained as mixtures. We therefore fix two N–tuples of reals a = (a1, . . . , aN), b = (b1, . . . , bN) and define AN, BN to be uniformly random independent matrices with the corresponding prescribed eigenvalues.

For each of the three operations on matrices, we arrive at an operation on the eigenvalues, whose result is a random spectrum.

(1) a ↦ π^{N→k}_β(a) = (a^{(k)}_1, . . . , a^{(k)}_k), where the latter are the k random (real) eigenvalues of Ak, the corner of AN;

(2) (a, b) ↦ a ⊞β b, where the latter are the N random (real) eigenvalues of AN + BN;

(3) Assume that all eigenvalues in a, b are positive, and define (a, b) ↦ a ⊠β b, where the latter are the N random (real) eigenvalues of either ANBN, or BNAN, or (BN)^{1/2}AN(BN)^{1/2}, or (AN)^{1/2}BN(AN)^{1/2} (the eigenvalues of all four matrices are the same, and the last two matrices are self–adjoint, which shows that these eigenvalues are real).

The subscript β in the operations π^{N→k}_β, ⊞β, ⊠β serves as an indication that they depend on whether we deal with real/complex/quaternion matrices, corresponding to β = 1, 2, 4. More generally, these operations can be extrapolated to general values of the real parameter β > 0. For the projection π^{N→k}_β the result is known as the β–corners process, cf. [N], [GS], [BG15]. For addition ⊞β and multiplication ⊠β this is done by identifying the random eigenvalues with their Laplace-type integral transforms, related to multivariate Bessel functions and Heckman–Opdam hypergeometric functions, respectively. The operations then turn into simple multiplication of these special functions and re-expansion of the result in terms of the same functions¹, see Section 2 for the details.

An alternative, yet conjectural, approach to the operations on random matrices at general values of β has been proposed in [Ed] where the framework of β–ghosts and shadows was developed. The idea there is to treat arbitrary β > 0 as the dimension of a (typically non-existent) real–division algebra, by expressing all probabilistic properties of interest through Dirichlet distributions. We do not know whether the technique of [Ed] can be pushed through to the point of reproducing the operations ⊞β, ⊠β.

Our first result concerns the dependence on β of such operations.

¹ There is a tricky point in the definition: the positivity of the coefficients in the re-expansion is a well-known conjecture, which is still open. In the event that the positivity is not true for some values of β, the distributions of a ⊞β b, a ⊠β b might fail to be probability measures, but rather be tempered distributions, i.e. continuous linear functionals on smooth test functions. For π^{N→k}_β the situation is simpler, as the available explicit formulas make positivity immediate.


Theorem 1.1. Let z be a formal variable. For fixed a and b, define the polynomials Q_{N→k}(z), Q_⊞(z), Q_⊠(z) of degrees k, N, N, respectively, as the expected characteristic polynomials of the corresponding matrices, i.e.

(1)   Q_{N→k}(z) = E ∏_{α ∈ π^{N→k}_β(a)} (z − α),

(2)   Q_⊞(z) = E ∏_{α ∈ a ⊞β b} (z − α),

(3)   Q_⊠(z) = E ∏_{α ∈ a ⊠β b} (z − α).

Then the polynomials Q_{N→k}(z), Q_⊞(z), Q_⊠(z) (i.e. their coefficients) do not depend on the choice of β > 0. They can be computed as follows:

(4)   Q_{N→k}(z) = 1/(N(N−1)···(k+1)) · (∂/∂z)^{N−k} ∏_{i=1}^{N} (z − a_i),

(5)   Q_⊞(z) = (1/N!) Σ_{σ∈S_N} ∏_{i=1}^{N} (z − a_i − b_{σ(i)}),

(6)   Q_⊠(z) = (1/N!) Σ_{σ∈S_N} ∏_{i=1}^{N} (z − a_i b_{σ(i)}).

The proof of Theorem 1.1 is given in Section 3.2. Note that Theorem 1.1 includes the fact that the expectations of the elementary symmetric functions in the variables π^{N→k}_β(a), a ⊞β b, a ⊠β b do not depend on β. A similar observation was recently used in [B2] in the context of the Macdonald measures (following an earlier observation [Mac, Page 318] that the application of the Macdonald q–difference operator to the Macdonald reproducing kernel produces a factor independent of q). We believe that this is more than a coincidence, but rather a manifestation of a general phenomenon. Indeed, Macdonald polynomials can be degenerated to the Heckman–Opdam and multivariate Bessel functions that we rely on, cf. [BG15], [Su1] and the discussion in the next section. A parallel degeneration leads to a class of Gibbs probability measures on (continuous) interlacing particle configurations, as studied in [OV]. On the other hand, the same article (mostly for β = 2, see however the very last paragraph there) explains that such measures can also be obtained from β–corners processes with fixed top rows. An extension of Theorem 1.1 to Macdonald polynomials is also explained in Section 4.
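Formulas (4) and (5) are directly computable. The sketch below (our illustration; the function names are ours) implements the expected characteristic polynomials with numpy and checks (5) on the example a = b = (1, −1), where the finite free additive convolution gives z² − 2:

```python
import itertools
import math
import numpy as np

def Q_projection(a, k):
    """Formula (4): normalized (N-k)-th derivative of prod (z - a_i); coefficients, highest power first."""
    N = len(a)
    p = np.poly(a)                      # coefficients of prod (z - a_i)
    for _ in range(N - k):
        p = np.polyder(p)               # apply (d/dz)^{N-k}
    return p / (math.factorial(N) // math.factorial(k))   # divide by N(N-1)...(k+1)

def Q_additive(a, b):
    """Formula (5): average of prod (z - a_i - b_{sigma(i)}) over all permutations sigma."""
    N = len(a)
    total = np.zeros(N + 1)
    for sigma in itertools.permutations(b):
        total += np.poly([ai + bi for ai, bi in zip(a, sigma)])
    return total / math.factorial(N)

# a = b = (1, -1): the two summands are z^2 - 4 and z^2, so Q_plus(z) = z^2 - 2.
assert np.allclose(Q_additive([1.0, -1.0], [1.0, -1.0]), [1.0, 0.0, -2.0])
# Projection of the cubic with roots (-1, 0, 1) to k = 2: (z^3 - z)' / 3 = z^2 - 1/3.
assert np.allclose(Q_projection([-1.0, 0.0, 1.0], 2), [1.0, 0.0, -1.0 / 3.0])
```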

For the classical Gaussian/Laguerre/Jacobi ensembles of random matrices, the β-independence of the expected characteristic polynomial was also previously noticed, see e.g. [BG15, Lemma 5.3]. Theorem 1.1 in fact holds for a much larger class of distributions. In [Mar2] it is shown that Theorem 1.1 holds for any distribution that is invariant under conjugation by signed permutation matrices, and it has been noted in [PuSa] that the same holds for any distribution that is invariant under conjugation by matrices in the standard representation of S_{N+1}.

The polynomial operations defined by (5) and (6) have a long history in the literature (dating at least back to [Wa, Page 176]) due to an interest in understanding operations


that preserve real-rootedness of polynomials. These ideas were more recently extended to the realm of stable polynomials (a multivariate version of real-rootedness) in a series of works by Borcea and Brändén, see, for example, [BB]. One of their results implies that any convolution that treats the coefficients of polynomials linearly can be written using the additive convolution (sometimes at the cost of needing to use extra variables). In particular, for fixed a and b, one has

(7)   Σ_{σ∈S_N} ∏_{i=1}^{N} (z − a_i b_{σ(i)}) = ( ∏_{i=1}^{N} b_i ) · Σ_{σ∈S_N} ∏_{i=1}^{N} (y − a_i − z/b_{σ(i)}) |_{y=0},

which, in theory, would allow one to compute Q_⊠(z) using Q_⊞(z). We have not found any advantage to treating both as Q_⊞(z) (or treating both as Q_⊠(z), as was mentioned earlier).

Recent interest in such operations has come from new techniques involving the expected characteristic polynomials of certain combinations of random matrices. In particular, Theorem 1.1 links us directly to the finite free probability developed in [MSS], [Mar], where the operations producing Q_⊞(z), Q_⊠(z) from a, b are called the finite free additive and multiplicative convolutions. These convolutions are a useful tool in the “method of interlacing polynomials” introduced in [MSS1]; in particular, the additive convolution plays an important role in the same authors’ proof of the existence of Ramanujan (multi)graphs of any degree and any size [MSS4].

The second main result deals with the β → ∞ limit; its proof appears in Section 3.3.

Theorem 1.2. Fix k ≤ N, a, and b. The distributions of π^{N→k}_β(a), a ⊞β b, a ⊠β b (in k, N, N dimensional spaces, respectively) weakly converge on polynomial test–functions as β → ∞ to δ–measures on the roots of the polynomials Q_{N→k}(z), Q_⊞(z), Q_⊠(z) of Theorem 1.1, respectively.

Theorem 1.2 implies that Q_{N→k}(z), Q_⊞(z), Q_⊠(z) are real-rooted polynomials. For Q_{N→k}(z) this is easy to prove directly; proofs for the other two appear in [MSS] using the techniques of [BB], but they also follow directly from the (vastly more general) main result of [BHVW].

There is a remarkable link of Theorem 1.2 for π^{N→k}_β(a) to classical ensembles of random matrices. It is well known (cf. [Ker1], [DE], [BG15]) that the eigenvalues in Hermite/Laguerre/Jacobi N–particle ensembles (i.e. GβE/LβE/JβE) concentrate as β → ∞ near the roots of the corresponding orthogonal polynomials. Theorem 1.2 predicts that varying N should be the same as taking derivatives (see [N] and [Su2] for an explanation of why the classical ensembles agree with the operation π^{N→k}_β), and indeed, the derivatives of Hermite/Laguerre/Jacobi orthogonal polynomials are again such orthogonal polynomials of smaller degree; this relation is known as the forward shift operator, see e.g. [KoSw].
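For the Hermite case the forward shift is the classical identity H_N′ = 2N·H_{N−1} (physicists' normalization), which can be checked with numpy's Hermite module; this sketch is our illustration of the relation cited above:

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

N = 6
# Forward shift for physicists' Hermite polynomials: d/dx H_N = 2 N H_{N-1}.
lhs = Hermite.basis(N).deriv()
rhs = 2 * N * Hermite.basis(N - 1)
assert np.allclose(lhs.coef, rhs.coef)

# The roots of H_N (the beta -> infinity crystallization points of the N-particle
# GbetaE) strictly interlace with the roots of H_{N-1}, as roots of a real-rooted
# polynomial and its derivative must.
rN = np.sort(Hermite.basis(N).roots())
rN1 = np.sort(Hermite.basis(N - 1).roots())
assert all(rN[i] < rN1[i] < rN[i + 1] for i in range(N - 1))
```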

For the β–corners processes, capturing π^{N→k}_β(a) simultaneously for all k = 1, . . . , N gives a more precise β → ∞ asymptotic theorem, which we now present. For each N = 1, 2, . . . , let G_N be the set of all Gelfand–Tsetlin patterns of rank N, which are arrays {x^k_i}_{1≤i≤k≤N} satisfying x^{k+1}_i ≤ x^k_i ≤ x^{k+1}_{i+1}. We refer to the coordinate x^k_i as the position of the ith particle in the kth row.


Definition 1.3. The β–corners process with top row y_1 < · · · < y_N is the unique probability distribution on the arrays {x^k_i}_{1≤i≤k≤N} ∈ G_N such that x^N_i = y_i, i = 1, . . . , N, and the remaining N(N − 1)/2 particles have the density

(8)   (1/Z_N) ∏_{k=1}^{N−1} [ ∏_{1≤i<j≤k} (x^k_j − x^k_i)^{2−β} ] · [ ∏_{a=1}^{k} ∏_{b=1}^{k+1} |x^k_a − x^{k+1}_b|^{β/2−1} ],

where Z_N is the normalizing constant computed as

(9)   Z_N = ∏_{k=1}^{N} Γ(β/2)^k / Γ(kβ/2) · ∏_{1≤i<j≤N} (y_j − y_i)^{β−1}.

Remark 1.4. The inductive (in N) computation of ZN in (9) is the Dixon–Anderson integration formula, see [Di],[And], [Fo, chapter 4].

When β = 1, 2, 4, the β–corners process admits a random matrix realization. In this setting, one considers a uniformly random self–adjoint real/complex/quaternion (corresponding to β = 1, 2, 4, respectively) N × N matrix [A_{i,j}] with fixed eigenvalues a_1, . . . , a_N. Neretin shows in [N]² (see also [Bar] for the β = 2 case) that x^k_i can be identified with the k eigenvalues of the k × k principal corner [A_{i,j}]^k_{i,j=1}.

Definition 1.5. The ∞–corners process with top row a = (a_1 < · · · < a_N) is a deterministic array of particles {x^k_i}_{1≤i≤k≤N} ∈ G_N such that for each k = 1, 2, . . . , N, x^k_1 < x^k_2 < · · · < x^k_k are the k roots of Q_{N→k}(z), which means

(10)   ∏_{i=1}^{k} (z − x^k_i) = 1/(N(N−1)···(k+1)) · (∂/∂z)^{N−k} [ ∏_{i=1}^{N} (z − a_i) ],

equipped with a Gaussian field {ξ^k_i}_{1≤i≤k≤N}, such that ξ^N_1 = ξ^N_2 = · · · = ξ^N_N = 0 and the remaining coordinates have density proportional to

(11)   exp( Σ_{k=1}^{N−1} [ Σ_{1≤i<j≤k} (ξ^k_i − ξ^k_j)² / (2(x^k_i − x^k_j)²) − Σ_{a=1}^{k} Σ_{b=1}^{k+1} (ξ^k_a − ξ^{k+1}_b)² / (4(x^k_a − x^{k+1}_b)²) ] ).

We call {x^k_i}_{1≤i≤k≤N} ∈ G_N the deterministic part of the ∞–corners process and {ξ^k_i}_{1≤i≤k≤N} the discrete Gaussian Free Field on top of it. The fact that the ∞–corners process captures the β → ∞ behavior of the β–corners processes is proved in Section 3.4.

Theorem 1.6. Fix y_1 < · · · < y_N, take a β–corners process {x^k_i(β)}_{1≤i≤k≤N} with top row y_1 < · · · < y_N, and an ∞–corners process {x̃^k_i, ξ^k_i}_{1≤i≤k≤N} with the same top row. Then we have the convergence in distribution:

(12)   x̃^k_i = lim_{β→∞} x^k_i(β),   ξ^k_i = lim_{β→∞} √β (x^k_i(β) − x̃^k_i),   1 ≤ i ≤ k ≤ N.
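The deterministic part of the ∞–corners process in (10) is straightforward to compute with numpy; the sketch below (our illustration) builds the rows from successive derivatives and checks that they form a Gelfand–Tsetlin pattern:

```python
import numpy as np

def deterministic_part(a):
    """Rows x^k, k = 1..N, of Definition 1.5: roots of the (N-k)-th derivative of prod (z - a_i)."""
    N = len(a)
    rows = {N: np.sort(np.asarray(a, dtype=float))}
    p = np.poly(a)
    for k in range(N - 1, 0, -1):
        p = np.polyder(p)                     # roots stay real: p remains real-rooted
        rows[k] = np.sort(np.roots(p).real)
    return rows

rows = deterministic_part([-3.0, -1.0, 0.5, 2.0, 4.0])

# Interlacing x^{k+1}_i <= x^k_i <= x^{k+1}_{i+1}: the rows form a Gelfand-Tsetlin pattern.
for k in range(1, 5):
    up, lo = rows[k + 1], rows[k]
    assert all(up[i] - 1e-9 <= lo[i] <= up[i + 1] + 1e-9 for i in range(k))
```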

The proof of Theorem 1.6 is given in Section 3. The difference between Theorem 1.2 for π^{N→k}_β(a) and Theorem 1.6 is that the latter captures not only a deterministic limit (Law of Large Numbers), but also the Gaussian fluctuations around it. It would be interesting to do the same for a ⊞β b and a ⊠β b as β → ∞, but we do not have any theorems in this direction so far.

² Neretin remarks that the statement is a folklore going back at least to the work of Gelfand and

There is a significant amount of literature studying the fluctuations of β–corners processes as N → ∞ with β fixed. When the top row a is random with a specific distribution (rather than deterministically fixed, as in our setting), the asymptotic centered fluctuations were identified with a pullback of the 2d Gaussian Free Field with Dirichlet boundary conditions in [B1], [BG15], [GZ]. For a discrete analogue of the β = 2 case (cf. [BuG, Section 1.3] for the link between the discrete and continuous settings) similar results are also known [Pet], [BuG2] in the deterministic a case. Despite these works, the conceptual reasons for the appearance of the Gaussian Free Field in β–random matrices have remained somewhat unclear.

On the other hand, the β = ∞ case, which is the joint distribution of {ξ^k_i} in Definition 1.5, is a version of the discrete Gaussian Free Field. In many examples the convergence of the dGFF to the GFF is known, cf. [She], [ShSc], [We], which provides a new insight into why the continuous Gaussian Free Field should show up in the N → ∞ limit of random matrices. Let us, however, emphasize that the convergence of the ∞–corners process toward the GFF does not follow from any known results, since the points {x^k_i}, where the dGFF lives, vary in a very non-trivial way as N → ∞, and, in addition, the exponent in (11) has both positive and negative terms.

At β = 1, 2, 4 the N → ∞ limits of the operations a ⊞β b and a ⊠β b are also well studied. In particular, the first–order behavior (“Law of Large Numbers”) is linked to free probability theory, cf. [VDN], [NS], and the limiting operations are known as free (additive and multiplicative) convolutions, see e.g. [Vo1], [Vo2], [Vo3]. Analogous statements for β = +∞ and fixed N are proven in [Mar]. Heuristically, one would like to claim that the β-independence of the expectations in Theorem 1.1 should imply the β-independence of the N → ∞ limits for a ⊞β b and a ⊠β b; however, we do not know how to produce a rigorous proof along these lines, and the problem of N → ∞ limits for general values of β > 0 remains open.

The global Gaussian fluctuations as N → ∞ for a ⊞β b and a ⊠β b have been addressed for β = 2 in a free probability context using “second order freeness” [MiS], [MiSS].

There are additional questions concerning the asymptotics as N → ∞. One recent topic is the study of the global fluctuations of the difference between two adjacent levels in the corners processes (again at β = 1, 2, 4) [Ker1], [Buf], [ED], [GZ], [So]. The link with taking derivatives of a polynomial that appears in Theorem 1.2 is somewhat visible in the results of [So].

Another popular theme is the study of local limits, e.g. the spacings between individual particles (eigenvalues) in the bulk. At β = 2 the local limits of the corners processes were shown to coincide with the sine process in [Met], and similar local asymptotics for a ⊞_{β=2} b were proven in [CL]. At general values of β > 0 one would expect an appearance of the β–Sine process, but this has not been proven rigorously. What is known is that as β → ∞ the β–Sine processes crystallize on a lattice, see [SaSe], [LS], and indeed for the β = ∞ version of the corners process such limit theorems leading to a lattice are known, see [FR].


The rest of the article is organized as follows. In Section 2 we give formal definitions of the operations π^{N→k}_β(a), a ⊠β b, a ⊞β b. Proofs of Theorems 1.1, 1.2, 1.6 are given in Section 3. Section 4 discusses extensions to the discrete (q, t)–setting related to Macdonald polynomials.

Acknowledgements. V.G. is partially supported by the NSF grants DMS-1407562, DMS-1664619, by the Sloan Research Fellowship, and by The Foundation Sciences Mathématiques de Paris. A. W. M. is partially supported by NSF grant DMS-1552520. The authors would like to thank P. Brändén, A. Borodin, G. Olshanski, and S. Sodin for useful discussions. The authors thank the referee for the comments and suggestions.

2. Special functions and operations

We use several classes of symmetric functions of representation–theoretic origin, which are degenerations of Macdonald polynomials. We refer to [Mac] for general information about these polynomials and only summarize here the required facts.

Let Λ_N denote the algebra of symmetric polynomials in N variables x_1, . . . , x_N. Define a difference operator D_{q,t} acting in Λ_N through

D_{q,t} f = Σ_{i=1}^{N} [ ∏_{j≠i} (t x_i − x_j)/(x_i − x_j) ] T_{q,x_i} f,

where T_{q,x_i} is the q–shift operator acting as

[T_{q,x_i} f](x_1, . . . , x_N) = f(x_1, . . . , x_{i−1}, q x_i, x_{i+1}, . . . , x_N).

D_{q,t} possesses a complete set of eigenfunctions in Λ_N, which are the Macdonald polynomials P_λ(x_1, . . . , x_N; q, t) parameterized by N–tuples of non-negative integers λ = (λ_1 ≥ · · · ≥ λ_N), which are called non-negative signatures (of rank N):

(13)   D_{q,t} P_λ(·; q, t) = [ Σ_{i=1}^{N} q^{λ_i} t^{N−i} ] P_λ(·; q, t).

P_λ(·; q, t) is a homogeneous polynomial of degree |λ| := λ_1 + · · · + λ_N. The leading monomial (with respect to the lexicographic ordering) of P_λ(·; q, t) is x_1^{λ_1} x_2^{λ_2} · · · x_N^{λ_N}. We normalize the polynomials by declaring the coefficient of this monomial to be 1.
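The eigenrelation (13) can be verified directly on a small example. For any q, t the Macdonald polynomial P_{(1,0)}(x_1, x_2; q, t) = x_1 + x_2, and its eigenvalue is qt + 1; the sketch below (our illustration, using exact rational arithmetic) applies D_{q,t} at a numeric point:

```python
from fractions import Fraction as F

def apply_D_qt(f, xs, q, t):
    """Evaluate (D_{q,t} f) at the point xs, following the definition above."""
    N = len(xs)
    total = F(0)
    for i in range(N):
        coeff = F(1)
        for j in range(N):
            if j != i:
                coeff *= (t * xs[i] - xs[j]) / (xs[i] - xs[j])
        shifted = xs[:i] + [q * xs[i]] + xs[i + 1:]   # q-shift in the i-th variable
        total += coeff * f(shifted)
    return total

P_10 = lambda xs: xs[0] + xs[1]          # P_(1,0) = x1 + x2 for all q, t
xs, q, t = [F(2), F(1)], F(3, 10), F(7, 10)
# Eigenvalue from (13): q^1 t^{2-1} + q^0 t^{2-2} = q*t + 1.
assert apply_D_qt(P_10, xs, q, t) == (q * t + 1) * P_10(xs)
```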

The definition (13) readily implies the shift property

P_{λ+1}(x_1, . . . , x_N; q, t) = (x_1 x_2 · · · x_N) P_λ(x_1, . . . , x_N; q, t),   λ + 1 = (λ_1 + 1, λ_2 + 1, . . . , λ_N + 1),

which allows one to extend the definition of P_λ to general signatures λ with (possibly) negative integer coordinates λ_i. The result is a Laurent polynomial.

Another property that links negative and positive signatures is

(14)   P_λ(x_1^{−1}, . . . , x_N^{−1}; q, t) = P_{−λ}(x_1, . . . , x_N; q, t),   −λ = (−λ_N, −λ_{N−1}, . . . , −λ_1).

The property (14) is somewhat hard to locate in the standard references in this explicit form, but it readily follows from the combinatorial formula for Macdonald polynomials and [Mac, Example VI.6.2 (b)].


If q = t, then the Macdonald polynomials coincide with the Schur polynomials, and so can be given by an explicit determinantal formula:

P_λ(x_1, . . . , x_N; q, q) = s_λ(x_1, . . . , x_N) = det[ x_i^{λ_j + N − j} ]_{i,j=1}^{N} / ∏_{1≤i<j≤N} (x_i − x_j).
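The bialternant formula above is easy to evaluate numerically; the sketch below (our illustration) checks it for λ = (2, 1) in two variables, where s_{(2,1)}(x_1, x_2) = x_1 x_2 (x_1 + x_2):

```python
import numpy as np

def schur_bialternant(lam, xs):
    """s_lambda(xs) = det[ x_i^(lam_j + N - j) ] / prod_{i<j} (x_i - x_j), with 1-based j in the text."""
    N = len(xs)
    M = np.array([[x ** (lam[j] + N - 1 - j) for j in range(N)] for x in xs], dtype=float)
    vandermonde = np.prod([xs[i] - xs[j] for i in range(N) for j in range(i + 1, N)])
    return np.linalg.det(M) / vandermonde

# s_(2,1)(x1, x2) = x1*x2*(x1 + x2); at (2, 3) this is 6 * 5 = 30.
assert abs(schur_bialternant([2, 1], [2.0, 3.0]) - 30.0) < 1e-9
```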

We mostly do not use Macdonald polynomials directly, instead relying on three different degenerations. The Jack symmetric polynomials J_λ are defined for θ > 0 through (cf. [Mac, Chapter VI, Section 10]):

J_λ(x_1, . . . , x_N; θ) = lim_{q→1} P_λ(x_1, . . . , x_N; q, q^θ).

The Heckman–Opdam hypergeometric functions F_r (for the root system of type A, cf. [HO], [Op1], [HS]) are defined for θ > 0 and N–tuples of distinct real labels r = (r_1 > r_2 > · · · > r_N) through (cf. [BG15, Section 6], [Su1] and references therein)

(15)   t = q^θ,   q = exp(−ε),   λ = ⌊ε^{−1}(r_1, . . . , r_N)⌋,   x_i = exp(ε y_i),

F_r(y_1, . . . , y_N; θ) = lim_{ε→0} ε^{θN(N−1)/2} P_λ(x_1, . . . , x_N; q, t).

The multivariate Bessel functions B_r (cf. [Du], [Op3], [Jeu], [OO1], [GK]) are defined for θ > 0 and N–tuples of distinct real labels r = (r_1 > r_2 > · · · > r_N) through

B_r(z_1, . . . , z_N; θ) = lim_{ε→0} F_{εr}(ε^{−1}z_1, . . . , ε^{−1}z_N; θ),

or, equivalently (cf. [OO1, Section 4]),

(16)   λ = ⌊ε^{−1}(r_1, . . . , r_N)⌋,   x_i = exp(ε z_i),   B_r(z_1, . . . , z_N; θ) = lim_{ε→0} ε^{θN(N−1)/2} J_λ(x_1, . . . , x_N; θ).

We also need the normalized versions of all these symmetric functions:

P̂_λ(·; q, t) = P_λ(·; q, t) / P_λ(1, t, . . . , t^{N−1}; q, t),   Ĵ_λ(·; θ) = J_λ(·; θ) / J_λ(1, 1, . . . , 1; θ),

F̂_r(·; θ) = F_r(·; θ) / F_r(0, −θ, . . . , (1 − N)θ; θ),   B̂_r(·; θ) = B_r(·; θ) / B_r(0, . . . , 0; θ).

One advantage of the normalized functions ˆF and ˆB is that one can now extend their definition to labels r = (r1 ≥ · · · ≥ rN) with possibly coinciding coordinates through limit transitions.

The following property is the label–variable symmetry of the Macdonald (Laurent) polynomials, see [Mac, Chapter VI, Section 6]:

(17)   P̂_μ(q^{λ_1} t^{N−1}, q^{λ_2} t^{N−2}, . . . , q^{λ_N}; q, t) = P̂_λ(q^{μ_1} t^{N−1}, q^{μ_2} t^{N−2}, . . . , q^{μ_N}; q, t).

It implies the following two symmetries for the degenerations:

(18)   F̂_r(−μ_1 − (N−1)θ, −μ_2 − (N−2)θ, . . . , −μ_N; θ) = Ĵ_μ(exp(−r_1), exp(−r_2), . . . , exp(−r_N); θ),

(19)   B̂_r(ℓ_1, . . . , ℓ_N; θ) = B̂_ℓ(r_1, . . . , r_N; θ).


The Bessel function B̂_r coincides with a (partial) Laplace transform of the β = 2θ corners process {x^k_i}_{1≤i≤k≤N} with top row r = (r_1, . . . , r_N) of Definition 1.3, given by

(20)   B̂_r(z_1, . . . , z_N; θ) = E_{{x^k_i}} [ exp( Σ_{k=1}^{N} z_k · ( Σ_{i=1}^{k} x^k_i − Σ_{j=1}^{k−1} x^{k−1}_j ) ) ].

This connection is known as the combinatorial formula for Bessel functions; (20) is a limit of similar combinatorial formulas for Macdonald and Jack polynomials and Heckman–Opdam hypergeometric functions.

We will now discuss the connection of the degenerations of Macdonald polynomials to the addition and multiplication of random matrices. For that we define four types of connection coefficients, which are variants/generalizations of the Littlewood–Richardson coefficients. They are determined by the following decompositions:

(21)   P̂_λ(·; q, t) P̂_μ(·; q, t) = Σ_ν c^ν_{λ,μ}(P̂; q, t) P̂_ν(·; q, t),

(22)   Ĵ_λ(·; θ) Ĵ_μ(·; θ) = Σ_ν c^ν_{λ,μ}(Ĵ; θ) Ĵ_ν(·; θ),

(23)   F̂_r(·; θ) F̂_ℓ(·; θ) = ∫_s c^s_{ℓ,r}(F̂; θ) F̂_s(·; θ) ds,

(24)   B̂_r(·; θ) B̂_ℓ(·; θ) = ∫_s c^s_{ℓ,r}(B̂; θ) B̂_s(·; θ) ds.

The meaning of the decompositions in (21), (22) is straightforward, as this is an expansion of a symmetric (Laurent) polynomial on the left–hand side in functions forming a linear basis of the space of all such polynomials. The sums in (21), (22) have only finitely many non-zero terms; in more detail, the coefficients c^ν_{λ,μ}(P̂; q, t) and c^ν_{λ,μ}(Ĵ; θ) are non-zero only if

(25)   ν_1 + · · · + ν_N = (λ_1 + μ_1) + · · · + (λ_N + μ_N),   λ_N + μ_N ≤ ν_i ≤ λ_1 + μ_1,   1 ≤ i ≤ N.

Although, in principle, c^ν_{λ,μ}(Ĵ; θ), c^s_{ℓ,r}(F̂; θ), and c^s_{ℓ,r}(B̂; θ) can all be obtained from c^ν_{λ,μ}(P̂; q, t) by limit transitions, the exact mathematical meaning of (23), (24) is a bit more delicate. There is a solid amount of literature devoted to the expansions of functions in appropriate spaces into the integrals of F and B functions; this is a far–reaching generalization of the conventional Fourier transform, and it is known under the name Cherednik transform for F and Dunkl transform for B. We refer to [Ank] for a recent brief review with many references to the original articles and more detailed treatments. In particular, it is rigorously known that

f ↦ ∫_s c^s_{ℓ,r}(F̂; θ) · f ds,   f ↦ ∫_s c^s_{ℓ,r}(B̂; θ) · f ds

are well defined as distributions with compact support. The support of c^s_{ℓ,r}(F̂; θ), c^s_{ℓ,r}(B̂; θ) is a subset of those s which satisfy


However, much more is conjectured, as we discuss below.

Conjecture 2.1. All the coefficients c^ν_{λ,μ}(P̂; q, t) for 0 < q, t < 1 (and hence also c^ν_{λ,μ}(Ĵ; θ), c^s_{ℓ,r}(F̂; θ), c^s_{ℓ,r}(B̂; θ) for θ > 0) are non-negative.

In the q = t case, c^ν_{λ,μ}(P̂; q, q) are (up to a normalization) the conventional Littlewood–Richardson coefficients, which are known to be non-negative due to either representation–theoretic interpretations or combinatorial formulas, see e.g. [Mac, Chapter I, Section 10]. For θ = 1/2, 1, 2, the coefficients c^ν_{λ,μ}(Ĵ; θ), c^s_{ℓ,r}(F̂; θ), c^s_{ℓ,r}(B̂; θ) are again known to be non-negative due to representation–theoretic interpretations, cf. [Mac, Chapter VII]. When q = 0, the non-negativity of the coefficients c^ν_{λ,μ}(P̂; q, t) is due to known combinatorial formulas, see [Ra, Theorem 4.9], [Sc, Theorem 1.3], [KM] and references therein. The non-negativity of c^ν_{λ,μ}(Ĵ; θ) would also follow from a (still open) conjecture of Stanley [St, Conjecture 8.3]. We refer to [Ro] for progress concerning the non-negativity of c^s_{ℓ,r}(B̂; θ).

Since our definitions of P̂, Ĵ, F̂, B̂ imply that

Σ_ν c^ν_{λ,μ}(P̂; q, t) = Σ_ν c^ν_{λ,μ}(Ĵ; θ) = ∫_s c^s_{ℓ,r}(F̂; θ) ds = ∫_s c^s_{ℓ,r}(B̂; θ) ds = 1,

Conjecture 2.1 would allow these coefficients (with fixed λ, μ or r, ℓ, and varying ν or s) to be treated as probability measures. Without Conjecture 2.1, the coefficients in (21), (22) define signed measures of total mass 1, while the coefficients in (23), (24) have only a distributional meaning.

The connection of these definitions to random matrix operations is explained in the next two propositions. For r = (r_1, . . . , r_N), let exp(r) denote the vector (exp(r_1), . . . , exp(r_N)).

Proposition 2.2. Choose θ = 1/2, 1, or 2, and let β = 2θ. Take r = (r_1, . . . , r_N), ℓ = (ℓ_1, . . . , ℓ_N) and let s = (s_1, . . . , s_N) be a c^s_{r,ℓ}(F̂; θ)–distributed random vector. Then, in the notations of Section 1,

(27)   exp(s) =_d exp(r) ⊠β exp(ℓ),

where =_d denotes equality in distribution.

Proof. Combining (23) with (18), we conclude that for a c^s_{r,ℓ}(F̂; θ)–distributed random vector s and each integral vector λ = (λ_1 ≥ λ_2 ≥ · · · ≥ λ_N), we have

E Ĵ_λ(exp(−s_1), . . . , exp(−s_N); θ) = Ĵ_λ(exp(−r_1), . . . , exp(−r_N); θ) · Ĵ_λ(exp(−ℓ_1), . . . , exp(−ℓ_N); θ).

Taking into account the Jack limit of (14), we can equivalently write

(28)   E Ĵ_λ(exp(s_1), . . . , exp(s_N); θ) = Ĵ_λ(exp(r_1), . . . , exp(r_N); θ) · Ĵ_λ(exp(ℓ_1), . . . , exp(ℓ_N); θ).

Note that the Ĵ_λ form a basis in symmetric Laurent polynomials, and therefore the expectations of the form (28) uniquely define the distribution of exp(s_1), . . . , exp(s_N). On the other hand, the identity (28) is well known to hold with exp(s) replaced by exp(r) ⊠β exp(ℓ), see [Mac, Chapter VII], [Fo, Section 13.4.3] and references therein. □


Proposition 2.3. Choose θ = 1/2, 1, or 2, and let β = 2θ. Take r = (r_1, . . . , r_N), ℓ = (ℓ_1, . . . , ℓ_N) and let s = (s_1, . . . , s_N) be a c^s_{r,ℓ}(B̂; θ)–distributed random vector. Then, in the notations of Section 1,

(29)   s =_d r ⊞β ℓ.

Proof. The formula (20) identifies the Bessel functions with Laplace transforms of the β–corners processes. At θ = 1/2, 1, 2, this implies an identification with Laplace transforms of real/complex/quaternion random matrices, see the discussion after Definition 1.3. The Laplace transform of a sum of independent random variables (matrices) is the product of the Laplace transforms, hence the definition of r ⊞β ℓ coincides with that of a c^s_{r,ℓ}(B̂; θ)–distributed s. □

Propositions 2.2, 2.3 motivate the extrapolation of the matrix operations to general β > 0.

Definition 2.4. Choose θ > 0, and let β = 2θ. Take a = (a_1, . . . , a_N), b = (b_1, . . . , b_N) and assume a = exp(r), b = exp(ℓ). Let s = (s_1, . . . , s_N) be a c^s_{r,ℓ}(F̂; θ)–distributed random vector. Then, in the notations of Section 1,

(30)   a ⊠β b := exp(s).

Definition 2.5. Choose θ > 0, and let β = 2θ. Take a = (a_1, . . . , a_N), b = (b_1, . . . , b_N) and let s = (s_1, . . . , s_N) be a c^s_{a,b}(B̂; θ)–distributed random vector. Then, in the notations of Section 1,

(31)   a ⊞β b := s.

Definition 2.6. Choose θ > 0, and let β = 2θ. Take a = (a_1, . . . , a_N), k ≤ N, and let s = (s_1, . . . , s_k) be the kth row of the β–corners process with top row a, i.e. the particles x^k_1, x^k_2, . . . , x^k_k in the notations of Definition 1.3. Then

(32)   π^{N→k}_β(a) := s.

Comparing with (20), we see the following property (which can be taken as an equivalent definition of π^{N→k}_β(a)):

(33)   E B_{π^{N→k}_β(a)}(z_1, . . . , z_k; β/2) = B_a(z_1, . . . , z_k, 0, . . . , 0; β/2),   with N − k zeros appended.

3. Proofs

3.1. The expectation identities. The proofs of Theorems 1.1, 1.2 are based on the evaluations of the expectations of Jack polynomials summarized in the next three propositions.

For a finite integer vector λ = (λ_1 ≥ λ_2 ≥ · · · ≥ 0), we define

(t)_{λ;θ} = ∏_{(i,j) ∈ Z_{>0} × Z_{>0}, j ≤ λ_i} ( t + (j − 1) − θ(i − 1) ).
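The generalized Pochhammer symbol (t)_{λ;θ} is a finite product over the boxes of λ. A small sketch (our illustration) computes it exactly and checks the special case λ = 1^ℓ used in the proof of Theorem 1.1, where the ratio (kβ/2)_{λ;β/2} / (Nβ/2)_{λ;β/2} collapses to a ratio of falling factorials (cf. (41) below):

```python
from fractions import Fraction as F

def gen_pochhammer(t, lam, theta):
    """(t)_{lam;theta}: product of t + (j-1) - theta*(i-1) over boxes (i, j) of lam (1-based i, j)."""
    out = F(1)
    for i, row_len in enumerate(lam, start=1):
        for j in range(1, row_len + 1):
            out *= t + (j - 1) - theta * (i - 1)
    return out

theta = F(3, 2)            # i.e. beta = 3
N, k, l = 5, 3, 2
lam = [1] * l              # the column 1^l, for which J_lam = e_l
ratio = gen_pochhammer(k * theta, lam, theta) / gen_pochhammer(N * theta, lam, theta)
assert ratio == F(k * (k - 1), N * (N - 1))   # theta cancels box by box
```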

We need to introduce yet another normalization of the Jack polynomials. The dual Jack polynomials J^{dual}_λ differ from J_λ by a normalization depending on λ and θ (see [Mac, Chapter VI, Section 10]), chosen in such a way that the following identity holds:

Σ_{λ_1 ≥ · · · ≥ λ_N ≥ 0} J_λ(x_1, . . . , x_N; θ) J^{dual}_λ(y_1, . . . , y_N; θ) = ∏_{i,j=1}^{N} (1 − x_i y_j)^{−θ}.

We also need the corresponding Jack–Littlewood–Richardson coefficients c^λ_{μ,ν}(J^{dual}; θ), defined by

J^{dual}_μ(x_1, . . . , x_N; θ) · J^{dual}_ν(x_1, . . . , x_N; θ) = Σ_λ c^λ_{μ,ν}(J^{dual}; θ) · J^{dual}_λ(x_1, . . . , x_N; θ).

Proposition 3.1. Fix positive vectors of eigenvalues a and b, and let a ⊠β b denote the corresponding N–dimensional random vector in the notations of Section 1. For each λ = (λ_1 ≥ λ_2 ≥ · · · ≥ λ_N ≥ 0) we have

(34)   E J_λ(a ⊠β b; β/2) = J_λ(a; β/2) · J_λ(b; β/2) / J_λ(1, . . . , 1; β/2).

Proof. This is equivalent to (28), whose proof, in fact, did not use θ = 1/2, 1, 2. □

Proposition 3.2. Fix k ≤ N and a, and let π^{N→k}_β(a) denote the corresponding k–dimensional random vector in the notations of Section 1. For each λ = (λ_1 ≥ λ_2 ≥ · · · ≥ λ_k ≥ 0) we have

(35)   E J_λ(π^{N→k}_β(a); β/2) = J_λ(a; β/2) · (kβ/2)_{λ;β/2} / (Nβ/2)_{λ;β/2},

where in the right–hand side we view λ as a signature of rank N by adding N − k zeros.

Proof. We use the series expansion of the Bessel functions, see [OO1, Section 4]:

(36)   B_{a_1,...,a_N}(x_1, . . . , x_N; β) = Σ_{λ=(λ_1≥···≥λ_N≥0)} J_λ(x_1, . . . , x_N; β/2) J^{dual}_λ(a_1, . . . , a_N; β/2) / (Nβ/2)_{λ;β/2}.

We plug (36) into both sides of (33) and use the stability property of Jack polynomials, valid for all λ = (λ_1 ≥ · · · ≥ λ_N ≥ 0):

J_λ(z_1, . . . , z_k, 0, . . . , 0; θ) = J_λ(z_1, . . . , z_k; θ) if λ_{k+1} = · · · = λ_N = 0, and 0 otherwise (N − k zeros substituted),

where in the right–hand side we treat λ as a signature of rank k by removing the N − k zero coordinates. We get

(37)   E Σ_{λ=(λ_1≥···≥λ_k≥0)} J_λ(π^{N→k}_β(a); β/2) J^{dual}_λ(z_1, . . . , z_k; β/2) / (kβ/2)_{λ;β/2} = Σ_{λ=(λ_1≥···≥λ_k≥0)} J_λ(a; β/2) J^{dual}_λ(z_1, . . . , z_k; β/2) / (Nβ/2)_{λ;β/2}.

Comparing the coefficients of J^{dual}_λ(z_1, . . . , z_k; β/2) on both sides of (37), we arrive at (35). □

(14)

Proposition 3.3. Fix a and b, and let $a\boxplus_\beta b$ denote the corresponding N-dimensional random vector in the notations of Section 1. For each $\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N\ge0)$ we have
\[(38)\quad \mathbb{E}\,J_\lambda(a\boxplus_\beta b;\beta/2)=(N\beta/2)_{\lambda;\beta/2}\sum_{\nu,\mu}c^{\lambda}_{\mu,\nu}(J^{dual};\beta/2)\cdot\frac{J_\mu(a;\beta/2)}{(N\beta/2)_{\mu;\beta/2}}\cdot\frac{J_\nu(b;\beta/2)}{(N\beta/2)_{\nu;\beta/2}}.\]

Remark 3.4. The sum in (38) is finite, since $c^{\lambda}_{\mu,\nu}(J^{dual};\beta/2)$ vanishes unless $0\le\mu_i\le\lambda_i$, $0\le\nu_i\le\lambda_i$, and $\sum_{i=1}^{N}(\mu_i+\nu_i-\lambda_i)=0$.

Proof of Proposition 3.3. The proof is similar to that of Proposition 3.2. We rewrite (23) as
\[\mathbb{E}\,B_{a\boxplus_\beta b}(z_1,\dots,z_N;\beta/2)=B_a(z_1,\dots,z_N;\beta/2)\cdot B_b(z_1,\dots,z_N;\beta/2),\]
then plug (36) into both sides, and compare the coefficients of $J^{dual}_\lambda(z_1,\dots,z_N;\beta/2)$. $\square$

3.2. Proof of Theorem 1.1. We use Propositions 3.1, 3.2, 3.3 with a particular choice of $\lambda$: $\lambda_1=\cdots=\lambda_\ell=1$, $\lambda_{\ell+1}=\lambda_{\ell+2}=\cdots=0$. For such $\lambda$, the Jack polynomial coincides with an elementary symmetric function, cf. [Mac, Chapter VI]:
\[J_{1^\ell 0^{N-\ell}}(x_1,\dots,x_N;\theta)=e_\ell(x_1,\dots,x_N).\]
Then (34) becomes
\[(39)\quad \mathbb{E}\,e_\ell(a\boxtimes_\beta b)=\frac{e_\ell(a)\,e_\ell(b)}{\binom{N}{\ell}}.\]

On the other hand, evaluating the coefficient of $z^{N-\ell}$, (6) is equivalent to
\[(40)\quad \mathbb{E}\,e_\ell(a\boxtimes_\beta b)=\frac{1}{N!}\sum_{\sigma\in S_N}e_\ell\bigl(a_1b_{\sigma(1)},a_2b_{\sigma(2)},\dots,a_Nb_{\sigma(N)}\bigr).\]
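As a quick numerical illustration (not part of the proof), the agreement of the right-hand sides of (39) and (40) can be checked in exact arithmetic; the following Python sketch, with arbitrarily chosen sample vectors a and b, is ours and is only meant as a sanity check:

```python
from itertools import combinations, permutations
from fractions import Fraction
from math import comb, factorial

def elem(l, xs):
    """Elementary symmetric polynomial e_l(x_1, ..., x_N), exactly."""
    total = Fraction(0)
    for c in combinations(xs, l):
        p = Fraction(1)
        for x in c:
            p *= x
        total += p
    return total

# arbitrary sample eigenvalue vectors for the check
a = [Fraction(x) for x in (1, 2, 5, 7)]
b = [Fraction(x) for x in (3, 4, 6, 8)]
N = len(a)

for l in range(N + 1):
    # right-hand side of (40): average over all permutations sigma
    sym = sum(elem(l, [ai * bs for ai, bs in zip(a, sigma)])
              for sigma in permutations(b)) / Fraction(factorial(N))
    # right-hand side of (39)
    prod_formula = elem(l, a) * elem(l, b) / Fraction(comb(N, l))
    assert sym == prod_formula, (l, sym, prod_formula)
print("right-hand sides of (39) and (40) agree for N =", N)
```

The check is exact (rational arithmetic), so the equality holds coefficient by coefficient, not merely up to rounding.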

It is straightforward to check that the right-hand sides of (39) and (40) are the same. Further, (35) becomes
\[(41)\quad \mathbb{E}\,e_\ell(\pi^{N\to k}_\beta(a))=e_\ell(a)\cdot\frac{k(k-1)\cdots(k-\ell+1)}{N(N-1)\cdots(N-\ell+1)},\]
which is the same expression as the coefficient of $z^{k-\ell}$ in (4).
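The right-hand side of (41) can also be read off from repeated differentiation, consistently with the crystallization on roots of derivatives described in the introduction: $e_\ell$ of the roots of the monic $(N-k)$th derivative of $\prod_i(z-a_i)$ equals $e_\ell(a)\cdot k\cdots(k-\ell+1)/(N\cdots(N-\ell+1))$. A small exact-arithmetic sketch (the helper names are ours):

```python
from itertools import combinations
from fractions import Fraction

def elem(l, xs):
    """Elementary symmetric polynomial e_l, exactly."""
    total = Fraction(0)
    for c in combinations(xs, l):
        p = Fraction(1)
        for x in c:
            p *= x
        total += p
    return total

def poly_from_roots(roots):
    """Ascending coefficients of prod_i (z - r_i)."""
    coeffs = [Fraction(1)]
    for r in roots:
        shifted = [Fraction(0)] + coeffs          # z * p(z)
        for j, c in enumerate(coeffs):
            shifted[j] -= r * c                   # ... - r * p(z)
        coeffs = shifted
    return coeffs

def derivative(coeffs):
    return [j * c for j, c in enumerate(coeffs)][1:]

a = [Fraction(x) for x in (0, 1, 3, 6, 10)]   # arbitrary sample roots
N, k = len(a), 3

p = poly_from_roots(a)
for _ in range(N - k):
    p = derivative(p)
p = [c / p[-1] for c in p]                     # monic normalization

# p(z) = sum_l (-1)^l e_l(roots) z^(k-l); compare with (41)
for l in range(k + 1):
    e_of_roots = (-1) ** l * p[k - l]
    ratio = Fraction(1)
    for i in range(l):
        ratio *= Fraction(k - i, N - i)
    assert e_of_roots == elem(l, a) * ratio
print("e_l of the derivative roots matches (41) for N=%d, k=%d" % (N, k))
```

No root-finding is needed: the check is performed directly on the coefficients of the differentiated polynomial.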

We finally apply Proposition 3.3 with $\lambda_1=\lambda_2=\cdots=\lambda_\ell=1$, $\lambda_{\ell+1}=\cdots=\lambda_N=0$. Using Remark 3.4 we conclude that the right-hand side has $\ell+1$ terms, and the $(p+1)$st term has $\mu_1=\mu_2=\cdots=\mu_p=1$, $\mu_{p+1}=\cdots=\mu_N=0$; $\nu_1=\nu_2=\cdots=\nu_{\ell-p}=1$, $\nu_{\ell-p+1}=\cdots=\nu_N=0$. We get
\[(42)\quad \mathbb{E}\,e_\ell(a\boxplus_\beta b)=N(N-1)\cdots(N-\ell+1)\sum_{p=0}^{\ell}c^{1^\ell}_{1^p,1^{\ell-p}}(J^{dual};\beta/2)\times\frac{e_p(a)}{N(N-1)\cdots(N-p+1)}\cdot\frac{e_{\ell-p}(b)}{N(N-1)\cdots(N-\ell+p+1)}.\]


It remains to find the value of the constant $c^{1^\ell}_{1^p,1^{\ell-p}}(J^{dual};\beta/2)$. For that we use the automorphism $\omega_\theta$ of the algebra of symmetric functions in infinitely many variables (cf. [Mac, Chapter VI, Section 10]) with $\theta=\beta/2$. It has the following action on $e_\ell$:
\[\omega_\theta\bigl(J^{dual}_{1^\ell}(\cdot;\theta)\bigr)=J_{(\ell,0,0,\dots)}(\cdot;\theta^{-1}),\]
and therefore $c^{1^\ell}_{1^p,1^{\ell-p}}(J^{dual};\beta/2)$ is the coefficient of $\lambda=(\ell,0,0,\dots)$ in the expansion
\[(43)\quad J_{(p,0,\dots)}(\cdot;2/\beta)\cdot J_{(\ell-p,0,\dots)}(\cdot;2/\beta)=\sum_{\lambda}c^{\lambda}_{(p,0,\dots),(\ell-p,0,\dots)}(J;2/\beta)\cdot J_\lambda(\cdot;2/\beta).\]
This coefficient is readily found by comparing the coefficients of the leading monomial $x_1^\ell$ on both sides of (43), and therefore $c^{1^\ell}_{1^p,1^{\ell-p}}(J^{dual};\beta/2)=1$. We conclude that
\[(44)\quad \mathbb{E}\,e_\ell(a\boxplus_\beta b)=\sum_{p=0}^{\ell}e_p(a)\,e_{\ell-p}(b)\,\frac{N(N-1)\cdots(N-\ell+1)}{N(N-1)\cdots(N-p+1)\cdot N(N-1)\cdots(N-\ell+p+1)}.\]
On the other hand, the coefficient of $z^{N-\ell}$ in (5) gives

\[(45)\quad \mathbb{E}\,e_\ell(a\boxplus_\beta b)=\frac{1}{N!}\sum_{\sigma\in S_N}e_\ell\bigl(a_1+b_{\sigma(1)},a_2+b_{\sigma(2)},\dots,a_N+b_{\sigma(N)}\bigr).\]
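The same kind of exact-arithmetic check confirms that (44) and (45) agree; a Python sketch with arbitrary sample vectors (all names are ours):

```python
from itertools import combinations, permutations
from fractions import Fraction
from math import factorial

def elem(l, xs):
    """Elementary symmetric polynomial e_l, exactly."""
    total = Fraction(0)
    for c in combinations(xs, l):
        p = Fraction(1)
        for x in c:
            p *= x
        total += p
    return total

def falling(n, m):
    """Falling factorial n(n-1)...(n-m+1)."""
    out = Fraction(1)
    for i in range(m):
        out *= n - i
    return out

a = [Fraction(x) for x in (0, 2, 3, 7)]   # arbitrary sample vectors
b = [Fraction(x) for x in (1, 4, 6, 9)]
N = len(a)

for l in range(N + 1):
    # right-hand side of (45): symmetrization over S_N
    sym = sum(elem(l, [ai + bs for ai, bs in zip(a, sigma)])
              for sigma in permutations(b)) / Fraction(factorial(N))
    # right-hand side of (44)
    conv = sum(elem(p, a) * elem(l - p, b) * falling(N, l)
               / (falling(N, p) * falling(N, l - p))
               for p in range(l + 1))
    assert sym == conv, (l, sym, conv)
print("right-hand sides of (44) and (45) agree")
```

This is the expectation formula of the finite free additive convolution; the assertions hold exactly for every $\ell$.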

One readily checks that the right-hand sides of (44) and (45) give the same expression.

3.3. Proof of Theorem 1.2. The proof is based on the following two limit relations, which can be found e.g. in [St, Proposition 7.6]³:
\[(46)\quad \lim_{\theta\to\infty}J_\lambda(x_1,\dots,x_N;\theta)=\prod_{i=1}^{N}\bigl[e_i(x_1,\dots,x_N)\bigr]^{\lambda_i-\lambda_{i+1}},\]
\[(47)\quad \lim_{\theta\to0}J_\lambda(x_1,\dots,x_N;\theta)=m_\lambda(x_1,\dots,x_N),\]
where $e_k$ is the elementary symmetric function and $m_\lambda$ is the monomial symmetric function.

We start with $\lim_{\beta\to\infty}a\boxtimes_\beta b$. Take any collection of polynomials $f_1,\dots,f_m$ in N variables. We aim to prove that the following limits exist and satisfy
\[(48)\quad \lim_{\beta\to\infty}\mathbb{E}\Bigl[\prod_{i=1}^{m}f_i(a\boxtimes_\beta b)\Bigr]=\lim_{\beta\to\infty}\prod_{i=1}^{m}\Bigl[\mathbb{E}\,f_i(a\boxtimes_\beta b)\Bigr].\]
This precisely means that $\lim_{\beta\to\infty}a\boxtimes_\beta b$ is a delta function at a certain point, and then Theorem 1.1 would identify this point with the roots of $Q^{\boxtimes}(z)$. Note that under Conjecture 2.1, $a\boxtimes_\beta b$ is a bona fide random variable, and therefore convergence to a delta function for polynomial test functions $f_i$ would imply a similar convergence for arbitrary continuous test functions (without Conjecture 2.1, formally, there might be no such implication).

Since $a\boxtimes_\beta b$ is supported on ordered N-tuples of reals, it suffices to consider symmetric polynomials $f_i$ in (48). Further, since (48) is multi-linear in the $f_i$, it suffices to check it on an

³The parameter $\alpha$ of [St] is $\theta^{-1}$.


arbitrary basis of the algebra $\Lambda_N$ of symmetric polynomials. This algebra is generated by the elementary symmetric functions $e_1,\dots,e_N$, and so we can choose functions of the form
\[(49)\quad e_{(\lambda)}:=e_1^{\lambda_1-\lambda_2}e_2^{\lambda_2-\lambda_3}\cdots e_{N-1}^{\lambda_{N-1}-\lambda_N}e_N^{\lambda_N},\qquad \lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N\ge0,\]
as such a basis. Therefore, (48) reduces to the statement that for any $\lambda$:
\[(50)\quad \lim_{\beta\to\infty}\mathbb{E}\bigl[e_{(\lambda)}(a\boxtimes_\beta b)\bigr]=\lim_{\beta\to\infty}\prod_{i=1}^{N}\mathbb{E}\bigl[e_i^{\lambda_i-\lambda_{i+1}}(a\boxtimes_\beta b)\bigr].\]

We now prove (50). Fix $\lambda$ and decompose $e_{(\lambda)}$ into a linear combination of Jack polynomials
\[(51)\quad e_{(\lambda)}(x_1,\dots,x_N)=\sum_{\mu}T^{\mu}_{\lambda}(\beta/2)\,J_\mu(x_1,\dots,x_N;\beta/2).\]
Note that the sum in (51) is finite, as $\mu=(\mu_1\ge\mu_2\ge\cdots\ge\mu_N\ge0)$ is required to satisfy $\sum_{i=1}^{N}\mu_i=\sum_{i=1}^{N}\lambda_i$. Further, (46) implies that
\[\lim_{\beta\to\infty}T^{\mu}_{\lambda}(\beta/2)=\begin{cases}1,&\lambda=\mu,\\ 0,&\text{otherwise.}\end{cases}\]
Thus, using Proposition 3.1, (46), and (39) we get
\[\lim_{\beta\to\infty}\mathbb{E}\bigl[e_{(\lambda)}(a\boxtimes_\beta b)\bigr]=\lim_{\beta\to\infty}\sum_{\mu}T^{\mu}_{\lambda}(\beta/2)\,\frac{J_\mu(a;\beta/2)\,J_\mu(b;\beta/2)}{J_\mu(1,\dots,1;\beta/2)}=\frac{\prod_{i=1}^{N}e_i^{\lambda_i-\lambda_{i+1}}(a)\cdot\prod_{i=1}^{N}e_i^{\lambda_i-\lambda_{i+1}}(b)}{\prod_{i=1}^{N}e_i^{\lambda_i-\lambda_{i+1}}(\underbrace{1,\dots,1}_{N})}=\prod_{i=1}^{N}\mathbb{E}\,e_i^{\lambda_i-\lambda_{i+1}}(a\boxtimes_\beta b)=\lim_{\beta\to\infty}\prod_{i=1}^{N}\mathbb{E}\,e_i^{\lambda_i-\lambda_{i+1}}(a\boxtimes_\beta b),\]
which proves (50).

Next, we investigate $\lim_{\beta\to\infty}\pi^{N\to k}_\beta(a)$. Again it suffices to prove that for every $\lambda_1\ge\cdots\ge\lambda_k\ge0$,
\[(52)\quad \lim_{\beta\to\infty}\mathbb{E}\bigl[e_{(\lambda)}(\pi^{N\to k}_\beta(a))\bigr]=\lim_{\beta\to\infty}\prod_{i=1}^{k}\mathbb{E}\bigl[e_i^{\lambda_i-\lambda_{i+1}}(\pi^{N\to k}_\beta(a))\bigr].\]
Using Proposition 3.2, (46), and (41) we have
\[\lim_{\beta\to\infty}\mathbb{E}\bigl[e_{(\lambda)}(\pi^{N\to k}_\beta(a))\bigr]=\lim_{\beta\to\infty}\sum_{\mu}T^{\mu}_{\lambda}(\beta/2)\,J_\mu(a;\beta/2)\,\frac{(k\beta/2)_{\mu;\beta/2}}{(N\beta/2)_{\mu;\beta/2}}=\prod_{i=1}^{k}e_i^{\lambda_i-\lambda_{i+1}}(a)\prod_{\substack{(i,j)\in\mathbb Z_{>0}\times\mathbb Z_{>0}\\ j\le\lambda_i}}\frac{k-i+1}{N-i+1}=\prod_{i=1}^{k}\mathbb{E}\,e_i^{\lambda_i-\lambda_{i+1}}(\pi^{N\to k}_\beta(a))=\lim_{\beta\to\infty}\prod_{i=1}^{k}\mathbb{E}\,e_i^{\lambda_i-\lambda_{i+1}}(\pi^{N\to k}_\beta(a)),\]
which proves (52).


Finally, we turn to $\lim_{\beta\to\infty}a\boxplus_\beta b$, and again prove that for every $\lambda_1\ge\cdots\ge\lambda_N\ge0$
\[(53)\quad \lim_{\beta\to\infty}\mathbb{E}\bigl[e_{(\lambda)}(a\boxplus_\beta b)\bigr]=\lim_{\beta\to\infty}\prod_{i=1}^{N}\mathbb{E}\bigl[e_i^{\lambda_i-\lambda_{i+1}}(a\boxplus_\beta b)\bigr].\]
Thus, using Proposition 3.3 and (46) we get
\[\lim_{\beta\to\infty}\mathbb{E}\bigl[e_{(\lambda)}(a\boxplus_\beta b)\bigr]=\sum_{\nu,\mu}\lim_{\beta\to\infty}\Bigl[c^{\lambda}_{\mu,\nu}(J^{dual};\beta/2)\cdot\frac{(N\beta/2)_{\lambda;\beta/2}}{(N\beta/2)_{\nu;\beta/2}\,(N\beta/2)_{\mu;\beta/2}}\Bigr]\prod_{i=1}^{N}\bigl[e_i^{\mu_i-\mu_{i+1}}(a)\,e_i^{\nu_i-\nu_{i+1}}(b)\bigr].\]
By degree considerations, $c^{\lambda}_{\mu,\nu}(J^{dual};\beta/2)$ is non-zero only if $\sum_i(\mu_i+\nu_i)=\sum_i\lambda_i$. In this case
\[\lim_{\beta\to\infty}\frac{(N\beta/2)_{\lambda;\beta/2}}{(N\beta/2)_{\nu;\beta/2}\,(N\beta/2)_{\mu;\beta/2}}=\frac{\prod_{(i,j):\,j\le\lambda_i}(N+1-i)}{\prod_{(i,j):\,j\le\mu_i}(N+1-i)\cdot\prod_{(i,j):\,j\le\nu_i}(N+1-i)}=\prod_{i=1}^{N}(N+1-i)^{\lambda_i-\mu_i-\nu_i}.\]
For the value of the constant $\lim_{\beta\to\infty}c^{\lambda}_{\mu,\nu}(J^{dual};\beta/2)$ we use the automorphism $\omega_\theta$ of the algebra of symmetric functions in infinitely many variables (cf. [Mac, Chapter VI, Section 10]) with $\theta=\beta/2$. It has the following action on Jack polynomials:
\[\omega_\theta\bigl(J^{dual}_{\lambda}(\cdot;\theta)\bigr)=J_{\lambda'}(\cdot;\theta^{-1}),\]
where $\lambda'$ is the transpose partition, defined through
\[\lambda'_j=\bigl|\{i\in\mathbb Z_{>0}:\ \lambda_i\ge j\}\bigr|,\qquad j=1,2,\dots.\]
Then using (47) we conclude that
\[\lim_{\beta\to\infty}c^{\lambda}_{\mu,\nu}(J^{dual};\beta/2)=c^{\lambda'}_{\mu',\nu'}(m),\]
where m stands for the monomial symmetric functions; that is, the coefficients are defined from the decomposition
\[m_{\mu'}\,m_{\nu'}=\sum_{\lambda'}c^{\lambda'}_{\mu',\nu'}(m)\,m_{\lambda'}.\]
The monomial symmetric functions are easy to multiply directly: $c^{\lambda'}_{\mu',\nu'}(m)$ counts the number of ways to represent the vector $(\lambda'_i)_{i=1,2,\dots}$ as a sum of a permutation of the coordinates of $(\mu'_i)_{i=1,2,\dots}$ and a permutation of the coordinates of $(\nu'_i)_{i=1,2,\dots}$. It remains to check that
\[(54)\quad \sum_{\nu,\mu}c^{\lambda'}_{\mu',\nu'}(m)\prod_{i=1}^{N}(N+1-i)^{\lambda_i-\mu_i-\nu_i}\prod_{i=1}^{N}\bigl[e_i^{\mu_i-\mu_{i+1}}(a)\,e_i^{\nu_i-\nu_{i+1}}(b)\bigr]=\prod_{\ell=1}^{N}\Bigl[\sum_{p=0}^{\ell}e_p(a)\,e_{\ell-p}(b)\,\frac{N(N-1)\cdots(N-\ell+1)}{N(N-1)\cdots(N-p+1)\cdot N(N-1)\cdots(N-\ell+p+1)}\Bigr]^{\lambda_\ell-\lambda_{\ell+1}},\]
which is straightforward given the above description of $c^{\lambda'}_{\mu',\nu'}(m)$.
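The counting rule for $c^{\lambda'}_{\mu',\nu'}(m)$ (decompositions of $\lambda'$ into a permutation of $\mu'$ plus a permutation of $\nu'$) can be tested against a direct expansion of products of monomial symmetric functions; a small sketch in Python (all helper names are ours):

```python
from itertools import permutations
from collections import Counter

def monomials(shape, n):
    """Distinct exponent vectors appearing in m_shape in n variables."""
    padded = tuple(shape) + (0,) * (n - len(shape))
    return set(permutations(padded))

def m_product(mu, nu, n):
    """Coefficients of m_mu * m_nu, as a Counter over exponent vectors."""
    c = Counter()
    for al in monomials(mu, n):
        for be in monomials(nu, n):
            c[tuple(x + y for x, y in zip(al, be))] += 1
    return c

def count_decompositions(lam, mu, nu, n):
    """# of ways lam = (perm of mu) + (perm of nu), coordinatewise."""
    lam = tuple(lam) + (0,) * (n - len(lam))
    return sum(1 for al in monomials(mu, n) for be in monomials(nu, n)
               if tuple(x + y for x, y in zip(al, be)) == lam)

n = 5
prod = m_product((1,), (1, 1), n)
# m_1 * m_{11} = m_{21} + 3 m_{111}; the coefficient of m_lam is the
# coefficient of the dominant (sorted-decreasing) monomial x^lam
assert prod[(2, 1, 0, 0, 0)] == count_decompositions((2, 1), (1,), (1, 1), n) == 1
assert prod[(1, 1, 1, 0, 0)] == count_decompositions((1, 1, 1), (1,), (1, 1), n) == 3
print("monomial structure constants match the counting rule")
```

The coefficient of $m_{(1,1,1)}$ in $m_{(1)}m_{(1,1)}$ is 3, matching the three positions in which the permuted coordinate of $(1)$ can sit.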


3.4. Proof of Theorem 1.6. Let us start by computing the constant $Z_N$ in Definition 1.3. For that we use the Dixon--Anderson integration formula, see [Di], [Fo, Exercise 4.2, q. 2], which reads
\[(55)\quad \int_T\prod_{1\le i<j\le n}(t_i-t_j)\prod_{i=1}^{n}\prod_{j=1}^{n+1}\frac{|t_i-a_j|^{\alpha_j-1}}{|b-t_i|^{\alpha_j}}\,dt_1\cdots dt_n=\frac{\prod_{j=1}^{n+1}\Gamma(\alpha_j)}{\Gamma\Bigl(\sum_{j=1}^{n+1}\alpha_j\Bigr)}\prod_{1\le i<j\le n+1}(a_i-a_j)^{\alpha_i+\alpha_j-1}\prod_{i=1}^{n+1}|b-a_i|^{\alpha_i-\sum_{j=1}^{n+1}\alpha_j},\]
where the domain of integration T is given by
\[a_1<t_1<a_2<t_2<\cdots<t_n<a_{n+1}.\]
Choosing $b=\infty$, $\alpha_i=\beta/2$, we verify (9) by induction in N.
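As a sanity check of the $b=\infty$ form of (55) used here (the Selberg--Dirichlet-type identity without the b-factors), the case $n=2$, $\alpha_j=2$, $a=(0,1,2)$ can be integrated by brute-force quadrature; a rough numeric sketch (the identity gives $1/15$ on both sides; all names are ours):

```python
from math import gamma

# nodes and exponents (arbitrary small example; alpha_j = 2 keeps the
# integrand polynomial on the domain, so midpoint quadrature converges fast)
a = [0.0, 1.0, 2.0]
alpha = [2.0, 2.0, 2.0]

def integrand(t1, t2):
    val = t2 - t1
    for t in (t1, t2):
        for aj, al in zip(a, alpha):
            val *= abs(t - aj) ** (al - 1.0)
    return val

# midpoint rule over the domain a1 < t1 < a2 < t2 < a3
M = 400
h1 = (a[1] - a[0]) / M
h2 = (a[2] - a[1]) / M
lhs = sum(integrand(a[0] + (i + 0.5) * h1, a[1] + (j + 0.5) * h2)
          for i in range(M) for j in range(M)) * h1 * h2

rhs = 1.0
for al in alpha:
    rhs *= gamma(al)
rhs /= gamma(sum(alpha))
for i in range(3):
    for j in range(i + 1, 3):
        rhs *= abs(a[i] - a[j]) ** (alpha[i] + alpha[j] - 1.0)

assert abs(lhs - rhs) < 1e-3
print(lhs, rhs)
```

Both sides evaluate to $1/15$ up to quadrature error, matching the closed form $\Gamma(2)^3/\Gamma(6)\cdot 1^3\cdot 2^3\cdot 1^3$.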

The next step is to find the $\tilde x^k_i$. We do this sequentially: first for $k=N-1$ and $1\le i\le N-1$, then for $k=N-2$, etc., until we reach $k=1$. Using the definition of the $\beta$-corners process and the formula for $Z_N$, we write down the conditional distribution of $x^{k-1}_i$, $1\le i\le k-1$, given $x^k_i$, $1\le i\le k$:
\[(56)\quad P\bigl(x^{k-1}_1,\dots,x^{k-1}_{k-1}\mid x^k_1,\dots,x^k_k\bigr)=\frac{\Gamma(k\beta/2)}{\Gamma(\beta/2)^k}\cdot\frac{\prod_{1\le i<j\le k-1}(x^{k-1}_j-x^{k-1}_i)}{\prod_{1\le i<j\le k}(x^k_j-x^k_i)^{\beta-1}}\times\prod_{i=1}^{k}\prod_{j=1}^{k-1}\bigl(x^k_i-x^{k-1}_j\bigr)^{\beta/2-1}.\]
As $\beta\to\infty$, the density (as a function of $x^{k-1}_1,\dots,x^{k-1}_{k-1}$) concentrates near the point where the second line of (56) is maximized, so we need to solve the maximization problem:
\[(57)\quad \prod_{i=1}^{k}\prod_{j=1}^{k-1}\bigl(x^k_i-x^{k-1}_j\bigr)\to\max,\qquad\text{with fixed }x^k_1,\dots,x^k_k.\]
Taking logarithmic derivatives of (57) in $x^{k-1}_1,\dots,x^{k-1}_{k-1}$, we find that the optimal point $(\tilde x^{k-1}_1,\dots,\tilde x^{k-1}_{k-1})$ should satisfy the $k-1$ equations
\[(58)\quad \sum_{j=1}^{k}\frac{1}{\tilde x^{k-1}_i-x^k_j}=0,\qquad i=1,\dots,k-1.\]
Observe that (58) is precisely the collection of equations that define the $k-1$ roots of the derivative of the polynomial $f(u)=\prod_{j=1}^{k}(u-x^k_j)$. This implies the first part of (12) of Theorem 1.6.
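A minimal numeric illustration of (58): for $f(u)=u^3-u$ the roots of $f'$ are $\pm1/\sqrt3$, and at each of them the sum of reciprocals of the distances to the roots of f vanishes:

```python
from math import sqrt, isclose

x = [-1.0, 0.0, 1.0]                             # roots of f(u) = u^3 - u
roots_fprime = [-1 / sqrt(3.0), 1 / sqrt(3.0)]   # roots of f'(u) = 3u^2 - 1

for r in roots_fprime:
    s = sum(1.0 / (r - xj) for xj in x)          # left-hand side of (58)
    assert isclose(s, 0.0, abs_tol=1e-12)
print("equations (58) hold at the roots of f'")
```

So the crystallized level $k-1$ of the $\infty$-corners process sits exactly on the roots of the derivative, as stated.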


For the second part we Taylor-expand the density of the $\beta$-corners process near the point $\tilde x^k_i$, $1\le i\le k\le N$. Set $x^k_i=\tilde x^k_i+\frac{\Delta x^k_i}{\sqrt\beta}$ and rewrite (8) as
\[(59)\quad \frac{1}{Z_N}\prod_{k=1}^{N-1}\Bigl[\prod_{1\le i<j\le k}(\tilde x^k_j-\tilde x^k_i)^{2-\beta}\Bigr]\cdot\Bigl[\prod_{a=1}^{k}\prod_{b=1}^{k+1}|\tilde x^k_a-\tilde x^{k+1}_b|^{\beta/2-1}\Bigr]\]
\[\times\prod_{k=1}^{N-1}\Bigl[\prod_{1\le i<j\le k}\Bigl(1+\frac{1}{\sqrt\beta}\cdot\frac{\Delta x^k_j-\Delta x^k_i}{\tilde x^k_j-\tilde x^k_i}\Bigr)^{2-\beta}\Bigr]\cdot\Bigl[\prod_{a=1}^{k}\prod_{b=1}^{k+1}\Bigl(1+\frac{1}{\sqrt\beta}\cdot\frac{\Delta x^k_a-\Delta x^{k+1}_b}{\tilde x^k_a-\tilde x^{k+1}_b}\Bigr)^{\beta/2-1}\Bigr]\]
\[=\frac{1}{Z_N}\prod_{k=1}^{N-1}\Bigl[\prod_{1\le i<j\le k}(\tilde x^k_j-\tilde x^k_i)^{2-\beta}\Bigr]\cdot\Bigl[\prod_{a=1}^{k}\prod_{b=1}^{k+1}|\tilde x^k_a-\tilde x^{k+1}_b|^{\beta/2-1}\Bigr]\]
\[\times\prod_{k=1}^{N-1}\exp\Bigl[-\sqrt\beta\sum_{1\le i<j\le k}\frac{\Delta x^k_j-\Delta x^k_i}{\tilde x^k_j-\tilde x^k_i}+\frac{\sqrt\beta}{2}\sum_{a=1}^{k}\sum_{b=1}^{k+1}\frac{\Delta x^k_a-\Delta x^{k+1}_b}{\tilde x^k_a-\tilde x^{k+1}_b}\Bigr]\]
\[\times\prod_{k=1}^{N-1}\exp\Bigl[\sum_{1\le i<j\le k}\frac{(\Delta x^k_j-\Delta x^k_i)^2}{2(\tilde x^k_j-\tilde x^k_i)^2}-\sum_{a=1}^{k}\sum_{b=1}^{k+1}\frac{(\Delta x^k_a-\Delta x^{k+1}_b)^2}{4(\tilde x^k_a-\tilde x^{k+1}_b)^2}+O(\beta^{-1/2})\Bigr].\]
As $\beta\to\infty$, the last line of (59) gives the desired density of $\xi^k_i$ in the $\infty$-corners process, and it remains to show that the fourth line of (59) is identically equal to 1. Indeed, the coefficient of $\Delta x^k_i$ in the expression under the exponent is
\[(60)\quad -\sqrt\beta\sum_{1\le j\le k,\ j\ne i}\frac{1}{\tilde x^k_i-\tilde x^k_j}+\frac{\sqrt\beta}{2}\sum_{j=1}^{k-1}\frac{1}{\tilde x^k_i-\tilde x^{k-1}_j}+\frac{\sqrt\beta}{2}\sum_{j=1}^{k+1}\frac{1}{\tilde x^k_i-\tilde x^{k+1}_j}.\]
The last term in (60) is zero because of the equations (58). For the first two terms, recall that $\tilde x^k_i$ and $\tilde x^{k-1}_i$ are the roots of a polynomial and of its derivative, i.e.
\[(61)\quad \sum_{a=1}^{k}\prod_{j\ne a}(u-\tilde x^k_j)=k\prod_{a=1}^{k-1}(u-\tilde x^{k-1}_a).\]
We then differentiate (61) in u and plug in $u=\tilde x^k_i$ to get
\[(62)\quad 2\sum_{a\ne i}\prod_{j\ne a,i}(\tilde x^k_i-\tilde x^k_j)=k\sum_{j=1}^{k-1}\prod_{a\ne j}(\tilde x^k_i-\tilde x^{k-1}_a),\]
and plug $u=\tilde x^k_i$ into (61) to get
\[(63)\quad \prod_{j\ne i}(\tilde x^k_i-\tilde x^k_j)=k\prod_{a=1}^{k-1}(\tilde x^k_i-\tilde x^{k-1}_a).\]
Dividing (62) by (63), we conclude that the first two terms in (60) cancel out. As a result, the fourth line of (59) is identically equal to 1.
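The cancellation of the first two terms of (60), that is the identity $2\sum_{j\ne i}1/(\tilde x^k_i-\tilde x^k_j)=\sum_j 1/(\tilde x^k_i-\tilde x^{k-1}_j)$ obtained from (62)/(63) (it is the evaluation of $f''/f'$ at $\tilde x^k_i$), is easy to check numerically; a sketch with $f(u)=u^3-u$:

```python
from math import sqrt, isclose

xk = [-1.0, 0.0, 1.0]                     # roots of f(u) = u^3 - u
xk1 = [-1 / sqrt(3.0), 1 / sqrt(3.0)]     # roots of f'(u) = 3u^2 - 1

for i, xi in enumerate(xk):
    same_level = sum(1.0 / (xi - xj) for j, xj in enumerate(xk) if j != i)
    lower_level = sum(1.0 / (xi - yj) for yj in xk1)
    # first two terms of (60), divided by sqrt(beta)
    assert isclose(-same_level + 0.5 * lower_level, 0.0, abs_tol=1e-12)
print("the linear terms in the exponent of (59) cancel")
```

For instance at $\tilde x^k_i=-1$ the two contributions are $+3/2$ and $-3/2$.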

The last ingredient of the proof of Theorem 1.6 is to explain why the density (11) is integrable, i.e., why the inverse covariance matrix arising in this density is indeed positive definite. This would have been immediate if all the terms in the exponent had negative signs; yet the $i<j$ sum enters with a positive sign, and therefore an additional clarification is necessary. We prove that the integral of (11) is finite by integrating the variables $\xi^k_i$ sequentially in k; that is, we first integrate over $\xi^1_1$, then over $\xi^2_1,\xi^2_2$, etc. Each step is an integration over the $k-1$ variables $\xi^{k-1}_1,\dots,\xi^{k-1}_{k-1}$, and the integral is the limit (as $\beta\to\infty$) of the identity expressing the unit total mass of the conditional probability (56). Repeating the argument (59), this limit is an identity holding for any k reals $\zeta_1,\dots,\zeta_k$:
\[(64)\quad \int\cdots\int\exp\Bigl(-\sum_{a=1}^{k}\sum_{b=1}^{k-1}\frac{(\zeta_a-\xi_b)^2}{(\tilde x^k_a-\tilde x^{k-1}_b)^2}\Bigr)\,d\xi_1\cdots d\xi_{k-1}=Z\cdot\exp\Bigl(-2\sum_{1\le a<b\le k}\frac{(\zeta_a-\zeta_b)^2}{(\tilde x^k_a-\tilde x^k_b)^2}\Bigr),\]
where $Z>0$ does not depend on $\zeta_1,\dots,\zeta_k$; it can be explicitly evaluated, but we do not need its value here. Note that in the integrand of (64) the expression in the exponent is clearly negative, and therefore the question of convergence of the integral does not arise. Iteratively using (64) for $k=2,\dots,N$, we compute the (finite) normalizing constant for the density (11).
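Since the integrand in (64) is Gaussian, the identity amounts to matching the minimized quadratic forms on the two sides (the exact Gaussian integral over each $\xi_b$ contributes its minimum plus a $\zeta$-independent constant); a numeric sketch for $k=3$, $f(u)=u^3-u$ (all helper names are ours):

```python
from math import sqrt, isclose

x = [-1.0, 0.0, 1.0]                  # level-k points: roots of f(u) = u^3 - u
y = [-1 / sqrt(3.0), 1 / sqrt(3.0)]   # level-(k-1) points: roots of f'
w = [[1.0 / (xa - yb) ** 2 for yb in y] for xa in x]

def lhs_quadratic(z):
    # exponent produced by exact Gaussian integration over each xi_b:
    # min over xi of sum_a w[a][b] * (z_a - xi)^2, summed over b
    q = 0.0
    for b in range(len(y)):
        wcol = [w[a][b] for a in range(len(x))]
        W = sum(wcol)
        q += (sum(wa * za ** 2 for wa, za in zip(wcol, z))
              - sum(wa * za for wa, za in zip(wcol, z)) ** 2 / W)
    return q

def rhs_quadratic(z):
    return 2.0 * sum((z[a] - z[b]) ** 2 / (x[a] - x[b]) ** 2
                     for a in range(len(x)) for b in range(a + 1, len(x)))

for z in [(1.0, 0.0, 0.0), (0.3, -1.2, 2.5), (2.0, 2.0, -1.0)]:
    assert isclose(lhs_quadratic(z), rhs_quadratic(z), abs_tol=1e-10)
print("the quadratic forms on both sides of (64) agree")
```

The agreement of the two quadratic forms for a spanning set of $\zeta$'s confirms (64) up to the $\zeta$-independent constant Z.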

4. Discrete versions and generalities

4.1. Expectation identities at general (q, t). The main ingredient of our proofs, the expectation computations of Section 3.1, admits a generalization up the hierarchy of symmetric functions, to the level of Macdonald polynomials.

For Propositions 3.1, 3.3 the Macdonald version is as follows.

Proposition 4.1. Fix signatures $\lambda=(\lambda_1\ge\cdots\ge\lambda_N)$ and $\mu=(\mu_1\ge\cdots\ge\mu_N)$, and let $\nu=(\nu_1\ge\cdots\ge\nu_N)$ be a random signature with distribution given by the weight $\nu\mapsto c^{\nu}_{\lambda,\mu}(\hat P;q,t)$. For each positive signature $\rho=(\rho_1\ge\rho_2\ge\cdots\ge\rho_N\ge0)$ we have
\[(65)\quad \mathbb{E}\,P_\rho(q^{\nu_1}t^{N-1},q^{\nu_2}t^{N-2},\dots,q^{\nu_N};q,t)=\frac{P_\rho(q^{\lambda_1}t^{N-1},q^{\lambda_2}t^{N-2},\dots,q^{\lambda_N};q,t)\cdot P_\rho(q^{\mu_1}t^{N-1},q^{\mu_2}t^{N-2},\dots,q^{\mu_N};q,t)}{P_\rho(t^{N-1},t^{N-2},\dots,1;q,t)}.\]
Proof. Using (17) and the definition of $c^{\nu}_{\lambda,\mu}(\hat P;q,t)$ we have
\[\mathbb{E}\Bigl[\frac{P_\rho(q^{\nu_1}t^{N-1},\dots,q^{\nu_N};q,t)}{P_\rho(t^{N-1},\dots,1;q,t)}\Bigr]=\mathbb{E}\Bigl[\frac{P_\nu(q^{\rho_1}t^{N-1},\dots,q^{\rho_N};q,t)}{P_\nu(t^{N-1},\dots,1;q,t)}\Bigr]=\sum_{\nu}\frac{P_\nu(q^{\rho_1}t^{N-1},\dots,q^{\rho_N};q,t)}{P_\nu(t^{N-1},\dots,1;q,t)}\,c^{\nu}_{\lambda,\mu}(\hat P;q,t)\]
\[=\frac{P_\lambda(q^{\rho_1}t^{N-1},\dots,q^{\rho_N};q,t)}{P_\lambda(t^{N-1},\dots,1;q,t)}\cdot\frac{P_\mu(q^{\rho_1}t^{N-1},\dots,q^{\rho_N};q,t)}{P_\mu(t^{N-1},\dots,1;q,t)}=\frac{P_\rho(q^{\lambda_1}t^{N-1},\dots,q^{\lambda_N};q,t)}{P_\rho(t^{N-1},\dots,1;q,t)}\cdot\frac{P_\rho(q^{\mu_1}t^{N-1},\dots,q^{\mu_N};q,t)}{P_\rho(t^{N-1},\dots,1;q,t)}.\ \square\]
For an analogue of Proposition 3.2 we need a new definition generalizing $\pi^{N\to k}_\beta$.


Definition 4.2. Fix $0<k<N$ and a signature $\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N)$. Define a random signature $\nu=(\nu_1\ge\cdots\ge\nu_k)$ with distribution $\pi^{q,t}_{N\to k}(\lambda)$ through the following decomposition:
\[(66)\quad \frac{P_\lambda(t^{1-N},t^{2-N},\dots,t^{-k},z_1,\dots,z_k;q,t)}{P_\lambda(t^{1-N},\dots,t^{-1},1;q,t)}=\sum_{\nu}\pi^{q,t}_{N\to k}(\lambda)[\nu]\cdot\frac{P_\nu(z_1,\dots,z_k;q,t)}{P_\nu(t^{1-k},\dots,1;q,t)}.\]
Plugging $z_i=t^{1-i}$ into (66) one proves that $\sum_{\nu}\pi^{q,t}_{N\to k}(\lambda)[\nu]=1$. The non-negativity of the weights follows from the branching rules for the Macdonald polynomials, see [Mac].

Let us emphasize that we use negative powers of t in (66). On the other hand, the normalization of the Macdonald polynomials $\hat P_\lambda$ entering the definition of $c^{\nu}_{\lambda,\mu}(\hat P;q,t)$ involved positive powers. For the latter this difference is not important, due to the homogeneity of $P_\lambda$ and the vanishing of $c^{\nu}_{\lambda,\mu}(\hat P;q,t)$ unless $\sum_i(\lambda_i+\mu_i-\nu_i)=0$. However, for (66) this becomes important.

Proposition 4.3. Fix $k<N$ and a signature $\lambda$ of rank N, and let $\nu$ be $\pi^{q,t}_{N\to k}(\lambda)$-distributed. Then for any non-negative signature $\rho=(\rho_1\ge\cdots\ge\rho_k\ge0)$ of rank k we have
\[(67)\quad \mathbb{E}\,P_\rho(q^{\nu_1}t^{k-1},\dots,q^{\nu_k};q,t)=\frac{P_\rho(t^{k-1},\dots,t,1;q,t)}{P_\rho(t^{N-1},\dots,t,1;q,t)}\cdot P_\rho(q^{\lambda_1}t^{N-1},\dots,q^{\lambda_N};q,t),\]
where we also treat $\rho$ as a signature of rank N by adding $N-k$ zero coordinates.
Proof. Using (17) and the homogeneity of Macdonald polynomials we have
\[(68)\quad \mathbb{E}\Bigl[\frac{P_\rho(q^{\nu_1}t^{k-1},\dots,q^{\nu_k};q,t)}{P_\rho(t^{k-1},\dots,t,1;q,t)}\Bigr]=\mathbb{E}\Bigl[\frac{P_\nu(q^{\rho_1}t^{k-1},\dots,q^{\rho_k};q,t)}{P_\nu(t^{k-1},\dots,1;q,t)}\Bigr]=\sum_{\nu}\pi^{q,t}_{N\to k}(\lambda)[\nu]\,\frac{P_\nu(q^{\rho_1}t^{k-1},\dots,q^{\rho_k};q,t)}{P_\nu(t^{k-1},\dots,t,1;q,t)}\]
\[=\sum_{\nu}\pi^{q,t}_{N\to k}(\lambda)[\nu]\,\frac{P_\nu(q^{\rho_1},\dots,q^{\rho_k}t^{1-k};q,t)}{P_\nu(t^{1-k},\dots,t^{-1},1;q,t)}=\frac{P_\lambda(q^{\rho_1},\dots,q^{\rho_k}t^{1-k},t^{-k},\dots,t^{1-N};q,t)}{P_\lambda(t^{1-N},\dots,t^{-1},1;q,t)}\]
\[=\frac{P_\lambda(q^{\rho_1}t^{N-1},\dots,q^{\rho_k}t^{N-k},t^{N-k-1},\dots,1;q,t)}{P_\lambda(t^{N-1},\dots,t,1;q,t)}=\frac{P_\rho(q^{\lambda_1}t^{N-1},\dots,q^{\lambda_N};q,t)}{P_\rho(t^{N-1},\dots,t,1;q,t)}.\ \square\]
An analogue of Proposition 4.3 at q = t is implicitly used in [GO], [Ol1], [Ol2] for the study of the extended Gelfand--Tsetlin graph.

As in Section 3.2, if we choose ρ1 = · · · = ρℓ = 1, ρℓ+1 = ρℓ+2 = · · · = 0 in Propositions 4.1, 4.3, then the Macdonald polynomials would turn into elementary symmetric functions eℓ, and we get formulas for the expectations of eℓ. In particular, q does not enter into these formulas in any explicit form, which is a (q, t)–analogue of the β–independence in Theorem 1.1.

4.2. Crystallization for general (q, t). The Law of Large Numbers (crystallization) of Theorems 1.2, 1.6 is obtained from operations on Macdonald polynomials $P_\lambda(\cdot;q,t)$ by a triple limit transition: $q\to1$, the coordinates $\lambda_i\to\infty$, and $\theta\to\infty$.


In these theorems, we made the limit transitions in a particular order (first, q → 1, λi → ∞ to degenerate into random matrices, and only then θ → ∞), but different orders of taking limits are also possible and would lead to another set of answers. We do not address the full classification of the limiting behaviors here (it probably deserves a separate publication), but only mention two possible scenarios.

(1) If we start with θ → ∞ (so that t → 0), then the Macdonald polynomials degenerate to q-Whittaker functions, as discussed in detail in [GLO], [BC]. Two different further q → 1 limits were studied in the literature. The first one is parallel to the degeneration of q-Whittaker functions to Whittaker functions: the particles crystallize on a perfect lattice, while the fluctuations are related to directed polymers in random media, see [BC]. Another limit, in [BCF], leads to a more complicated Law of Large Numbers and Gaussian fluctuations.

(2) We can first degenerate the Macdonald polynomials into Jack polynomials, and the latter into products of elementary symmetric functions, as in (46). After taking these limits, an analogue of the β-corners process would involve weights given by products of binomial coefficients, while the top row (which was y1 < · · · < yN in Definition 1.3) is still discrete. Linearly rescaling the coordinates of the top row, one finds yet another Law of Large Numbers. Using Stirling's formula and solving the associated maximization problem (as in the proof of Theorem 1.6), one can then find the limit explicitly. It has the following description: the kth particle of level M − 1 splits the interval between the kth and (k + 1)st particles on level M in the proportion k : (N − k).

References

[And] G. W. Anderson, A short proof of Selberg’s generalized beta formula, Forum Math. 3 (1991), 415–417.

[Ank] J.-P. Anker, An introduction to Dunkl theory and its analytic aspects, arXiv:1611.08213 [Bar] Y. Baryshnikov, GUEs and queues, Probability Theory and Related Fields, 119, no. 2 (2001),

256–274.

[BB] J. Borcea, P. Brändén, The Lee-Yang and Polya-Schur Programs. II. Theory of Stable Polynomials and Applications, Comm. Pure Appl. Math. 62 (2009), no. 12, 1595-1631, arXiv:0809.3087 [B1] A. Borodin, CLT for spectra of submatrices of Wigner random matrices, Moscow Mathematical

Journal, 14, no. 1 (2014), 29–38, arXiv:1010.0898.

[B2] A. Borodin, Stochastic higher spin six vertex model and Macdonald measures, arXiv:1608.01553 [BC] A. Borodin, I. Corwin, Macdonald processes, Probab. Theory Relat. Fields (2014) 158: 225.

arXiv:1111.4408

[BCF] A. Borodin, I. Corwin, P. L. Ferrari, Anisotropic (2+1)d growth and Gaussian limits of q-Whittaker processes, arXiv:1612.00321

[BG15] A. Borodin, V. Gorin, General beta Jacobi corners process and the Gaussian Free Field, Communications on Pure and Applied Mathematics, 68, no. 10, 1774–1844, (2015). arXiv:1305.3627. [BHVW] P. Brändén, J. Haglund, M. Visontai, D. G. Wagner, Proof of the monotone column permanent conjecture, Notions of positivity and the geometry of polynomials. Dedicated to the memory of Julius Borcea. Basel: Birkhäuser. Trends in Mathematics, 63-78 (2011), arXiv:1010.2565 [Buf] A. Bufetov. Kerov's interlacing sequences and random matrices. J. Math. Phys., 54(11), Nov

2013, arXiv:1211.1507.

[BuG] A. Bufetov, V. Gorin. Representations of classical Lie groups and quantized free convolution, Geom. Funct. Anal., Vol. 25 (2015) 763-814, arXiv:1311.5780.


[BuG2] A. Bufetov, V. Gorin. Fluctuations of particle systems determined by Schur generating functions, arXiv:1604.01110

[CL] Z. Che, B. Landon, Local spectral statistics of the addition of random matrices, arXiv:1701.00513.

[Di] A. L. Dixon, Generalizations of Legendre's formula $KE'-(K-E)K'=\frac12\pi$, Proc. London Math. Soc. 3 (1905), 206–224.

[DE] I. Dumitriu, A. Edelman, Eigenvalues of Hermite and Laguerre ensembles: large beta asymp-totics, Annales de l’Institut Henri Poincare (B) Probability and Statistics, 41, no. 6, (2005), 1083–1099, arXiv:math-ph/0403029

[Du] C. Dunkl, Hankel transforms associated to finite reflection groups, Hypergeometric functions on domains of positivity, Jack polynomials, and applications (Tampa, FL, 1991), Contemp. Math., vol. 138, 1992, pp. 123–138.

[Ed] A. Edelman, The random matrix technique of ghosts and shadows. Markov Processes and Related Fields, 16, no. 4 (2010), 783-790.

[ED] L. Erdős and D. Schröder. Fluctuations of rectangular Young diagrams of interlacing Wigner eigenvalues. 2016, arXiv:1608.05163.

[FR] D. Farmer, R. C. Rhoades, Differentiation evens out zero spacings, Trans. Amer. Math. Soc. 357 (2005), 3789-3811, arXiv:math/0310252

[Fo] P. J. Forrester, Log-gases and Random Matrices. Princeton University Press, 2010.

[Fu] W. Fulton, Eigenvalues, invariant factors, highest weights, and Schubert calculus, Bull. Amer. Math. Soc. 37 (2000), 209-249

[GLO] A. Gerasimov, D. Lebedev, S. Oblezin. On a classical limit of q-deformed Whittaker functions, Lett. Math. Phys (2012) 100: 279, arXiv:1101.4567.

[GO] V. Gorin, G. Olshanski, A quantization of the harmonic analysis on the infinite-dimensional unitary group, Journal of Functional Analysis, 270, 375-418 (2016), arXiv:1504.06832

[GS] V. Gorin, M. Shkolnikov, Multilevel Dyson Brownian motions via Jack polynomials, Probability Theory and Related Fields, 163, no. 3 (2015), 413-463. arXiv:1401.5595

[GZ] V. Gorin, L. Zhang, Interlacing adjacent levels of β-Jacobi corners processes, arXiv:1612.02321. [GK] T. Guhr and H. Kohler. Recursive construction for a class of radial functions. I. Ordinary space.

J. Math. Phys., 43(5):2707–2740, 2002

[HC1] Harish-Chandra. Differential operators on a semisimple Lie algebra. Amer. J. Math. 79 (1957), 87–120.

[HC2] Harish-Chandra. Fourier transforms on a semisimple Lie algebra. I. Amer. J. Math. 79 (1957), 193–257.

[HO] G. J. Heckman, E. M. Opdam, Root systems and hypergeometric functions. I, II, Compositio Mathematica 64, no. 3 (1987), 329–352 and 353–373.

[HS] G. J. Heckman, H. Schlichtkrull, Harmonic Analysis and Special Functions on Symmetric Spaces, Acad. Press, 1994.

[IZ] C. Itzykson, and J. B. Zuber, The planar approximation. II. J. Math. Phys. 21, 3 (1980), 411– 421.

[Jeu] M. F. E. de Jeu, The Dunkl transform, Invent. Math. 113 (1993), no. 1, 147–162.

[KM] M. Kapovich and J. J. Millson, A path model for geodesics in Euclidean buildings and its applications to representation theory, Groups, Geometry and Dynamics, Vol 2, 2008, p. 405-480. arXiv: math.RT/0411182

[Ker1] S. Kerov, Asymptotics of the separation of roots of orthogonal polynomials, St. Petersburg Math. J. 5 (1994), 925-941.

[Ker2] S. V. Kerov, Equilibrium and orthogonal polynomials (Russian), Algebra i Analiz, 12 no. 6 (2000), 224–237, English translation: St. Petersburg Mathematical Journal, 2001, 12 no. 6, 1049–1059.

[KN] A. N. Kirillov and M. Noumi, q-Difference raising operators for Macdonald polynomials and the integrality of transition coefficients, in: Algebraic methods and q-special functions, pp. 227–243, CRM Proc. Lecture Notes 22, (Amer. Math. Soc., Providence, RI, 1999). arXiv:q-alg/9605005


[KoSw] R. Koekoek and R. F. Swarttouw, The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue, Report 98-17, Faculty of Technical Mathematics and Informatics, Delft University of Technology, 1998; http://aw.twi.tudelft.nl/~koekoek/askey/.

[LS] T. Leble, S. Serfaty, Large deviation principle for empirical fields of log and Riesz Gases, arXiv:1502.02970.

[Mac] I. G. Macdonald, Symmetric functions and Hall polynomials, Second Edition. The Clarendon Press, Oxford University Press, New York, 1995

[Mar] A. W. Marcus, Polynomial convolutions and (finite) free probability, preprint. [Mar2] A. W. Marcus, Discrete unitary invariance, arXiv:1607.06679

[MSS] A. W. Marcus, D. A. Spielman, N. Srivastava, Finite free convolutions of polynomials, arXiv:1504.00350

[MSS1] A. W. Marcus, D. A. Spielman, N. Srivastava, Interlacing families I: bipartite Ramanujan graphs of all degrees, Ann. of Math. 182-1 (2015), 307-325, arXiv:1304.4132

[MSS2] A. W. Marcus, D. A. Spielman, N. Srivastava, Interlacing families II: mixed characteristic polynomials and the Kadison-Singer Problem, Ann. of Math. 182-1 (2015), 327-350. arXiv:1306.3969. [MSS4] A. W. Marcus, D. A. Spielman, N. Srivastava, Interlacing families IV: bipartite Ramanujan

graphs of all sizes, FOCS (2015), 1358-1377, arXiv:1505.08010

[Met] A. Metcalfe, Universality properties of Gelfand-Tsetlin patterns, Probab. Theory Relat. Fields 155: 303. (2013) arXiv:1105.1272.

[MiS] J. A. Mingo, R. Speicher, Second order freeness and fluctuations of random matrices: I. Gaussian and Wishart matrices and cyclic Fock spaces, J. Funct. Anal. 235 (2006), no. 1, 226–270, arXiv:math/0405191.

[MiSS] J. A. Mingo, P. Śniady, R. Speicher, Second order freeness and fluctuations of random matrices: II. Unitary Random Matrices, Advances in Mathematics, 209, no. 1, (2007), 212–240, arXiv:math/0405258.

[N] Yu. A. Neretin, Rayleigh triangles and non-matrix interpolation of matrix beta integrals, Sbornik: Mathematics (2003), 194(4), 515–540.

[NS] A. Nica and R. Speicher, Lectures on the Combinatorics of Free Probability, Cambridge University Press, 2006.

[OO1] A. Okounkov, G. Olshanski, Shifted Jack polynomials, binomial formula, and applications, Math. Res. Letters, 4 (1997), 69–78, arXiv:q-alg/9608020

[OO2] A. Okounkov, G. Olshanski, Asymptotics of Jack polynomials as the number of variables tends to infinity. International Mathematics Research Notices 1998, no. 13, 641–682, arXiv:q-alg/9709011.

[OV] G. Olshanski, A. Vershik, Ergodic unitarily invariant measures on the space of infinite Hermitian matrices, In: Contemporary Mathematical Physics. F. A. Berezin's memorial volume. Amer. Math. Transl. Ser. 2, vol. 175 (R. L. Dobrushin et al., eds), 1996, pp. 137–175, arXiv:math/9601215.

[Ol1] G. Olshanski, Extended Gelfand-Tsetlin graph, its q-boundary, and q-B-splines, Functional Analysis and its Applications 50 (2016), no. 2, 107-130, arXiv:1607.04201

[Ol2] G. Olshanski, An analogue of the Big q–Jacobi polynomials in the algebra of symmetric func-tions, in preparation.

[Op1] E. M. Opdam, Root systems and hypergeometric functions. III, IV, Compositio Mathematica 67, no. 1 (1988): 21–49; 67 (2): 191–209

[Op3] E. M. Opdam, Dunkl operators, Bessel functions, and the discriminant of a finite Coxeter group, Compos. Math. 85 (1993), 333-373

[Pet] L. Petrov, Asymptotics of uniformly random lozenge tilings of polygons. Gaussian free field, Annals of Probability, 43 (2014), no. 1, 1–43, arXiv:1206.5123.

[PuSa] D. Puder, W. Sawin, private communication.

[Ra] A. Ram, Alcove walks, Hecke algebras, spherical functions, crystals and column strict tableaux, Pure Appl. Math. Q. 2 no. 4 (2006), 963–1013

[Ro] M. R¨osler, A positive radial product formula for the Dunkl kernel, Trans. Amer. Math. Soc. 355 (2003), 2413-2438, arXiv:math/0210137


[SaSe] E. Sandier, S. Serfaty, 1D log gases and the renormalized energy : crystallization at vanishing temperature, Ann. Probab., 43, no. 4 (2015), 2026–2083. arXiv:1303.2968

[Sc] C. Schwer, Galleries, Hall-Littlewood polynomials, and structure constants of the spherical Hecke algebra, Int. Math. Res. Not. (2006), 1–31.

[Se] J. Sekiguchi, Zonal spherical functions on some symmetric spaces, Publications RIMS Kyoto University, 12 Suppl. (1977), 455–464.

[She] S. Sheffield, Gaussian free fields for mathematicians, Probability Theory and Related Fields 139 (2007), 521–541, arXiv:math.PR/0312099

[ShSc] O. Schramm, S. Sheffield, Contour lines of the two-dimensional discrete Gaussian free field, Acta Math., 202(1):21–137, 2009, arXiv:math/0605337

[So] S. Sodin. Fluctuations of interlacing sequences. 2016, arXiv:1610.02690.

[St] R. Stanley, Some combinatorial properties of Jack symmetric functions, Advances in Mathematics 77 (1989), 76–115.

[Su1] Y. Sun, A new integral formula for Heckman-Opdam hypergeometric functions, Advances in Mathematics 289 (2016), 1157-1204.

[Su2] Y. Sun, Matrix models for multilevel Heckman-Opdam and multivariate Bessel measures, arXiv:1609.09096.

[Vo1] D. Voiculescu, Symmetries of some reduced free product C*-algebras, Operator Algebras and Their Connections with Topology and Ergodic Theory (Busteni, 1983), Lecture Notes in Mathematics, vol. 1132, Springer-Verlag, 1985, pp. 556–588.

[Vo2] D. Voiculescu, Addition of certain non-commuting random variables, J. Funct. Anal., 66 (1986), 323-346.

[Vo3] D. Voiculescu, Limit laws for random matrices and free products, Invent. Math., 104 (1991), 201-220

[VDN] D. Voiculescu, K. Dykema, and A. Nica, Free random variables, CRM Monograph Series, vol. 1, American Mathematical Society, Providence, RI, 1992.

[Wa] J. L. Walsh. On the location of the roots of certain types of polynomials. Transactions of the American Mathematical Society, 24(3):163–180, 1922.

[We] W. Werner, Topics on the two-dimensional Gaussian Free Field, Preprint.

(Vadim Gorin) Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA, and Institute for Information Transmission Problems of Russian Academy of Sciences, Moscow, Russia. E-mail: vadicgor@gmail.com

(Adam W. Marcus) Department of Mathematics, Princeton University, Princeton, NJ, USA
