HAL Id: hal-01511413

https://hal.archives-ouvertes.fr/hal-01511413

Preprint submitted on 20 Apr 2017


Asymptotic theory of multiple-set linear canonical analysis

Guy Martial Nkiet

To cite this version:

Guy Martial Nkiet. Asymptotic theory of multiple-set linear canonical analysis. 2017. ⟨hal-01511413⟩


Asymptotic theory of multiple-set linear canonical analysis

Guy Martial Nkiet


Abstract This paper deals with asymptotics for multiple-set linear canonical analysis (MSLCA). A definition of this analysis, that adapts the classical one to the context of Euclidean random variables, is given and properties of the related canonical coefficients are derived. Then, estimators of the MSLCA's elements, based on empirical covariance operators, are proposed and asymptotics for these estimators are obtained. More precisely, we prove their consistency and we obtain asymptotic normality for the estimator of the operator that gives MSLCA, and also for the estimator of the vector of canonical coefficients. These results are then used to obtain a test for mutual non-correlation between the involved Euclidean random variables.

Keywords Multiple-set canonical analysis · asymptotic study · non-correlation tests

1 Introduction

Multiple-set linear canonical analysis (MSLCA), also known as generalized canonical correlation analysis, has been extensively discussed in the literature; see Kettenring (1971), Gifi (1991), Gardner et al. (2006), Takane et al. (2008), Tenenhaus and Tenenhaus (2011), as well as the further references contained therein. It is a statistical method that generalizes linear canonical analysis (LCA) to the case where more than two sets of variables are considered, which is of real interest since, in applied statistical studies, it is common to collect data from the observation of several sets of variables on a given population.

However, despite this interest, several aspects under which LCA has been studied have never been extended to MSLCA. For example, asymptotic theory for LCA and related applications has been tackled by several authors

Guy Martial Nkiet

Université des Sciences et Techniques de Masuku, Département de Mathématiques et Informatique, BP 943 Franceville, Gabon. E-mail: gnkiet@hotmail.com


(e.g., Muirhead and Waternaux (1980), Anderson (1999), Pousse (1992), Fine (2000), Dauxois et al. (2004)). It would be natural to wonder how the obtained results extend to the case of MSLCA but, to the best of our knowledge, such an approach has never been tackled.

In this paper, we introduce an asymptotic theory for MSLCA. To do so, we first define in Section 2 the notion of MSLCA for Euclidean random variables, that is, random variables valued into Euclidean vector spaces. This analysis is defined from a maximization problem under specified constraints, and shown to be obtained from the spectral analysis of a suitable operator. Properties of the related eigenvalues, called canonical coefficients, are then given. In Section 3, we tackle the problem of estimating MSLCA. More precisely, estimators based on empirical covariance operators are introduced. Then, consistency of the obtained estimators is proved. Further, we derive the asymptotic distribution of the used estimator of the aforementioned operator, and also that of the estimator of the vector of canonical coefficients, in the general case as well as in the case of elliptical distribution. Section 4 is devoted to the introduction of a test for mutual non-correlation between the random variables involved in MSLCA. The results obtained for the asymptotic theory of MSLCA are then used in order to derive the asymptotic distribution of the used test statistic under the null hypothesis.

2 Multiple-set linear canonical analysis of Euclidean random variables

For an integer $K\geq 2$, let us consider random variables $X_1,\cdots,X_K$ defined on a probability space $(\Omega,\mathcal{A},P)$ and valued into Euclidean vector spaces $\mathcal{X}_1,\cdots,\mathcal{X}_K$ respectively. Denoting by $E$ the mathematical expectation related to $P$, we assume that, for any $k\in\{1,\cdots,K\}$, we have $E(\|X_k\|_k^2)<+\infty$, where $\|\cdot\|_k$ denotes the norm induced by the inner product $\langle\cdot,\cdot\rangle_k$ of $\mathcal{X}_k$, and, without loss of generality, that $E(X_k)=0$. Each vector $\alpha$ in the vector space $\mathcal{X}:=\mathcal{X}_1\times\cdots\times\mathcal{X}_K$ will be written as $\alpha=(\alpha_1,\cdots,\alpha_K)^T$, and we recall that $\mathcal{X}$ is a Euclidean vector space equipped with the inner product $\langle\cdot,\cdot\rangle_{\mathcal{X}}$ defined by:
\[ \forall\alpha\in\mathcal{X},\ \forall\beta\in\mathcal{X},\quad \langle\alpha,\beta\rangle_{\mathcal{X}}=\sum_{k=1}^{K}\langle\alpha_k,\beta_k\rangle_k. \]


We denote by $\|\cdot\|_{\mathcal{X}}$ the norm induced by this inner product. Considering the $\mathcal{X}$-valued random variable $X=(X_1,\cdots,X_K)^T$, we can give the following definition, which adapts the classical definition of multiple-set canonical analysis (e.g., Gifi (1991), Gardner et al. (2006), Takane et al. (2008)) to the context of Euclidean random variables.

Definition 2.1. The multiple-set linear canonical analysis (MSLCA) of $X$ is the search of a sequence $(\alpha^{(j)})_{1\leq j\leq q}$ of vectors of $\mathcal{X}$, where $q=\dim(\mathcal{X})$, satisfying:
\[ \alpha^{(j)}=\arg\max_{\alpha\in C_j} E\big(\langle\alpha,X\rangle_{\mathcal{X}}^2\big), \quad (1) \]
where
\[ C_1=\Big\{\alpha\in\mathcal{X}\,/\,\sum_{k=1}^{K}\mathrm{var}(\langle\alpha_k,X_k\rangle_k)=1\Big\}, \quad (2) \]
and, for $j\geq 2$:
\[ C_j=\Big\{\alpha\in C_1\,/\,\sum_{k=1}^{K}\mathrm{cov}\big(\langle\alpha_k^{(r)},X_k\rangle_k,\langle\alpha_k,X_k\rangle_k\big)=0,\ \forall r\in\{1,\cdots,j-1\}\Big\}. \quad (3) \]

Remark 2.1.
1) The constraint sets given in (2) and (3) can be expressed by using the covariance operators defined, for $(k,\ell)\in\{1,\cdots,K\}^2$, by:
\[ V_{k\ell}=E(X_\ell\otimes X_k)=V_{\ell k}^{*} \quad\text{and}\quad V_k:=V_{kk}, \]
where $\otimes$ denotes the tensor product such that, for any $(x,y)$, $x\otimes y$ is the linear map $h\mapsto\langle x,h\rangle\,y$, and $T^{*}$ denotes the adjoint of $T$. Indeed, it is easily seen that, for $(\alpha,\beta)\in\mathcal{X}^2$,
\[ \mathrm{var}(\langle\alpha_k,X_k\rangle_k)=E\big(\langle\alpha_k,X_k\rangle_k^2\big)=E(\langle\alpha_k,(X_k\otimes X_k)(\alpha_k)\rangle_k)=\langle\alpha_k,V_k\alpha_k\rangle_k, \]
and
\[ \mathrm{cov}(\langle\alpha_k,X_k\rangle_k,\langle\beta_\ell,X_\ell\rangle_\ell)=E(\langle\alpha_k,X_k\rangle_k\,\langle\beta_\ell,X_\ell\rangle_\ell)=E(\langle\alpha_k,(X_\ell\otimes X_k)(\beta_\ell)\rangle_k)=\langle\alpha_k,V_{k\ell}\beta_\ell\rangle_k. \]
Therefore,
\[ C_1=\Big\{\alpha\in\mathcal{X}\,/\,\sum_{k=1}^{K}\langle\alpha_k,V_k\alpha_k\rangle_k=1\Big\}, \quad (4) \]


and
\[ C_j=\Big\{\alpha\in C_1\,/\,\sum_{k=1}^{K}\langle\alpha_k^{(r)},V_k\alpha_k\rangle_k=0,\ \forall r\in\{1,\cdots,j-1\}\Big\}. \quad (5) \]
2) For any $\alpha\in C_1$, one has:
\[
E\big(\langle\alpha,X\rangle_{\mathcal{X}}^2\big)=E\Big(\Big(\sum_{k=1}^{K}\langle\alpha_k,X_k\rangle_k\Big)^2\Big)=\sum_{k=1}^{K}\sum_{\ell=1}^{K}E(\langle\alpha_k,X_k\rangle_k\,\langle\alpha_\ell,X_\ell\rangle_\ell)
\]
\[
=\sum_{k=1}^{K}E\big(\langle\alpha_k,X_k\rangle_k^2\big)+\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}E(\langle\alpha_k,X_k\rangle_k\,\langle\alpha_\ell,X_\ell\rangle_\ell)=\sum_{k=1}^{K}\mathrm{var}(\langle\alpha_k,X_k\rangle_k)+\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\alpha_k,V_{k\ell}\alpha_\ell\rangle_k
\]
\[
=1+\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\alpha_k,V_{k\ell}\alpha_\ell\rangle_k=1+\varphi(\alpha),
\]
where
\[ \varphi(\alpha)=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\alpha_k,V_{k\ell}\alpha_\ell\rangle_k. \quad (6) \]
Then, since $E\big(\langle\alpha,X\rangle_{\mathcal{X}}^2\big)=1+\varphi(\alpha)$ on $C_1$, the MSLCA of $X$ is obtained by maximizing $\varphi(\alpha)$ under the constraints expressed in (4) and (5).
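In finite dimensions the objects above are just block matrices: taking each $\mathcal{X}_k=\mathbb{R}^{p_k}$, $\Phi$ is the block-diagonal part of the joint covariance matrix and $\Psi$ its off-block-diagonal part, and the analysis reduces to the spectral decomposition of $T=\Phi^{-1/2}\Psi\Phi^{-1/2}$ introduced below. A minimal numpy sketch (the example covariance and the dimensions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mslca_operator(V, dims):
    """Assemble T = Phi^{-1/2} Psi Phi^{-1/2} from a joint covariance
    matrix V partitioned into blocks V_{kl} according to dims."""
    idx = np.cumsum([0] + list(dims))
    Phi = np.zeros_like(V)
    Psi = V.copy()
    for k in range(len(dims)):
        s = slice(idx[k], idx[k + 1])
        Phi[s, s] = V[s, s]   # block-diagonal part gives Phi
        Psi[s, s] = 0.0       # off-diagonal blocks give Psi
    w, Q = np.linalg.eigh(Phi)                   # Phi symmetric positive definite
    Phi_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T  # Phi^{-1/2}
    return Phi_inv_sqrt @ Psi @ Phi_inv_sqrt

# toy joint covariance of K = 3 one-dimensional variables (assumed values)
V = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
T = mslca_operator(V, dims=[1, 1, 1])
rho = np.sort(np.linalg.eigvalsh(T))[::-1]       # canonical coefficients
```

Since $\Psi$ has vanishing diagonal blocks, $T$ has zero trace, so the canonical coefficients always sum to zero; a first canonical direction is then recovered as $\alpha^{(1)}=\Phi^{-1/2}\beta^{(1)}$, with $\beta^{(1)}$ a leading eigenvector.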

For $k\in\{1,\cdots,K\}$, the covariance operator $V_k$ is a self-adjoint non-negative operator. From now on, we assume that it is invertible. Let $\tau_k$ be the canonical projection $\tau_k:\alpha\in\mathcal{X}\mapsto\alpha_k\in\mathcal{X}_k$; its adjoint $\tau_k^{*}$ is the map given by:
\[ \tau_k^{*}: t\in\mathcal{X}_k\mapsto(\underbrace{0,\cdots,0}_{k-1\ \text{times}},t,0,\cdots,0)^T\in\mathcal{X}, \]
where we denote by $a^T$ the transpose of $a$. Now, let us consider the operators of $\mathcal{L}(\mathcal{X})$ given by:
\[ \Phi=\sum_{k=1}^{K}\tau_k^{*}V_k\tau_k \quad\text{and}\quad \Psi=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}V_{k\ell}\tau_\ell. \]
From the fact that $\tau_k\tau_\ell^{*}=\delta_{k\ell}I_k$, where $\delta_{k\ell}$ is the usual Kronecker symbol and $I_k$ is the identity operator of $\mathcal{X}_k$, it is easily seen that $\Phi$ is also an invertible self-adjoint and non-negative operator, with $\Phi^{-1}=\sum_{k=1}^{K}\tau_k^{*}V_k^{-1}\tau_k$ and $\Phi^{-1/2}=\sum_{k=1}^{K}\tau_k^{*}V_k^{-1/2}\tau_k$. The following theorem shows how to obtain a MSLCA of $X$. It just repeats a known result (e.g., Gifi (1991), Takane et al. (2008)) within the framework used for this paper.

Theorem 2.1. Let $(\beta^{(1)},\cdots,\beta^{(q)})$ be an orthonormal basis of $\mathcal{X}$ such that $\beta^{(j)}$ is an eigenvector of the operator $T=\Phi^{-1/2}\Psi\Phi^{-1/2}$ associated with the $j$-th largest eigenvalue $\rho_j$ of $T$. Then the sequence $(\alpha^{(j)})_{1\leq j\leq q}$ given by:
\[ \alpha^{(j)}=\Phi^{-1/2}\beta^{(j)}=\big(V_1^{-1/2}\beta_1^{(j)},\cdots,V_K^{-1/2}\beta_K^{(j)}\big)^T \]
consists of solutions of (1) under the constraints (2) and (3), and we have:
\[ \rho_j=\langle\beta^{(j)},T\beta^{(j)}\rangle_{\mathcal{X}}=\varphi(\alpha^{(j)}). \]

Proof. Putting $\beta_k=V_k^{1/2}\alpha_k$ and $\beta_k^{(r)}=V_k^{1/2}\alpha_k^{(r)}$, we have:
\[ \varphi(\alpha)=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle V_k^{-1/2}\beta_k,V_{k\ell}V_\ell^{-1/2}\beta_\ell\rangle_k=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\beta_k,V_k^{-1/2}V_{k\ell}V_\ell^{-1/2}\beta_\ell\rangle_k=:\psi(\beta), \quad (7) \]
where $\beta=(\beta_1,\cdots,\beta_K)^T\in\mathcal{X}$. Since $V_k=V_k^{1/2}V_k^{1/2}$, having $\alpha\in C_j$ is equivalent to having $\beta\in\widetilde{C}_j$, where:
\[ \widetilde{C}_1=\Big\{\beta\in\mathcal{X}\,/\,\sum_{k=1}^{K}\|\beta_k\|_k^2=1\Big\}=\big\{\beta\in\mathcal{X}\,/\,\|\beta\|_{\mathcal{X}}^2=1\big\}, \quad (8) \]
and, for $j\geq 2$:
\[ \widetilde{C}_j=\Big\{\beta\in\widetilde{C}_1\,/\,\sum_{k=1}^{K}\langle\beta_k^{(r)},\beta_k\rangle_k=0,\ \forall r\in\{1,\cdots,j-1\}\Big\}=\big\{\beta\in\widetilde{C}_1\,/\,\langle\beta^{(r)},\beta\rangle_{\mathcal{X}}=0,\ \forall r\in\{1,\cdots,j-1\}\big\}. \quad (9) \]

Further, for any $\beta\in\mathcal{X}$:
\[ \Psi\Phi^{-1/2}\beta=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{j=1}^{K}\tau_k^{*}V_{k\ell}\tau_\ell\tau_j^{*}V_j^{-1/2}\tau_j\beta=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{j=1}^{K}\delta_{\ell j}\tau_k^{*}V_{k\ell}V_j^{-1/2}\tau_j\beta=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}V_{k\ell}V_\ell^{-1/2}\tau_\ell\beta, \]


and
\[ \Phi^{-1/2}\Psi\Phi^{-1/2}\beta=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{j=1}^{K}\tau_j^{*}V_j^{-1/2}\tau_j\tau_k^{*}V_{k\ell}V_\ell^{-1/2}\tau_\ell\beta=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{j=1}^{K}\delta_{jk}\tau_j^{*}V_j^{-1/2}V_{k\ell}V_\ell^{-1/2}\tau_\ell\beta=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}V_k^{-1/2}V_{k\ell}V_\ell^{-1/2}\tau_\ell\beta. \]
Thus,
\[ \langle\beta,\Phi^{-1/2}\Psi\Phi^{-1/2}\beta\rangle_{\mathcal{X}}=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\beta,\tau_k^{*}V_k^{-1/2}V_{k\ell}V_\ell^{-1/2}\tau_\ell\beta\rangle_{\mathcal{X}}=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\tau_k\beta,V_k^{-1/2}V_{k\ell}V_\ell^{-1/2}\tau_\ell\beta\rangle_k=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\langle\beta_k,V_k^{-1/2}V_{k\ell}V_\ell^{-1/2}\beta_\ell\rangle_k=\psi(\beta), \]
where $\psi$ is defined in (7). Then, the MSLCA optimization problem reduces to the maximization of $\langle\beta,\Phi^{-1/2}\Psi\Phi^{-1/2}\beta\rangle_{\mathcal{X}}$ under the constraints (8) and (9). Since $T=\Phi^{-1/2}\Psi\Phi^{-1/2}$ is a self-adjoint operator, this is a well-known maximization problem for which a solution is obtained from the spectral analysis of $T$, as stated in the theorem.

Definition 2.2. The $\rho_j$'s are termed the canonical coefficients. The $\alpha^{(j)}$'s are termed vectors of canonical directions.

The following theorem gives some properties of the canonical coefficients.

Theorem 2.2.
(i) $\forall j\in\{1,\cdots,q\}$, $-1\leq\rho_j\leq K(K-1)$.
(ii) $\forall j\in\{1,\cdots,q\}$, $\rho_j=0\ \Leftrightarrow\ \forall(k,\ell)\in\{1,\cdots,K\}^2,\ k\neq\ell,\ V_{k\ell}=0$.

Proof.
(i) First, using (6), we have, for any $j\in\{1,\cdots,q\}$:
\[ \rho_j=\varphi(\alpha^{(j)})=E\big(\langle\alpha^{(j)},X\rangle_{\mathcal{X}}^2\big)-1\geq -1. \]
On the other hand, we have:
\[ \rho_j=\varphi(\alpha^{(j)})=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}E\big(\langle\alpha_k^{(j)},X_k\rangle_k\,\langle\alpha_\ell^{(j)},X_\ell\rangle_\ell\big)\leq\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sqrt{E\big(\langle\alpha_k^{(j)},X_k\rangle_k^2\big)}\,\sqrt{E\big(\langle\alpha_\ell^{(j)},X_\ell\rangle_\ell^2\big)}. \]
Since, for any $k\in\{1,\cdots,K\}$, one has:
\[ E\big(\langle\alpha_k^{(j)},X_k\rangle_k^2\big)=\mathrm{var}\big(\langle\alpha_k^{(j)},X_k\rangle_k\big)\leq\sum_{\ell=1}^{K}\mathrm{var}\big(\langle\alpha_\ell^{(j)},X_\ell\rangle_\ell\big)=1, \]
it follows that:
\[ \rho_j\leq\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}1=K(K-1). \]
(ii) Since the $\rho_j$'s are the eigenvalues of $T$, we have:
\[ \forall j\in\{1,\cdots,q\},\ \rho_j=0\ \Leftrightarrow\ T=0\ \Leftrightarrow\ \Psi=0\ \Leftrightarrow\ \forall(k,\ell)\in\{1,\cdots,K\}^2,\ k\neq\ell,\ V_{k\ell}=0. \]
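Both parts of Theorem 2.2 can be checked numerically on finite-dimensional examples; the following seeded sketch (all sizes and the random covariance are illustrative assumptions) verifies the bounds in (i) for a random covariance and the equivalence in (ii) for its mutually uncorrelated counterpart:

```python
import numpy as np

rng = np.random.default_rng(0)

def canonical_coefficients(V, dims):
    """Eigenvalues of T = Phi^{-1/2} Psi Phi^{-1/2}, ascending order."""
    idx = np.cumsum([0] + list(dims))
    Phi = np.zeros_like(V)
    Psi = V.copy()
    for k in range(len(dims)):
        s = slice(idx[k], idx[k + 1])
        Phi[s, s] = V[s, s]
        Psi[s, s] = 0.0
    w, Q = np.linalg.eigh(Phi)
    Pis = Q @ np.diag(w ** -0.5) @ Q.T
    return np.linalg.eigvalsh(Pis @ Psi @ Pis)

K, d = 4, 2                                   # assumed sizes for the check
A = rng.standard_normal((K * d, 3 * K * d))
V = A @ A.T / (3 * K * d)                     # a random covariance matrix
rho = canonical_coefficients(V, [d] * K)
assert rho.min() >= -1 - 1e-9 and rho.max() <= K * (K - 1) + 1e-9   # (i)

V0 = np.zeros_like(V)                         # uncorrelated case: V_{kl} = 0
for k in range(K):
    s = slice(k * d, (k + 1) * d)
    V0[s, s] = V[s, s]
assert np.allclose(canonical_coefficients(V0, [d] * K), 0.0)        # (ii)
```

The lower bound $-1$ reflects the fact that $I_{\mathcal{X}}+T=\Phi^{-1/2}(\Phi+\Psi)\Phi^{-1/2}$ is non-negative, since $\Phi+\Psi$ is the covariance operator of $X$.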

Remark 2.2.
1) When $K=2$, one has $\Phi=\tau_1^{*}V_1\tau_1+\tau_2^{*}V_2\tau_2$ and $\Psi=\tau_1^{*}V_{12}\tau_2+\tau_2^{*}V_{21}\tau_1$. Then it is easy to check that $T=\tau_1^{*}S\tau_2+\tau_2^{*}S^{*}\tau_1$, where $S=V_1^{-1/2}V_{12}V_2^{-1/2}$. Let $x$ be an eigenvector of $T$ associated with an eigenvalue $\rho\neq 0$. We have $Tx=\rho x$, which is equivalent to having:
\[ \tau_1^{*}(S\tau_2x-\rho\,\tau_1x)=-\tau_2^{*}(S^{*}\tau_1x-\rho\,\tau_2x). \]
This implies:
\[ S\tau_2x=\rho\,\tau_1x,\qquad S^{*}\tau_1x=\rho\,\tau_2x, \]
and, putting $x_1=\tau_1x$ and $x_2=\tau_2x$, we obtain
\[ x_2=\rho^{-1}S^{*}x_1 \quad\text{and}\quad Rx_1=\rho^{2}x_1, \quad (10) \]
where
\[ R=SS^{*}=V_1^{-1/2}V_{12}V_2^{-1}V_{21}V_1^{-1/2}. \]


Conversely, if (10) holds then, putting $x=\tau_1^{*}x_1+\tau_2^{*}x_2$, we have:
\[ Tx=\tau_1^{*}S\tau_2x+\tau_2^{*}S^{*}\tau_1x=\tau_1^{*}Sx_2+\tau_2^{*}S^{*}x_1=\rho^{-1}\tau_1^{*}SS^{*}x_1+\rho\,\tau_2^{*}x_2=\rho^{-1}\tau_1^{*}Rx_1+\rho\,\tau_2^{*}x_2=\rho(\tau_1^{*}x_1+\tau_2^{*}x_2)=\rho\,x. \]
Moreover, since
\[ \|x_2\|_2=\rho^{-1}\|S^{*}x_1\|_2=\rho^{-1}\sqrt{\langle S^{*}x_1,S^{*}x_1\rangle_2}=\rho^{-1}\sqrt{\langle SS^{*}x_1,x_1\rangle_1}=\|x_1\|_1 \]
and
\[ \|x\|_{\mathcal{X}}^2=\|x_1\|_1^2+\|x_2\|_2^2, \]
it follows that
\[ \|x_1\|_1=\|x_2\|_2=\frac{1}{\sqrt{2}}\,\|x\|_{\mathcal{X}}. \]

2) The preceding remark shows the equivalence between MSLCA and linear canonical analysis (LCA) when $K=2$. Recall that the LCA of $X_1$ and $X_2$ is obtained from the spectral analysis of $R$ (see, e.g., Dauxois and Pousse (1975), Pousse (1992), Fine (2000)). More precisely, $(\beta^{(j)},\rho_j)_{1\leq j\leq q}$ is defined as in Theorem 2.1 if, and only if, $\{(u_1^{(j)},u_2^{(j)},\rho_j^2)\}_{1\leq j\leq q}$, where $u_\ell^{(j)}=\sqrt{2}\,\tau_\ell\beta^{(j)}$ ($\ell\in\{1,2\}$), is a LCA of $X_1$ and $X_2$.
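The $K=2$ equivalence is easy to verify numerically: the nonzero eigenvalues of $T$ come in pairs $\pm\sigma$, where the $\sigma$'s are the singular values of $S$ (the canonical correlations of the LCA), so that $\rho_j^2$ are eigenvalues of $R=SS^{*}$. A seeded sketch (the dimensions and random covariance are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2 = 3, 2                          # illustrative dimensions of X1, X2
A = rng.standard_normal((p1 + p2, 4 * (p1 + p2)))
V = A @ A.T / (4 * (p1 + p2))          # joint covariance of (X1, X2)
V1, V2, V12 = V[:p1, :p1], V[p1:, p1:], V[:p1, p1:]

def inv_sqrt(M):
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(w ** -0.5) @ Q.T

S = inv_sqrt(V1) @ V12 @ inv_sqrt(V2)
# for K = 2, T is the block operator [[0, S], [S^*, 0]]
T = np.block([[np.zeros((p1, p1)), S], [S.T, np.zeros((p2, p2))]])
rho = np.sort(np.linalg.eigvalsh(T))[::-1]
sv = np.linalg.svd(S, compute_uv=False)          # canonical correlations
assert np.allclose(rho[:p2], sv)                 # top eigenvalues of T
R = S @ S.T
assert np.allclose(np.sort(np.linalg.eigvalsh(R))[::-1][:p2], sv ** 2)
```

The remaining eigenvalues of $T$ are $0$ (with multiplicity $p_1-p_2$ here) and the negatives $-\sigma$, consistent with the pairing $\pm\rho$ in Remark 2.2.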

3 Estimation and asymptotic theory

In this section, we deal with the estimation of MSLCA. For $k=1,\cdots,K$, let $\{X_k^{(i)}\}_{1\leq i\leq n}$ be an i.i.d. sample of $X_k$. We use empirical covariance operators for defining estimators of MSLCA elements. Then, consistency and asymptotic normality are obtained for the resulting estimators of the vectors of canonical directions and the canonical coefficients.

3.1 Estimation and almost sure convergence

For $(k,\ell)\in\{1,\cdots,K\}^2$, let us consider the sample means and empirical covariance operators:
\[ \overline{X}_{k,n}=\frac{1}{n}\sum_{i=1}^{n}X_k^{(i)},\qquad \widehat{V}_{k\ell,n}=\frac{1}{n}\sum_{i=1}^{n}\big(X_\ell^{(i)}-\overline{X}_{\ell,n}\big)\otimes\big(X_k^{(i)}-\overline{X}_{k,n}\big),\qquad \widehat{V}_{k,n}:=\widehat{V}_{kk,n}, \]
and the random operators valued into $\mathcal{L}(\mathcal{X})$ defined as
\[ \widehat{\Phi}_n=\sum_{k=1}^{K}\tau_k^{*}\widehat{V}_{k,n}\tau_k \quad\text{and}\quad \widehat{\Psi}_n=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}\widehat{V}_{k\ell,n}\tau_\ell. \]
Then, we estimate $T$ by
\[ \widehat{T}_n=\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}. \]
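In matrix form, the plug-in estimator above amounts to computing the empirical joint covariance, splitting it into its block-diagonal and off-block-diagonal parts, and whitening. A seeded numpy sketch (the covariance factor `L` and the sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_T(samples, dims):
    """Plug-in estimator T_hat = Phi_hat^{-1/2} Psi_hat Phi_hat^{-1/2}
    from an (n, p) array of joint observations."""
    Xc = samples - samples.mean(axis=0)        # center the sample
    Vhat = Xc.T @ Xc / len(samples)            # empirical covariance V_hat
    idx = np.cumsum([0] + list(dims))
    Phi = np.zeros_like(Vhat)
    Psi = Vhat.copy()
    for k in range(len(dims)):
        s = slice(idx[k], idx[k + 1])
        Phi[s, s] = Vhat[s, s]
        Psi[s, s] = 0.0
    w, Q = np.linalg.eigh(Phi)
    Pis = Q @ np.diag(w ** -0.5) @ Q.T
    return Pis @ Psi @ Pis

# n i.i.d. copies of X = (X1, X2, X3) with known joint covariance L L^T
L = np.array([[1.0, 0.0, 0.0], [0.6, 0.8, 0.0], [0.3, 0.3, 0.9]])
n = 5000
X = rng.standard_normal((n, 3)) @ L.T
That = estimate_T(X, [1, 1, 1])
rho_hat = np.sort(np.linalg.eigvalsh(That))[::-1]
```

Because the empirical covariance is non-negative, the estimated canonical coefficients automatically satisfy the same bounds as in Theorem 2.2.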


Considering the eigenvalues $\widehat{\rho}_{1,n}\geq\widehat{\rho}_{2,n}\geq\cdots\geq\widehat{\rho}_{q,n}$ of $\widehat{T}_n$ and an orthonormal basis $\{\widehat{\beta}_n^{(1)},\cdots,\widehat{\beta}_n^{(q)}\}$ of $\mathcal{X}$ such that $\widehat{\beta}_n^{(j)}$ is an eigenvector of $\widehat{T}_n$ associated with $\widehat{\rho}_{j,n}$, we estimate $\rho_j$ by $\widehat{\rho}_{j,n}$, and $\beta^{(j)}$ by $\widehat{\beta}_n^{(j)}$. The following theorem establishes strong consistency for these estimators.

Theorem 3.1. For any integer $j\in\{1,\cdots,q\}$:
(i) $\widehat{\rho}_{j,n}$ converges almost surely, as $n\to+\infty$, to $\rho_j$.
(ii) $\mathrm{sign}(\langle\widehat{\beta}_n^{(j)},\beta^{(j)}\rangle_{\mathcal{X}})\,\widehat{\beta}_n^{(j)}$ converges almost surely, as $n\to+\infty$, to $\beta^{(j)}$ in $\mathcal{X}$.

Proof. From obvious applications of the strong law of large numbers, it is easily seen that $\widehat{T}_n$ converges almost surely in $\mathcal{L}(\mathcal{X})$, as $n\to+\infty$, to $T$. Then, using Lemma 1 in Ferré and Yao (2003), we obtain the inequality $|\widehat{\rho}_{j,n}-\rho_j|\leq\|\widehat{T}_n-T\|$, from which (i) is deduced. Clearly, each $\beta^{(j)}\otimes\beta^{(j)}$ is a projector onto an eigenspace. Therefore, using Proposition 3 in Dossou-Gbete and Pousse (1991), we deduce that $\widehat{\beta}_n^{(j)}\otimes\widehat{\beta}_n^{(j)}$ converges almost surely in $\mathcal{L}(\mathcal{X})$ to $\beta^{(j)}\otimes\beta^{(j)}$, as $n\to+\infty$. Using again Lemma 1 in Ferré and Yao (2003), we obtain the inequality
\[ \big\|\mathrm{sign}(\langle\widehat{\beta}_n^{(j)},\beta^{(j)}\rangle_{\mathcal{X}})\,\widehat{\beta}_n^{(j)}-\beta^{(j)}\big\|_{\mathcal{X}}\leq 2\sqrt{2}\,\big\|\widehat{\beta}_n^{(j)}\otimes\widehat{\beta}_n^{(j)}-\beta^{(j)}\otimes\beta^{(j)}\big\|, \]
from which we deduce (ii).
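The almost sure convergence of Theorem 3.1 is easy to observe by simulation: with the plug-in estimator of Section 3.1, the estimated canonical coefficients approach the true ones as $n$ grows. A seeded sketch (the covariance factor `L`, the sample sizes, and the loose error bound are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def T_from_cov(V, dims):
    """T = Phi^{-1/2} Psi Phi^{-1/2} for a covariance partitioned by dims."""
    idx = np.cumsum([0] + list(dims))
    Phi = np.zeros_like(V)
    Psi = V.copy()
    for k in range(len(dims)):
        s = slice(idx[k], idx[k + 1])
        Phi[s, s] = V[s, s]
        Psi[s, s] = 0.0
    w, Q = np.linalg.eigh(Phi)
    Pis = Q @ np.diag(w ** -0.5) @ Q.T
    return Pis @ Psi @ Pis

L = np.array([[1.0, 0.0, 0.0], [0.6, 0.8, 0.0], [0.3, 0.3, 0.9]])
rho = np.sort(np.linalg.eigvalsh(T_from_cov(L @ L.T, [1, 1, 1])))[::-1]

errors = {}
for n in (200, 50_000):
    X = rng.standard_normal((n, 3)) @ L.T      # n i.i.d. copies of X
    Xc = X - X.mean(axis=0)
    rho_hat = np.sort(
        np.linalg.eigvalsh(T_from_cov(Xc.T @ Xc / n, [1, 1, 1])))[::-1]
    errors[n] = np.abs(rho_hat - rho).max()
assert errors[50_000] < 0.1   # loose bound illustrating consistency
```

The typical error at $n=50{,}000$ is of order $n^{-1/2}$, far below the loose bound asserted above.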

3.2 Asymptotic distribution

In this section, we assume that, for $k\in\{1,\cdots,K\}$, we have $E\big(\|X_k\|_k^4\big)<+\infty$ and $V_k=I_k$, where $I_k$ denotes the identity operator of $\mathcal{X}_k$. We first derive an asymptotic distribution for $\widehat{T}_n$; then we obtain that of the canonical coefficients.

Theorem 3.2. $\sqrt{n}\,\big(\widehat{T}_n-T\big)$ converges in distribution, as $n\to+\infty$, to a random variable $U$ having a normal distribution in $\mathcal{L}(\mathcal{X})$, with mean $0$ and covariance operator $\Gamma$ equal to that of the random operator:
\[ Z=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\Big(-\frac{1}{2}\big(\tau_k^{*}(X_k\otimes X_k)V_{k\ell}\tau_\ell+\tau_\ell^{*}V_{\ell k}(X_k\otimes X_k)\tau_k\big)+\tau_k^{*}(X_\ell\otimes X_k)\tau_\ell\Big). \]
Proof. Under the above assumptions,
\[ \Phi=\sum_{k=1}^{K}\tau_k^{*}V_k\tau_k=\sum_{k=1}^{K}\tau_k^{*}\tau_k=I_{\mathcal{X}}, \]


where $I_{\mathcal{X}}$ is the identity operator of $\mathcal{X}$, and
\[
\sqrt{n}\,\big(\widehat{T}_n-T\big)=\sqrt{n}\,\big(\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}-\Psi\big)=\sqrt{n}\,\big(\widehat{\Phi}_n^{-1/2}-I_{\mathcal{X}}\big)\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}+\sqrt{n}\,\big(\widehat{\Psi}_n-\Psi\big)\widehat{\Phi}_n^{-1/2}+\Psi\,\sqrt{n}\,\big(\widehat{\Phi}_n^{-1/2}-I_{\mathcal{X}}\big)
\]
\[
=-\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}\sqrt{n}\,\big(\widehat{\Phi}_n-I_{\mathcal{X}}\big)\,\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}+\sqrt{n}\,\big(\widehat{\Psi}_n-\Psi\big)\widehat{\Phi}_n^{-1/2}-\Psi\,\widehat{\Phi}_n^{-1/2}\sqrt{n}\,\big(\widehat{\Phi}_n-I_{\mathcal{X}}\big)\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}. \quad (11)
\]
Clearly,
\[ V_{k\ell}=E\big(\tau_\ell(X)\otimes\tau_k(X)\big)=\tau_kV\tau_\ell^{*}, \quad (12) \]
where $V=E(X\otimes X)$. Moreover, putting $X^{(i)}=\big(X_1^{(i)},\cdots,X_K^{(i)}\big)^T$, we have
\[ \widehat{V}_{k\ell,n}=\frac{1}{n}\sum_{i=1}^{n}X_\ell^{(i)}\otimes X_k^{(i)}-\overline{X}_{\ell,n}\otimes\overline{X}_{k,n}=\frac{1}{n}\sum_{i=1}^{n}\tau_\ell(X^{(i)})\otimes\tau_k(X^{(i)})-\tau_\ell(\overline{X}_n)\otimes\tau_k(\overline{X}_n)=\tau_k\widehat{V}_n\tau_\ell^{*}, \quad (13) \]
where $\overline{X}_n=n^{-1}\sum_{i=1}^{n}X^{(i)}$ and
\[ \widehat{V}_n=\frac{1}{n}\sum_{i=1}^{n}X^{(i)}\otimes X^{(i)}-\overline{X}_n\otimes\overline{X}_n. \quad (14) \]

Therefore, using (12) and (13), we obtain
\[ \sqrt{n}\,\big(\widehat{\Psi}_n-\Psi\big)=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}\tau_k\widehat{H}_n\tau_\ell^{*}\tau_\ell=f\big(\widehat{H}_n\big), \quad (15) \]
where $\widehat{H}_n=\sqrt{n}\,\big(\widehat{V}_n-V\big)$ and $f$ is the operator defined as
\[ f: A\in\mathcal{L}(\mathcal{X})\mapsto\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}\tau_kA\tau_\ell^{*}\tau_\ell\in\mathcal{L}(\mathcal{X}). \]
Further, since $I_{\mathcal{X}}=\sum_{k=1}^{K}\tau_k^{*}\tau_k$, we obtain
\[ \sqrt{n}\,\big(\widehat{\Phi}_n-I_{\mathcal{X}}\big)=\sum_{k=1}^{K}\tau_k^{*}\tau_k\widehat{H}_n\tau_k^{*}\tau_k=g\big(\widehat{H}_n\big), \quad (16) \]


where $g$ is the operator $g: A\in\mathcal{L}(\mathcal{X})\mapsto\sum_{k=1}^{K}\tau_k^{*}\tau_kA\tau_k^{*}\tau_k\in\mathcal{L}(\mathcal{X})$. Then, using (11), (15) and (16), we obtain $\sqrt{n}\,\big(\widehat{T}_n-T\big)=\widehat{\varphi}_n\big(\widehat{H}_n\big)$, where $\widehat{\varphi}_n$ is the random operator from $\mathcal{L}(\mathcal{X})$ to itself defined by
\[ \widehat{\varphi}_n(A)=-\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}g(A)\,\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}+f(A)\,\widehat{\Phi}_n^{-1/2}-\Psi\,\widehat{\Phi}_n^{-1/2}g(A)\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}. \]
Considering the operator
\[ \varphi: A\in\mathcal{L}(\mathcal{X})\mapsto-\frac{1}{2}\,g(A)\,\Psi+f(A)-\frac{1}{2}\,\Psi\,g(A)\in\mathcal{L}(\mathcal{X}), \]
and denoting by $\|\cdot\|_\infty$ (resp. $\|\cdot\|_{\infty\infty}$) the norm of $\mathcal{L}(\mathcal{X})$ (resp. $\mathcal{L}(\mathcal{L}(\mathcal{X}))$) defined by $\|A\|_\infty=\sup_{x\in\mathcal{X}-\{0\}}\|Ax\|_{\mathcal{X}}/\|x\|_{\mathcal{X}}$ (resp. $\|h\|_{\infty\infty}=\sup_{B\in\mathcal{L}(\mathcal{X})-\{0\}}\|h(B)\|_\infty/\|B\|_\infty$) for any $A$ (resp. $h$) in $\mathcal{L}(\mathcal{X})$ (resp. $\mathcal{L}(\mathcal{L}(\mathcal{X}))$), we have

\[
\big\|\widehat{\varphi}_n\big(\widehat{H}_n\big)-\varphi\big(\widehat{H}_n\big)\big\|_\infty\leq\Big(\big\|\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}-\tfrac{1}{2}I_{\mathcal{X}}\big\|_\infty\,\|g\|_{\infty\infty}\,\big\|\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}\big\|_\infty+\tfrac{1}{2}\,\|g\|_{\infty\infty}\,\big\|\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}-\Psi\big\|_\infty
\]
\[
+\|f\|_{\infty\infty}\,\big\|\widehat{\Phi}_n^{-1/2}-I_{\mathcal{X}}\big\|_\infty+\|\Psi\|_\infty\,\big\|\widehat{\Phi}_n^{-1/2}\big\|_\infty\,\|g\|_{\infty\infty}\,\big\|\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}-\tfrac{1}{2}I_{\mathcal{X}}\big\|_\infty+\tfrac{1}{2}\,\|\Psi\|_\infty\,\|g\|_{\infty\infty}\,\big\|\widehat{\Phi}_n^{-1/2}-I_{\mathcal{X}}\big\|_\infty\Big)\,\big\|\widehat{H}_n\big\|_\infty. \quad (17)
\]
Using the strong law of large numbers, it is easy to verify that, for any $(k,\ell)\in\{1,\cdots,K\}^2$ with $k\neq\ell$, $\widehat{V}_{k\ell,n}$ (resp. $\widehat{V}_{k,n}$) converges almost surely to $V_{k\ell}$ (resp. $V_k$), as $n\to+\infty$. Consequently, $\widehat{\Phi}_n$ (resp. $\widehat{\Psi}_n$) converges almost surely to $\Phi=I_{\mathcal{X}}$ (resp. $\Psi$), as $n\to+\infty$. This implies the almost sure convergence of $\big(\widehat{\Phi}_n^{1/2}+I_{\mathcal{X}}\big)^{-1}$ (resp. $\widehat{\Phi}_n^{-1/2}\widehat{\Psi}_n\widehat{\Phi}_n^{-1/2}$; resp. $\widehat{\Phi}_n^{-1/2}$) to $\frac{1}{2}I_{\mathcal{X}}$ (resp. $\Psi$; resp. $I_{\mathcal{X}}$), as $n\to+\infty$. Furthermore, denoting by $\|\cdot\|$ the norm of $\mathcal{L}(\mathcal{X})$ defined by $\|A\|=\sqrt{\mathrm{tr}(A^{*}A)}$ and using the properties $(a\otimes b)(c\otimes d)=\langle a,d\rangle\,c\otimes b$ and $\mathrm{tr}(a\otimes b)=\langle a,b\rangle$ of the tensor product (see Dauxois et al. (1994)), we have:

\[
E\big(\|X\otimes X\|^2\big)=E\big(\mathrm{tr}((X\otimes X)(X\otimes X))\big)=E\big(\|X\|_{\mathcal{X}}^4\big)=E\Big(\Big(\sum_{k=1}^{K}\|X_k\|_k^2\Big)^2\Big)
\]
\[
=\sum_{k=1}^{K}E\big(\|X_k\|_k^4\big)+\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}E\big(\|X_k\|_k^2\,\|X_\ell\|_\ell^2\big)\leq\sum_{k=1}^{K}E\big(\|X_k\|_k^4\big)+\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sqrt{E\big(\|X_k\|_k^4\big)}\,\sqrt{E\big(\|X_\ell\|_\ell^4\big)}<+\infty.
\]

Then, the central limit theorem can be used. It gives the convergence in distribution, as $n\to+\infty$, of $\sqrt{n}\,\big(n^{-1}\sum_{i=1}^{n}X^{(i)}\otimes X^{(i)}-V\big)$ to a random variable $H$ having the normal distribution in $\mathcal{L}(\mathcal{X})$ with mean equal to $0$ and a covariance operator equal to that of $X\otimes X$. Since, by the central limit theorem again, $\sqrt{n}\,\overline{X}_n$ converges in distribution, as $n\to+\infty$, to a random variable having a normal distribution in $\mathcal{X}$ with mean equal to $0$ and a covariance operator equal to $V$, we deduce from the equality $\sqrt{n}\,\big(\overline{X}_n\otimes\overline{X}_n\big)=n^{-1/2}\big(\sqrt{n}\,\overline{X}_n\big)\otimes\big(\sqrt{n}\,\overline{X}_n\big)$ that $\sqrt{n}\,\big(\overline{X}_n\otimes\overline{X}_n\big)$ converges in probability to $0$, as $n\to+\infty$. Therefore, from (14) and Slutsky's theorem, we deduce that $\widehat{H}_n$ converges in distribution, as $n\to+\infty$, to $H$. Then, from (17), we conclude that $\widehat{\varphi}_n\big(\widehat{H}_n\big)-\varphi\big(\widehat{H}_n\big)$ converges in probability to $0$, as $n\to+\infty$. Then, using again Slutsky's theorem, we deduce that $\widehat{\varphi}_n\big(\widehat{H}_n\big)$ and $\varphi\big(\widehat{H}_n\big)$ both converge in distribution to the same limit. Since $\varphi$ is a linear map (and is, therefore, continuous), this limit distribution is just that of the random variable $U=\varphi(H)$, that is, the normal distribution in $\mathcal{L}(\mathcal{X})$ with mean $0$ and covariance operator equal to that of $Z=\varphi(X\otimes X)$. Clearly,

\[ g(X\otimes X)=\sum_{k=1}^{K}\tau_k^{*}\tau_k(X\otimes X)\tau_k^{*}\tau_k=\sum_{k=1}^{K}\tau_k^{*}\big((\tau_k(X))\otimes(\tau_k(X))\big)\tau_k=\sum_{k=1}^{K}\tau_k^{*}(X_k\otimes X_k)\tau_k, \]
and
\[ f(X\otimes X)=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}\tau_k(X\otimes X)\tau_\ell^{*}\tau_\ell=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}(X_\ell\otimes X_k)\tau_\ell. \]
Then, since $\tau_k\tau_j^{*}=\delta_{kj}I_k$, it follows that
\[ g(X\otimes X)\,\Psi=\sum_{k=1}^{K}\sum_{j=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq j}}^{K}\tau_k^{*}(X_k\otimes X_k)\tau_k\tau_j^{*}V_{j\ell}\tau_\ell=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}(X_k\otimes X_k)V_{k\ell}\tau_\ell \]


and
\[ \Psi\,g(X\otimes X)=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{j=1}^{K}\tau_k^{*}V_{k\ell}\tau_\ell\tau_j^{*}(X_j\otimes X_j)\tau_j=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_k^{*}V_{k\ell}(X_\ell\otimes X_\ell)\tau_\ell=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\tau_\ell^{*}V_{\ell k}(X_k\otimes X_k)\tau_k. \]
Thus,
\[ Z=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\Big(-\frac{1}{2}\big(\tau_k^{*}(X_k\otimes X_k)V_{k\ell}\tau_\ell+\tau_\ell^{*}V_{\ell k}(X_k\otimes X_k)\tau_k\big)+\tau_k^{*}(X_\ell\otimes X_k)\tau_\ell\Big). \]
Using the preceding theorem and results in Eaton and Tyler (1991, 1994), we can now give asymptotic distributions for the canonical coefficients. We denote by $(\rho_j)_{1\leq j\leq r}$ (with $r\in\mathbb{N}^{*}$) the sequence of distinct eigenvalues of $T$ in decreasing order, that is, $\rho_1>\cdots>\rho_r$. Putting $m_0=0$, denoting by $m_j$ the multiplicity of $\rho_j$ and putting $\nu_j=\sum_{k=0}^{j}m_k$ for any $j\in\{1,\cdots,r\}$, it is clear that, for any $i\in\{\nu_{j-1}+1,\cdots,\nu_j\}$, one has $\rho_i=\rho_j$. Further, considering the eigenspace $E_j=\ker(T-\rho_jI_{\mathcal{X}})$, we have the decomposition into an orthogonal direct sum: $\mathcal{X}=E_1\oplus\cdots\oplus E_r$. We denote by $\Pi_j$ the orthogonal projector from $\mathcal{X}$ onto $E_j$, and by $\Delta$ the continuous map which associates to each self-adjoint operator $A$ the vector $\Delta(A)$ of its eigenvalues in nonincreasing order. For $j\in\{1,\cdots,r\}$, we consider the $m_j$-dimensional vector $\upsilon_j=\rho_j\,J_{m_j}$, where $J_q$ denotes the $q$-dimensional vector with all elements equal to $1$, and the $\mathbb{R}^{m_j}$-valued random vector:
\[ \widehat{\upsilon}_{j,n}=\big(\widehat{\rho}_{\nu_{j-1}+1,n},\cdots,\widehat{\rho}_{\nu_j,n}\big)^T. \]
Then, putting
\[ \widehat{\Lambda}_n=\big(\widehat{\upsilon}_{1,n}^T,\cdots,\widehat{\upsilon}_{r,n}^T\big)^T \quad\text{and}\quad \Lambda=\big(\upsilon_1^T,\cdots,\upsilon_r^T\big)^T, \]
we have:

Theorem 3.3. $\sqrt{n}\,\big(\widehat{\Lambda}_n-\Lambda\big)$ converges in distribution, as $n\to+\infty$, to the $\mathbb{R}^p$-valued random vector
\[ \zeta=\big(\Delta(\Pi_1W\Pi_1)^T,\cdots,\Delta(\Pi_rW\Pi_r)^T\big)^T, \quad (18) \]


where $W$ is a random variable having a normal distribution in $\mathcal{L}(\mathcal{X})$, with mean $0$ and covariance operator $\Theta$ given by:
\[ \Theta=\sum_{1\leq m,r,s,t\leq p}C(m,r,s,t)\,(e_m\otimes e_r)\,\widetilde{\otimes}\,(e_s\otimes e_t), \]
where $\widetilde{\otimes}$ denotes the tensor product of elements of $\mathcal{L}(\mathcal{X})$, with
\[ C(m,r,s,t)=\sum_{k=1}^{K}\sum_{j=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{\substack{q=1\\ q\neq j}}^{K}\Big(\gamma_{k\ell jq}^{m,r,s,t}+\gamma_{k\ell jq}^{m,r,t,s}+\gamma_{k\ell jq}^{r,m,s,t}+\gamma_{k\ell jq}^{r,m,t,s}-\theta_{k\ell jq}^{m,r,s,t}-\theta_{k\ell jq}^{r,m,s,t}-\theta_{k\ell jq}^{s,t,m,r}-\theta_{k\ell jq}^{t,s,m,r}+\widetilde{\gamma}_{k\ell jq}^{m,r,s,t}\Big), \]
\[ \gamma_{k\ell jq}^{a,b,c,d}=\frac{1}{4}\,E\big(\langle X_k,\tau_k\beta^{(a)}\rangle_k\,\langle X_k,V_{k\ell}\tau_\ell\beta^{(b)}\rangle_k\,\langle X_j,\tau_j\beta^{(c)}\rangle_j\,\langle X_j,V_{jq}\tau_q\beta^{(d)}\rangle_j\big), \]
\[ \theta_{k\ell jq}^{a,b,c,d}=\frac{1}{2}\,E\big(\langle X_k,\tau_k\beta^{(a)}\rangle_k\,\langle X_k,V_{k\ell}\tau_\ell\beta^{(b)}\rangle_k\,\langle X_j,\tau_j\beta^{(c)}\rangle_j\,\langle X_q,\tau_q\beta^{(d)}\rangle_q\big), \]
and
\[ \widetilde{\gamma}_{k\ell jq}^{a,b,c,d}=E\big(\langle X_k,\tau_k\beta^{(a)}\rangle_k\,\langle X_\ell,\tau_\ell\beta^{(b)}\rangle_\ell\,\langle X_j,\tau_j\beta^{(c)}\rangle_j\,\langle X_q,\tau_q\beta^{(d)}\rangle_q\big). \]

Proof. Since $\Delta(\widehat{T}_n)=\widehat{\Lambda}_n$ and $\Delta(T)=\Lambda$, we deduce from Theorem 3.2 and Theorem 2.1 of Eaton and Tyler (1994) that $\sqrt{n}\,\big(\widehat{\Lambda}_n-\Lambda\big)$ converges in distribution, as $n\to+\infty$, to the random variable given in (18) with $W=PUP^{*}$, where $P=\sum_{\ell=1}^{p}\beta^{(\ell)}\otimes e_\ell$. Clearly, $W$ has a normal distribution with mean $0$ and covariance operator $\Theta$ equal to that of $PZP^{*}$. In order to give an explicit expression of $\Theta$, let us first note that:

\[ PZP^{*}=\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\Big(-\frac{1}{2}\big(P\tau_k^{*}(X_k\otimes X_k)V_{k\ell}\tau_\ell P^{*}+P\tau_\ell^{*}V_{\ell k}(X_k\otimes X_k)\tau_kP^{*}\big)+P\tau_k^{*}(X_\ell\otimes X_k)\tau_\ell P^{*}\Big) \]
\[ =\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\Big(-\frac{1}{2}\big((P\tau_\ell^{*}V_{\ell k}X_k)\otimes(P\tau_k^{*}X_k)+(P\tau_k^{*}X_k)\otimes(P\tau_\ell^{*}V_{\ell k}X_k)\big)+(P\tau_\ell^{*}X_\ell)\otimes(P\tau_k^{*}X_k)\Big). \]
Since
\[ P\tau_\ell^{*}V_{\ell k}X_k=\Big(\sum_{m=1}^{p}\beta^{(m)}\otimes e_m\Big)\tau_\ell^{*}V_{\ell k}X_k=\sum_{m=1}^{p}\langle\beta^{(m)},\tau_\ell^{*}V_{\ell k}X_k\rangle_{\mathcal{X}}\,e_m=\sum_{m=1}^{p}\langle\tau_\ell\beta^{(m)},V_{\ell k}X_k\rangle_\ell\,e_m \]


and, similarly, $P\tau_k^{*}X_k=\sum_{m=1}^{p}\langle\tau_k\beta^{(m)},X_k\rangle_k\,e_m$, it follows that:
\[
PZP^{*}=\sum_{m=1}^{p}\sum_{r=1}^{p}\Big[\sum_{k=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\Big(-\frac{1}{2}\big(\langle\tau_\ell\beta^{(m)},V_{\ell k}X_k\rangle_\ell\,\langle\tau_k\beta^{(r)},X_k\rangle_k+\langle\tau_\ell\beta^{(r)},V_{\ell k}X_k\rangle_\ell\,\langle\tau_k\beta^{(m)},X_k\rangle_k\big)+\langle\tau_\ell\beta^{(m)},X_\ell\rangle_\ell\,\langle\tau_k\beta^{(r)},X_k\rangle_k\Big)\Big]\,e_m\otimes e_r.
\]

From:
\[
E\big(\langle\tau_\ell\beta^{(m)},V_{\ell k}X_k\rangle_\ell\,\langle\tau_k\beta^{(r)},X_k\rangle_k\big)=E\big(\langle(X_k\otimes X_k)(\tau_k\beta^{(r)}),V_{k\ell}\tau_\ell\beta^{(m)}\rangle_k\big)=\langle E(X_k\otimes X_k)(\tau_k\beta^{(r)}),V_{k\ell}\tau_\ell\beta^{(m)}\rangle_k=\langle V_k\tau_k\beta^{(r)},V_{k\ell}\tau_\ell\beta^{(m)}\rangle_k=\langle\tau_k\beta^{(r)},V_{k\ell}\tau_\ell\beta^{(m)}\rangle_k,
\]
\[
E\big(\langle\tau_\ell\beta^{(r)},V_{\ell k}X_k\rangle_\ell\,\langle\tau_k\beta^{(m)},X_k\rangle_k\big)=\langle\tau_k\beta^{(m)},V_{k\ell}\tau_\ell\beta^{(r)}\rangle_k
\]
and
\[
E\big(\langle\tau_\ell\beta^{(m)},X_\ell\rangle_\ell\,\langle\tau_k\beta^{(r)},X_k\rangle_k\big)=E\big(\langle(X_\ell\otimes X_k)(\tau_\ell\beta^{(m)}),\tau_k\beta^{(r)}\rangle_k\big)=\langle E(X_\ell\otimes X_k)(\tau_\ell\beta^{(m)}),\tau_k\beta^{(r)}\rangle_k=\langle V_{k\ell}\tau_\ell\beta^{(m)},\tau_k\beta^{(r)}\rangle_k,
\]
we deduce that $E(PZP^{*})=0$. Thus,
\[ \Theta=E\big((PZP^{*})\,\widetilde{\otimes}\,(PZP^{*})\big)=\sum_{1\leq m,r,s,t\leq p}C(m,r,s,t)\,(e_m\otimes e_r)\,\widetilde{\otimes}\,(e_s\otimes e_t), \]

where
\[ C(m,r,s,t)=\sum_{k=1}^{K}\sum_{j=1}^{K}\sum_{\substack{\ell=1\\ \ell\neq k}}^{K}\sum_{\substack{q=1\\ q\neq j}}^{K}E\big(Y_{k\ell}^{m,r}\,Y_{jq}^{s,t}\big) \]
with
\[ Y_{k\ell}^{m,r}=-\frac{1}{2}\big(\langle\tau_\ell\beta^{(m)},V_{\ell k}X_k\rangle_\ell\,\langle\tau_k\beta^{(r)},X_k\rangle_k+\langle\tau_\ell\beta^{(r)},V_{\ell k}X_k\rangle_\ell\,\langle\tau_k\beta^{(m)},X_k\rangle_k\big)+\langle\tau_\ell\beta^{(m)},X_\ell\rangle_\ell\,\langle\tau_k\beta^{(r)},X_k\rangle_k. \]
Further calculations give
\[ E\big(Y_{k\ell}^{m,r}\,Y_{jq}^{s,t}\big)=\gamma_{k\ell jq}^{m,r,s,t}+\gamma_{k\ell jq}^{m,r,t,s}+\gamma_{k\ell jq}^{r,m,s,t}+\gamma_{k\ell jq}^{r,m,t,s}-\theta_{k\ell jq}^{m,r,s,t}-\theta_{k\ell jq}^{r,m,s,t}-\theta_{k\ell jq}^{s,t,m,r}-\theta_{k\ell jq}^{t,s,m,r}+\widetilde{\gamma}_{k\ell jq}^{m,r,s,t}. \]


When $T$ has simple eigenvalues, that is, $\rho_1>\rho_2>\cdots>\rho_q$, the preceding theorem has a simpler statement. We have:

Corollary 3.1. When the eigenvalues of $T$ are simple, $\sqrt{n}\,\big(\widehat{\Lambda}_n-\Lambda\big)$ converges in distribution, as $n\to+\infty$, to a random variable having a normal distribution in $\mathbb{R}^p$ with mean $0$ and covariance matrix $\Sigma=(\sigma_{ij})_{1\leq i,j\leq p}$ with:
\[ \sigma_{ij}=\sum_{1\leq m,r,s,t\leq p}\beta_m^{(i)}\beta_r^{(i)}\beta_s^{(j)}\beta_t^{(j)}\,C(m,r,s,t). \]

Proof. In this case, $m_1=\cdots=m_p=1$ and, for any $j\in\{1,\cdots,p\}$, $\Pi_j=\beta^{(j)}\otimes\beta^{(j)}$. Thus
\[ \Pi_jW\Pi_j=(\beta^{(j)}\otimes\beta^{(j)})W(\beta^{(j)}\otimes\beta^{(j)})=(\beta^{(j)}\otimes\beta^{(j)})(\beta^{(j)}\otimes(W\beta^{(j)}))=\langle\beta^{(j)},W\beta^{(j)}\rangle_{\mathcal{X}}\,\beta^{(j)}\otimes\beta^{(j)}, \]
and, therefore, $\Delta(\Pi_jW\Pi_j)=\langle\beta^{(j)},W\beta^{(j)}\rangle_{\mathcal{X}}$. Then $\zeta$ is a linear function of $W$ and, consequently, it has a normal distribution with mean $0$ and covariance matrix $\Sigma=(\sigma_{ij})_{1\leq i,j\leq p}$ with $\sigma_{ij}=E\big(\langle\beta^{(i)},W\beta^{(i)}\rangle_{\mathcal{X}}\,\langle\beta^{(j)},W\beta^{(j)}\rangle_{\mathcal{X}}\big)$. Denoting by $\langle\cdot,\cdot\rangle$ the inner product of operators defined by $\langle A,B\rangle=\mathrm{tr}(A^{*}B)$, we have:
\[ \langle W,\beta^{(j)}\otimes\beta^{(j)}\rangle=\mathrm{tr}\big(W(\beta^{(j)}\otimes\beta^{(j)})\big)=\mathrm{tr}\big(\beta^{(j)}\otimes(W\beta^{(j)})\big)=\langle\beta^{(j)},W\beta^{(j)}\rangle_{\mathcal{X}}, \]
from which it follows that
\[
\sigma_{ij}=E\big(\langle\beta^{(i)},W\beta^{(i)}\rangle_{\mathcal{X}}\,\langle\beta^{(j)},W\beta^{(j)}\rangle_{\mathcal{X}}\big)=E\big(\langle W,\beta^{(i)}\otimes\beta^{(i)}\rangle\,\langle W,\beta^{(j)}\otimes\beta^{(j)}\rangle\big)=E\big(\langle(W\,\widetilde{\otimes}\,W)(\beta^{(i)}\otimes\beta^{(i)}),\beta^{(j)}\otimes\beta^{(j)}\rangle\big)
\]
\[
=\langle E(W\,\widetilde{\otimes}\,W)(\beta^{(i)}\otimes\beta^{(i)}),\beta^{(j)}\otimes\beta^{(j)}\rangle=\langle\Theta(\beta^{(i)}\otimes\beta^{(i)}),\beta^{(j)}\otimes\beta^{(j)}\rangle=\sum_{1\leq m,r,s,t\leq p}C(m,r,s,t)\,\langle\big((e_m\otimes e_r)\,\widetilde{\otimes}\,(e_s\otimes e_t)\big)(\beta^{(i)}\otimes\beta^{(i)}),\beta^{(j)}\otimes\beta^{(j)}\rangle
\]
\[
=\sum_{1\leq m,r,s,t\leq p}C(m,r,s,t)\,\langle e_m\otimes e_r,\beta^{(i)}\otimes\beta^{(i)}\rangle\,\langle e_s\otimes e_t,\beta^{(j)}\otimes\beta^{(j)}\rangle.
\]
Then, the required result is obtained from
\[ \langle e_m\otimes e_r,\beta^{(i)}\otimes\beta^{(i)}\rangle=\mathrm{tr}\big((e_m\otimes e_r)^{*}(\beta^{(i)}\otimes\beta^{(i)})\big)=\langle e_m,\beta^{(i)}\rangle_{\mathcal{X}}\,\langle e_r,\beta^{(i)}\rangle_{\mathcal{X}}=\beta_m^{(i)}\beta_r^{(i)} \]
and $\langle e_s\otimes e_t,\beta^{(j)}\otimes\beta^{(j)}\rangle=\beta_s^{(j)}\beta_t^{(j)}$.
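The projector identity at the heart of this proof, $\Pi_jW\Pi_j=\langle\beta^{(j)},W\beta^{(j)}\rangle\,\beta^{(j)}\otimes\beta^{(j)}$, is an exact linear-algebra fact and can be checked directly; in the sketch below an arbitrary symmetric matrix stands in for the Gaussian limit $W$ and an arbitrary unit vector for $\beta^{(j)}$ (both are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
p = 5
W = rng.standard_normal((p, p))
W = (W + W.T) / 2                       # symmetric stand-in for the limit W
b = rng.standard_normal(p)
b = b / np.linalg.norm(b)               # unit vector playing beta^{(j)}
Pi = np.outer(b, b)                     # rank-one projector beta (x) beta
lhs = Pi @ W @ Pi
rhs = (b @ W @ b) * np.outer(b, b)      # <beta, W beta> beta (x) beta
assert np.allclose(lhs, rhs)            # identity used in Corollary 3.1
```

With this identity, each block of $\zeta$ collapses to the scalar $\langle\beta^{(j)},W\beta^{(j)}\rangle$, which is why $\zeta$ is a linear (hence Gaussian) function of $W$ in the simple-eigenvalue case.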
