
HAL Id: hal-00258206
https://hal.archives-ouvertes.fr/hal-00258206
Submitted on 21 Feb 2008


To cite this version: Joseph Rynkiewicz. Testing the number of parameters with multidimensional MLP. ASMDA 2005, 2005, Brest, France. pp. 561-568. hal-00258206

Testing the number of parameters with multidimensional MLP

Joseph Rynkiewicz

SAMOS-MATISSE, Université de Paris I
72 rue Regnault, 75013 Paris, France
(e-mail: joseph.rynkiewicz@univ-paris1.fr)

Abstract. This work concerns testing the number of parameters in a one hidden layer multilayer perceptron (MLP). For this purpose we assume that we have identifiable models, up to a finite group of transformations on the weights; this is, for example, the case when the number of hidden units is known. In this framework, we show that we get a simple asymptotic distribution if we use the logarithm of the determinant of the empirical error covariance matrix as the cost function.

Keywords: Multilayer Perceptron, Statistical test, Asymptotic distribution.

1 Introduction

Consider a sequence $(Y_t, Z_t)_{t\in\mathbb{N}}$ of i.i.d.¹ random vectors (i.e., independent and identically distributed): each pair $(Y_t, Z_t)$ has the same law as a generic variable $(Y, Z) \in \mathbb{R}^d \times \mathbb{R}^{d'}$.

1.1 The model

Assume that the model can be written

$$Y_t = F_{W^0}(Z_t) + \varepsilon_t,$$

where

• $F_{W^0}$ is a function represented by a one hidden layer MLP with parameters (or weights) $W^0$ and sigmoidal functions in the hidden units.

• The noise $(\varepsilon_t)_{t\in\mathbb{N}}$ is a sequence of i.i.d. centered variables with unknown invertible covariance matrix $\Gamma(W^0)$. Write $\varepsilon$ for the generic variable with the same law as each $\varepsilon_t$.
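To fix ideas, here is a minimal simulation of this model. The dimensions, hidden layer size, weight values, and noise covariance below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out, h = 1000, 2, 3, 4          # illustrative sizes: d' = 2, d = 3, 4 hidden units

# Hypothetical true weights W0 of a one-hidden-layer MLP
W1 = rng.normal(size=(h, d_in)); b1 = rng.normal(size=h)
W2 = rng.normal(size=(d_out, h)); b2 = rng.normal(size=d_out)

def F_W0(z):
    """One-hidden-layer MLP: tanh (sigmoidal) hidden units, linear output."""
    return np.tanh(z @ W1.T + b1) @ W2.T + b2

# Centered noise with an arbitrary invertible covariance Gamma(W0)
L = rng.normal(size=(d_out, d_out))
Gamma0 = L @ L.T + np.eye(d_out)

Z = rng.normal(size=(n, d_in))
eps = rng.multivariate_normal(np.zeros(d_out), Gamma0, size=n)
Y = F_W0(Z) + eps                          # the model Y_t = F_W0(Z_t) + eps_t
```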

Note that a finite number of transformations of the weights leaves the MLP function invariant; these permutations form a finite group (see [Sussman, 1992]). To overcome this problem, we will consider equivalence classes of MLPs: two

¹ It is not hard to extend everything we show in this paper to stationary mixing sequences.


MLPs are in the same class if the first is the image of the second by such a transformation; the parameter set considered is then the quotient of the parameter space by the finite group of transformations.

In this space, we assume that the model is identifiable; this can be achieved by considering only MLPs with the true number of hidden units (see [Sussman, 1992]). Note that if the number of hidden units is over-estimated, such a test can behave very badly (see [Fukumizu, 2003]). We agree that the assumption of identifiability is very restrictive, but we want to emphasize that, even in this framework, the classical test for the number of parameters of a multidimensional-output MLP is not satisfactory, and we propose to improve it.

1.2 Testing the number of parameters

Let $q$ be an integer less than $s$. We want to test $H_0: W \in \Theta_q \subset \mathbb{R}^q$ against $H_1: W \in \Theta_s \subset \mathbb{R}^s$, where the sets $\Theta_q$ and $\Theta_s$ are compact. $H_0$ expresses the fact that $W$ belongs to a subset of $\Theta_s$ of parametric dimension less than $s$ or, equivalently, that $s - q$ weights of the MLP in $\Theta_s$ are null. If we consider the classical cost function

$$V_n(W) = \frac{1}{n}\sum_{t=1}^{n} \|Y_t - F_W(Z_t)\|^2,$$

where $\|x\|$ denotes the Euclidean norm of $x$, we get the following test statistic:

$$S_n = n \times \left( \min_{W \in \Theta_q} V_n(W) - \min_{W \in \Theta_s} V_n(W) \right).$$

It is shown in [Yao, 2000] that $S_n$ converges in law to a weighted sum of $\chi^2_1$ variables:

$$S_n \xrightarrow{\mathcal{D}} \sum_{i=1}^{s-q} \lambda_i \chi^2_{i,1},$$

where the $\chi^2_{i,1}$ are $s-q$ i.i.d. $\chi^2_1$ variables and the $\lambda_i$ are strictly positive values, different from 1 if the true covariance matrix of the noise is not the identity. So, in the general case where the true covariance matrix of the noise is not the identity, the asymptotic distribution is not known, because the $\lambda_i$ are not known, and it is difficult to compute the asymptotic level of the test. To overcome this difficulty we propose to use instead the cost function

$$U_n(W) := \ln\det\left(\frac{1}{n}\sum_{t=1}^{n}(Y_t - F_W(Z_t))(Y_t - F_W(Z_t))^T\right). \tag{1}$$

We will show that, under suitable assumptions, the test statistic

$$T_n = n \times \left( \min_{W \in \Theta_q} U_n(W) - \min_{W \in \Theta_s} U_n(W) \right) \tag{2}$$

converges to a classical $\chi^2_{s-q}$, so the asymptotic level of the test is very easy to compute. The remainder of this paper is devoted to the proof of this property.
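As a concrete illustration of how (1) and (2) could be used in practice, here is a minimal sketch, assuming both MLPs have already been fitted by minimizing $U_n$. The names `preds_H0`, `preds_H1` (fitted values of the restricted and full models) and `s_minus_q` are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def U_n(Y, preds):
    """Cost function (1): log-determinant of the empirical error covariance."""
    resid = Y - preds                          # (n, d) residuals Y_t - F_W(Z_t)
    gamma_n = resid.T @ resid / len(Y)         # Gamma_n(W)
    return np.linalg.slogdet(gamma_n)[1]       # ln det, numerically stable

def lr_test(Y, preds_H0, preds_H1, s_minus_q):
    """Statistic T_n of (2) and its asymptotic chi^2_{s-q} p-value."""
    T_n = len(Y) * (U_n(Y, preds_H0) - U_n(Y, preds_H1))
    return T_n, chi2.sf(T_n, df=s_minus_q)
```

Under $H_0$ the statistic is simply compared to a $\chi^2_{s-q}$ quantile; no weights $\lambda_i$ have to be estimated.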

2 Asymptotic properties of $T_n$

In order to investigate the asymptotic properties of the test, we have to prove the consistency and the asymptotic normality of $\hat{W}_n = \arg\min_{W \in \Theta_s} U_n(W)$.

Assume, in the sequel, that $\varepsilon$ has a moment of order at least 2, and write

$$\Gamma_n(W) = \frac{1}{n}\sum_{t=1}^{n}(Y_t - F_W(Z_t))(Y_t - F_W(Z_t))^T.$$

Remark that the matrix $\Gamma_n(W)$ and its inverse are symmetric. In the same way, we write $\Gamma(W) = \lim_{n\to\infty}\Gamma_n(W)$, which is well defined because of the moment condition on $\varepsilon$.

2.1 Consistency of $\hat{W}_n$

First we have to identify the contrast function associated with $U_n(W)$.

Lemma 1. $U_n(W) - U_n(W^0) \xrightarrow{a.s.} K(W, W^0)$, with $K(W, W^0) \ge 0$ and $K(W, W^0) = 0$ if and only if $W = W^0$.

Proof: By the strong law of large numbers we have

$$U_n(W) - U_n(W^0) \xrightarrow{a.s.} \ln\det(\Gamma(W)) - \ln\det(\Gamma(W^0)) = \ln\frac{\det(\Gamma(W))}{\det(\Gamma(W^0))} = \ln\det\left(\Gamma^{-1}(W^0)\left(\Gamma(W) - \Gamma(W^0)\right) + I_d\right),$$

where $I_d$ denotes the identity matrix of $\mathbb{R}^d$. So the lemma is true if $\Gamma(W) - \Gamma(W^0)$ is a positive matrix, null only if $W = W^0$. But this property holds: writing $Y - F_W(Z) = (Y - F_{W^0}(Z)) + (F_{W^0}(Z) - F_W(Z))$, the cross terms vanish because $\varepsilon = Y - F_{W^0}(Z)$ is centered and independent of $Z$, so

$$\begin{aligned}
\Gamma(W) &= E\left[(Y - F_W(Z))(Y - F_W(Z))^T\right] \\
&= E\left[(Y - F_{W^0}(Z))(Y - F_{W^0}(Z))^T\right] + E\left[(F_{W^0}(Z) - F_W(Z))(F_{W^0}(Z) - F_W(Z))^T\right] \\
&= \Gamma(W^0) + E\left[(F_{W^0}(Z) - F_W(Z))(F_{W^0}(Z) - F_W(Z))^T\right]. \qquad \square
\end{aligned}$$

We then deduce the consistency theorem:

Theorem 1. If $E\|\varepsilon\|^2 < \infty$, then $\hat{W}_n \xrightarrow{P} W^0$.


Proof: Remark that there exists a constant $B$ such that

$$\sup_{W \in \Theta_s} \|Y - F_W(Z)\|^2 < \|Y\|^2 + B,$$

because $\Theta_s$ is compact, so $F_W(Z)$ is bounded. For a matrix $A \in \mathbb{R}^{d \times d}$, let $\|A\|$ be a norm, for example $\|A\|^2 = \operatorname{tr}(AA^T)$. We have

$$\liminf_{W \in \Theta_s} \|\Gamma_n(W)\| = \|\Gamma(W^0)\| > 0, \qquad \limsup_{W \in \Theta_s} \|\Gamma_n(W)\| := C < \infty,$$

and since the function $\Gamma \mapsto \ln\det\Gamma$ is uniformly continuous on $\{\Gamma : \|\Gamma(W^0)\| \le \|\Gamma\| \le C\}$, by the same argument as Example 19.8 of [Van der Vaart, 1998], the set of functions $\{U_n(W),\ W \in \Theta_s\}$ is Glivenko-Cantelli. Finally, Theorem 5.7 of [Van der Vaart, 1998] shows that $\hat{W}_n$ converges in probability to $W^0$. $\square$

2.2 Asymptotic normality

For this purpose we have to compute the first and second derivatives of $U_n(W)$ with respect to the parameters. First, we introduce a notation: if $F_W(X)$ is a $d$-dimensional parametric function depending on a parameter $W$, write $\frac{\partial F_W(X)}{\partial W_k}$ (resp. $\frac{\partial^2 F_W(X)}{\partial W_k \partial W_l}$) for the $d$-dimensional vector of partial derivatives (resp. second-order partial derivatives) of each component of $F_W(X)$.

First derivatives: if $\Gamma_n(W)$ is a matrix depending on the parameter vector $W$, we get from [Magnus and Neudecker, 1988]

$$\frac{\partial}{\partial W_k}\ln\det(\Gamma_n(W)) = \operatorname{tr}\left(\Gamma_n^{-1}(W)\,\frac{\partial}{\partial W_k}\Gamma_n(W)\right).$$

Hence, if we write

$$A_n(W_k) = \frac{1}{n}\sum_{t=1}^{n}\left(-\frac{\partial F_W(z_t)}{\partial W_k}\,(y_t - F_W(z_t))^T\right),$$

then, using the fact that

$$\operatorname{tr}\left(\Gamma_n^{-1}(W)A_n(W_k)\right) = \operatorname{tr}\left(A_n^T(W_k)\Gamma_n^{-1}(W)\right) = \operatorname{tr}\left(\Gamma_n^{-1}(W)A_n^T(W_k)\right),$$

we get

$$\frac{\partial}{\partial W_k}\ln\det(\Gamma_n(W)) = 2\operatorname{tr}\left(\Gamma_n^{-1}(W)A_n(W_k)\right). \tag{3}$$


Second derivatives: We now write

$$B_n(W_k, W_l) := \frac{1}{n}\sum_{t=1}^{n}\left(\frac{\partial F_W(z_t)}{\partial W_k}\left(\frac{\partial F_W(z_t)}{\partial W_l}\right)^T\right)$$

and

$$C_n(W_k, W_l) := \frac{1}{n}\sum_{t=1}^{n}\left(-(y_t - F_W(z_t))\left(\frac{\partial^2 F_W(z_t)}{\partial W_k \partial W_l}\right)^T\right).$$

We get

$$\frac{\partial^2 U_n(W)}{\partial W_k \partial W_l} = \frac{\partial}{\partial W_l}\,2\operatorname{tr}\left(\Gamma_n^{-1}(W)A_n(W_k)\right) = 2\operatorname{tr}\left(\frac{\partial\Gamma_n^{-1}(W)}{\partial W_l}A_n(W_k)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W)B_n(W_k, W_l)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W)C_n(W_k, W_l)\right).$$

Now, [Magnus and Neudecker, 1988] give an analytic form for the derivative of an inverse matrix, so we get

$$\frac{\partial^2 U_n(W)}{\partial W_k \partial W_l} = 2\operatorname{tr}\left(\Gamma_n^{-1}(W)\left(A_n(W_k) + A_n^T(W_k)\right)\Gamma_n^{-1}(W)A_n(W_k)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W)B_n(W_k, W_l)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W)C_n(W_k, W_l)\right),$$

so

$$\frac{\partial^2 U_n(W)}{\partial W_k \partial W_l} = 4\operatorname{tr}\left(\Gamma_n^{-1}(W)A_n(W_k)\Gamma_n^{-1}(W)A_n(W_k)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W)B_n(W_k, W_l)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W)C_n(W_k, W_l)\right). \tag{4}$$
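As a quick numerical sanity check of formula (3) (not part of the original paper), the sketch below compares the analytic derivative with a finite difference. For brevity it uses a toy linear model $F_W(z) = Wz$ in place of an MLP, an assumption made only to keep the code short, since (3) holds for any smooth $F_W$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, dp = 500, 3, 2                       # n samples, output dim d, input dim d'
Z = rng.normal(size=(n, dp))
Y = rng.normal(size=(n, d))

def gamma_n(W):
    """Empirical error covariance Gamma_n(W) for the toy model F_W(z) = W z."""
    resid = Y - Z @ W.T
    return resid.T @ resid / n

def U_n(W):
    return np.linalg.slogdet(gamma_n(W))[1]

W = rng.normal(size=(d, dp))
k = (1, 0)                                 # the single weight W_k being varied

# Analytic gradient via (3): A_n(W_k) = (1/n) sum_t -dF/dW_k (y_t - F_W(z_t))^T
resid = Y - Z @ W.T
dF = np.zeros((n, d))
dF[:, k[0]] = Z[:, k[1]]                   # dF_W(z_t)/dW_k for the linear toy model
A_n = -(dF.T @ resid) / n
grad_analytic = 2.0 * np.trace(np.linalg.solve(gamma_n(W), A_n))

# Finite-difference approximation of the same derivative
eps = 1e-6
W_eps = W.copy(); W_eps[k] += eps
grad_numeric = (U_n(W_eps) - U_n(W)) / eps
print(grad_analytic, grad_numeric)         # the two values should agree closely
```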

Asymptotic distribution of $\hat{W}_n$: The previous equations allow us to give the asymptotic properties of the estimator minimizing the cost function $U_n(W)$; namely, from equations (3) and (4) we can compute the asymptotic properties of the first and second derivatives of $U_n(W)$. If the variable $Z$ has a moment of order at least 3, then we get the following theorem:

Theorem 2. Assume that $E\|\varepsilon\|^2 < \infty$ and $E\|Z\|^3 < \infty$. Let $\Delta U_n(W^0)$ be the gradient vector of $U_n(W)$ at $W^0$ and $HU_n(W^0)$ be the Hessian matrix of $U_n(W)$ at $W^0$. Write finally

$$B(W_k, W_l) := \frac{\partial F_W(Z)}{\partial W_k}\left(\frac{\partial F_W(Z)}{\partial W_l}\right)^T.$$

We then get:

1. $HU_n(W^0) \xrightarrow{a.s.} 2I_0$
2. $\sqrt{n}\,\Delta U_n(W^0) \xrightarrow{\mathcal{D}} \mathcal{N}(0, 4I_0)$
3. $\sqrt{n}\left(\hat{W}_n - W^0\right) \xrightarrow{\mathcal{D}} \mathcal{N}(0, I_0^{-1})$

where the component $(k, l)$ of the matrix $I_0$ is

$$\operatorname{tr}\left(\Gamma_0^{-1}\,E\left[B(W_k^0, W_l^0)\right]\right).$$


Proof: We can easily show that, for all $W \in \Theta_s$:

$$\left\|\frac{\partial F_W(Z)}{\partial W_k}\right\| \le C^{te}(1 + \|Z\|), \qquad \left\|\frac{\partial^2 F_W(Z)}{\partial W_k \partial W_l}\right\| \le C^{te}(1 + \|Z\|^2),$$

$$\left\|\frac{\partial^2 F_W(Z)}{\partial W_k \partial W_l} - \frac{\partial^2 F_{W^0}(Z)}{\partial W_k \partial W_l}\right\| \le C^{te}\|W - W^0\|(1 + \|Z\|^3).$$

Write

$$A(W_k) = -\frac{\partial F_W(Z)}{\partial W_k}\,(Y - F_W(Z))^T,$$

the single-observation analogue of $A_n(W_k)$, and let $\frac{\partial U(W^0)}{\partial W_k} := 2\operatorname{tr}\left(\Gamma_0^{-1}A^T(W_k^0)\right)$ denote the corresponding score term. The component $(k, l)$ of the matrix $4I_0$ is then:

$$E\left[\frac{\partial U(W^0)}{\partial W_k}\frac{\partial U(W^0)}{\partial W_l}\right] = E\left[2\operatorname{tr}\left(\Gamma_0^{-1}A^T(W_k^0)\right) \times 2\operatorname{tr}\left(\Gamma_0^{-1}A(W_l^0)\right)\right]$$

and, since the trace of a product is invariant under circular permutation,

$$\begin{aligned}
E\left[\frac{\partial U(W^0)}{\partial W_k}\frac{\partial U(W^0)}{\partial W_l}\right]
&= 4E\left[-\frac{\partial F_{W^0}(Z)^T}{\partial W_k}\,\Gamma_0^{-1}(Y - F_{W^0}(Z))(Y - F_{W^0}(Z))^T\Gamma_0^{-1}\left(-\frac{\partial F_{W^0}(Z)}{\partial W_l}\right)\right] \\
&= 4E\left[\frac{\partial F_{W^0}(Z)^T}{\partial W_k}\,\Gamma_0^{-1}\,\frac{\partial F_{W^0}(Z)}{\partial W_l}\right] \\
&= 4\operatorname{tr}\left(\Gamma_0^{-1}\,E\left[\frac{\partial F_{W^0}(Z)}{\partial W_k}\frac{\partial F_{W^0}(Z)^T}{\partial W_l}\right]\right) = 4\operatorname{tr}\left(\Gamma_0^{-1}\,E\left[B(W_k^0, W_l^0)\right]\right).
\end{aligned}$$

Now, the derivative $\frac{\partial F_W(Z)}{\partial W_k}$ is square integrable, so $\Delta U_n(W^0)$ fulfills Lindeberg's condition (see [Hall and Heyde, 1980]) and

$$\sqrt{n}\,\Delta U_n(W^0) \xrightarrow{\mathcal{D}} \mathcal{N}(0, 4I_0).$$

For the component $(k, l)$ of the expectation of the Hessian matrix, remark first that

$$\lim_{n\to\infty}\operatorname{tr}\left(\Gamma_n^{-1}(W^0)A_n(W_k^0)\Gamma_n^{-1}(W^0)A_n(W_k^0)\right) = 0$$

and

$$\lim_{n\to\infty}\operatorname{tr}\left(\Gamma_n^{-1}C_n(W_k^0, W_l^0)\right) = 0,$$

so

$$\lim_{n\to\infty} HU_n(W^0) = \lim_{n\to\infty} 4\operatorname{tr}\left(\Gamma_n^{-1}(W^0)A_n(W_k^0)\Gamma_n^{-1}(W^0)A_n(W_k^0)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}(W^0)B_n(W_k^0, W_l^0)\right) + 2\operatorname{tr}\left(\Gamma_n^{-1}C_n(W_k^0, W_l^0)\right) = 2\operatorname{tr}\left(\Gamma_0^{-1}\,E\left[B(W_k^0, W_l^0)\right]\right).$$

Now, since $\left\|\frac{\partial^2 F_W(Z)}{\partial W_k \partial W_l}\right\| \le C^{te}(1 + \|Z\|^2)$ and $\left\|\frac{\partial^2 F_W(Z)}{\partial W_k \partial W_l} - \frac{\partial^2 F_{W^0}(Z)}{\partial W_k \partial W_l}\right\| \le C^{te}\|W - W^0\|(1 + \|Z\|^3)$, by standard arguments found, for example, in [Yao, 2000], we get

$$\sqrt{n}\left(\hat{W}_n - W^0\right) \xrightarrow{\mathcal{D}} \mathcal{N}(0, I_0^{-1}). \qquad \square$$


2.3 Asymptotic distribution of $T_n$

In this section, we write $\hat{W}_n = \arg\min_{W \in \Theta_s} U_n(W)$ and $\hat{W}_n^0 = \arg\min_{W \in \Theta_q} U_n(W)$, where $\Theta_q$ is viewed as a subset of $\mathbb{R}^s$. The asymptotic distribution of $T_n$ is then a consequence of the previous section: namely, if we replace $nU_n(W)$ by its Taylor expansion around $\hat{W}_n$ and $\hat{W}_n^0$, then, following [Van der Vaart, 1998], Chapter 16, we have

$$T_n = \sqrt{n}\left(\hat{W}_n - \hat{W}_n^0\right)^T I_0\,\sqrt{n}\left(\hat{W}_n - \hat{W}_n^0\right) + o_P(1) \xrightarrow{\mathcal{D}} \chi^2_{s-q}.$$

3 Conclusion

It has been shown that, in the case of multidimensional output, the cost function $U_n(W)$ leads to a simpler test for the number of parameters of an MLP than the traditional mean square cost function. In fact, the estimator $\hat{W}_n$ is also more efficient than the least squares estimator (see [Rynkiewicz, 2003]). We can also remark that $U_n(W)$ matches twice the "concentrated Gaussian log-likelihood", but we have to emphasize that its nice asymptotic properties need only moment conditions on $\varepsilon$ and $Z$, so it works even if the distribution of the noise is not Gaussian. Another solution could be to use an approximation of the error covariance matrix to compute a generalized least squares estimator:

$$\frac{1}{n}\sum_{t=1}^{n}(Y_t - F_W(Z_t))^T\,\Gamma^{-1}\,(Y_t - F_W(Z_t)),$$

assuming that $\Gamma$ is a good approximation of the true covariance matrix of the noise $\Gamma(W^0)$. However, it takes time to compute a good matrix $\Gamma$, and if we try to compute the best matrix $\Gamma$ from the data, this leads back to the cost function $U_n(W)$ (see for example [Gallant, 1987]).
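For comparison, here is a minimal sketch of that generalized least squares cost with a fixed covariance estimate; the function name and the `gamma_hat` argument are illustrative assumptions:

```python
import numpy as np

def gls_cost(Y, preds, gamma_hat):
    """(1/n) * sum_t (Y_t - F_W(Z_t))^T Gamma^{-1} (Y_t - F_W(Z_t))."""
    resid = Y - preds                                  # (n, d) residuals
    quad = resid @ np.linalg.inv(gamma_hat) * resid    # row-wise quadratic forms
    return quad.sum(axis=1).mean()
```

Iterating between re-estimating $\Gamma$ from the current residuals and re-minimizing this cost is precisely what leads back to $U_n(W)$.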

Finally, as we have seen in this paper, the computation of the derivatives of $U_n(W)$ is easy, so we can use efficient differential optimization techniques to estimate $\hat{W}_n$; numerical examples can be found in [Rynkiewicz, 2003].

References

Fukumizu, 2003. K. Fukumizu. Likelihood ratio of unidentifiable models and multilayer neural networks. Annals of Statistics, 31(3):833–851, 2003.

Gallant, 1987. A.R. Gallant. Nonlinear statistical models. J. Wiley and Sons, New York, 1987.

Hall and Heyde, 1980. P. Hall and C. Heyde. Martingale limit theory and its applications. Academic Press, New York, 1980.

Magnus and Neudecker, 1988. Jan R. Magnus and Heinz Neudecker. Matrix differential calculus with applications in statistics and econometrics. J. Wiley and Sons, New York, 1988.

Rynkiewicz, 2003. J. Rynkiewicz. Estimation of multidimensional regression model with multilayer perceptrons. In J. Mira and J.R. Alvarez, editors, Computational Methods in Neural Modeling, volume 2686 of Lecture Notes in Computer Science, pages 310–317, 2003.

Sussman, 1992. H.J. Sussman. Uniqueness of the weights for minimal feedforward nets with a given input-output map. Neural Networks, 5:589–593, 1992.

Van der Vaart, 1998. A.W. Van der Vaart. Asymptotic Statistics. Cambridge University Press, Cambridge, UK, 1998.

Yao, 2000. J. Yao. On least squares estimation for stable nonlinear AR processes. Annals of the Institute of Statistical Mathematics, 52:316–331, 2000.
