The Degrees of Freedom of the Group Lasso


HAL Id: hal-00695292

https://hal.archives-ouvertes.fr/hal-00695292

Submitted on 7 May 2012

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


The Degrees of Freedom of the Group Lasso

Samuel Vaiter, Charles Deledalle, Gabriel Peyré, Jalal M. Fadili, Charles Dossal

To cite this version:

Samuel Vaiter, Charles Deledalle, Gabriel Peyré, Jalal M. Fadili, Charles Dossal. The Degrees of Freedom of the Group Lasso. International Conference on Machine Learning Workshop (ICML), 2012, Edinburgh, United Kingdom. hal-00695292


Degrees of Freedom of the Group Lasso

Samuel Vaiter, Charles Deledalle, Gabriel Peyré (vaiter@ceremade.dauphine.fr)
CEREMADE, CNRS, Université Paris-Dauphine, France

Jalal Fadili
GREYC, CNRS-ENSICAEN-Université de Caen, France

Charles Dossal
IMB, Université Bordeaux 1, France

Abstract

This paper studies the sensitivity to the observations of the block/group Lasso solution to an overdetermined linear regression model. Such a regularization is known to promote sparsity patterns structured as nonoverlapping groups of coefficients. Our main contribution provides a local parameterization of the solution with respect to the observations. As a byproduct, we give an unbiased estimate of the degrees of freedom of the group Lasso. Among other applications of such results, one can choose in a principled and objective way the regularization parameter of the Lasso through model selection criteria.

1. Introduction

This paper deals with the overdetermined linear regression model of the form y = Xβ_0 + ε, where y ∈ R^Q is the observation/response vector, β_0 ∈ R^N the regression vector, X is the design matrix whose columns are linearly independent, and ε is an additive noise. Note that Q > N and that X_I^T X_I is an invertible matrix for any subset of columns indexed by I.

1.1. Group Lasso

A block segmentation B corresponds to a disjoint union of the set of indices i.e. S

b∈B

= {1, . . . , N } and for each b, b

∈ B, b ∩ b

= ∅. For β ∈ R

N

, for each b ∈ B, x

b

= (β

i

)

i∈b

is a subvector of β whose entries are indexed by the block b, where |b| is the cardinality of b.

We consider the Group Lasso or Block Sparse regularization introduced by (Bakin, 1999; Yuan & Lin, 2006), which reads

min_{β ∈ R^N}  (1/2) ||y − Xβ||² + λ Σ_{b∈B} ||β_b||,   (P_λ(y))

where λ > 0 is the so-called regularization parameter and ||·|| is the ℓ₂-norm. Note that if each block b is of size 1, we recover the standard Lasso (Tibshirani, 1996).
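As a concrete illustration, P_λ(y) can be solved by proximal gradient descent, whose proximal step is a block-wise soft thresholding. The following is a minimal sketch (the solver, its step size, and the iteration count are illustrative choices, not part of the paper):

```python
import numpy as np

def block_soft_threshold(v, t):
    """Proximal operator of t * ||.||_2: shrink the block v toward 0."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= t else (1 - t / n) * v

def group_lasso(X, y, blocks, lam, n_iter=2000):
    """Solve min_beta 0.5 * ||y - X beta||^2 + lam * sum_b ||beta_b||
    by proximal gradient descent. `blocks` is a list of index arrays."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)        # gradient of the data fidelity
        z = beta - grad / L
        for b in blocks:                   # block-wise proximal step
            beta[b] = block_soft_threshold(z[b], lam / L)
    return beta
```

At convergence, the residual r = y − Xβ satisfies ||X_b^T r|| ≤ λ on every block, with equality on the active blocks.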

1.2. Degrees of Freedom

We focus in this paper on the variations of the solution β^⋆(y) of P_λ(y) with respect to the observations y. This turns out to be a pivotal ingredient to compute the effective degrees of freedom (DOF), usually used to quantify the complexity of a statistical modeling procedure.

Let μ̂(y) = Xβ^⋆(y) be the response or the prediction associated to the estimator β^⋆(y) of β_0, and let μ_0 = Xβ_0. It is worth noting that μ̂(y) is always uniquely defined, even when β^⋆(y) is not, as is the case for a rank-deficient or underdetermined design matrix X. Note that any estimator μ̂ of μ_0 might be considered.

We also make the assumption that ε is an additive white Gaussian noise, ε ∼ N(0, σ² Id_Q); hence y follows the law N(μ_0, σ² Id_Q), and according to (Efron, 1986), the DOF is given by

df = Σ_{i=1}^{Q} cov(y_i, [μ̂(y)]_i) / σ².

The well-known Stein's lemma asserts that if μ̂ is weakly differentiable, then its divergence is an unbiased estimator of its DOF, i.e.

df̂ = tr(∂μ̂(y)) and E_ε(df̂) = df.
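The identity df̂ = tr(∂μ̂(y)) can be checked numerically for any smooth estimator by finite differences. A sketch (the ridge estimator below is only a test case whose divergence has a known closed form; it is not the paper's estimator):

```python
import numpy as np

def divergence_fd(mu_hat, y, eps=1e-6):
    """Estimate div(mu_hat)(y) = tr(d mu_hat(y)) by central finite differences."""
    Q = y.size
    div = 0.0
    for i in range(Q):
        e = np.zeros(Q)
        e[i] = eps
        # i-th diagonal entry of the Jacobian of mu_hat at y
        div += (mu_hat(y + e)[i] - mu_hat(y - e)[i]) / (2 * eps)
    return div
```

For a linear estimator μ̂(y) = Hy, the divergence is exactly tr(H), so the finite-difference estimate should agree with the trace up to rounding error.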

An unbiased estimator of the DOF provides an unbiased estimate of the prediction risk of μ̂(y) through, e.g., Mallows' C_p (Mallows, 1973), the AIC (Akaike, 1973), the SURE (Stein, 1981), or the GCV (Golub et al., 1979). These quantities can serve as model selection criteria to assess the accuracy of a candidate model.
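For instance, given a DOF estimate df̂, the SURE of (Stein, 1981) estimates the prediction risk E||μ̂(y) − μ_0||² as follows (a sketch, assuming the noise level σ is known):

```python
import numpy as np

def sure(y, mu_hat_y, df_hat, sigma):
    """Stein Unbiased Risk Estimate of E ||mu_hat(y) - mu_0||^2:
    ||y - mu_hat(y)||^2 - Q sigma^2 + 2 sigma^2 df_hat."""
    Q = y.size
    return np.sum((y - mu_hat_y) ** 2) - Q * sigma ** 2 + 2 * sigma ** 2 * df_hat
```

In practice one evaluates SURE on a grid of values of λ and retains the minimizer, which is precisely the principled choice of the regularization parameter mentioned in the abstract.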


1.3. Previous Works

In the special case of the standard Lasso with a linearly independent design, (Zou et al., 2007) show that the number of nonzero coefficients is an unbiased estimate of the degrees of freedom. This work is extended in (Dossal et al.) to an arbitrary design matrix. The DOF of the analysis sparse regularization (a.k.a. generalized Lasso in statistics) is studied in (Tibshirani & Taylor, 2012; Vaiter et al., 2012). A formula for an estimate of the DOF of the group Lasso when the design is orthogonal within each group is conjectured in (Yuan & Lin, 2006). (Kato, 2009) studies the DOF of a generalization of the Lasso where the regression coefficients are constrained to a closed convex set. He provides an unbiased estimate of the DOF for the constrained version of the group Lasso under the same orthogonality assumption on X as (Yuan & Lin, 2006). An estimate of the DOF for the group Lasso is also given in (Solo & Ulfarsson, 2010) by a heuristic proof in the full column rank case, but its unbiasedness is not proved.

1.4. Contributions

This paper proves a general result (Theorem 1) on the variations of the solutions to P_λ(y) with respect to the observation/response vector y. With such a result at hand, Theorem 2 provides a provably unbiased estimate of the DOF. These contributions are detailed in Sections 2.1 and 2.2 below. The proofs are deferred to Section 3.

1.5. Notations

We start with some notations used in the sequel. We extend the notion of support, commonly used in sparsity, by defining the B-support supp_B(β) of β ∈ R^N as

supp_B(β) = {b ∈ B : ||β_b|| ≠ 0}.

The size of supp_B(β) is defined as |supp_B(β)| = Σ_{b∈supp_B(β)} |b|. We denote by X_I, where I is a B-support, the matrix formed by the columns X_i where i is an element of some b ∈ I. We introduce the following block-diagonal operators

δ_β : v ∈ R^|I| ↦ (v_b / ||β_b||)_{b∈I} ∈ R^|I|

and

P_β : v ∈ R^|I| ↦ (P_{β_b}(v_b))_{b∈I} ∈ R^|I|,

where P_{β_b} is the orthogonal projector on the subspace orthogonal to β_b. For any operator A, we denote by A^T its adjoint.

2. Main results

Note that as X is assumed to have full column rank, P_λ(y) has exactly one global minimizer β^⋆(y). Hence, we can define the single-valued mapping y ↦ β^⋆(y).

2.1. Local Parameterization

Let I be the B-support of some vector β. For any block b ∉ I, we define

H_{I,b} = { y ∈ R^Q : ∃β such that ∀c ∈ I, (||X_b^T r||, X_c^T r) = (λ, λ β_c / ||β_c||) },

where r = y − X_I β.

Definition 1. The transition space H is defined as

H = ∪_{I ∈ 𝓘} ∪_{b ∉ I} H_{I,b},

where 𝓘 is the set of subsets of {1, . . . , N} obtained as unions of blocks.

We prove the following sensitivity theorem.

Theorem 1. Let y ∉ H, and let I = supp_B(β^⋆(y)) be the B-support of β^⋆(y). There exists a neighborhood O of y such that

1. the B-support of β^⋆ is constant on O, i.e. ∀ȳ ∈ O, supp_B(β^⋆(ȳ)) = I,

2. the mapping β^⋆ is C¹ on O and its differential is such that

[∂β^⋆(ȳ)]_{I^c} = 0 and [∂β^⋆(ȳ)]_I = d(ȳ),   (1)

where

d(y) = (X_I^T X_I + λ δ_{β^⋆(y)} ∘ P_{β^⋆(y)})^{−1} X_I^T

and I^c = {b ∈ B : b ∉ I}.

2.2. Degrees of Freedom

We consider the estimator μ̂(y) = Xβ^⋆(y).

Theorem 2. Let λ > 0. The mapping y ↦ μ̂(y) is of class C¹ on R^Q \ H and

div(μ̂(y)) = tr(X_I d(y)),   (2)

where β^⋆(y) is the solution of P_λ(y) and I = supp_B(β^⋆(y)). Moreover, the set H has zero Lebesgue measure; thus, if ε ∼ N(0, σ² Id_Q), equation (2) is an unbiased estimate of the DOF of the group Lasso.
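Numerically, the estimate tr(X_I d(y)) of Theorem 2 can be evaluated at a solution β^⋆(y) by assembling the matrix of the block-diagonal operator λ δ_{β^⋆(y)} ∘ P_{β^⋆(y)} on the active blocks. A sketch, assuming β^⋆(y) and its active blocks are provided by some group Lasso solver:

```python
import numpy as np

def df_group_lasso(X, beta, active_blocks, lam):
    """Unbiased DOF estimate tr(X_I d(y)) of Theorem 2, with
    d(y) = (X_I^T X_I + lam * delta_beta o P_beta)^{-1} X_I^T.
    `active_blocks` are the index arrays b with ||beta_b|| != 0."""
    idx = np.concatenate(active_blocks)
    XI = X[:, idx]
    # Assemble the block-diagonal matrix of lam * delta_beta o P_beta:
    # on block b it acts as (lam/||beta_b||) * (Id - beta_b beta_b^T / ||beta_b||^2).
    D = np.zeros((idx.size, idx.size))
    pos = 0
    for b in active_blocks:
        v = beta[b]
        n = np.linalg.norm(v)
        P = np.eye(b.size) - np.outer(v, v) / n ** 2   # projector orthogonal to beta_b
        D[pos:pos + b.size, pos:pos + b.size] = (lam / n) * P
        pos += b.size
    d = np.linalg.solve(XI.T @ XI + D, XI.T)
    return np.trace(XI @ d)
```

When X = Id, this quantity reduces to the closed-form expression of Corollary 1 below, which gives a convenient sanity check.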

We specialize this result to the block soft thresholding.

Corollary 1. If X = Id, one has

df̂ = |J| − λ Σ_{b⊂J} (|b| − 1) / ||y_b||,

where J = ∪ {b ∈ B : ||y_b|| > λ}.
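Corollary 1 lends itself to a direct numerical check: for X = Id the estimator μ̂(y) is block soft thresholding, and its finite-difference divergence should match the closed-form df̂ away from the transition set where some ||y_b|| = λ. A sketch:

```python
import numpy as np

def block_soft_estimator(y, blocks, lam):
    """mu_hat(y) for X = Id: block soft thresholding."""
    mu = np.zeros_like(y)
    for b in blocks:
        n = np.linalg.norm(y[b])
        if n > lam:
            mu[b] = (1 - lam / n) * y[b]
    return mu

def df_block_soft(y, blocks, lam):
    """Closed-form DOF estimate of Corollary 1."""
    J = [b for b in blocks if np.linalg.norm(y[b]) > lam]
    return sum(b.size for b in J) - lam * sum((b.size - 1) / np.linalg.norm(y[b]) for b in J)
```

Comparing df_block_soft with a central-finite-difference estimate of div(μ̂)(y) confirms the formula on any y whose block norms stay away from λ.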


3. Proofs

This section details the proofs of our results. We introduce the following normalization operator

N(β_I) = v where ∀b ∈ I, v_b = β_b / ||β_b||.

We use in our proofs the following lemma, which is a straightforward consequence of the first-order necessary and sufficient conditions of a minimizer of the group Lasso problem P_λ(y).

Lemma 1. A vector β ∈ R^N is the solution of P_λ(y) if, and only if, the two following conditions hold:

1. On the B-support I = supp_B(β), X_I^T (y − X_I β_I) = λ N(β_I).

2. For all b ∈ B such that b ∉ I, one has ||X_b^T (y − X_I β_I)|| ≤ λ.

A proof of this lemma can be found in (Bach, 2008).
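The two conditions of Lemma 1 translate directly into a numerical certificate of optimality for a candidate vector β. A minimal sketch (the tolerance is an illustrative choice):

```python
import numpy as np

def is_group_lasso_solution(X, y, beta, blocks, lam, tol=1e-6):
    """Check the two first-order conditions of Lemma 1 up to `tol`."""
    I = [b for b in blocks if np.linalg.norm(beta[b]) > 0]
    r = y - X @ beta                        # residual y - X_I beta_I
    for b in I:                             # active blocks: X_b^T r = lam * beta_b/||beta_b||
        nb = beta[b] / np.linalg.norm(beta[b])
        if np.linalg.norm(X[:, b].T @ r - lam * nb) > tol:
            return False
    for b in blocks:                        # inactive blocks: ||X_b^T r|| <= lam
        if np.linalg.norm(beta[b]) == 0 and np.linalg.norm(X[:, b].T @ r) > lam + tol:
            return False
    return True
```

For X = Id, the exact solution is the block soft thresholding of y and passes the check, while an arbitrary candidate such as β = y does not.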

3.1. Proof of Theorem 1

We will use the following lemma.

Lemma 2. Let β ∈ R^N and λ > 0. Then X_I^T X_I + λ δ_β ∘ P_β is invertible.

Proof. We prove that X_I^T X_I + λ δ_β ∘ P_β is symmetric positive definite. Remark that X_I^T X_I is a positive definite matrix. Moreover, δ_β ∘ P_β is symmetric positive semi-definite since both δ_β and P_β are symmetric positive semi-definite and commute. We conclude using the fact that the sum of a symmetric positive definite matrix and a symmetric positive semi-definite matrix is symmetric positive definite.

Let y ∉ H. We define I = supp_B(β^⋆(y)), the B-support of the solution β^⋆(y) of P_λ(y), and the following mapping

Γ(α_I, y) = X_I^T (X_I α_I − y) + λ N(α_I).

Observe that item 1 of Lemma 1 is equivalent to Γ([β^⋆(y)]_I, y) = 0.

Our proof is done in three steps. We first (1.) prove that there exists a mapping ȳ ↦ β(ȳ) such that for every element ȳ of a neighborhood of y, one has Γ([β(ȳ)]_I, ȳ) = 0 and [β(ȳ)]_{I^c} = 0. Then, we prove (2.) that β(ȳ) = β^⋆(ȳ) is the solution of P_λ(ȳ) in a neighborhood of y. Finally, we obtain (3.) equation (1) from the implicit function theorem.

1. The derivative of Γ with respect to its first variable reads, on (R^|I| \ U) × R^Q,

∂_1 Γ(β_I, y) = X_I^T X_I + λ δ_{β_I} ∘ P_{β_I},

where U = {α ∈ R^|I| : ∃b ∈ I, α_b = 0}. The mapping ∂_1 Γ is invertible according to Lemma 2. Hence, using the implicit function theorem, there exists a neighborhood Õ of y on which we can define a mapping β_I : Õ → R^|I| of class C¹ that satisfies, for ȳ ∈ Õ,

Γ(β_I(ȳ), ȳ) = 0 and β_I(y) = [β^⋆(y)]_I.

We then extend β_I on I^c as β_{I^c}(ȳ) = 0, which defines a mapping β : Õ → R^N.

2. Writing the first-order conditions on β^⋆(y) for the blocks not included in the B-support, one has

∀b ∉ I, ||X_b^T (y − X_I [β^⋆(y)]_I)|| ≤ λ.

Suppose there exists b ∉ I such that ||X_b^T (X_I [β^⋆(y)]_I − y)|| = λ. Then y ∈ H_{I,b}, since

||X_b^T r|| = λ and X_c^T r = λ [β^⋆(y)]_c / ||[β^⋆(y)]_c|| for every c ∈ I,

where r = y − X_I [β^⋆(y)]_I, which contradicts y ∉ H. Hence,

∀b ∉ I, ||X_b^T (X_I [β^⋆(y)]_I − y)|| < λ.

By continuity of ȳ ↦ β_I(ȳ) and since β_I(y) = [β^⋆(y)]_I, we can find a neighborhood O ⊂ Õ such that for every ȳ ∈ O, one has

∀b ∉ I, ||X_b^T (X_I β_I(ȳ) − ȳ)|| ≤ λ.

Moreover, by definition of the mapping β_I, one has X_I^T (ȳ − X_I β_I(ȳ)) = λ N(β_I(ȳ)) and supp_B(β(ȳ)) = I. According to Lemma 1, the vector β(ȳ) is a solution of P_λ(ȳ). Since P_λ(ȳ) admits a unique solution, β^⋆(ȳ) = β(ȳ) for every ȳ ∈ O.

3. Using the implicit function theorem, one obtains the derivative of [β(y)]_I as

[∂β^⋆(y)]_I = −(∂_1 Γ([β^⋆(y)]_I, y))^{−1} ∘ ∂_2 Γ([β^⋆(y)]_I, y),

where ∂_2 Γ([β^⋆(y)]_I, y) = −X_I^T, which leads to (1).

3.2. Proof of Theorem 2

We define for each b ∈ B

H_{I,b} = { (r, β) ∈ R^Q × R^|I| : ||X_b^T r|| = 1 and ∀g ∈ I, X_g^T r = β_g / ||β_g|| }.

We prove (1.) that H_{I,b} is a manifold of dimension Q − 1. Then (2.) we prove that H is of measure zero with respect to the Lebesgue measure on R^Q. Finally (3.), we prove that df̂ is an unbiased estimate of the DOF.

1. Note that H_{I,b} = ψ^{−1}({0}), where

ψ(r, β) = ( ||X_b^T r||² − 1, X_I^T r − N(β) ).

Remark that the adjoint of the differential of ψ,

(∂ψ)^T(r, β) = [ 2 X_b X_b^T r    X_I
                 0                λ δ_β ∘ P_β ],

has full rank. Indeed, consider the matrix A = (2 X_b X_b^T r | X_I). Let α = (s, u)^T ∈ R × R^|I| be such that Aα = 0. Then (2s X_b^T r, u)^T ∈ Ker(X_b | X_I). Since (X_b | X_I) has full rank, we conclude that α = 0. As a consequence, ∂ψ(r, β) is non-degenerate. Finally, H_{I,b} is a manifold of dimension Q − 1.

2. We prove that H_{I,b} is of Hausdorff dimension less than or equal to Q − 1. Consider the following mapping

ϕ : R^Q × R^|I| → R^Q × R^|I|, (r, β) ↦ (r + X_I β, X_I^T r).

The mapping ϕ is a C¹-diffeomorphism between R^Q × R^|I| and itself. Thus, A = ϕ(H_{I,b}) is a manifold of dimension Q − 1. We now introduce the projection

π : A → R^Q, (y, α) ↦ y.

Observe that H_{I,b} = π(A). According to Hausdorff measure properties (Rogers, 1998), since π is 1-Lipschitz, the Hausdorff dimension of π(A) is less than or equal to the Hausdorff dimension of A, which is the dimension of A as a manifold, namely Q − 1. Hence the measure of H_{I,b} w.r.t. the Lebesgue measure on R^Q is zero.

3. According to Theorem 1, y ↦ β^⋆(y) is C¹ on R^Q \ H. Composing with X gives that μ̂ is differentiable almost everywhere, hence weakly differentiable. Moreover, taking the divergence of the differential (1), one obtains (2). This formula is valid almost everywhere, i.e. outside the set H. Stein's lemma (Stein, 1981) gives the unbiasedness of our estimator df̂ of the DOF.

3.3. Proof of Corollary 1

When X = Id, the solution of P_λ(y) is a block soft thresholding:

[β^⋆(y)]_b = 0 if ||y_b|| ≤ λ, and [β^⋆(y)]_b = (1 − λ/||y_b||) y_b otherwise.   (3)

For every b ∈ J, we differentiate equation (3):

[∂β^⋆(y)]_b : α ∈ R^|b| ↦ α − (λ/||y_b||) P_{y_b}(α).

Since P_{y_b} is a projector on a subspace of dimension |b| − 1, one has tr(P_{y_b}) = |b| − 1. Hence tr([∂β^⋆(y)]_b) = |b| − λ(|b| − 1)/||y_b||, and summing over b ⊂ J yields the announced formula.

References

Akaike, H. Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory, volume 1, pp. 267–281. Springer Verlag, 1973.

Bach, F. R. Consistency of the group lasso and multiple kernel learning. The Journal of Machine Learning Research, 9:1179–1225, 2008.

Bakin, S. Adaptive regression and model selection in data mining problems. Thesis (Ph.D.), Australian National University, 1999.

Dossal, C., Kachour, M., Fadili, J., Peyré, G., and Chesneau, C. The degrees of freedom of the Lasso for general design matrix. Technical report. URL http://hal.archives-ouvertes.fr/hal-00638417.

Efron, B. How biased is the apparent error rate of a prediction rule? Journal of the American Statistical Association, 81(394):461–470, 1986.

Golub, G. H., Heath, M., and Wahba, G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, pp. 215–223, 1979.

Kato, K. On the degrees of freedom in shrinkage estimation. Journal of Multivariate Analysis, 100(7):1338–1352, 2009.

Mallows, C. L. Some comments on C_p. Technometrics, 15(4):661–675, 1973.

Rogers, C. A. Hausdorff Measures. Cambridge University Press, 1998.

Solo, V. and Ulfarsson, M. Threshold selection for group sparsity. In Acoustics, Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pp. 3754–3757. IEEE, 2010.

Stein, C. M. Estimation of the mean of a multivariate normal distribution. The Annals of Statistics, 9(6):1135–1151, 1981.

Tibshirani, R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.

Tibshirani, R. J. and Taylor, J. Degrees of freedom in lasso problems. Technical report, arXiv:1111.0653, 2012.

Vaiter, S., Deledalle, C., Peyré, G., Fadili, J., and Dossal, C. Local behavior of sparse analysis regularization: Applications to risk estimation. Technical report, Preprint Hal-00687751, 2012. URL http://hal.archives-ouvertes.fr/hal-00687751/.

Yuan, M. and Lin, Y. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006.

Zou, H., Hastie, T., and Tibshirani, R. On the "degrees of freedom" of the lasso. The Annals of Statistics, 35(5):2173–2192, 2007.
