Continuation of Nesterov’s Smoothing for Regression with Structured Sparsity in High-Dimensional Neuroimaging

TABLE I: The experimental setup for the simulation study.
TABLE II: Average rank of the convergence speed of the algorithms to reach precisions f(βᵏ) − f(β*) ranging from 1 to 10⁻⁶.
Fig. 1 presents the median error (i.e., the median of f(βᵏ) − f(β*)) and the median absolute deviation (MAD) over all experimental settings (Tab. I).
Fig. 2: Left panel: weight maps found with the elastic net (β_ℓ₁ℓ₂) and with the elastic net and TV penalties (β_ℓ₁ℓ₂TV) using CONESTA.
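These captions refer to the problem CONESTA is designed to solve: linear regression with an elastic-net penalty plus total variation (TV). As a sketch only (the weights λ₁, λ₂, γ and the per-voxel gradient operator ∇ᵢ below are illustrative notation, not necessarily the paper's):

    \min_{\beta \in \mathbb{R}^p} \;
        \frac{1}{2}\|y - X\beta\|_2^2
        + \lambda_1 \|\beta\|_1
        + \frac{\lambda_2}{2}\|\beta\|_2^2
        + \gamma \sum_{i} \|\nabla_i \beta\|_2

The TV term (the last sum) is convex but non-smooth and non-separable; per the title, it is handled by smoothing it with Nesterov's technique and driving the smoothing parameter toward zero along a continuation sequence, which is what the convergence measurements in Fig. 1 and Table II assess.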

Related documents

The purpose of the next two sections is to investigate whether the dimensionality of the models, which is freely available in general, can be used for building a computationally…

Figure 2(c) compares model complexity measured by the number of parameters for weighted models using structured penalties. The ℓ_2^T penalty is applied on trie-structured…

The corresponding penalty was first used by Zhao, Rocha and Yu (2009); one of its simplest instances in the context of regression is the sparse group Lasso (Sprechmann et al.,…
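For concreteness, a common way of writing the sparse group Lasso penalty (a sketch; the mixing parameter α and group weights w_g are illustrative, not taken from the excerpt):

    \Omega(\beta) = \alpha \lambda \|\beta\|_1
        + (1 - \alpha) \lambda \sum_{g \in \mathcal{G}} w_g \|\beta_g\|_2

where 𝒢 partitions the coefficients into groups: the ℓ₁ term zeroes individual coefficients, while the ℓ₂ group terms zero out whole groups at once.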

This is also the case for algorithms designed for classical multiple kernel learning when the regularizer is the squared norm [111, 129, 132]; these methods are therefore not…

We consider a structured sparse decomposition problem with overlapping groups of ℓ∞-norms, and compare the proximal gradient algorithm FISTA (Beck and Teboulle, 2009) with…
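As a minimal, self-contained sketch of FISTA on the simpler ℓ₁-penalized least-squares problem (NumPy only; the function names are ours, and the soft-thresholding prox stands in for the overlapping-group ℓ∞ prox mentioned in the excerpt, which requires a dedicated projection):

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1: element-wise soft-thresholding.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fista_lasso(X, y, lam, n_iter=500):
        # FISTA (Beck and Teboulle, 2009) for 0.5*||y - X b||^2 + lam*||b||_1.
        beta = np.zeros(X.shape[1])
        z, t = beta.copy(), 1.0                # extrapolated point, momentum scalar
        L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ z - y)           # gradient of the smooth part at z
            beta_new = soft_threshold(z - grad / L, lam / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = beta_new + ((t - 1.0) / t_new) * (beta_new - beta)
            beta, t = beta_new, t_new
        return beta

The extrapolation step z = βᵏ + ((tₖ − 1)/tₖ₊₁)(βᵏ − βᵏ⁻¹) is what lifts the O(1/k) rate of plain proximal gradient (ISTA) to O(1/k²).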

Abstract: For the last three decades, the advent of technologies for massive data collection has brought deep changes in many scientific fields. What was first seen as a…

Keywords and phrases: Gaussian graphical model, two-sample hypothesis testing, high-dimensional statistics, multiple testing, adaptive testing, minimax hypothesis testing,…

The most studied techniques for high-dimensional regression under the sparsity scenario are the Lasso and the Dantzig selector; see, e.g., Candès and Tao (2007), Bickel, Ritov…
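As a hedged illustration of the Lasso in the p ≫ n regime using scikit-learn (the simulated data and the value alpha=0.1 are made up for the example):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 50, 200                              # high-dimensional: p >> n
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = 2.0                              # a 5-sparse true signal
    y = X @ beta + 0.1 * rng.standard_normal(n)

    fit = Lasso(alpha=0.1).fit(X, y)
    print(np.flatnonzero(fit.coef_))            # indices of the recovered support

Despite having four times more coefficients than observations, the ℓ₁ penalty typically recovers the small true support here, which is the behavior the sparsity theory cited above formalizes.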