Compressed sensing and dimension reduction for unsupervised learning

Academic year: 2021

Figures

Figure 1.1: Core idea of LSH. Each initial vector is mapped to a signature whose entries are the images of the vector under randomly drawn hash functions.
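As a rough illustration of the idea in this caption, the sketch below computes such signatures with random-hyperplane (sign-of-projection) hash functions, one common LSH family. The names (`lsh_signature`, `hyperplanes`) and the parameter values are illustrative assumptions, not notation from the thesis.

```python
import numpy as np

def lsh_signature(x, hyperplanes):
    """Map a vector x to its LSH signature: one bit per randomly
    drawn hash function (here, the sign of a random projection)."""
    return (hyperplanes @ x >= 0).astype(np.uint8)

rng = np.random.default_rng(0)
d, m = 128, 16                             # input dimension, signature length
hyperplanes = rng.standard_normal((m, d))  # m random hash functions

x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)       # a nearby vector

sig_x = lsh_signature(x, hyperplanes)
sig_y = lsh_signature(y, hyperplanes)
# Nearby vectors collide on most entries of their signatures.
print((sig_x == sig_y).mean())
```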
Figure 2.1: Compressive learning outline. The learning data X = {x_1, ..., x_L} is compressed into a smaller representation, which can either consist in reducing the dimension of each individual entry x_r or in computing a more global compressed representation of the whole dataset.
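The "global representation" branch of this outline can be illustrated with a random Fourier sketch, the kind of fixed-size dataset summary used in sketched/compressive learning. This is a minimal sketch under assumed names and parameters (`sketch`, `Omega`, m = 64), not the exact compression operator of the thesis.

```python
import numpy as np

def sketch(X, Omega):
    """Global sketch of a dataset: empirical average of random
    Fourier features, z = mean over r of exp(i * Omega^T x_r)."""
    return np.exp(1j * X @ Omega).mean(axis=0)   # shape (m,)

rng = np.random.default_rng(1)
L, d, m = 10_000, 2, 64                    # number of points, dimension, sketch size
Omega = rng.standard_normal((d, m)) * 3.0  # random frequencies

X = rng.standard_normal((L, d))  # the learning data, one row per x_r
z = sketch(X, Omega)             # m complex numbers summarize all L points
print(z.shape)                   # (64,)
```

The sketch size m is fixed regardless of L, which is what makes learning from the sketch, rather than from the raw data, a compression.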
Figure 2.2: Real and reconstructed centroids (respectively represented as circles and squares) of 4 Gaussians in dimension 2, from 10^3 points drawn from the mixture.
Figure 2.3: Quality of reconstruction in dimension n = 10, with N = 10^4 points, measured as a Hellinger distance.
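For reference, the Hellinger distance between densities p and q satisfies H^2(p, q) = 1 - ∫ sqrt(p(x) q(x)) dx, so H = 0 for identical densities and H = 1 for densities with disjoint supports. Below is a minimal numerical check on a 1D grid with assumed example densities; the thesis experiment itself is in dimension n = 10.

```python
import numpy as np
from scipy.stats import norm

def hellinger(p, q, dx):
    """Hellinger distance between two densities sampled on a grid:
    H(p, q) = sqrt(1 - integral of sqrt(p * q))."""
    bc = np.sum(np.sqrt(p * q)) * dx   # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

x = np.linspace(-10, 10, 10_001)
dx = x[1] - x[0]
p = norm.pdf(x, loc=0.0, scale=1.0)    # "true" density
q = norm.pdf(x, loc=0.5, scale=1.2)    # "reconstructed" density
print(hellinger(p, q, dx))             # 0 = identical, 1 = disjoint
```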