works; see [HTF09, BvdG11] for a review and references therein. More precisely, some of the most popular estimators in high-dimensional statistics remain the lasso [Tib96] and the Dantzig selector [CT07]. A large amount of interest has been dedicated to estimation, prediction, and support recovery problems using these estimators. This body of work has been developed around sufficient conditions on the design matrix X (such as the Restricted Isometry Property [CT06], Restricted Eigenvalue [BRT09], Compatibility [VdGB09, BvdG11], Universal Distortion [dC13, BLPR11], H_{s,1} [JN11], or Irrepresentability [Fuc05], to name but a few) that capture the spectral properties of the design matrix on the set of (almost) sparse vectors. Using one of these properties, one can exploit the Karush-Kuhn-Tucker conditions to obtain oracle inequalities or a control on the support recovery error.
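As a concrete illustration of how the KKT conditions are used, consider the special case of an orthonormal design, where the lasso KKT conditions decouple coordinate-wise and the solution reduces to soft-thresholding of the least-squares estimate. The sketch below (plain NumPy, illustrative names, not taken from any of the cited works) shows this closed form:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding: the coordinate-wise solution of the lasso KKT conditions."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# With an orthonormal design (X^T X = I) the KKT conditions decouple and the
# lasso solution is soft-thresholding of the ordinary least-squares estimate.
rng = np.random.default_rng(0)
n, lam = 8, 0.5
X, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal design matrix
beta_true = np.zeros(n)
beta_true[:2] = [3.0, -2.0]                        # a 2-sparse coefficient vector
y = X @ beta_true                                  # noiseless observations
beta_lasso = soft_threshold(X.T @ y, lam)          # closed-form lasso solution
```

Each nonzero coefficient is shrunk toward zero by exactly lam, which is the bias that more refined conditions on the design matrix are used to control in the general (non-orthonormal) case.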

To improve the SNR, many denoising methods are available that are well suited for piecewise smooth images, such as Non-Local Means (NL-means) [Buades 2005], Total Variation filtering (TV) [Rudin 1992], and non-linear isotropic and anisotropic diffusion [Weickert 1998]. Applying these methods to fluorescence microscopy images corrupted with Gaussian noise typically requires pre-processing with a variance-stabilizing transform, as done for example in [Boulanger 2008]. A novel anisotropic-type filtering was recently proposed in [Wang 2010] for two-photon fluorescence images. There are also methods that exploit the decomposition of the data onto wavelet-type functions (including the recent ridgelet and curvelet basis functions) and shrink the coefficients to eliminate noise components [DeVore 1992, Donoho 1995d]. Recently, efficient denoising methods were also developed based on sparsity and redundant representations over learned dictionaries [Elad 2006], denoising the image while simultaneously training a dictionary using the K-SVD algorithm [Aharon 2005], or based on sparse code shrinkage and maximum likelihood estimation of non-Gaussian variables [Hyvärinen 1999]. Wavelet shrinkage was used for example by [Olivo-Marin 2002] to denoise fluorescence images and count spots, or in [Zhang 2007, Delpretti 2008, Luisier 2010] using variance-stabilizing transforms prior to image decomposition and wavelet shrinkage.
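The pipeline mentioned above (variance stabilization followed by wavelet shrinkage) can be sketched in a few lines. This is a minimal illustration, assuming Poisson (photon-limited) noise stabilized by the Anscombe transform and a one-level Haar decomposition; it is not the method of any cited reference, and all names are illustrative:

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: maps Poisson counts to approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(y):
    """Simple algebraic inverse (bias-corrected inverses are used in practice)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def haar_shrink(signal, lam):
    """One-level Haar decomposition + soft-thresholding of the detail coefficients."""
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - lam, 0.0)  # shrinkage
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([10.0, 40.0], 32)           # piecewise-constant intensity profile
noisy = rng.poisson(clean).astype(float)      # photon-limited acquisition
denoised = inv_anscombe(haar_shrink(anscombe(noisy), lam=1.0))
```

After the forward transform the noise is approximately Gaussian with unit variance, so a fixed threshold can be applied uniformly to the detail coefficients before inverting the transform.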

I. INTRODUCTION
Atrial fibrillation (AF) is the most common sustained arrhythmia encountered in clinical practice. Held responsible for up to 25% of strokes, this cardiac condition is considered the last great frontier of cardiac electrophysiology, as it continues to puzzle cardiologists [1]. In order to better characterize this arrhythmia, scientists are interested in analyzing the pattern of AF noninvasively by extracting the f-waves of atrial activity (AA) from surface electrocardiogram (ECG) recordings [2]. The main classical cardiac signal processing tools for noninvasive AA signal extraction are 1) the average beat subtraction (ABS) technique [3], and 2) blind source separation (BSS) [4], [5]. To provide adequate performance, these techniques require records of sufficient length. Other techniques, like interpolation, have been adapted to AA extraction [6]. The present work aims at overcoming the limitations of ABS and BSS. We intend to extract the AA and separate it from the dominating ventricular activity (VA, QRST complex) using compressed sensing (CS). This method takes advantage of the sparsity of the fibrillatory signal in the frequency domain. To our knowledge, this is the first time CS is applied to noninvasive AA extraction. Our second contribution consists in introducing a block sampling scheme as opposed

practice as it does not produce bias. Therefore, the sparsity constraint is written as an ℓ_0-norm regularizer instead of an ℓ_1 norm.
The sparsity parameters {λ_i}_{1≤i≤N_s} can be implicitly interpreted as thresholds in equation (5.12). In addition, the choice of the thresholds {λ_i}_{1≤i≤N_s} is a vital point in the source separation process. The DecGMCA algorithm utilizes an adapted thresholding strategy. The initial thresholds are set to high values so that the most significant features of the sources can be extracted, which facilitates source separation. In addition, the high thresholds prevent the algorithm from being trapped in local optima. Once the most discriminant features of the sources have been extracted under these high thresholds, the sources are separated with high probability. Then, to retrieve more detailed information about the sources, the thresholds decrease towards their final values. The final thresholds can be chosen as τσ_i, with σ_i the standard deviation of the noise of the i-th source. In practice, the median absolute deviation (MAD) is a robust empirical estimator for Gaussian noise, and τ typically ranges between 2 and 4. In practice, there are many ways to choose the decreasing function of the threshold. We present our strategy of decreasing the threshold, called the "percentage decreasing threshold", which is the most robust according to our tests. At iteration i, for the ordered absolute wavelet coefficient set of the j-th source |α_j|, the current threshold is selected as the p (i)
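A simplified sketch of the MAD noise estimate and a decreasing-threshold schedule is given below. The linear decay used here is a stand-in for the percentage-decreasing rule, whose exact form is truncated in this excerpt; all names are illustrative:

```python
import numpy as np

def mad_sigma(coeffs):
    """Robust noise standard deviation estimate via the median absolute deviation."""
    med = np.median(coeffs)
    return np.median(np.abs(coeffs - med)) / 0.6745   # MAD -> sigma for Gaussian noise

def decreasing_thresholds(coeffs, n_iter, tau=3.0):
    """Hypothetical schedule: start near the largest coefficient magnitude
    (only the most significant features pass) and decay to the final tau * sigma."""
    sigma = mad_sigma(coeffs)
    t_max = np.max(np.abs(coeffs))
    t_min = tau * sigma
    return np.linspace(t_max, t_min, n_iter)

rng = np.random.default_rng(2)
alpha = rng.standard_normal(1000)            # mostly noise coefficients...
alpha[:5] = [50, -40, 30, -20, 10]           # ...plus a few significant features
ths = decreasing_thresholds(alpha, n_iter=10)
```

Early iterations keep only the dominant coefficients; as the threshold approaches τσ, progressively finer details of the sources are admitted.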

In this paper, we study analytically the best achievable RD performance of any single-source and distributed CS scheme under the constraint of high-rate quantization, providing simulation results that perfectly match the theoretical analysis. In particular, we provide the following contributions. First, we derive the asymptotic (in the rate and in the number of measurements) distribution of the measurement vector. Even if the analysis is asymptotic, we show that the convergence to a Gaussian distribution occurs for parameter values of practical interest. Moreover, we provide an analytical expression for the rate gain obtained by exploiting inter-source correlation at the decoder. Second, we provide a closed-form expression for the average reconstruction error of the oracle receiver, improving on existing results in the literature, which consist only of bounds that are hard to compare with the results of numerical simulations [11], [12]. The proof relies on recent results in random matrix theory [13]. Third, we provide a closed-form expression for the rate gain due to joint reconstruction from the measurements of multiple sources. We compare the theoretical results both with the ideal oracle receiver and with a practical algorithm [9], showing that the penalty with respect to the ideal receiver is due to the reconstruction algorithm's lack of knowledge of the sparsity support. Despite this penalty, the theoretically derived rate gain matches that obtained by applying distributed source coding followed by joint reconstruction in a practical reconstruction scheme. With respect to [7], [8], we use information-theoretic tools to provide an analytical characterization of the performance of CS and DCS for a given number of measurements and set of system parameters.

It is well known that some random matrices generated by certain probabilistic processes, such as Gaussian or Bernoulli processes [2], [3], guarantee successful signal recovery with high probability. In terms of complexity, these dense matrices can be reduced to sparser forms without obvious performance loss [4], [5], [6]. However, they are still impractical due to their randomness. In this sense, it is of practical importance to explore deterministic sensing matrices with both favorable performance and feasible structure. Recently, several deterministic sensing matrices have been proposed based on certain families of codes, such as BCH codes [7], [8], Reed-Solomon codes [9], Reed-Muller codes [10], [11], [12], LDPC codes [13], [14], etc. These codes are exploited based on the fact that coding theory attempts to maximize the distance between two distinct codewords, while in some sense this rule is also preferred for compressed sensing, which tends to minimize the correlation between distinct columns of a sensing matrix [13], [8]. From the viewpoint of applications, it is interesting to know which kind of deterministic matrix performs best. Unfortunately, to the best of our knowledge, there is still no comprehensive theoretical work covering this problem.
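The column-correlation criterion mentioned above is usually quantified by the mutual coherence of the sensing matrix, which can be computed directly; the sketch below (plain NumPy, illustrative) evaluates it for a random Gaussian matrix:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    G = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    gram = np.abs(G.T @ G)              # pairwise column correlations
    np.fill_diagonal(gram, 0.0)         # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(3)
A = rng.standard_normal((64, 256))      # a 64 x 256 random sensing matrix
mu = mutual_coherence(A)                # smaller is better for sparse recovery
```

Deterministic constructions from codes aim to drive this quantity toward the lower (Welch) bound by design rather than by chance.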

3. Compressed Sensing Approach
Linear methods are easy to use, and the variance of each estimator is rather direct to compute. Furthermore, these methods generally rely on very common tools with efficient implementations. However, they are not the most powerful, and including non-Gaussian priors is difficult, especially when such priors imply non-linear terms. Obviously, using better-adapted priors is required for building a more robust estimator. In this paper, we adopt a compressed sensing approach in order to construct an estimator that exploits the sparsity of the signal that we aim to reconstruct. The estimator is modelled as an optimisation problem that is solved using recent developments in convex analysis and splitting methods.

4. NUMERICAL RESULTS
In this section, we show the validity of the results of Theorem 1 by comparing the equations to the results of simulations. Here and in the following sections, the signal length is N = 512 with sparsity K = 16, and M = 128 measurements are taken. The nonzero elements of the signal are distributed as N(0, 1). The sparsity basis Ψ is the DCT matrix. The sensing matrix is composed of i.i.d. elements distributed as zero-mean Gaussian with variance 1/M. The noise vector is Gaussian with zero mean, while the covariance matrix depends on the specific test and will be discussed later. The reconstructed signal x̂ is obtained using the oracle estimator. A different realization of the signal, noise, and sensing matrix is drawn for each trial, and the reconstruction error, evaluated as E[‖x̂ − x‖₂²], is averaged over 1,000 trials.
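The setup described above can be reproduced in a few lines. The sketch below uses the noiseless case, for which the oracle estimator (least squares restricted to the true support) is exact; it is an illustration of the experimental setup, not the authors' code:

```python
import numpy as np
from scipy.fft import idct

# N = 512, K = 16, M = 128; DCT sparsity basis; Gaussian sensing matrix
# with variance 1/M; oracle estimator with known support.
rng = np.random.default_rng(4)
N, K, M = 512, 16, 128

support = rng.choice(N, size=K, replace=False)
theta = np.zeros(N)
theta[support] = rng.standard_normal(K)          # nonzeros distributed as N(0, 1)

Psi = idct(np.eye(N), norm="ortho", axis=0)      # orthonormal DCT basis (columns)
x = Psi @ theta                                  # signal sparse in the DCT domain
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian, variance 1/M
y = Phi @ x                                      # noiseless measurements here

# Oracle estimator: least squares restricted to the known support.
A = Phi @ Psi[:, support]
theta_hat = np.zeros(N)
theta_hat[support] = np.linalg.lstsq(A, y, rcond=None)[0]
x_hat = Psi @ theta_hat
```

With noise added to y, averaging ‖x̂ − x‖₂² over repeated draws reproduces the Monte Carlo error quoted in the text.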

nates, the basis matrix can be computed offline. We solve the problem by means of the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [23], an iterative algorithm where each iteration involves a shrinkage step. We chose this algorithm because of its speed. In a Matlab implementation, the proposed technique takes a few seconds (usually less than 0.25 seconds) to reconstruct the vector of coefficients for one voxel on an Intel Core 2 Duo CPU at 2.8 GHz. Compared to [13], this technique shortens the reconstruction time significantly, by a factor of about 80. This is important when dealing with thousands of voxels (as is usually the case). We evaluate ζ in such a way that the first-order function of the GL basis fits a typical Gaussian-shaped signal P(r) = exp(−r²
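A minimal sketch of the FISTA iteration (gradient step, shrinkage, momentum) for a generic ℓ1-regularized least-squares problem is shown below; the problem instance is illustrative, not the voxel-reconstruction problem of the paper:

```python
import numpy as np

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (Beck & Teboulle).
    Each iteration is a gradient step followed by soft-thresholding (the
    shrinkage step), plus the momentum extrapolation that accelerates ISTA."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = z - (A.T @ (A @ z - y)) / L                               # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)     # shrinkage
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)                 # momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [2.0, -1.5, 1.0]             # a 3-sparse target
x_hat = fista(A, A @ x_true, lam=0.01)
```

The momentum step is what distinguishes FISTA from plain ISTA and gives its O(1/k²) convergence rate, which is the speed advantage cited above.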

Fig. 1. Structure of a Fourier-Domain OCT system.
III. COMPRESSED SENSING BASICS
The Shannon-Nyquist theorem states that, under uniform sampling, it is necessary to acquire data at a rate of at least twice the bandwidth. This condition remains valid from a theoretical point of view but is often violated in practice. CS is a perfect example of this reality, as demonstrated by Candès and Romberg [5], [7]. It thereby becomes possible to reconstruct a signal (respectively, an image) with little loss when there exists a basis Ψ in which the signal (respectively, the image) is very sparse and a basis Φ, in which the sampling is performed, that is mutually incoherent with Ψ. It follows that a signal of size n can be reconstructed from m measurements (m ≪ n) if the following conditions are respected:
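The excerpt cuts off before listing the conditions. For reference, the standard incoherent-sampling result of Candès and Romberg states that an S-sparse signal can be recovered with high probability from m uniformly random samples in the Φ domain provided

```latex
m \;\ge\; C\, \mu^2(\Phi, \Psi)\, S \log n,
\qquad
\mu(\Phi, \Psi) \;=\; \sqrt{n}\, \max_{k,\, j} \left| \langle \varphi_k, \psi_j \rangle \right|,
```

where μ(Φ, Ψ) ∈ [1, √n] is the mutual coherence of the two bases and C is a small constant; the fewer correlated the bases, the fewer measurements are needed.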

Index Terms— Instance optimality, null space property, restricted isometry property, union-of-subspaces
1. INTRODUCTION
Traditional results in sparse recovery relate certain properties of a dimensionality-reducing matrix M, considered as an encoder, to performance guarantees for certain explicit or implicit decoders ∆. A popular family of performance guarantees is coined instance optimality: a decoder is instance optimal at order k with respect to M and two norms ‖·‖_X and
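The sentence above is truncated in this excerpt. For reference, the standard definition reads: a decoder ∆ is instance optimal at order k with respect to M and the norms ‖·‖_X, ‖·‖_Y if

```latex
\| x - \Delta(Mx) \|_X \;\le\; C\, \sigma_k(x)_Y
\quad \text{for all } x,
\qquad
\sigma_k(x)_Y \;:=\; \min_{\| z \|_0 \le k} \| x - z \|_Y,
```

where σ_k(x)_Y is the best k-term approximation error of x, so the decoder degrades gracefully on signals that are only approximately sparse.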

Historically, speech MRI has used spiral sampling schemes to increase the acquisition rate [1]. This sampling scheme offers good image quality given the high acquisition rate. However, it may generate strong undesired artifacts, such as unrealistic tongue-tip and lip elongations, that disturb the estimation of articulator contours from these images. To prevent these artifacts, the proposed framework uses a Cartesian-based sampling scheme. This choice is also motivated by its compatibility with compressed sensing [4] and with homodyne reconstruction [5]. These mathematical frameworks allow images to be recovered from partial Fourier information. This paper presents a method to simultaneously integrate several MRI acceleration techniques for articulatory data acquisition. Sec. II details the theoretical background of the compressed sensing and homodyne reconstruction applied in this paper, as well as the choice of the sampling scheme. Simulations and experiments are presented in Sec. III, and the results in Sec. ??. The method for exploiting articulatory data is presented in Sec. IV.

In many applications, the sampling strategy requires acquiring data in the form of blocks of measurements (see Fig. 1(b) for block-structured sampling) instead of isolated measurements (see Fig. 1(a)). For instance, in medical echography, images are sampled along lines in the space domain, while in magnetic resonance imaging (MRI), acquiring data along radial lines or spiral trajectories is a popular sampling strategy. In compressed sensing (CS), various theoretical conditions have been proposed to guarantee the exact reconstruction of a sparse vector from a small number of isolated measurements drawn at random; see [1], [2], [3], and [4] for a detailed review of the most recent results on this topic.
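The two acquisition regimes of Fig. 1 can be mimicked with simple binary masks over a discrete 2-D acquisition domain. The sketch below (illustrative, NumPy) draws the same measurement budget either as isolated points or as full horizontal lines:

```python
import numpy as np

def isolated_mask(shape, m, rng):
    """Isolated measurements: m acquisition locations drawn uniformly at random."""
    mask = np.zeros(shape, dtype=bool)
    idx = rng.choice(mask.size, size=m, replace=False)
    mask.flat[idx] = True
    return mask

def block_mask(shape, n_lines, rng):
    """Block-structured sampling: whole horizontal lines, as in echography or MRI."""
    mask = np.zeros(shape, dtype=bool)
    rows = rng.choice(shape[0], size=n_lines, replace=False)
    mask[rows, :] = True
    return mask

rng = np.random.default_rng(6)
iso = isolated_mask((64, 64), m=512, rng=rng)
blk = block_mask((64, 64), n_lines=8, rng=rng)   # same budget: 8 * 64 = 512 samples
```

Classical CS guarantees cover the first mask; the point of the discussion above is that hardware often only permits the second.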

One may show that rank(A) = 3 iff u_i, v_i are linearly independent, i = 1, 2, 3. Since it is clear that rank(A_n) ≤ 2 by construction and lim_{n→∞} A_n = A, the rank-3 tensor A has no best rank-2 approximation. Such a tensor is said to have border rank 2.
This phenomenon, where a tensor fails to have a best rank-r approximation, is much more widespread than one might imagine: it occurs over a wide range of dimensions, orders, and ranks, and it happens regardless of the choice of norm (or even Bregman divergence) used. These counterexamples occur with positive probability, and in some cases with certainty (in R^{2×2×2} and C^{2×2×2}, no rank-3 tensor has a best rank-2 approximation). We refer the reader to [12] for further details.
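For concreteness, the construction alluded to above can be written, following de Silva and Lim [12], as

```latex
A \;=\; u_1 \otimes u_2 \otimes v_3 \;+\; u_1 \otimes v_2 \otimes u_3 \;+\; v_1 \otimes u_2 \otimes u_3,
\qquad
A_n \;=\; n \left( u_1 + \tfrac{1}{n} v_1 \right) \otimes \left( u_2 + \tfrac{1}{n} v_2 \right) \otimes \left( u_3 + \tfrac{1}{n} v_3 \right) \;-\; n\, u_1 \otimes u_2 \otimes u_3.
```

Each A_n is a difference of two rank-1 terms, hence of rank at most 2, while expanding the product shows A_n = A + O(1/n), so A_n → A as n → ∞.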

size of 30000 × 30000 pixels. Dealing with such a volume of data has important consequences for embedded resources, which require more memory, more computing capacity, and therefore more powerful electrical power sources.
In a classical satellite imaging system, the observed image is sampled at the Shannon frequency to give N pixels and then compressed by a coding algorithm, such as the JPEG standards. The purpose of the coding step is to represent the image with a limited number K ≪ N of coefficients, to match the low capacity of the on-board mass storage. However, using an expensive scheme to sample the whole image only to finally retain K coefficients may appear wasteful. Many on-board resources could be saved if the compressed coefficients were acquired directly at the sensor.

k-space trajectories and various under-sampling factors are presented in Section IV. In Section V, we propose a quantitative comparison of the performance of various multiscale decompositions (orthogonal Mallat WT, Meyer WT, B-spline WT, undecimated bi-orthogonal WT, and fast curvelets) for combined CS-PI MR image reconstruction. MR image quality is compared in terms of peak SNR (pSNR) and structural similarity (SSIM) metrics across all tested transforms and for various input SNRs and under-sampling schemes. We chose these multiscale decompositions to represent previous choices made in the MRI reconstruction field. Three main categories can be distinguished in the literature: orthogonal wavelet bases (e.g., Daubechies wavelets), tight frames (e.g., ridgelets), and representations that sparsely encode geometrical properties such as curvature (e.g., curvelets). Note that the fast curvelet transform has never been used for an MRI reconstruction problem. In short, in this paper we show that tight frames provide better performance in terms of image quality, especially at low input SNR. Conclusions are drawn in Section VI.

In this paper, a family of parsimonious Gaussian process models is reviewed, and five additional models are proposed to provide more flexibility to the classifier in the context of hyperspectral image analysis. These models allow one to build, from a finite set of training samples, a Gaussian mixture model in the kernel feature space in which each covariance matrix is free. They assume that the data of each class live in a specific subspace of the kernel feature space. This assumption reduces the number of parameters needed to estimate the decision function and makes the numerical inversion tractable. A closed-form expression is given for the optimal parameters of the decision function. This work extends the models initially proposed in [18], [19]. In particular, the common-noise assumption is relaxed, leading to a new set of models in which the noise level is specific to each class. Furthermore, a closed-form expression for the estimation of the parameters enables fast estimation of the hyperparameters during the cross-validation step. The contributions of this letter are threefold: 1) the definition of new parsimonious models; 2) a comparison, in terms of classification accuracy and processing time, of the proposed models with state-of-the-art classifiers for hyperspectral images; and 3) a fast cross-validation strategy for learning the optimal hyperparameters.

• Diffusion MRI measures the movement of water molecules and gives information about white matter microstructure.
• The acquisition sequences rely on magnetic field gradients.
• While pulsed gradient waveforms are the most widely used because of their simplicity, it has been shown that oscillating arbitrary waveforms provide better estimation of microstructure parameters (1).