Gaussian geometry and tools for compressed sensing

works; see [HTF09, BvdG11] for a review and references therein. More precisely, some of the most popular estimators in high-dimensional statistics remain the lasso [Tib96] and the Dantzig selector [CT07]. Considerable interest has been devoted to estimation, prediction and support recovery problems using these estimators. This body of work has been built around sufficient conditions on the design matrix X (such as the Restricted Isometry Property [CT06], Restricted Eigenvalue [BRT09], Compatibility [VdGB09, BvdG11], Universal Distortion [dC13, BLPR11], H_{s,1} [JN11], or Irrepresentability [Fuc05], to name but a few) that capture the spectral properties of the design matrix on the set of (almost) sparse vectors. Using one of these properties, one can exploit the Karush-Kuhn-Tucker conditions to obtain oracle inequalities or a control on the support recovery error.
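As a concrete illustration of the two estimators named above, here is a minimal sketch using the generic convex solver cvxpy on a synthetic Gaussian design; the problem sizes and the regularization level lam are arbitrary choices for the example, not values taken from any of the cited works.

```python
# Minimal sketch of the lasso and the Dantzig selector on a random Gaussian
# design; data sizes and the regularization level lam are illustrative only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5                          # samples, dimension, sparsity
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.05 * rng.standard_normal(n)
lam = 0.1

# Lasso: least squares plus an l1 penalty.
b = cp.Variable(p)
lasso = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(X @ b - y) + lam * cp.norm(b, 1)))
lasso.solve()

# Dantzig selector: minimize the l1 norm subject to a bound on the
# correlation of the residual with the columns of X.
d = cp.Variable(p)
dantzig = cp.Problem(cp.Minimize(cp.norm(d, 1)),
                     [cp.norm(X.T @ (y - X @ d), 'inf') <= lam])
dantzig.solve()

print("lasso support:  ", np.flatnonzero(np.abs(b.value) > 1e-3))
print("dantzig support:", np.flatnonzero(np.abs(d.value) > 1e-3))
```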

A Compressed Sensing Framework for Biological Microscopy

To improve the SNR, many denoising methods are available that are well suited to piecewise smooth images, such as Non-Local Means (NL-means) [Buades 2005], Total Variation filtering (TV) [Rudin 1992], and non-linear isotropic and anisotropic diffusion [Weickert 1998]. Applying these methods to fluorescence microscopy corrupted with Gaussian noise typically requires a pre-processing step with a variance-stabilizing transform, as done for example in [Boulanger 2008]. A novel anisotropic-type filter was recently proposed in [Wang 2010] for two-photon fluorescence images. There are also methods which decompose the data onto wavelet-type functions (including the more recent ridgelet and curvelet bases) and shrink the coefficients to eliminate noise components [DeVore 1992, Donoho 1995d]. Recently, efficient denoising methods were also developed based on sparsity and redundant representations over learned dictionaries [Elad 2006], denoising the image while simultaneously training a dictionary using the K-SVD algorithm [Aharon 2005], or based on sparse code shrinkage and maximum likelihood estimation of non-Gaussian variables [Hyvärinen 1999]. Wavelet shrinkage was used for example by [Olivo-Marin 2002] to denoise fluorescence images and count spots, or in [Zhang 2007, Delpretti 2008, Luisier 2010] using variance-stabilizing transforms prior to image decomposition and wavelet shrinkage.
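The sketch below illustrates, under assumed parameters, the variance-stabilization plus wavelet-shrinkage pipeline mentioned above: an Anscombe transform to Gaussianize Poisson-like counts, followed by soft thresholding of the detail subbands with PyWavelets. The wavelet, decomposition level and threshold are illustrative and not the settings of any of the cited methods.

```python
# Hedged sketch: Anscombe variance-stabilizing transform + wavelet
# soft-thresholding; parameters are illustrative only.
import numpy as np
import pywt

def anscombe(x):
    """Approximately Gaussianize Poisson-distributed counts."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased for very low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise(image, wavelet="db2", level=3, threshold=1.0):
    stab = anscombe(image)
    coeffs = pywt.wavedec2(stab, wavelet, level=level)
    # Keep the coarse approximation, soft-threshold every detail subband.
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, threshold, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return inverse_anscombe(pywt.waverec2(shrunk, wavelet))

noisy = np.random.default_rng(1).poisson(20.0, size=(128, 128)).astype(float)
clean_estimate = denoise(noisy)
```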

Compressed sensing for the extraction of atrial fibrillation patterns from surface electrocardiograms

I. INTRODUCTION. Atrial fibrillation (AF) is the most common sustained arrhythmia encountered in clinical practice. Held responsible for up to 25% of strokes, this cardiac condition is considered the last great frontier of cardiac electrophysiology, as it continues to puzzle cardiologists [1]. In order to better characterize this arrhythmia, scientists are interested in analyzing the pattern of AF noninvasively by extracting the f-waves of atrial activity (AA) from surface electrocardiogram (ECG) recordings [2]. The main classical cardiac signal processing tools for noninvasive AA signal extraction are 1) the average beat subtraction (ABS) technique [3], and 2) blind source separation (BSS) [4], [5]. To provide adequate performance, these techniques require recordings of sufficient length. Other techniques, such as interpolation, have been adapted to AA extraction [6]. The present work aims to overcome the limitations of ABS and BSS. We intend to extract the AA and separate it from the dominating ventricular activity (VA, QRST complex) using compressed sensing (CS). This method takes advantage of the sparsity of the fibrillatory signal in the frequency domain. To our knowledge, this is the first time CS has been applied to noninvasive AA extraction. Our second contribution consists in introducing a block sampling scheme as opposed
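To illustrate the general CS principle invoked here, and not the paper's actual extraction algorithm, the sketch below recovers a signal that is sparse in the frequency domain from a small subset of time samples using orthogonal matching pursuit over a cosine dictionary; all sizes and the random sampling pattern are arbitrary assumptions.

```python
# Generic illustration of frequency-domain sparsity + few time samples,
# recovered with orthogonal matching pursuit (not this paper's method).
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 64, 3                                  # length, samples, sparsity
t = np.arange(n)
freqs = rng.choice(np.arange(1, n // 2), size=k, replace=False)
signal = sum(np.cos(2 * np.pi * f * t / n) for f in freqs)

rows = np.sort(rng.choice(n, size=m, replace=False))  # observed time instants
dictionary = np.cos(2 * np.pi * np.outer(t, np.arange(n // 2)) / n)
A = dictionary[rows]
A = A / np.linalg.norm(A, axis=0)                     # unit-norm atoms
y = signal[rows]

# OMP: greedily pick the atom most correlated with the residual,
# then refit by least squares on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
print("true frequencies:", sorted(freqs.tolist()), "recovered:", sorted(support))
```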

Multichannel Compressed Sensing and its Application in Radioastronomy

practice, as it does not introduce bias. Therefore, the sparsity constraint is written as an ℓ0-norm regularizer instead of an ℓ1 norm. The sparsity parameters {λ_i}, 1 ≤ i ≤ N_s, can be implicitly interpreted as thresholds in equation (5.12). In addition, the choice of the thresholds {λ_i}, 1 ≤ i ≤ N_s, is a vital point in the source separation process. The DecGMCA algorithm uses an adapted thresholding strategy. The initial thresholds are set to high values so that the most significant features of the sources can be extracted, which facilitates source separation. In addition, the high thresholds prevent the algorithm from being trapped in local optima. Once the most discriminant features of the sources have been extracted under these high thresholds, the sources are separated with high probability. Then, to retrieve more detailed information about the sources, the thresholds decrease towards their final values. The final thresholds can be chosen as τσ_i, with σ_i the noise standard deviation of the i-th source. In practice, the median absolute deviation (MAD) is a robust empirical estimator for Gaussian noise, and τ typically ranges between 2 and 4. There are many ways to choose the decreasing function of the threshold. We present our decreasing-threshold strategy, called the "percentage decreasing threshold", which is the most robust according to our tests. At iteration i, given the ordered absolute wavelet coefficients |α_j| of the j-th source, the current threshold is selected as the p(i)
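The following sketch shows the two thresholding ingredients described above: a MAD-based estimate of the noise level σ_i, a final threshold τσ_i with τ in the 2-4 range, and a simple decreasing schedule from a high starting value towards that final threshold. The exact "percentage decreasing threshold" rule of DecGMCA is not reproduced here; a plain linear decrease stands in for it.

```python
# MAD noise estimate and a decreasing threshold schedule ending at tau*sigma.
# The linear decrease is a stand-in, not DecGMCA's percentage rule.
import numpy as np

def mad_sigma(coefficients):
    """Robust noise estimate via the median absolute deviation
    (Gaussian consistency factor 1.4826 ~ 1/0.6745)."""
    med = np.median(coefficients)
    return 1.4826 * np.median(np.abs(coefficients - med))

def threshold_schedule(coefficients, n_iter, tau=3.0):
    """High initial threshold decreasing linearly to tau * sigma."""
    sigma = mad_sigma(coefficients)
    start = np.max(np.abs(coefficients))
    return np.linspace(start, tau * sigma, n_iter)

rng = np.random.default_rng(3)
alpha = rng.standard_normal(10_000)            # mostly noise...
alpha[:50] += rng.uniform(5, 20, size=50)      # ...plus a few significant coefficients
print(threshold_schedule(alpha, n_iter=10))
```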

Operational Rate-Distortion Performance of Single-source and Distributed Compressed Sensing

In this paper, we analytically study the best achievable RD performance of any single-source and distributed CS scheme under the constraint of high-rate quantization, providing simulation results that perfectly match the theoretical analysis. In particular, we make the following contributions. First, we derive the asymptotic (in the rate and in the number of measurements) distribution of the measurement vector. Even though the analysis is asymptotic, we show that convergence to a Gaussian distribution occurs for parameter values of practical interest. Moreover, we provide an analytical expression for the rate gain obtained by exploiting inter-source correlation at the decoder. Second, we provide a closed-form expression for the average reconstruction error of the oracle receiver, improving on existing results in the literature, which consist only of bounds that are hard to compare with the results of numerical simulations [11], [12]. The proof relies on recent results from random matrix theory [13]. Third, we provide a closed-form expression for the rate gain due to joint reconstruction from the measurements of multiple sources. We compare the theoretical results both with the ideal oracle receiver and with a practical algorithm [9], showing that the penalty with respect to the ideal receiver is due to the reconstruction algorithm's lack of knowledge of the sparsity support. Despite this penalty, the theoretically derived rate gain matches the gain obtained by applying distributed source coding followed by joint reconstruction in a practical reconstruction scheme. With respect to [7], [8], we use information-theoretic tools to provide an analytical characterization of the performance of CS and DCS for a given number of measurements and set of system parameters.

Near-optimal Binary Compressed Sensing Matrix

It is well known that some random matrices generated by certain probabilistic processes, such as Gaussian or Bernoulli processes [2], [3], guarantee successful signal recovery with high probability. In terms of complexity, these dense matrices can be reduced to sparser forms without obvious performance loss [4], [5], [6]. However, they remain impractical because of their randomness. In this sense, it is of practical importance to explore deterministic sensing matrices with both favorable performance and feasible structure. Recently, several deterministic sensing matrices have been proposed based on families of codes, such as BCH codes [7], [8], Reed-Solomon codes [9], Reed-Muller codes [10], [11], [12], and LDPC codes [13], [14]. These codes are exploited based on the fact that coding theory attempts to maximize the distance between distinct codewords, a rule that, in some sense, also suits compressed sensing, which seeks to minimize the correlation between distinct columns of a sensing matrix [13], [8]. From the viewpoint of applications, it is interesting to know which kind of deterministic matrix performs best. Unfortunately, to the best of our knowledge, there is still no comprehensive theoretical work covering this problem.
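The column-correlation criterion mentioned above is usually measured by the mutual coherence of the sensing matrix. The sketch below computes it for random Gaussian and Bernoulli matrices and compares it with the Welch lower bound; the matrix sizes are arbitrary.

```python
# Mutual coherence mu(A): the largest absolute correlation between two
# distinct (normalized) columns; the matrices below are illustrative.
import numpy as np

def mutual_coherence(A):
    cols = A / np.linalg.norm(A, axis=0)      # normalize each column
    gram = np.abs(cols.T @ cols)
    np.fill_diagonal(gram, 0.0)               # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(4)
m, n = 64, 256
gaussian = rng.standard_normal((m, n))
bernoulli = rng.choice([-1.0, 1.0], size=(m, n))
print("Gaussian :", mutual_coherence(gaussian))
print("Bernoulli:", mutual_coherence(bernoulli))
# Welch bound: the smallest coherence any m x n matrix can achieve.
print("Welch bound:", np.sqrt((n - m) / (m * (n - 1))))
```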

A Compressed Sensing Approach to 3D Weak Lensing

3. Compressed Sensing Approach. Linear methods are easy to use, and the variance of each estimator is fairly straightforward to compute. Furthermore, these methods generally rely on very common tools with efficient implementations. However, they are not the most powerful, and including non-Gaussian priors is difficult, especially when such priors imply non-linear terms. Clearly, better-adapted priors are required to build a more robust estimator. In this paper, we adopt a compressed sensing approach in order to construct an estimator that exploits the sparsity of the signal we aim to reconstruct. The estimator is modelled as an optimisation problem that is solved using recent developments from convex analysis and splitting methods.

Block-constrained compressed sensing

and 4. Note that each chapter is self-contained. Chapter 1. In this chapter, we propose a first analysis of block sampling CS strategies. We focus on the exact recovery of an s-sparse vector x ∈ C^n supported on S. We exhibit a quantity γ(S), which plays the role of the well-known coherence. We then show, through a series of examples including Gaussian measurements, isolated measurements, and blocks in time-frequency bases, that the main result is sharp, in the sense that the minimum number of blocks necessary to reconstruct sparse signals cannot be improved, up to a multiplicative logarithmic factor. We also highlight the limitations of block sampling CS strategies. In particular, in the case of a 2D separable transform, we show that 2s blocks taken as horizontal lines are needed to identify any s-sparse vector. This theoretical result seems at odds with the good reconstruction results observed in practice with block sampling strategies, for instance in magnetic resonance imaging, radio-interferometry or ultrasound imaging. This last observation suggests that a key feature is missing from this first study to fully understand the potential of block sampling in applications. A very promising perspective is therefore to couple the ideas of structured sparsity with block sampling, an issue tackled in the following chapter.

Exact Performance Analysis of the Oracle Receiver for Compressed Sensing Reconstruction

4. NUMERICAL RESULTS. In this section, we show the validity of the results of Theorem 1 by comparing the equations to the results of simulations. Here and in the following sections, the signal length is N = 512 with sparsity K = 16, and M = 128 measurements are taken. The nonzero elements of the signal are distributed as N(0, 1). The sparsity basis Ψ is the DCT matrix. The sensing matrix is composed of i.i.d. elements distributed as zero-mean Gaussian with variance 1/M. The noise vector is Gaussian with zero mean, while the covariance matrix depends on the specific test and will be discussed later. The reconstructed signal x̂ is obtained using the oracle estimator. A different realization of the signal, noise and sensing matrix is drawn for each trial, and the reconstruction error, evaluated as E[‖x̂ − x‖₂²], is averaged over 1,000 trials.
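The sketch below reproduces this Monte Carlo setup under stated assumptions: N = 512, K = 16, M = 128, a DCT sparsity basis, an i.i.d. Gaussian sensing matrix with variance 1/M, and the oracle estimator implemented as least squares restricted to the known support. White noise with an assumed standard deviation stands in for the covariance structures discussed in the paper.

```python
# Monte Carlo sketch of the oracle receiver experiment described above.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(5)
N, K, M, sigma_noise, n_trials = 512, 16, 128, 0.01, 1000
Psi = idct(np.eye(N), axis=0, norm="ortho")      # DCT synthesis basis

errors = []
for _ in range(n_trials):
    support = rng.choice(N, size=K, replace=False)
    theta = np.zeros(N)
    theta[support] = rng.standard_normal(K)      # nonzeros ~ N(0, 1)
    x = Psi @ theta
    A = rng.standard_normal((M, N)) / np.sqrt(M) # Gaussian sensing, variance 1/M
    y = A @ x + sigma_noise * rng.standard_normal(M)
    # Oracle receiver: solve for the K coefficients on the known support only.
    B = A @ Psi[:, support]
    theta_hat, *_ = np.linalg.lstsq(B, y, rcond=None)
    x_hat = Psi[:, support] @ theta_hat
    errors.append(np.sum((x_hat - x) ** 2))
print("average reconstruction error:", np.mean(errors))
```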

Spherical Polar Fourier EAP and ODF Reconstruction via Compressed Sensing in Diffusion MRI

coordinates, the basis matrix can be computed offline. We solve the problem by means of a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [23], an iterative algorithm in which each iteration involves a shrinkage step. We chose this algorithm because of its speed. In a Matlab implementation, the proposed technique takes a few seconds (usually less than 0.25 seconds) to reconstruct the vector of coefficients for one voxel on an Intel Core 2 Duo CPU at 2.8 GHz. Compared to [13], this technique shortens the reconstruction time by a factor of about 80. This is important when dealing with thousands of voxels, which is usually the case. We evaluate ζ in such a way that the first-order function of the GL basis fits a typical Gaussian-based signal P(r) = exp(−r²
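Below is a compact FISTA iteration for an ℓ1-regularized least-squares problem of the kind solved per voxel above; the matrix A, data y and regularization weight lam are placeholders, not the actual Spherical Polar Fourier basis matrix or the paper's settings. The momentum step is what distinguishes FISTA from plain iterative shrinkage.

```python
# Generic FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (placeholders only).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)  # shrinkage step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)               # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((64, 100))
coef_true = np.zeros(100); coef_true[:5] = 1.0
y = A @ coef_true + 0.01 * rng.standard_normal(64)
coef = fista(A, y, lam=0.1)
```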

Shearlet Transform: a Good Candidate for Compressed Sensing in Optical Coherence Tomography

Fig. 1. Structure of a Fourier-domain OCT system. III. COMPRESSED SENSING BASICS. The Shannon-Nyquist theorem states that, under uniform sampling, data must be acquired at a rate of at least twice the bandwidth. This condition remains valid from a theoretical point of view, but it is often violated in practice. CS is a perfect example of this, as demonstrated by Candès and Romberg [5], [7]. It thereby becomes possible to reconstruct a signal (respectively, an image) with little loss when there exists a basis Ψ in which the signal is very sparse and a basis Φ, in which the sampling is performed, that is mutually incoherent with Ψ. As a result, a signal of size n can be reconstructed from m measurements (m ≪ n) if the following conditions are respected:
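The conditions themselves are not included in this excerpt. As an illustration of the incoherence requirement between Φ and Ψ, the sketch below computes μ(Φ, Ψ) = √n · max |⟨φ_i, ψ_j⟩| for the standard spike/DCT pair, which is close to the minimal value; this is a generic example, not the specific bases used in the OCT setting above.

```python
# Mutual coherence between a sampling basis Phi and a sparsity basis Psi;
# it ranges from 1 (maximally incoherent) to sqrt(n).
import numpy as np
from scipy.fft import dct

def mutual_coherence(Phi, Psi):
    n = Phi.shape[0]
    return np.sqrt(n) * np.max(np.abs(Phi.T @ Psi))

n = 256
Phi = np.eye(n)                              # spike (pointwise sampling) basis
Psi = dct(np.eye(n), axis=0, norm="ortho")   # orthonormal DCT basis
print(mutual_coherence(Phi, Psi))            # about sqrt(2), far below sqrt(n)
```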

Compressed Sensing and Best Approximation from Unions of Subspaces: Beyond Dictionaries

Index Terms— Instance optimality, null space property, restricted isometry property, union of subspaces. 1. INTRODUCTION. Traditional results in sparse recovery relate certain properties of a dimensionality-reducing matrix M, considered as an encoder, to performance guarantees of certain explicit or implicit decoders Δ. A popular family of performance guarantees is coined instance optimality: a decoder is instance optimal at order k with respect to M and two norms ‖·‖_X and

High spatiotemporal cineMRI films using compressed sensing for acquiring articulatory data

Historically, speech MRI has used spiral sampling schemes to increase the acquisition rate [1]. This sampling scheme offers good image quality given the high acquisition rate. However, it may generate strong undesired artifacts, such as unrealistic tongue-tip and lip elongations, that disturb the estimation of articulator contours from these images. To prevent these artifacts, the proposed framework uses a Cartesian-based sampling scheme. The choice of a Cartesian-based sampling scheme is also motivated by its compatibility with compressed sensing [4] and with homodyne reconstruction [5]. These mathematical frameworks allow images to be recovered from partial Fourier information. This paper presents a method that simultaneously integrates several MRI acceleration techniques for articulatory data acquisition. Sec. II details the theoretical background of the compressed sensing and homodyne reconstruction applied in this paper, as well as the choice of the sampling scheme. Simulations and experiments are presented in Sec. III, and the results in Sec. ??. The method for exploiting the articulatory data is presented in Sec. IV.
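As a generic sketch of the kind of Cartesian undersampling that compressed sensing and homodyne reconstruction operate on, the code below builds a mask with a fully sampled central band of phase-encode lines plus randomly chosen outer lines. The acceleration factor and center fraction are illustrative assumptions, not the acquisition settings used in the paper.

```python
# Generic Cartesian under-sampling mask for CS MRI (illustrative parameters).
import numpy as np

def cartesian_mask(n_lines, n_readout, accel=4, center_fraction=0.08, seed=0):
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_lines, n_readout), dtype=bool)
    n_center = int(round(center_fraction * n_lines))
    center = slice((n_lines - n_center) // 2, (n_lines + n_center) // 2)
    mask[center, :] = True                         # fully sampled k-space center
    n_keep = n_lines // accel - n_center           # remaining line budget
    outer = np.setdiff1d(np.arange(n_lines), np.arange(n_lines)[center])
    mask[rng.choice(outer, size=max(n_keep, 0), replace=False), :] = True
    return mask

mask = cartesian_mask(n_lines=256, n_readout=256)
print("sampled fraction:", mask.mean())
```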

Sampling by blocks of measurements in compressed sensing

In many applications, the sampling strategy requires data to be acquired in the form of blocks of measurements (see Fig. 1(b) for block-structured sampling) instead of isolated measurements (see Fig. 1(a)). For instance, in medical echography, images are sampled along lines in the spatial domain, while in magnetic resonance imaging (MRI), acquiring data along radial lines or spiral trajectories is a popular sampling strategy. In compressed sensing (CS), various theoretical conditions have been proposed to guarantee the exact reconstruction of a sparse vector from a small number of randomly drawn isolated measurements; see [1], [2], [3], and [4] for a detailed review of the most recent results on this topic.

Multiarray Signal Processing: Tensor decomposition meets compressed sensing

One may show that rank(A) = 3 iff the u_i, v_i are linearly independent, i = 1, 2, 3. Since it is clear that rank(A_n) ≤ 2 by construction and lim_{n→∞} A_n = A, the rank-3 tensor A has no best rank-2 approximation. Such a tensor is said to have border rank 2. This phenomenon, where a tensor fails to have a best rank-r approximation, is much more widespread than one might imagine, occurring over a wide range of dimensions, orders, and ranks, and it happens regardless of the choice of norm (or even Bregman divergence) used. These counterexamples occur with positive probability and in some cases with certainty (in R^{2×2×2} and C^{2×2×2}, no tensor of rank 3 has a best rank-2 approximation). We refer the reader to [12] for further details.
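For readers unfamiliar with this phenomenon, the classical construction (due to De Silva and Lim) is sketched below; the notation is illustrative and may not match the specific A_n defined earlier in the excerpted text. Each A_n has rank at most 2, while the limit A has rank 3 whenever the pairs (u_1, u_2), (v_1, v_2), (w_1, w_2) are each linearly independent, so A has border rank 2.

```latex
% Classical border-rank construction (De Silva & Lim); notation may differ
% from the specific A_n referenced in the excerpt above.
\[
  A \;=\; u_1 \otimes v_1 \otimes w_2 \;+\; u_1 \otimes v_2 \otimes w_1
        \;+\; u_2 \otimes v_1 \otimes w_1 ,
\]
\[
  A_n \;=\; n\left(u_1 + \tfrac{1}{n}u_2\right) \otimes
            \left(v_1 + \tfrac{1}{n}v_2\right) \otimes
            \left(w_1 + \tfrac{1}{n}w_2\right)
        \;-\; n\, u_1 \otimes v_1 \otimes w_1 ,
  \qquad \lim_{n\to\infty} A_n = A .
\]
```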

A satellite imaging chain based on the Compressed Sensing technique

size of 30000 × 30000 pixels. Dealing with such a volume of data has important consequences for embedded resources, which must provide more memory, more computing capacity and therefore more powerful electrical power sources. In a classical satellite imaging system, the observed image is sampled at the Shannon frequency to give N pixels and then compressed by a coding algorithm such as the JPEG standards. The purpose of the coding step is to represent the image with a limited number K ≪ N of coefficients to match the low capacity of the on-board mass storage. However, using an expensive scheme to sample the whole image and finally retain only K coefficients may appear wasteful. Many on-board resources could be saved if the compressed coefficients were acquired directly from the sensor.

Parsimonious Gaussian Process Models for the Classification of Multivariate Remote Sensing Images

For the other sub-models, p is a fixed parameter given by the user. 3. EXPERIMENTAL RESULTS. In this section, results obtained on one real data set are presented. The data set is the University Area of Pavia, Italy, acquired with the ROSIS-03 sensor. The image has 103 spectral bands (d = 103) and is 610 × 340 pixels; see Figure 1.(a). Nine classes have been defined by a photo-interpreter, for a total of 42,776 referenced pixels; see Table 2.(a). Fifty pixels per class have been randomly selected from the samples for the training set, and the remaining pixels have been used for validation. The process has been repeated 50 times; each time, a new training set has been generated and the variables have been scaled between -1 and 1. The mean results in terms of overall accuracy (percentage of correctly classified pixels) are reported.
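The evaluation protocol just described can be sketched as follows; a generic classifier (here an SVM) stands in for the parsimonious Gaussian process model, and the feature matrix X (pixels by bands) and label vector y are assumed to be given.

```python
# Sketch of the protocol: 50 training pixels per class, features scaled to
# [-1, 1], 50 repetitions, mean overall accuracy. The SVM is only a stand-in.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate(X, y, n_train_per_class=50, n_repeats=50, seed=0):
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(n_repeats):
        train_idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), size=n_train_per_class, replace=False)
            for c in np.unique(y)
        ])
        test_idx = np.setdiff1d(np.arange(len(y)), train_idx)
        scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X[train_idx])
        clf = SVC().fit(scaler.transform(X[train_idx]), y[train_idx])
        pred = clf.predict(scaler.transform(X[test_idx]))
        accuracies.append(accuracy_score(y[test_idx], pred))
    return float(np.mean(accuracies))         # mean overall accuracy
```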

Analysis vs Synthesis-based Regularization for combined Compressed Sensing and Parallel MRI Reconstruction at 7 Tesla

k-space trajectories and various under-sampling factors are presented in Section IV. In Section V, we propose a quantitative comparison of the performance of various multiscale decompositions (orthogonal Mallat WT, Meyer WT, B-spline WT, undecimated bi-orthogonal WT and fast curvelets) for combined CS-PI MR image reconstruction. MR image quality is compared in terms of peak SNR (pSNR) and structural similarity (SSIM) metrics across all tested transforms and for various input SNRs and under-sampling schemes. We chose these multiscale decompositions to represent previous choices made in the MRI reconstruction field. Three main categories can be distinguished in the literature: orthogonal wavelet bases (e.g., Daubechies wavelets), tight frames (e.g., ridgelets), and representations that sparsely encode geometrical properties such as curvature (e.g., curvelets). Note that the fast curvelet transform has never been used for an MRI reconstruction problem. In short, in this paper we show that tight frames provide better performance in terms of image quality, especially at low input SNR. Conclusions are drawn in Section VI.
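For reference, the two image-quality metrics mentioned above are typically computed as follows; scikit-image stands in here for whatever implementation the authors used, and the images are placeholders.

```python
# pSNR and SSIM on a placeholder reference/reconstruction pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
reference = rng.random((128, 128))                    # placeholder ground truth
reconstruction = reference + 0.05 * rng.standard_normal((128, 128))

psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
ssim = structural_similarity(reference, reconstruction, data_range=1.0)
print(f"pSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```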

Parsimonious Gaussian process models for the classification of hyperspectral remote sensing images

In this paper, a family of parsimonious Gaussian process models is reviewed and 5 additional models are proposed to provide more flexibility to the classifier in the context of hyperspectral image analysis. These models allow one to build, from a finite set of training samples, a Gaussian mixture model in the kernel feature space in which each covariance matrix is free. They assume that the data of each class live in a specific subspace of the kernel feature space. This assumption reduces the number of parameters needed to estimate the decision function and makes the numerical inversion tractable. A closed-form expression is given for the optimal parameters of the decision function. This work extends the models initially proposed in [18], [19]. In particular, the common-noise assumption is relaxed, leading to a new set of models for which the noise level is specific to each class. Furthermore, a closed-form expression for the estimation of the parameters enables fast estimation of the hyperparameters during the cross-validation step. The contributions of this letter are threefold: 1) the definition of new parsimonious models; 2) a comparison, in terms of classification accuracy and processing time, of the proposed models with state-of-the-art classifiers for hyperspectral images; and 3) a fast cross-validation strategy for learning optimal hyperparameters.

Optimal selection of diffusion-weighting gradient waveforms using compressed sensing and dictionary learning

• Diffusion MRI measures the movement of water molecules and gives information about white matter microstructure.
• The acquisition sequences rely on magnetic field gradients.
• While pulsed gradient waveforms are the most used because of their simplicity, it has been shown that oscillating arbitrary waveforms provide better estimation of microstructure parameters (1).
