The number of measurements sufficient for recovering an s-sparse signal with probability at least 1 − ε scales as max_j ‖a_j^∗‖²_∞ · s ln(n/ε), where the a_j^∗ denote the rows of the sensing matrix.
It was shown experimentally, however, that this result is not sufficient to explain the success of CS in applications such as MRI [1]. This is in particular because the above result assumes no structure (apart from sparsity) in the signals to be recovered. A natural extension is the structured sparsity approach, where one assumes that some prior information on the support S is known, e.g. sparsity by levels in the wavelet domain (see [1] for a comprehensive theory of Fourier sampling, based on isolated measurements under a sparsity-by-levels assumption in the wavelet domain). This strategy makes it possible to incorporate any kind of prior information on the structure of S and to study its influence on the quality of CS reconstructions.

Conclusions
In this work, we have proposed an original computer-intensive approach to design efficient sampling schemes complying with the hardware constraints of MRI gradient systems. On the reconstructed images we have shown significant improvements in image quality (pSNR) in very high resolution anatomical imaging, which is relevant for in-vivo exams at ultra-high magnetic field (ISEULT Project@NeuroSpin). MR acquisitions on a 7T scanner showed the superiority of the developed sampling schemes and suggest the feasibility of very high acceleration factors at very high resolution in CS-MRI.

has been successfully applied to MRI [3], first using Poisson-disk sampling and then considering more efficient non-Cartesian trajectories (e.g. spirals, radial [4], Sparkling [5, 6], ...) that break down the coherence barrier [7, 8] using 2D variable density sampling [9, 8]. The resulting acceleration factors (e.g. 6 to 10) are larger than those of standard parallel imaging (PI) acceleration methods. However, for a given image resolution, the maximum acceleration factor depends on the input signal-to-noise ratio (SNR) [10]. Therefore, to ensure a certain level of SNR, CS has been combined with multiple receiver coils (e.g. parallel imaging or PI [11]). In this context, many algorithms have been proposed to reconstruct MR images from subsampled measurements collected over
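As an illustration of 2D variable density sampling, a k-space mask can be drawn with a probability that decays with distance from the center. This is only a sketch: the density law, decay exponent and parameter values below are assumptions, not the scheme of any cited paper.

```python
import numpy as np

def variable_density_mask(n=256, accel=8, decay=2.0, seed=0):
    """Draw a 2D k-space mask whose sampling density decays from the center.

    `accel` is the target acceleration factor (fraction kept = 1/accel);
    `decay` controls how fast the density falls off with |k|.
    """
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                         indexing="ij")
    radius = np.sqrt(kx**2 + ky**2)
    density = 1.0 / (1.0 + radius) ** decay   # unnormalized sampling density
    density /= density.sum()                  # make it a probability vector
    n_samples = n * n // accel
    idx = rng.choice(n * n, size=n_samples, replace=False, p=density.ravel())
    mask = np.zeros(n * n, dtype=bool)
    mask[idx] = True
    return mask.reshape(n, n)

mask = variable_density_mask()
```

Denser sampling near the k-space center preserves the low frequencies that carry most of the image energy, which is the intuition behind variable density schemes.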

Reducing acquisition time is a major challenge in high-resolution MRI that has been successfully addressed by Compressed Sensing (CS) theory. While the scan time has been massively accelerated, by a factor of up to 20 in 2D imaging, the complexity of image recovery algorithms has strongly increased, resulting in slower reconstruction processes. In this work we propose an online approach to shorten image reconstruction times in the CS setting. We leverage the segmented acquisition of k-space data in multiple shots to interleave the MR acquisition and image reconstruction steps. This approach is particularly appealing for 2D high-resolution T2*-weighted anatomical imaging, as the largest timing interval (i.e. time of repetition, TR) between consecutive shots arises for this kind of imaging. During the scan, acquired shots are stacked together to form mini-batches, and image reconstruction may start from incomplete data. For each newly available mini-batch, the previous partial solution is used as a warm restart for the next sub-problem, to be solved in a timing window compatible with the given TR and the number of shots stacked in a mini-batch. We demonstrate the interest and time savings of online MR image reconstruction for Cartesian and non-Cartesian sampling strategies combined with a single receiver coil. Next, we extend the online formalism to address the more general multi-receiver phased-array acquisition scenario. In this setting, calibrationless image reconstruction is adopted to remain compatible with the timing constraints of online delivery. Our retrospective and prospective results on ex-vivo 2D T2*-weighted brain imaging show that high-quality MR images are recovered by the end of acquisition for single-receiver acquisition and that additional iterations are required when parallel imaging is adopted.
Overall, our approach, implemented through the Gadgetron framework, may be compatible with the data workflow on the scanner to provide the physician with reliable MR images for diagnostic purposes.
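The interleaving of acquisition and reconstruction can be sketched as follows. This is a toy Cartesian illustration, not the authors' actual algorithm: sparsity is assumed directly in the image domain, and plain ISTA iterations are run on each mini-batch, warm-started from the previous partial solution.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def online_cs_recon(kspace, shot_masks, lam=0.01, iters_per_batch=10,
                    batch_size=4):
    """Online CS reconstruction sketch: interleave acquisition and recovery.

    `kspace` is the full Cartesian k-space, `shot_masks` a list of boolean
    masks (one per shot). After each mini-batch of shots becomes available,
    a few ISTA steps refine the image, warm-started from the previous
    partial solution.
    """
    acc_mask = np.zeros_like(shot_masks[0], dtype=bool)
    x = np.zeros(kspace.shape, dtype=complex)      # warm-started estimate
    for start in range(0, len(shot_masks), batch_size):
        for m in shot_masks[start:start + batch_size]:
            acc_mask |= m                          # stack newly acquired shots
        y = kspace * acc_mask                      # data available so far
        for _ in range(iters_per_batch):           # ISTA on the current data
            grad = np.fft.ifft2(acc_mask * np.fft.fft2(x) - y)
            x = soft_threshold((x - grad).real, lam) + 0j
    return x.real
```

The key point mirrored from the text is the warm restart: each sub-problem starts from the previous solution, so only a few iterations fit inside the timing window of one mini-batch.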

There may be two obstacles to the enhanced performance of the proposed strategy for 2D imaging. First, the modest SNR associated with 2D acquisitions may reduce the effectiveness of our method, as for any other subsampled trajectory. Although our experiments benefited from relatively good SNR conditions owing to a strong magnetic field and the use of a multiple receiver coil, SNR limitations appeared beyond the highest presented in-plane resolution of 390 µm. The second potential limitation is the hardware capacity, namely the maximum gradient amplitude, the maximum slew rate and the gradient and readout bandwidths, which together control the flexibility, and thus the efficiency, of the k-space trajectory. In particular, the gradient raster time plays a critical role and should be as short as possible. Assuming a readout bandwidth larger than or equal to the gradient bandwidth, the following practical rule for best SPARKLING use should be observed: the ratio of the number of gradient steps per shot to the image size should be as high as possible. As regards high resolution, long-readout scenarios will maximize this ratio and thus optimize SPARKLING performance, while short-readout acquisitions allow for less departure from simple geometric trajectories. When considering lower resolutions, however, our method remains applicable and promising. Moreover, in view of the considerable efforts currently being invested to push the limits of gradient systems (Weiger et al., 2018), it is reasonable to expect further improvement of SPARKLING performance.


INRIA/CNRS/Université de Lorraine
† IADI, Université de Lorraine, Nancy, France
U947, INSERM, Nancy, France
Abstract—This paper presents a method to acquire articulatory data from a sequence of MRI images at a high frame rate. The acquisition rate is enhanced by partially collecting data in the k-t space. The combination of the compressed sensing technique with homodyne reconstruction enables the missing data to be recovered. Good reconstruction is guaranteed by an appropriate design of the sampling pattern. It is based on a pseudo-random Cartesian scheme, where each line is partially acquired for use by the homodyne reconstruction, and where the lines are pseudo-randomly sampled: central lines are always acquired and the sampling density decreases as the lines move away from the center. Application to real speech data shows that the framework enables dynamic sequences of vocal-tract images to be recovered at a frame rate higher than 30 frames per second and with a spatial resolution of 1 mm. A method to extract articulatory data from contour identification is presented. It is intended, in fine, to be used for the creation of a large database of articulatory data.
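A sketch of such a sampling pattern follows. The parameter values and the quadratic density law are illustrative assumptions, not those of the paper: central phase-encoding lines are always kept, outer lines are drawn with a probability decaying away from the center, and each acquired line is only partially sampled along the readout, leaving the rest to the homodyne step.

```python
import numpy as np

def cartesian_homodyne_mask(n_lines=256, n_readout=256, center_frac=0.08,
                            accel=4, partial_frac=0.625, seed=0):
    """Pseudo-random Cartesian mask with a fully sampled central band
    and partial (asymmetric-echo) readout on every acquired line."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n_lines) - n_lines // 2) / (n_lines / 2)  # |k| in [0, 1]
    prob = (1.0 - k) ** 2                        # density decreasing from center
    selected = rng.random(n_lines) < prob / prob.mean() / accel
    central = np.abs(np.arange(n_lines) - n_lines // 2) < center_frac * n_lines / 2
    selected[central] = True                     # central lines always acquired
    mask = np.zeros((n_lines, n_readout), dtype=bool)
    mask[selected, :int(partial_frac * n_readout)] = True  # partial readout
    return mask
```

Acquiring only a fraction of each readout line is what makes the homodyne reconstruction necessary: it exploits conjugate symmetry of k-space to recover the unsampled half.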

Regarding the practical utilization of this work, our experimental results appeared to be in good agreement with simulations performed on the analytical brain phantom, which corroborates the validity of our approach to derive a maximum undersampling factor for a given acquisition-reconstruction setup. Our work may thus aid the design of undersampled 3D acquisitions using CS, and even 4D MRI, even though the prospective performance of compressed MR acquisitions may be slightly lower than predicted due to unconsidered MR system

I. INTRODUCTION
Magnetic Resonance Imaging (MRI) is the reference imaging technique used to probe soft tissues in the human body non-invasively. The data acquisition process in MRI is inherently slow due to the sequential collection of measurements in the Fourier domain, also called k-space. This slow acquisition causes many issues, such as motion-corrupted scans, limited image resolution within a scan time compatible with clinical routine, non-applicability to certain patients and limited patient throughput. To reduce the scan time and offset these limiting factors, the main solution has been to diminish the number of measurements in k-space. This was exploited for parallel imaging (PI) [2], [3] as well as for the use of Compressed Sensing (CS) in MRI [4]. In the latter framework, a manually crafted decomposition basis is used to express the sparsity of the image to be reconstructed in a transform domain. From this sparsity prior, an optimization problem, whose solution should be close to the image of interest, can be constructed.
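A typical instance of such a problem (a sketch of the standard CS-MRI analysis formulation; the exact cost function used in this work may differ) reads:

```latex
\hat{x} \;=\; \underset{x \in \mathbb{C}^N}{\arg\min}\;
\frac{1}{2}\,\bigl\| F_{\Omega} x - y \bigr\|_2^2
\;+\; \lambda \,\bigl\| \Psi x \bigr\|_1 ,
```

where F_Ω is the Fourier transform restricted to the acquired k-space locations Ω, y the measured data, Ψ the sparsifying transform (e.g. a wavelet decomposition) and λ > 0 balances data fidelity against the sparsity prior.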

VI. CONCLUSIONS
In this paper, we have compared analysis- and synthesis-based formulations for compressed sensing MR image reconstruction in the parallel imaging context, i.e. when multiple receivers are combined within the same phased-array coil. Consistent with the literature, we have shown that translation-invariant overcomplete decompositions outperform orthogonal wavelet transforms, especially at low input SNR. Well-known optimization algorithms have been implemented to minimize the cost functions associated with the analysis and synthesis formulations, respectively. We have also pointed out the superiority of the Sparkling under-sampling scheme over the radial one in terms of quality assessed by SSIM scores. We also provide a Python package, called Pysap, for CS image reconstruction in MRI and astrophysics, interfaced with the pynfft package to deal with non-Cartesian Fourier sampling.

Introduction
Recent advances in the static magnetic field strength of magnetic resonance scanners and in radio-frequency (RF) detector designs have allowed magnetic resonance microscopy (MRM) to reach spatial resolutions suitable for functional imaging of single cells (1,2). However, in order to reach the full potential of MRM it is necessary to reduce the currently long acquisition times required to obtain high-resolution images. Based on the fact that MR images, among other types of images, are compressible, an image can be reconstructed from a small number of random measurements (3). This finding opened the field of Compressed Sensing (CS), which can significantly reduce MRI scan time and has found numerous applications in preclinical (4) and clinical (5) imaging.

An ensuing question is whether the calibration could be performed online, that is, when the observations are received successively instead of being treated at once. In learning applications, it is sometimes advantageous for speed reasons to treat only a subset of training examples at a time. Sometimes, also, the size of current data sets may exceed the available memory. Methods implementing step-by-step learning, as the data arrive, are referred to as online, streaming or mini-batch learning, as opposed to offline or batch learning. For instance, in deep learning, stochastic gradient descent is the most popular training algorithm [12]. From the theoretical point of view, the restriction to the fully online case, where a single data point is used at a time, offers interesting possibilities of analysis, as demonstrated in different machine learning problems by [13, 14, 15]. Here we consider the Bayesian online learning of the calibration variables.
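For concreteness, a minimal mini-batch stochastic gradient descent loop is sketched below, on plain least-squares regression. This is a generic illustration of the online/mini-batch idea, unrelated to the calibration problem itself; each update touches only `batch_size` examples, so the data can be streamed rather than held in memory at once.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.1, batch_size=8, epochs=50, seed=0):
    """Mini-batch SGD for the least-squares objective 0.5 * ||X w - y||^2 / n."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        order = rng.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad                    # step on the mini-batch gradient
    return w
```

Setting `batch_size=1` recovers the fully online case mentioned above, where a single data point is used at a time.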

4. NUMERICAL RESULTS
In this section, we show the validity of the results of Theorem 1 by comparing the equations to the results of simulations. Here and in the following sections, the signal length is N = 512 with sparsity K = 16, and M = 128 measurements are taken. The nonzero elements of the signal are distributed as N(0, 1). The sparsity basis Ψ is the DCT matrix. The sensing matrix is composed of i.i.d. zero-mean Gaussian elements with variance 1/M. The noise vector is Gaussian with zero mean, while its covariance matrix depends on the specific test and will be discussed later. The reconstructed signal x̂ is obtained using the oracle estimator. A different realization of the signal, noise and sensing matrix is drawn for each trial, and the reconstruction error, evaluated as E[‖x̂ − x‖²₂], is averaged over 1,000 trials.
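This simulation setup can be reproduced with a short Monte Carlo sketch. The noise variance σ² below is an illustrative choice (the paper varies the noise covariance per test), and the oracle estimator is taken in its usual sense: least squares restricted to the true support.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    j = np.arange(n)
    c = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def oracle_mse(n=512, k=16, m=128, sigma2=1e-3, trials=100, seed=0):
    """Average ||x_hat - x||_2^2 of the oracle estimator over Monte Carlo trials."""
    rng = np.random.default_rng(seed)
    psi = dct_matrix(n).T                               # sparsity basis (columns)
    err = 0.0
    for _ in range(trials):
        support = rng.choice(n, size=k, replace=False)
        theta = np.zeros(n)
        theta[support] = rng.standard_normal(k)          # nonzeros ~ N(0, 1)
        phi = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. N(0, 1/m) sensing
        a = phi @ psi
        y = a @ theta + np.sqrt(sigma2) * rng.standard_normal(m)
        theta_hat = np.zeros(n)
        theta_hat[support] = np.linalg.lstsq(a[:, support], y, rcond=None)[0]
        err += np.sum((theta_hat - theta) ** 2)          # = signal error (psi orthonormal)
    return err / trials
```

As a sanity check, the oracle error should concentrate near σ²·tr[(A_S^⊤A_S)⁻¹], which for this random-matrix model is roughly σ²·K·M/(M − K − 1).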

Selecting only the delay maximizing |⟨f_τ, r_{l−1}⟩| over τ ∈ T is not sufficient for accurate delay tracking and channel estimation with high precision. In other words, the delay points within a delay subset where the corresponding bases have high coherence with the residual vector should be considered. With these delay points, the reference delay grid (RDG) guided RNM method is proposed in this paper to effectively combat the non-uniform pilot arrangement and realize near-optimal delay searching for the l-th channel tap, which will be discussed after the DT method in this section.

Denote by y a trial point which minimizes d, and set b(y) ← accepted. For all x ∈ V[y], set d(x) ← min{d(x), Λ(d, x; b, y)}.
Output: the map d : Z → ℝ.
We denote by Λ(d, x; b, y) the modification of the Hopf-Lax update operator (4) in which the minimum is only taken over faces (of any dimension) of ∂V(x) whose vertices (i) contain y, and (ii) are all accepted. Regarding the FM-LBR complexity O(N ln N + N ln κ(M)), we refer to the classical analysis in [26, 23, 1] for details and simply point out that (i) each FM-LBR causal stencil costs O(ln κ(M)) to construct, (ii) maintaining a list of Ω ∩ Z, sorted by increasing values of the mutable map d, costs O(ln N) for each modification of a single value of d, with a proper heap-sort implementation, and (iii) the optimization problem defining the Hopf-Lax update (4), or its variant Λ(d, x; b, y), has an explicit solution: the minimum associated with each face of ∂V(x) is the root of a simple univariate quadratic polynomial; see the Appendix of [23]. Memory usage is discussed in detail in Remark 1.10.
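The accepted/trial mechanics above follow the familiar Dijkstra pattern. A simplified Python sketch is given below, with a plain 4-neighbour edge update (edge length divided by a speed field) standing in for the Hopf-Lax operator Λ; it only illustrates the heap bookkeeping, not FM-LBR itself.

```python
import heapq
import numpy as np

def fast_marching(speed, source):
    """Dijkstra-like front propagation on a 2D grid.

    At each step the trial point y with minimal value d(y) is accepted,
    and its 4-connected neighbours are relaxed; the true method would
    minimize the Hopf-Lax update over stencil faces instead.
    """
    d = np.full(speed.shape, np.inf)
    d[source] = 0.0
    accepted = np.zeros(speed.shape, dtype=bool)
    heap = [(0.0, source)]                   # min-heap of trial points
    while heap:
        dy, y = heapq.heappop(heap)
        if accepted[y]:
            continue                         # stale heap entry, skip
        accepted[y] = True                   # b(y) <- accepted
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x = (y[0] + di, y[1] + dj)
            if 0 <= x[0] < speed.shape[0] and 0 <= x[1] < speed.shape[1]:
                cand = dy + 1.0 / speed[x]   # simplified update
                if cand < d[x]:
                    d[x] = cand
                    heapq.heappush(heap, (cand, x))
    return d
```

The heap gives the O(ln N) cost per value modification mentioned in the text; lazily skipping stale entries avoids a decrease-key operation.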



Specifically, we consider the case when the probabilities p_i mentioned above are values of a continuous function at uniformly spaced points on a given interval.

the MC method cannot correctly interpolate the data, which means that the subsequent source separation procedure performs badly. Comparing the performance of DecGMCA with MC+GMCA at its turning points (30% and 60% for 20 and 10 channels, respectively), we can see that DecGMCA conserves well the continuity of both criteria and outperforms MC+GMCA even when the mask is very ill-conditioned. One should notice that even when the mask is relatively good, DecGMCA still outperforms MC+GMCA. This is because DecGMCA takes all of the data into account and handles source separation and the subsampling effect simultaneously, while MC+GMCA considers them separately. Consequently, the BSS in MC+GMCA relies on the quality of the matrix completion, which in fact approximates the data interpolation and produces a non-negligible bias. Interestingly, the separation performance of DecGMCA seems to degrade when the average number of available measurements per frequency in the Fourier domain (i.e., the product of the subsampling ratio and the total number of observations) is roughly of the order of the number of sources. In that case, the resulting problem is close to an under-determined BSS problem, and the identifiability of the sources is not guaranteed unless additional assumptions about the sources are made. In this setting, it is customary to assume that the sources have disjoint supports in the sparse domain, which is not a valid assumption in the present work. Additionally, radio-interferometric measurements are generally composed of a large number of observations for few sources to be retrieved. Furthermore, in contrast to the fully random masks we considered in these experiments, real interferometric masks exhibit a denser amount of data at low frequency, and their evolution across channels is mainly a dilation of the sampling mask in the Fourier domain.
This entails that the sampling process across wavelengths is highly correlated, which is a more favorable setting for BSS. Altogether, these different points largely mitigate the limitations of the DecGMCA algorithm due to subsampling in a realistic interferometric imaging setting.


Index Terms—Ultrasound imaging, compressed sensing, Bayesian inference, Markov random field.
1. INTRODUCTION
Ultrasound (US) imaging is one of the most popular medical imaging techniques and represents the gold standard in many crucial diagnostic exams, such as in obstetrics and cardiology. The main advantages of US imaging are its relatively low cost, its innocuity for the patient, its ease of use and its real-time nature. However, the real-time property is sometimes limited by the acquisition time or by the high amount of acquired data, especially in 3D ultrasound imaging. Even in 2D applications, a higher frame rate could be beneficial, e.g., for cardiac US monitoring. For this reason, a few research groups have recently started to evaluate the feasibility of US acquisitions using the compressive sampling (CS) framework [1, 2]. In particular, Friboulet et al. have presented in [3] a method for randomly sub-sampling the US raw data (signals before beamforming, classically used in US imaging for obtaining the radiofrequency (RF) lines). The idea

guide to construct sampling patterns and not as a requirement for perfect recovery. Surprisingly, a better drawing probability distribution, reducing the required number of measurements, is not the uniform one, as commonly used in [8], [4], but one depending on the ℓ∞-norm of the considered row.
B. Block diagonal case
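Such non-uniform row drawing can be sketched as follows. The exact exponent and normalization of the cited result are not reproduced here; squared ℓ∞-norms are used purely for illustration.

```python
import numpy as np

def linf_drawing_probs(a):
    """Drawing probability of each row, proportional to its squared l-inf norm."""
    w = np.max(np.abs(a), axis=1) ** 2
    return w / w.sum()

def draw_rows(a, n_rows, seed=0):
    """Draw `n_rows` distinct row indices of `a` according to those probabilities."""
    rng = np.random.default_rng(seed)
    return rng.choice(a.shape[0], size=n_rows, replace=False,
                      p=linf_drawing_probs(a))
```

Rows with a large ℓ∞-norm carry more coherence with the sparsity basis, so drawing them more often reduces the number of measurements needed, which is the intuition behind the non-uniform distribution mentioned above.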

The results show that the accuracy of CS is lower than that of ASVC in terms of NMSE when performed on full long recordings. However, a breakthrough finding is the ability of CS to extract the AA from a short ECG recording containing only one heartbeat, which is impossible with ASVC, which is said to perform well only for significantly longer recordings of at least 10 heartbeats. Based on the observation that CS performs well on short recordings, it was adapted to an online process, where it estimates the AA beat-by-beat from the ECG. In this way, our CS approach handles long recordings better. On the other