
Compressed Sensing MRI Reconstruction with Multiple Sparsity Constraints on Radial Sampling



HAL Id: hal-02073171

https://hal.archives-ouvertes.fr/hal-02073171

Submitted on 16 Dec 2020



Research Article

Compressed Sensing MRI Reconstruction with Multiple Sparsity

Constraints on Radial Sampling

Jianping Huang,1 Lihui Wang,2 and Yuemin Zhu3

1College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China

2Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China

3Univ Lyon, INSA Lyon, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon 69621, France

Correspondence should be addressed to Lihui Wang; wlh1984@gmail.com

Received 6 November 2018; Revised 17 December 2018; Accepted 6 January 2019; Published 10 February 2019

Academic Editor: Agathoklis Giaralis

Copyright © 2019 Jianping Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Compressed Sensing Magnetic Resonance Imaging (CS-MRI) is a promising technique for accelerating MRI acquisitions by using fewer k-space data. Exploiting more sparsity is an important approach to improving the CS-MRI reconstruction quality. We propose a novel CS-MRI framework based on multiple sparse priors to increase reconstruction accuracy. The wavelet sparsity, wavelet tree structured sparsity, and nonlocal total variation (NLTV) regularizations were integrated in the CS-MRI framework, and the optimization problem was solved using a fast composite splitting algorithm (FCSA). The proposed method was evaluated on different types of MR images with different radial sampling schemes and different sampling ratios and compared with the state-of-the-art CS-MRI reconstruction methods in terms of peak signal-to-noise ratio (PSNR), feature similarity (FSIM), relative l2 norm error (RLNE), and mean structural similarity (MSSIM). The results demonstrated that the proposed method outperforms the traditional CS-MRI algorithms in both visual and quantitative comparisons.

1. Introduction

Magnetic resonance imaging (MRI) has been widely used in radiological diagnosis owing to its high spatial resolution, noninvasiveness, and absence of ionizing radiation. It has become a powerful and promising technique to visualize and investigate the anatomical and physiological properties of many organs. Generally, to obtain a high spatial resolution image with a high signal-to-noise ratio (SNR), the image must be acquired with more k-space data and averaged over several acquisitions to reduce noise. This, however, inevitably increases the acquisition time. Long acquisition times may introduce several problems, such as increased motion artifacts, especially for patients who have difficulty remaining still and for moving organs. Therefore, reducing the acquisition time of MRI while maintaining image quality is currently a great challenge [1–3].

One of the most promising approaches to deal with this issue is compressed sensing (CS) [1, 4–6], which reduces the acquisition time by acquiring only a small fraction of the k-space data while still enabling reconstruction of a high-quality image from such highly undersampled measurements. It is well known that, to apply CS to MRI, finding proper k-space undersampling patterns, sparsifying transformations, and the corresponding reconstruction algorithms is fundamental for guaranteeing the reconstructed image quality [1]. In the pioneering work of CS-MRI, Lustig et al. [7] reconstructed magnetic resonance (MR) images from Cartesian undersampled k-space data by solving an l1 norm minimization problem with wavelet transform and total variation (TV) sparsity constraints. According to the theory of CS, the sparser the representation, the better the reconstruction quality. Therefore, numerous efforts have been devoted to investigating sparse representations and sparsity regularizations. For instance, sparse transformations based on fixed bases have been widely used, including the discrete cosine transform (DCT), the contourlet transform [8], and the Shearlet transform [9–12]. To further exploit the degree of sparsity, several sparsity regularizations and their combinations were developed, such as the nonlocal self-similarity constraint, wavelet transform regularization, TV regularization, and nonlocal TV (NLTV) regularization [13–15].

The fixed sparsifying transformations are fast to implement, but they do not guarantee the best sparse representation for a specific image. Accordingly, data-driven learning was proposed, which adaptively learns sparse representations from the image itself [16–21], including dictionary learning such as k-singular value decomposition (K-SVD) [22] and transform learning based on adaptive tight frame algorithms [23, 24]. With the success of deep learning in solving inverse problems such as image denoising and reconstruction, data-driven sparsity learning using autoencoders (AE) [25] and restricted Boltzmann machines (RBM) [26] has also been proposed. Although data-driven approaches indeed improve image reconstruction quality compared with traditional CS-MRI, the learning process requires a long time, since training the sparse basis is computationally expensive, which is not practical for clinical use. Considering the complexity of the optimization problem and the speed of computation, combining several simpler regularization terms to further improve the reconstruction performance of traditional CS-MRI algorithms may be an alternative. Therefore, novel CS-MRI algorithms that use more sparse priors to accelerate the reconstruction process while maintaining reconstruction accuracy are still greatly desired.

In this paper, we propose to use multiple sparse constraints to improve MR image reconstruction precision. More precisely, wavelet sparsity, wavelet tree sparsity, and NLTV regularization are all integrated in the CS-MRI framework. The wavelet coefficients of MR images are not only approximately sparse but also tree structured. The tree structure provides better image reconstruction quality than the standard wavelet sparsity prior [27–29] and enables us to exploit the correlation between parent and child wavelet coefficients, thereby reducing the number of k-space samples required for MR image reconstruction. Meanwhile, the NLTV model extends the conventional TV term to a nonlocal version, which effectively avoids the staircase artifacts caused by TV regularization while better preserving image edges and fine details. It is therefore expected that the combination of these three sparse constraints gives more accurate reconstruction results. In this framework, the optimization problem is solved by means of a fast composite splitting algorithm (FCSA) [30], an iterative algorithm in which each iteration involves a series of shrinkage steps. To validate the performance of the proposed method, experiments were conducted on both chest and renal arteries MR images with several radial sampling schemes, including radial uniform, radial golden, and radial randomized sampling. The results are compared with previous CS-MRI reconstruction methods, including SparseMRI [7], RecPF [31], FCSA [30], FCSA-NLTV [32], and WaTMRI [29].

The remainder of this manuscript is organized as follows. Section 2 describes the multiple sparsity constraints-based CS-MRI reconstruction framework as well as the performance evaluation indices. The experimental results and discussion are given in Section 3, including the experimental setup, visual and quantitative comparisons, and the influence of the sampling ratio on the performance of the proposed method. Section 4 concludes the work.

2. Methods

In this section, we introduce the CS-MRI framework based on multiple sparse constraints and the corresponding evaluation criteria.

2.1. CS-MRI Reconstruction Framework. Assume that x is the desired MR image and y is an undersampled measurement in the k-space domain, with the sensing matrix denoted as Φ. The aim of CS-MRI is to reconstruct the MR image x under the constraint y = Φx by solving the following optimization problem:

$$\min_{x \in \mathbb{R}^n} J(x) \quad \text{subject to} \quad y = \Phi x \tag{1}$$

where J(x) is a regularizing function that intends to find a perfectly sparse representation of the image x. To exploit more sparse priors, wavelet sparsity, wavelet tree sparsity, and nonlocal TV regularization terms are combined in the proposed CS-MRI framework. This multiple-regularization MR image reconstruction problem can be formulated as follows:

$$\hat{x} = \arg\min_{x}\left\{\frac{1}{2}\|\Phi x - y\|_2^2 + \alpha\|x\|_{\mathrm{NLTV}} + \beta\Big(\|\Psi x\|_1 + \sum_{g\in G}\|(\Psi x)_g\|_2\Big)\right\} \tag{2}$$

where the sensing matrix Φ is a partial Fourier transform, which can be expressed as F_u = P · F, with F being the Fourier transform and P the undersampling pattern (mask); Ψ is a wavelet transform; G represents all the parent-child groups that encourage tree sparsity and g is one such group; and ‖x‖_NLTV is the NLTV-based constraint defined as follows [13, 32]:

$$\|x\|_{\mathrm{NLTV}} = \sum_{u}\sqrt{\sum_{v}\big[x(u) - x(v)\big]^2\, w(u, v)} \tag{3}$$

with x(u) and x(v) indicating the image values at pixels u and v, and with w(u, v) representing the weight function formulated as

$$w(u, v) = \begin{cases} \dfrac{1}{Z_x}\exp\left(-\dfrac{\|q_x(u) - q_x(v)\|_2^2}{2h^2}\right) & \text{if } \|u - v\|_2^2 \le \delta^2 \\ 0 & \text{otherwise} \end{cases} \tag{4}$$

where q_x(u) and q_x(v) are image patches centered at u and v, Z_x is a normalization factor, h is a filtering parameter, and only the pixels within a search window around the target pixel are considered when calculating the nonlocal image gradient [13].
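As an illustration of Eqs. (3)-(4), the following Python sketch computes the normalized nonlocal weights for a single pixel. It is our own minimal example, not the authors' MATLAB implementation; the patch size m, window size δ, and filtering parameter h mirror the values later listed in Table 1 but are otherwise arbitrary choices.

```python
import numpy as np

def nltv_weights(x, u, m=11, delta=5, h=0.0625):
    """Normalized nonlocal weights w(u, v) of Eq. (4) for one target pixel u.

    x     : 2D image, assumed normalized to [0, 1]
    u     : (row, col) index of the target pixel
    m     : patch size (m x m) used for the patch distance
    delta : search window size (delta x delta) around u
    h     : filtering parameter controlling the decay of the weights
    """
    pad = m // 2
    xp = np.pad(x, pad, mode='reflect')           # pad so every patch exists

    def patch(p):                                 # m x m patch centred at p
        r, c = p
        return xp[r:r + m, c:c + m]

    qu = patch(u)
    half = delta // 2
    w = {}
    for dr in range(-half, half + 1):             # pixels inside the search window
        for dc in range(-half, half + 1):
            v = (u[0] + dr, u[1] + dc)
            if 0 <= v[0] < x.shape[0] and 0 <= v[1] < x.shape[1]:
                d2 = np.sum((qu - patch(v)) ** 2)   # ||q_x(u) - q_x(v)||_2^2
                w[v] = np.exp(-d2 / (2.0 * h ** 2))
    z_x = sum(w.values())                         # normalization factor Z_x
    return {v: wv / z_x for v, wv in w.items()}
```

With a 5 × 5 search window, each nonlocal gradient in Eq. (3) involves at most 25 neighbours, which keeps the NLTV term tractable.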

An auxiliary variable z is introduced to solve Eq. (2):

$$z = B\Psi x \tag{5}$$

where B is a binary matrix describing the wavelet coefficient group configuration. Accordingly, the optimization problem in Eq. (2) can be rewritten as follows:

$$\hat{x} = \arg\min_{x}\left\{\frac{1}{2}\|\Phi x - y\|_2^2 + \alpha\|x\|_{\mathrm{NLTV}} + \beta\Big(\|\Psi x\|_1 + \sum_{i=1}^{s}\|z_{g_i}\|_2\Big) + \frac{\lambda}{2}\|z - B\Psi x\|_2^2\right\} \tag{6}$$

where g_i is the i-th wavelet tree group and s is the total number of groups. The solution of Eq. (6) can be split into two subproblems, a z subproblem and an x subproblem. The z subproblem can be expressed as follows [29]:

$$z_{g_i} = \arg\min_{z_{g_i}}\left\{\beta\|z_{g_i}\|_2 + \frac{\lambda}{2}\big\|z_{g_i} - (B\Psi x)_{g_i}\big\|_2^2\right\}, \quad i = 1, 2, \ldots, s \tag{7}$$

It has a closed-form solution obtained by group soft-thresholding:

$$z_{g_i} = \max\left(\|r_i\|_2 - \frac{\beta}{\lambda},\, 0\right)\frac{r_i}{\|r_i\|_2}, \quad i = 1, 2, \ldots, s \tag{8}$$

where r_i = (BΨx)_{g_i}. For convenience, z is represented as

$$z = \operatorname{shrinkgroup}\left(B\Psi x, \frac{\beta}{\lambda}\right) \tag{9}$$
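A minimal numpy sketch of the group soft-thresholding in Eqs. (8)-(9). Representing BΨx as a flat coefficient vector plus a list of index arrays (one per parent-child group) is our own simplification for illustration.

```python
import numpy as np

def shrinkgroup(coeffs, groups, thresh):
    """Group soft-thresholding of Eq. (8): z_g = max(||r_g||_2 - thresh, 0) * r_g / ||r_g||_2.

    coeffs : 1D array standing in for B * Psi * x
    groups : list of integer index arrays, one per wavelet tree group g_i
    thresh : beta / lambda
    """
    z = np.zeros_like(coeffs)
    for g in groups:
        r = coeffs[g]
        norm = np.linalg.norm(r)
        if norm > 0.0:
            z[g] = max(norm - thresh, 0.0) * r / norm
    return z
```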

The x subproblem can be expressed as

$$\hat{x} = \arg\min_{x}\left\{\frac{1}{2}\|\Phi x - y\|_2^2 + \alpha\|x\|_{\mathrm{NLTV}} + \beta\|\Psi x\|_1 + \frac{\lambda}{2}\|z - B\Psi x\|_2^2\right\} \tag{10}$$

Let $f(x) = \frac{1}{2}\|\Phi x - y\|_2^2 + \frac{\lambda}{2}\|z - B\Psi x\|_2^2$, which is a convex and smooth function with Lipschitz constant L_f, and let $g_1(x) = \alpha\|x\|_{\mathrm{NLTV}}$ and $g_2(x) = \beta\|\Psi x\|_1$, which are both convex and nonsmooth. Eq. (10) can be solved using the fast composite splitting algorithm (FCSA) [30], which divides the problem into two subproblems, the NLTV-norm regularization and the wavelet l1-norm regularization. Each subproblem is a convex problem that can be solved by a proximal mapping operation prox_ρ(g)(x) [33]:

$$\operatorname{prox}_{\rho}(g)(x) = \arg\min_{u}\left\{g(u) + \frac{1}{2\rho}\|u - x\|_2^2\right\} \tag{11}$$

where ρ is a positive scalar.
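As a concrete instance of Eq. (11): when Ψ is orthonormal (which holds for the Daubechies wavelets used later), the proximal map of g₂(x) = β‖Ψx‖₁ reduces to soft-thresholding the wavelet coefficients by ρβ. The sketch below uses PyWavelets; the library choice and the 'periodization' boundary mode are our assumptions, not part of the paper.

```python
import numpy as np
import pywt

def prox_wavelet_l1(x, rho, beta, wavelet='db4', level=4):
    """prox_rho(beta * ||Psi . ||_1)(x) for an orthonormal wavelet transform Psi."""
    coeffs = pywt.wavedec2(x, wavelet, mode='periodization', level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = np.sign(arr) * np.maximum(np.abs(arr) - rho * beta, 0.0)   # soft threshold
    coeffs_t = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    return pywt.waverec2(coeffs_t, wavelet, mode='periodization')
```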

The overall procedure is summarized in Algorithm 1.

In Algorithm 1, $\nabla f(r^k) = \Phi^T(\Phi r^k - y) + \lambda\Psi^T B^T(B\Psi r^k - z)$, with $\Phi^T$ and $\Psi^T$ representing, respectively, the adjoint (inverse) partial Fourier transform and the inverse wavelet transform, and K is the maximum number of iterations.

Input: ρ = 1/L_f, t^1 = 1, x^0 = r^1, α, β, λ
For k = 1 to K do
    z = shrinkgroup(BΨx^{k-1}, β/λ)
    x_g = r^k - ρ∇f(r^k)
    x_1 = prox_ρ(2α‖·‖_NLTV)(x_g)
    x_2 = prox_ρ(2β‖Ψ·‖_1)(x_g)
    x^k = (x_1 + x_2)/2
    t^{k+1} = (1 + sqrt(1 + 4(t^k)^2))/2
    r^{k+1} = x^k + ((t^k - 1)/t^{k+1})(x^k - x^{k-1})
End for

Algorithm 1
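For clarity, a Python sketch mirroring the structure of Algorithm 1 is given below. The operator implementations are passed in as functions, and the function names (shrink_tree, grad_f, prox_nltv, prox_wl1) are our own; this is a sketch of our reading of the algorithm, not the authors' released code.

```python
import numpy as np

def fcsa_reconstruct(shrink_tree, grad_f, prox_nltv, prox_wl1, L_f, x0, K=50):
    """Sketch of Algorithm 1 (FCSA with wavelet, tree, and NLTV constraints).

    shrink_tree : x -> z, group shrinkage of Eq. (9) applied to B*Psi*x
    grad_f      : (r, z) -> gradient of f at r, with
                  f(x) = 0.5*||Phi x - y||^2 + 0.5*lam*||z - B Psi x||^2
    prox_nltv   : x -> prox_rho(2*alpha*||.||_NLTV)(x)
    prox_wl1    : x -> prox_rho(2*beta*||Psi .||_1)(x)
    L_f         : Lipschitz constant of grad_f; the step size is rho = 1/L_f
    x0          : initial image (e.g. the zero-filled reconstruction)
    """
    rho = 1.0 / L_f
    x_prev = x0.copy()          # x^0
    r = x0.copy()               # r^1
    t = 1.0                     # t^1
    for _ in range(K):
        z = shrink_tree(x_prev)                        # wavelet tree shrinkage
        xg = r - rho * grad_f(r, z)                    # gradient step on f
        x1 = prox_nltv(xg)                             # NLTV subproblem
        x2 = prox_wl1(xg)                              # wavelet l1 subproblem
        x = 0.5 * (x1 + x2)                            # composite splitting average
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        r = x + ((t - 1.0) / t_next) * (x - x_prev)    # FISTA-style momentum
        x_prev, t = x, t_next
    return x_prev
```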

2.2. Quantitative Evaluation Criteria. To evaluate quantitatively the performance of the proposed CS-MRI reconstruction method, the peak signal-to-noise ratio (PSNR), mean structural similarity (MSSIM) [34], feature similarity (FSIM) [35], and relative l2 norm error (RLNE) [36] were used. Let x and x̂ denote, respectively, the original image and the image reconstructed from the undersampled k-space data with a CS-MRI algorithm, with maximum value MAX and dimensions M × N. PSNR is defined as

$$\mathrm{PSNR} = 20\log_{10}\frac{\mathrm{MAX}}{\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(x(i,j) - \hat{x}(i,j)\big)^2}} \tag{12}$$

Let p and q represent corresponding blocks of the images x and x̂, respectively, with local means μ_p and μ_q, standard deviations σ_p and σ_q, and cross-covariance σ_pq. Based on these notations, the structural similarity (SSIM) is calculated as follows:

$$\mathrm{SSIM}(p, q) = \frac{(2\mu_p\mu_q + C_1)(2\sigma_{pq} + C_2)}{(\mu_p^2 + \mu_q^2 + C_1)(\sigma_p^2 + \sigma_q^2 + C_2)} \tag{13}$$

where C_1 and C_2 are tunable constants. If the total number of image blocks is N_b, the MSSIM is formulated as

$$\mathrm{MSSIM}(x, \hat{x}) = \frac{1}{N_b}\sum_{i=1}^{N_b}\mathrm{SSIM}(p_i, q_i) \tag{14}$$

Both PSNR and MSSIM increase with image quality: the better the reconstruction, the higher the PSNR and MSSIM.
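A short numpy sketch of three of these indices (PSNR of Eq. (12), RLNE computed as the relative l2 norm error ‖x̂ − x‖₂/‖x‖₂, and MSSIM of Eqs. (13)-(14) over non-overlapping blocks). The block size and the constants C1 and C2 are illustrative assumptions, not values stated in the paper.

```python
import numpy as np

def psnr(x, xhat):
    """Eq. (12): peak signal-to-noise ratio in dB."""
    mse = np.mean((x - xhat) ** 2)
    return 20.0 * np.log10(x.max() / np.sqrt(mse))

def rlne(x, xhat):
    """Relative l2 norm error ||xhat - x||_2 / ||x||_2."""
    return np.linalg.norm(xhat - x) / np.linalg.norm(x)

def mssim(x, xhat, block=8, C1=1e-4, C2=9e-4):
    """Eq. (14): mean of the block-wise SSIM of Eq. (13)."""
    vals = []
    for i in range(0, x.shape[0] - block + 1, block):
        for j in range(0, x.shape[1] - block + 1, block):
            p = x[i:i + block, j:j + block]
            q = xhat[i:i + block, j:j + block]
            mp, mq = p.mean(), q.mean()
            vp, vq = p.var(), q.var()                 # sigma_p^2, sigma_q^2
            cpq = np.mean((p - mp) * (q - mq))        # sigma_pq
            vals.append(((2 * mp * mq + C1) * (2 * cpq + C2)) /
                        ((mp ** 2 + mq ** 2 + C1) * (vp + vq + C2)))
    return float(np.mean(vals))
```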

3. Experimental Results and Discussion

3.1. Experimental Setup.


Figure 1: The test images and radial undersampling masks: (a) MRI-Chest; (b) MRI-Renal Arteries; (c) radial uniform; (d) radial golden; (e) radial randomized.

Table 1: Parameter settings for the different reconstruction methods (columns: SparseMRI, RecPF, FCSA, FCSA-NLTV, WaTMRI, Proposed; '-' denotes not applicable).

α: 0.001 0.001 0.001 0.001
β: 0.035 0.035 0.035 0.035
λ: - - 0.007 0.007
h: - - 0.0625 - - 0.0625
δ: - - 5 × 5 - - 5 × 5
m: - - 11 × 11 - - 11 × 11

The experiments were performed on chest and renal arteries MR images of size 256 × 256, as illustrated in Figures 1(a) and 1(b): one has rich texture and the other is relatively smooth (note: data from Ref. [29]). Since radial sampling schemes have been demonstrated to be more feasible in practice and better than Cartesian sampling [37], we used several radial sampling masks, including radial uniform, radial golden [38, 39], and radial randomized, as shown in Figures 1(c)-1(e), where 10% of the k-space data is kept.
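The following Python sketch shows one way such radial masks can be generated (spokes through the k-space centre at uniformly spaced, golden-angle, or random angles); it is our own illustration, and the masks actually used by the authors may have been produced differently.

```python
import numpy as np

def radial_mask(n, ratio=0.1, scheme='uniform'):
    """Binary n x n k-space mask made of radial spokes through the centre.

    scheme : 'uniform' (equally spaced angles), 'golden' (golden-angle increments),
             or 'random' (uniformly random angles)
    ratio  : rough target fraction of sampled points; each spoke covers about n
             points, so roughly ratio*n spokes are drawn (centre overlap ignored)
    """
    n_lines = max(1, int(round(ratio * n)))
    if scheme == 'uniform':
        angles = np.pi * np.arange(n_lines) / n_lines
    elif scheme == 'golden':
        angles = np.deg2rad(111.246117975) * np.arange(n_lines)
    else:
        angles = np.pi * np.random.rand(n_lines)
    mask = np.zeros((n, n), dtype=bool)
    c = (n - 1) / 2.0
    t = np.linspace(-c, c, 2 * n)                  # parameter along each spoke
    for a in angles:
        rows = np.clip(np.round(c + t * np.sin(a)).astype(int), 0, n - 1)
        cols = np.clip(np.round(c + t * np.cos(a)).astype(int), 0, n - 1)
        mask[rows, cols] = True
    return mask
```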

The observed measurement y was modeled as y = Φx + ε, where ε represents complex Gaussian white noise with standard deviation σ_n. The associated input SNR (ISNR) [40] is defined as ISNR = 20 log10(σ_x/σ_n), with σ_x denoting the standard deviation of the original image. The ISNR was set to 20 dB.
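A hedged numpy sketch of this measurement model: the radial mask is applied to the orthonormal 2D FFT of the image, and complex Gaussian noise scaled to the requested ISNR is added. Using a Cartesian FFT with a radial mask (rather than a non-uniform Fourier transform) is a simplification of ours.

```python
import numpy as np

def simulate_measurements(x, mask, isnr_db=20.0, rng=None):
    """y = Phi x + eps, with Phi = P * F (masked, centred 2D FFT).

    The noise standard deviation sigma_n is set from ISNR = 20*log10(sigma_x / sigma_n).
    Returns the undersampled k-space data and the zero-filled reconstruction.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_n = x.std() / (10.0 ** (isnr_db / 20.0))
    kspace = np.fft.fftshift(np.fft.fft2(x, norm='ortho'))
    noise = sigma_n * (rng.standard_normal(x.shape) +
                       1j * rng.standard_normal(x.shape)) / np.sqrt(2.0)
    y = mask * (kspace + noise)                          # keep only sampled points
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(y), norm='ortho'))
    return y, zero_filled
```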

In order to evaluate the benefits of the multiple sparse constraints proposed in this paper, we compared the proposed method quantitatively with several reconstruction approaches, including SparseMRI [7], RecPF [31], FCSA [30], FCSA-NLTV [32], and WaTMRI [29], in terms of the PSNR, MSSIM, FSIM, and RLNE indices. SparseMRI is the first work on CS-MRI; it models MR image reconstruction as a linear combination of least squares fitting, wavelet sparsity, and TV regularization, and the optimization problem is solved by the conjugate gradient (CG) method. RecPF uses a variable splitting method to solve the same model as SparseMRI. FCSA decomposes the above problem into two easier subproblems and solves each of them separately with FISTA. To overcome the intrinsic drawback of the TV model, the FCSA-NLTV reconstruction method replaces the TV term of the classical CS-MRI model with NLTV. WaTMRI uses a multiple-regularization model that combines wavelet sparsity, gradient sparsity, and tree sparsity, formulated as a group sparsity problem, to validate the benefit of the wavelet tree structure in CS-MRI.

For fair comparisons, all the experiments were performed with MATLAB 2012a on a desktop equipped with a 3.10 GHz Intel Core i5-2400 CPU and 8 GB RAM. In the experiments, Daubechies wavelets with four decomposition levels were used to form a quadtree. Daubechies wavelets were chosen because they are the most commonly used: they are orthogonal, compactly supported wavelets with excellent edge detection characteristics and fast implementations, and they were used as sparse bases in previously reported CS-MRI works with good image reconstruction performance. Nevertheless, many other kinds of wavelets (Haar, Symmlets, etc.) could be used. Concerning the number of decomposition levels, since the image size is 256 × 256, we chose 4 levels: too many decomposition levels would increase the computation cost considerably, whereas too few would weaken the benefit of the tree structure, so four levels seemed the best compromise in the present study. The parameter settings for the different reconstruction methods are given in Table 1.
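To make the quadtree structure concrete, the sketch below enumerates parent-child groups over a 4-level 2D Daubechies decomposition using PyWavelets. The library, the 'periodization' mode (so each subband is exactly half the size of the next finer one), and the returned data layout are our own assumptions for illustration.

```python
import numpy as np
import pywt

def wavelet_tree_groups(shape=(256, 256), wavelet='db4', levels=4):
    """List parent-child groups of the wavelet quadtree.

    A detail coefficient (i, j) at a coarse level is grouped with the 2 x 2 block
    of children at (2i:2i+2, 2j:2j+2) in the same orientation at the next finer
    level. Returns tuples (level_index, band, parent_index, child_indices).
    """
    coeffs = pywt.wavedec2(np.zeros(shape), wavelet, mode='periodization', level=levels)
    groups = []
    # coeffs[1:] holds detail triples (cH, cV, cD), ordered from coarsest to finest
    for lvl in range(1, len(coeffs) - 1):
        for band in range(3):
            rows, cols = coeffs[lvl][band].shape
            for i in range(rows):
                for j in range(cols):
                    children = [(2 * i + di, 2 * j + dj)
                                for di in (0, 1) for dj in (0, 1)]
                    groups.append((lvl, band, (i, j), children))
    return groups
```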

3.2. Visual Comparisons. Figures 2 and 3 give the visual comparison of the chest and renal arteries MR images reconstructed with the different methods and sampling masks at a 20% sampling ratio.

Figure 2: Results of reconstruction on the chest MR image using different methods and different sampling masks with a 20% sampling ratio. (a), (c), and (e) show the chest MR images reconstructed with the different methods under radial uniform, radial golden, and radial random sampling, respectively; (b), (d), and (f) show the corresponding error images (scale 0 to 0.5).

As shown in Figure 2, under the radial uniform, golden, and randomized sampling patterns, the images reconstructed by the proposed method present much fewer artifacts than those of the other methods for all the sampling masks. For the renal arteries MR image, which has fewer edges, the reconstruction quality of all the methods is better than that for the chest image.

Figure 3: Results of reconstruction on the renal arteries MR image using different methods and different sampling masks with a 20% sampling ratio. (a), (c), and (e) show the renal arteries MR images reconstructed with the different methods under radial uniform, radial golden, and radial random sampling, respectively; (b), (d), and (f) show the corresponding error images (scale 0 to 0.5).

In the proposed method, the wavelet and wavelet tree sparsity constraints mainly contribute to reconstructing the smooth regions, while the NLTV term enables us to recover texture and edges thanks to the use of nonlocal similarity. The combination of these constraints makes them complementary to each other and therefore yields relatively pleasing reconstruction results.

3.3. Quantitative Comparison.


Table 2: PSNR of images reconstructed with 20% sampling (ISNR=20).

Dataset | Sampling mask | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
Chest image | Radial uniform | 24.54 | 24.39 | 25.12 | 25.05 | 25.16 | 27.57
Chest image | Radial golden | 24.31 | 24.28 | 24.77 | 24.72 | 24.90 | 26.78
Chest image | Radial randomized | 22.66 | 22.65 | 23.44 | 23.41 | 23.48 | 25.77
Renal Arteries | Radial uniform | 26.96 | 26.82 | 28.31 | 28.69 | 28.95 | 30.88
Renal Arteries | Radial golden | 26.77 | 26.58 | 28.09 | 28.41 | 28.64 | 30.35
Renal Arteries | Radial randomized | 25.03 | 24.86 | 26.26 | 26.66 | 26.84 | 28.27

Table 3: MSSIM of images reconstructed with 20% sampling (ISNR=20).

Dataset | Sampling mask | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
Chest image | Radial uniform | 0.64 | 0.62 | 0.65 | 0.64 | 0.65 | 0.76
Chest image | Radial golden | 0.62 | 0.60 | 0.62 | 0.62 | 0.63 | 0.75
Chest image | Radial randomized | 0.58 | 0.57 | 0.59 | 0.58 | 0.58 | 0.70
Renal Arteries | Radial uniform | 0.60 | 0.54 | 0.78 | 0.79 | 0.80 | 0.84
Renal Arteries | Radial golden | 0.58 | 0.52 | 0.77 | 0.77 | 0.79 | 0.82
Renal Arteries | Radial randomized | 0.56 | 0.51 | 0.73 | 0.74 | 0.75 | 0.79

Table 4: FSIM of images reconstructed with 20% sampling (ISNR=20).

Dataset | Sampling mask | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
Chest image | Radial uniform | 0.78 | 0.79 | 0.81 | 0.81 | 0.81 | 0.85
Chest image | Radial golden | 0.77 | 0.79 | 0.80 | 0.80 | 0.80 | 0.84
Chest image | Radial randomized | 0.76 | 0.77 | 0.78 | 0.78 | 0.78 | 0.82
Renal Arteries | Radial uniform | 0.83 | 0.82 | 0.85 | 0.86 | 0.87 | 0.89
Renal Arteries | Radial golden | 0.81 | 0.81 | 0.84 | 0.85 | 0.85 | 0.88
Renal Arteries | Radial randomized | 0.79 | 0.79 | 0.82 | 0.83 | 0.83 | 0.85

Table 5: RLNE of images reconstructed with 20% sampling (ISNR=20).

Dataset | Sampling mask | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
Chest image | Radial uniform | 0.18 | 0.19 | 0.17 | 0.17 | 0.17 | 0.13
Chest image | Radial golden | 0.19 | 0.19 | 0.18 | 0.18 | 0.18 | 0.14
Chest image | Radial randomized | 0.23 | 0.23 | 0.21 | 0.21 | 0.21 | 0.16
Renal Arteries | Radial uniform | 0.16 | 0.17 | 0.14 | 0.13 | 0.13 | 0.10
Renal Arteries | Radial golden | 0.17 | 0.17 | 0.14 | 0.14 | 0.13 | 0.11
Renal Arteries | Radial randomized | 0.20 | 0.21 | 0.18 | 0.17 | 0.16 | 0.14

To provide more quantitative comparisons, the PSNR, MSSIM, FSIM, and RLNE were calculated for the MR images reconstructed above with the different methods and the various sampling schemes (sampling ratio of about 20%), as given in Tables 2–5, respectively. It is clearly seen that, compared with WaTMRI, combining the NLTV, wavelet, and wavelet tree sparsity constraints improves the PSNR by about 2 dB for the different sampling schemes. Moreover, the proposed method achieves its best performance under the radial uniform sampling scheme for both kinds of MR images, where it attains the highest PSNR values.

3.4. Influence of Sampling Ratio. In order to investigate the influence of different sampling ratios on the performance of the reconstruction methods, the curves of PSNR, MSSIM, FSIM, and RLNE versus the undersampling ratio (10%–50%, i.e., 0.1–0.5) for the MR images reconstructed with the different methods and undersampling schemes are shown in Figures 4–7. It can be seen that, for both the chest and renal arteries MR images, the PSNR, MSSIM, and FSIM of the images reconstructed by the proposed method (WaTMRI-NLTV) are higher than those of the other methods for all the sampling schemes and all the sampling ratios. As to RLNE, for both the chest and renal arteries images, the RLNE obtained by our method is lower than that of the others for all the sampling ratios. In particular, the proposed method is much better than the others at low sampling ratios.

3.5. Influence of Noise.

Figure 4: Curves of PSNR vs sampling ratio for both chest and renal arteries MR images reconstructed with different methods and sampling schemes (radial uniform, radial golden, and radial random).

Figure 5: Curves of MSSIM vs sampling ratio for both chest and renal arteries MR images reconstructed with different methods and sampling schemes.


Table 6: Comparison of computation time (in seconds) of different methods on chest MR image (size 256× 256) with different radial sampling schemes.

sampling scheme | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
radial uniform | 2.86 | 4.17 | 4.72 | 4.71 | 4.56 | 113.74
radial golden | 2.92 | 4.39 | 4.85 | 4.73 | 4.81 | 115.77
radial randomized | 2.83 | 4.14 | 4.30 | 4.25 | 4.34 | 104.95

Figure 6: Curve of FSIMs vs sampling ratios for both chest and renal arteries MR images reconstructed with different methods and sampling schemes.

The noise robustness of the proposed method was also investigated by varying the ISNR, with the results shown in Figure 8. It is seen that the reconstruction performance changes with both the sampling ratio and the ISNR. Increasing the sampling ratio reduces the reconstruction error (except for ISNR = 5 dB and ISNR = 10 dB), and reducing the ISNR increases the reconstruction error. On the other hand, for a fixed sampling ratio (for example, R = 0.3 in Figure 8), the reconstruction performance changes only slightly when the ISNR is larger than 10 dB.

3.6. Computation Time. Concerning computation time, the comparison between the proposed method and the other methods is given in Tables 6 and 7. Since the computation time varied only slightly with the sampling ratio, Tables 6 and 7 report the average time over the different sampling ratios. It is seen that the SparseMRI method is the fastest among the six methods and that the proposed method takes the longest time.

To assess the computational cost for larger images, experiments on images of size 512 × 512 were also performed, and the computation times are compared in Tables 8 and 9. From Tables 6–9, it can be observed that the computational cost rises for all methods as the image size increases. However, compared with the other methods, the computation time of the proposed method shows a relatively small increase.

4. Conclusion


Figure 7: Curve of RLNEs vs sampling ratios for both chest and renal arteries MR images reconstructed with different methods and sampling schemes.

Table 7: Comparison of computation time (in seconds) of different methods on renal arteries MR image (size 256×256) with different radial sampling schemes.

sampling scheme | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
radial uniform | 3.02 | 4.33 | 2.74 | 5.21 | 5.41 | 116.74
radial golden | 2.74 | 3.97 | 2.82 | 4.63 | 4.90 | 103.34
radial randomized | 2.82 | 4.19 | 2.65 | 5.15 | 5.11 | 112.29

Table 8: Comparison of computation time (in seconds) of different methods on chest MR image (size 512×512) with different radial sampling schemes.

sampling scheme | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
radial uniform | 44.34 | 18.00 | 62.91 | 62.45 | 70.41 | 210.96
radial golden | 44.57 | 18.06 | 65.82 | 64.27 | 69.82 | 210.49
radial randomized | 45.38 | 17.94 | 74.83 | 63.22 | 73.45 | 212.51

Table 9: Comparison of computation time (in seconds) of different methods on renal arteries MR image (size 512×512) with different radial sampling schemes.

sampling scheme | SparseMRI | RecPF | FCSA | FCSA-NLTV | WaTMRI | Proposed
radial uniform | 45.36 | 17.49 | 34.36 | 94.62 | 104.90 | 212.48
radial golden | 46.23 | 18.26 | 36.56 | 96.11 | 104.29 | 213.36

Figure 8: Reconstruction performance of the proposed method versus ISNR (5–60 dB) for sampling ratios R = 0.1 to 0.5, for both the chest and renal arteries MR images under radial uniform, golden, and random sampling; panels (a)–(d) report PSNR, MSSIM, FSIM, and RLNE, respectively.


In this paper, a CS-MRI reconstruction framework combining wavelet sparsity, wavelet tree structured sparsity, and NLTV regularization, solved with a fast composite splitting algorithm, was proposed. Experiments were performed by changing the test images, sampling schemes, sampling ratios, and reconstruction methods. The results showed that the proposed WaTMRI-NLTV outperforms the conventional CS-MRI algorithms, with fewer artifacts, richer texture, higher PSNR, MSSIM, and FSIM, and lower reconstruction RLNE, especially in the case of low sampling ratios. In the future, exploring more accurate sparse priors according to image properties would be desirable for further improving the reconstruction quality.

Data Availability

The test images (a) and (b) in Figure 1 can be obtained by downloading the code attached to Ref. [29].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (nos. 61661010, 61701105), the Funds for Talents of Guizhou University (no. 2013(33)), the Nature Science Foundation of Guizhou Province (Qiankehe J no. 20152044), the Natural Science Foundation of Heilongjiang Province of China (no. QC2017066), the French ANR (under MOSIFAH ANR-13-MONU-0009-01), and the Program PHC-Cai Yuanpei 2018 No. 41400TC.

References

[1] K. G. Hollingsworth, “Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction,” Physics in Medicine and Biology, vol. 60, no. 21, pp. R297–R322, 2015.
[2] A. S. C. Yang, M. Kretzler, S. Sudarski, V. Gulani, and N. Seiberlich, “Sparse reconstruction techniques in magnetic resonance imaging,” Investigative Radiology, vol. 51, no. 6, pp. 349–364, 2016.
[3] V. M. Runge, J. K. Richter, and J. T. Heverhagen, “Speed in Clinical Magnetic Resonance,” Investigative Radiology, vol. 52, no. 1, pp. 1–17, 2017.
[4] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[5] E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[6] L. B. Montefusco, D. Lazzaro, S. Papi, and C. Guerrini, “A fast compressed sensing approach to 3D MR image reconstruction,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1064–1075, 2011.
[7] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
[8] X. Qu, W. Zhang, D. Guo, C. Cai, S. Cai, and Z. Chen, “Iterative thresholding compressed sensing MRI based on contourlet transform,” Inverse Problems in Science and Engineering, vol. 18, no. 6, pp. 737–758, 2010.
[9] W. Guo, J. Qin, and W. Yin, “A new detail-preserving regularization scheme,” SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 1309–1334, 2014.
[10] W. Qidi, L. Yibing, L. Yun, and Y. Xiaodong, “The nonlocal sparse reconstruction algorithm by similarity measurement with shearlet feature vector,” Mathematical Problems in Engineering, vol. 2014, Article ID 586014, 8 pages, 2014.
[11] M. Yuan, B. Yang, Y. Ma, J. Zhang, R. Zhang, and C. Zhang, “Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior,” Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.
[12] S. Pejoski, V. Kafedziski, and D. Gleich, “Compressed sensing MRI using discrete nonseparable shearlet transform and FISTA,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1566–1570, 2015.
[13] D. Liang, H. F. Wang, Y. C. Chang, and L. L. Ying, “Sensitivity encoding reconstruction with nonlocal total variation regularization,” Magnetic Resonance in Medicine, vol. 65, no. 5, pp. 1384–1392, 2011.
[14] C. Deng, S. Wang, W. Tian, Z. Wu, and S. Hu, “Approximate sparsity and nonlocal total variation based compressive MR image reconstruction,” Mathematical Problems in Engineering, vol. 2014, Article ID 137616, 13 pages, 2014.
[15] Y. Yang, X. Qin, and B. Wu, “Median filter based compressed sensing model with application to MR image reconstruction,” Mathematical Problems in Engineering, vol. 2018, Article ID 8316194, 9 pages, 2018.
[16] S. Ravishankar and Y. Bresler, “MR image reconstruction from highly undersampled k-space data by dictionary learning,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028–1041, 2011.
[17] F. Zijlstra, M. A. Viergever, and P. R. Seevinck, “Evaluation of variable density and data-driven k-space undersampling for compressed sensing magnetic resonance imaging,” Investigative Radiology, vol. 51, no. 6, pp. 410–419, 2016.
[18] Z. Zhan, J. Cai, D. Guo, Y. Liu, Z. Chen, and X. Qu, “Fast multiclass dictionaries learning with geometrical directions in MRI reconstruction,” IEEE Transactions on Biomedical Engineering, vol. 63, pp. 1850–1861, 2016.
[19] Y. Huang, J. Paisley, Q. Lin, X. Ding, X. Fu, and X.-P. Zhang, “Bayesian nonparametric dictionary learning for compressed sensing MRI,” IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5007–5019, 2014.
[20] X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen, “Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator,” Medical Image Analysis, vol. 18, no. 6, pp. 843–856, 2014.
[21] R. Kang, G. Qu, B. Cao, and L. Yan, “Combined similarity to reference image with joint sparsifying transform for longitudinal compressive sensing MRI,” Mathematical Problems in Engineering, vol. 2016, Article ID 4162194, 12 pages, 2016.
[22] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[23] X. Fan, Q. Lian, and B. Shi, “Compressed-sensing MRI based on adaptive tight frame in gradient domain,” Applied Magnetic Resonance.
[24] F. Xiaoyu, L. Qiusheng, and S. Baoshun, “Compressed sensing MRI with phase noise disturbance based on adaptive tight frame and total variation,” IEEE Access, vol. 5, pp. 19311–19321, 2017.
[25] A. Majumdar, “An autoencoder based formulation for compressed sensing reconstruction,” Magnetic Resonance Imaging, vol. 52, pp. 62–68, 2018.
[26] L. F. Polanía and R. I. Plaza, “Compressed sensing ECG using restricted Boltzmann machines,” Biomedical Signal Processing and Control, vol. 45, pp. 237–245, 2018.
[27] C. Chen and J. Huang, “Exploiting the wavelet structure in compressed sensing MRI,” Magnetic Resonance Imaging, vol. 32, no. 10, pp. 1377–1389, 2014.
[28] C. Chen and J. Huang, “The benefit of tree sparsity in accelerated MRI,” Medical Image Analysis, vol. 18, no. 6, pp. 834–842, 2014.
[29] C. Chen and J. Huang, “Compressive sensing MRI with wavelet tree sparsity,” Advances in Neural Information Processing Systems, vol. 25, pp. 1124–1132, 2012.
[30] J. Huang, S. Zhang, and D. Metaxas, “Efficient MR image reconstruction for compressed MR imaging,” Medical Image Analysis, vol. 15, no. 5, pp. 670–679, 2011.
[31] J. Yang, Y. Zhang, and W. Yin, “A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 288–297, 2010.
[32] J. Huang and F. Yang, “Compressed magnetic resonance imaging based on wavelet sparsity and nonlocal total variation,” in Proceedings of the 9th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '12), pp. 968–971, Barcelona, Spain, May 2012.
[33] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[35] L. Zhang, X. Mou, and D. Zhang, “FSIM: a feature similarity index for image quality assessment,” IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
[36] X. Qu, D. Guo, B. Ning et al., “Undersampled MRI reconstruction with patch-based directional wavelets,” Magnetic Resonance Imaging, vol. 30, no. 7, pp. 964–977, 2012.
[37] R. W. Chan, E. A. Ramsay, E. Y. Cheung, and D. B. Plewes, “The influence of radial undersampling schemes on compressed sensing reconstruction in breast MRI,” Magnetic Resonance in Medicine, vol. 67, no. 2, pp. 363–377, 2012.
[38] L. Feng, R. Grimm, K. T. Block et al., “Golden-angle radial sparse parallel MRI: combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI,” Magnetic Resonance in Medicine, vol. 72, no. 3, pp. 707–717, 2014.
[39] L. Feng, L. Axel, H. Chandarana, K. T. Block, D. K. Sodickson, and R. Otazo, “XD-GRASP: golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing,” Magnetic Resonance in Medicine, vol. 75, no. 2, pp. 775–788, 2016.
[40] R. E. Carrillo, J. D. McEwen, and Y. Wiaux, “Sparsity Averaging Reweighted Analysis (SARA): a novel algorithm for radio-interferometric imaging,” Monthly Notices of the Royal Astronomical Society.

(16)

Hindawi www.hindawi.com Volume 2018

Mathematics

Journal of Hindawi www.hindawi.com Volume 2018 Mathematical Problems in Engineering Applied Mathematics Hindawi www.hindawi.com Volume 2018

Probability and Statistics

Hindawi

www.hindawi.com Volume 2018

Hindawi

www.hindawi.com Volume 2018

Mathematical PhysicsAdvances in

Complex Analysis

Journal of

Hindawi www.hindawi.com Volume 2018

Optimization

Journal of Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Volume 2018 Engineering Mathematics International Journal of Hindawi www.hindawi.com Volume 2018 Operations Research Journal of Hindawi www.hindawi.com Volume 2018

Function Spaces

Abstract and Applied Analysis

Hindawi www.hindawi.com Volume 2018 International Journal of Mathematics and Mathematical Sciences Hindawi www.hindawi.com Volume 2018

Hindawi Publishing Corporation

http://www.hindawi.com Volume 2013 Hindawi www.hindawi.com

World Journal

Volume 2018 Hindawi

www.hindawi.com Volume 2018Volume 2018

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Numerical Analysis

Advances inAdvances in Discrete Dynamics in Nature and Society Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Differential Equations International Journal of Volume 2018 Hindawi www.hindawi.com Volume 2018

Decision Sciences

Hindawi www.hindawi.com Volume 2018

Analysis

International Journal of Hindawi www.hindawi.com Volume 2018

Stochastic Analysis

International Journal of

Submit your manuscripts at

Références

Documents relatifs

Given the high number of group A patients who were positive for local tumour recurrence on PET (many of these results being confirmed by histology), one might argue that in

In this abstract, we propose two novel algorithms based on the SC concept, i.e., Iterative Maximum Overlap Construction (IMOC) to generate a sampling scheme from discretized

Recently, the Spherical Code (SC) concept was proposed to maximize the minimal angle between different samples in sin- gle or multiple shells, producing a larger angular separation

This highlights that the MEDUSA model is not of suf ficient complexity to capture the changes between the fast- and slow-sinking POC fluxes in the upper ocean compared to our

There are several reasons for which one may want to generate uniform samples of lattice paths: to make and try conjectures on the behaviour of a large ”typical” path, test

Our main result is a lower bound on the number of opinions to be produced by a runtime decentralized monitor in an asynchronous system where monitors may crash.. This lower

This difficulty comes from the heterogeneity of the data structure necessary to describe a channel code and the associated decoder: turbo codes use trellises, LDPC codes

Next, we used the conventional approach proposed by Pauly and Christensen (1995a) to estimate mean SPPRs of these species but using the mean transfer efficiency of 11.9%