Cardiac motion estimation in ultrasound images using a sparse representation and dictionary learning

[Aviles 2017]. In this context, many challenges have to be resolved in order to achieve the objective of reliable cardiac function assessment. For example, there is still a need for deriving accurate and reproducible quantitative measures of motion to overcome the current state of inter-vendor variability of left ventricular (LV) longitudinal strains. Furthermore, the assessment of the cardiac function is still limited to global measurements [Alessandrini 2015] and undergoes great amounts of smoothing, causing loss of clinically valuable local information [Mirea 2016]. An accurate local analysis of the cardiac deformation has a major impact on the diagnosis, treatment choice and timing of surgical interventions in many clinical cases, e.g., ischemia, valvular heart disease and early detection of adverse cardiac effects of chemotherapy in oncology. Therefore, new motion estimation strategies that limit the loss of structural and local information are needed in the process of endorsing regional strains [Mirea 2016]. In particular, developing more adaptive alternatives to the purely geometrical regularizations used for cardiac motion estimation is still an open challenge.

In the context of ultrasound imaging (UI), 2D cardiac motion estimation is still a difficult task. In particular, US images are characterized by a poor signal-to-noise ratio caused by the speckle noise. Another drawback of UI is the presence of acquisition-related artefacts that can affect cardiac motion estimation. Moreover, in the case of 2D UI, out-of-plane motions cause discrepancies in the speckle pattern, also leading to erroneous motion estimates. More generally, the smoothness assumptions typically used for cardiac motion estimation can be violated, e.g., at anatomical boundaries. These shortcomings still call for new robust motion estimation strategies that mitigate the effects of outliers in cardiac UI.
Furthermore, several recent works have attempted to address the spatio-temporal nature of cardiac motion [De Craene 2012, Zhijun 2014, McLeod 2015]. However, many current methods either suffer from problems of large motions between distant frames, or do not process the image sequences as a whole. Therefore, it is still an open challenge to efficiently incorporate temporal aspects into the problem of cardiac motion estimation from US image sequences.

Cardiac Motion Estimation with Dictionary Learning and Robust Sparse Coding in Ultrasound Imaging

Université de Toulouse, IRIT, CNRS, Toulouse, France. Abstract: Cardiac motion estimation from ultrasound images is an ill-posed problem that needs regularization to stabilize the solution. In this work, regularization is achieved by exploiting the sparseness of cardiac motion fields when decomposed in an appropriate dictionary, as well as their smoothness through a classical total variation term. The main contribution of this work is to robustify the sparse coding step in order to handle anomalies, i.e., motion patterns that significantly deviate from the expected model. The proposed approach uses an ADMM-based optimization algorithm in order to simultaneously recover the sparse representations and the outlier components. It is evaluated using two realistic simulated datasets with available ground truth, containing native outliers and corrupted by synthetic attenuation and clutter artefacts.
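The sparse-plus-outlier model sketched in this abstract can be illustrated with a simplified alternating proximal scheme. This is not the paper's ADMM algorithm, and the dictionary, signal and penalty weights below are toy placeholders; it only shows how a sparse code and a sparse outlier term can be recovered jointly:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def robust_sparse_code(D, y, lam, mu, n_iter=200):
    """Jointly estimate a sparse code alpha and a sparse outlier term e with
    y ~ D @ alpha + e, by alternating minimization of
    0.5*||y - D@alpha - e||^2 + lam*||alpha||_1 + mu*||e||_1."""
    alpha = np.zeros(D.shape[1])
    e = np.zeros_like(y)
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # gradient step for the alpha block
    for _ in range(n_iter):
        r = y - D @ alpha - e
        alpha = soft(alpha + step * (D.T @ r), step * lam)
        e = soft(y - D @ alpha, mu)          # exact minimizer in the e block
    return alpha, e

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
y = 1.5 * D[:, 3] - 2.0 * D[:, 17]           # clean 2-atom signal
y[7] += 5.0                                  # one gross outlier sample
alpha, e = robust_sparse_code(D, y, lam=0.5, mu=0.5)
print(np.argmax(np.abs(e)))                  # the outlier term absorbs the spike
```

Because the outlier term is cheaper (in penalty cost) than explaining the spike through dictionary atoms, the spike ends up in `e` rather than corrupting the code `alpha`.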

Cardiac motion estimation in ultrasound images using spatial and sparse regularizations

where K is the maximum number of non-zero coefficients of α_p. Typical algorithms designed for solving (2) include the K-SVD [11] and online DL (ODL) algorithms [10]. At this point, it is interesting to mention that a few recent attempts to use sparse representations and DL for motion estimation have been investigated in the literature. In [12], the authors included a sparsity prior in an OF estimation problem and used the wavelet basis for the sparse coding step. This approach was also investigated in [13] using a learned motion dictionary. The method proposed in this paper combines a specific similarity measure for UI with spatial smoothness and sparse regularizations. This strategy jointly exploits the statistical properties of the speckle noise and the smooth and sparse properties of the cardiac motion. More precisely, we consider a multiplicative Rayleigh noise model introduced in [14], a spatial regularization based on the ℓ
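As an illustration, the K-sparse coding subproblem mentioned above can be sketched with orthogonal matching pursuit, a standard greedy solver. The random dictionary below is a toy placeholder for a learned motion dictionary:

```python
import numpy as np

def omp(D, y, K):
    """Orthogonal matching pursuit: greedily select at most K atoms of D
    (columns assumed unit-norm) and refit their coefficients by least
    squares, returning a K-sparse code alpha with y ~ D @ alpha."""
    alpha = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    support = []
    for _ in range(K):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs        # refit residual
    alpha[support] = coeffs
    return alpha

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms, as in DL algorithms
y = 1.5 * D[:, 3] - 2.0 * D[:, 17]    # a 2-sparse synthetic patch
alpha = omp(D, y, K=2)
print(np.nonzero(alpha)[0])           # indices of the selected atoms
```

K-SVD and ODL alternate a step of this kind (sparse coding with the dictionary fixed) with an update of the dictionary atoms themselves.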

Cardiac motion estimation in ultrasound images using spatial and sparse regularizations

118 Route de Narbonne, 31062 Toulouse Cedex 9, France. Abstract: This paper investigates a new method for cardiac motion estimation in 2D ultrasound images. The motion estimation problem is formulated as an energy minimization with spatial and sparse regularizations. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. The proposed method is evaluated in terms of motion estimation and strain accuracy and compared with state-of-the-art algorithms using a dataset of realistic simulations. These simulation results show that the proposed method provides very promising results for myocardial motion estimation.

Motion Estimation in Echocardiography Using Sparse Representation and Dictionary Learning

For future work, it would be necessary to investigate possible extensions of the algorithm to 3D UI. In this work, we have addressed the problem of 2D motion estimation, which can present some shortcomings, such as out-of-plane motion and limited geometrical information, that could be overcome in 3D. Nevertheless, it should be pointed out that, in contrast with 2D imagery, 3D UI is affected by the problems of frame rate and image spatial resolution in the azimuthal direction and thus does not necessarily provide better motion estimation results. Furthermore, it is worth mentioning that the data fidelity and regularization terms used in the current formulation are not inherently limited to 2D and could be extended to 3D. In the same way, the dictionaries could be learned separately for each direction or jointly for the 3 dimensions. The differences between these two strategies of learning the dictionary have not been investigated in this paper, but would also deserve consideration in future work. Another research prospect would be to study the interest of adaptive dictionary learning techniques for applications in which the training database is updated periodically. Furthermore, the proposed approach has not exploited the temporal properties of cardiac motion. Integrating this aspect could be performed by using more than two consecutive frames or by learning motion dictionaries that take into account the sparsity of the motion versus time. Another possible prospect concerns the problem of outliers. Considering potential model deviations or violations of smoothness assumptions (e.g., motion boundaries) in the current approach for robust motion estimation is clearly an interesting prospect.

Robust Optical Flow Estimation in Cardiac Ultrasound Images Using a Sparse Representation

V. Discussion and Perspectives. This paper presented a new method for robust 2D cardiac motion estimation in US images. The main objective of this method was to robustify the cardiac motion estimation algorithm of [21] (based on spatial and sparse regularizations) in order to mitigate the effects of outliers. The resulting fully robust approach allowed us to deal with the problem of native outliers, e.g., motion boundaries or background motions, as well as UI artefacts and image noise. It is worth mentioning at this point that other strategies have been proposed in the literature to address the problem of outliers in cardiac motion estimation (see Section I). For example, in [32] the myocardium was segmented prior to the motion estimation, allowing the displacements located at the epicardial borders to be down-weighted and thus preventing over-smoothing in this area. In contrast with the method studied in [32], the proposed method addressed the problem of spatial outliers for the entire motion field (i.e., using pixel-wise weights). It allowed us to deal not only with discontinuities at the contours, but also with outliers located inside the myocardium. In addition, the proposed strategy did not require a prior segmentation (which may be difficult to obtain in some practical applications), allowing spatial discontinuities to be directly compensated from the estimated motions. More generally, the proposed approach showed the interest of jointly robustifying the data fidelity and regularization terms in a variational approach.

Robust Optical Flow Estimation in Cardiac Ultrasound Images Using a Sparse Representation

Other prospects include the robustification of the dictionary learning step. A robust learning of the cardiac motion dictionary can be especially useful when using corrupted learning data, for example when the training set contains patterns far from typical cardiac motions. If the dictionary is learnt in an adaptive way, i.e., using the estimated motions themselves, a robust learning approach would allow us to discard erroneous motion estimates. Furthermore, it would be worthwhile to take advantage of the sparse codes, the dictionary atoms and the robust weights that are obtained using the proposed method. For example, a joint robust motion estimation and segmentation could be obtained by combining the information provided by the weights of the spatial and sparse regularizations. Taking into account the increased use of 3D UI, it is also worth mentioning that the proposed method could be extended to 3D. However, the limitations of frame rate and spatial resolution in the azimuthal direction imply that the use of 3D US images does not necessarily lead to a more accurate estimation compared to 2D UI.


Cardiac Motion Estimation Using Convolutional Sparse Coding

Abstract: This paper studies a new motion estimation method based on convolutional sparse coding. The motion estimation problem is formulated as the minimization of a cost function composed of a data fidelity term, a spatial smoothness constraint, and a regularization based on convolutional sparse coding. We study the potential interest of using a convolutional dictionary instead of a standard dictionary using specific examples. Moreover, the proposed method is evaluated in terms of motion estimation accuracy and compared with state-of-the-art algorithms, showing its interest for cardiac motion estimation.

Low-Dimensional Representation of Cardiac Motion Using Barycentric Subspaces: a New Group-Wise Paradigm for Estimation, Analysis, and Reconstruction

4.2. Evaluation using a Synthetic Sequence. We evaluate the method using one synthetic time series of T = 30 cardiac image frames computed using the method described in Prakosa et al. (2013). The use of a synthetic sequence has the important advantage of providing a dense point correspondence field following the motion of the myocardium during the cardiac cycle, which can be used to evaluate the accuracy of the tracking. Another option could be to use point correspondences manually defined by experts, but these tend to be inconsistent and unreliable (Tobon-Gomez et al., 2013). First, we compute the optimal references using the methodology described in Algorithm 2, giving us the three reference frames spanning the barycentric subspace: #1 is frame 1, #2 is frame 11 and #3 is frame 21. Then we register each frame i of the sequence using the method described above to get the deformations from each of the three references to the current image, using both the standard method and our approach using Barycentric Subspaces as a prior. We deform each of the 3 ground-truth meshes corresponding to the reference frames (1, 11 and 21) with the deformation from the reference frame to the current frame. We compare our approach with the standard approach, where the registration between one of the references and the current frame is done directly. In Figure 8 (left), we show the point-to-point registration error of the deformed mesh using the 3 different deformations (one with respect to each reference). A substantial reduction of the error (of about 30%) can be seen for the largest deformations (between end-systole and the first reference for the blue curve corresponding to frame 1 chosen as reference). This comes at the cost of additional error for the small deformations evaluated at the frames near the respective references. In Figure 8 (right), we show the estimation of the volume curve (which is one of the most important cardiac features used in clinical practice).

Our better estimation of the large deformations leads to a substantial improvement of the volume curve estimation. In particular, the estimation of the ejection fraction goes from 32% with the standard method to 38%, closer to the ground truth (43%), reducing the estimation error by half.

Cardiac Motion Estimation Using Convolutional Sparse Coding

The motion image s_k is modeled as a convolution between the coefficient maps x_{m,k} and a set of M filters d_m. The coefficient maps indicate where the filters are activated, and the filters are supposed to model specific structures contained in the images of interest. A particular example is displayed in Fig. 1, where Fig. 1(a) displays one frame of the heart motion, Fig. 1(b) shows the estimated filters for the image and Fig. 1(c) shows the map of the cardiac motions associated with the red patch of the image. Note that the convolutional dictionary of Fig. 1(b) was obtained using M = 32 filters of size L × L with L = 8. In Fig. 1(c), the cardiac motions of the red patch are written as the linear combination of 10 filters convolved with a respective set of coefficient maps. Note that only 10 filters are required to represent this patch and that the 22 remaining filters are inactive, i.e., with zero coefficients. To improve the quality of the visualization, only the non-zero values of the coefficient maps have been shown. The key advantage of using a convolutional sparse model is its translation-invariant property, which may offer a better representation in comparison with standard dictionary learning strategies. Indeed, each patch of the image can be sparsely represented with the proposed model by a single shift-invariant local dictionary [6].
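The convolutional model described above, s_k = Σ_m d_m * x_{m,k}, can be sketched directly; the toy setting below (M = 4 filters, random activations) stands in for the M = 32, L = 8 configuration of Fig. 1:

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_reconstruct(filters, coef_maps):
    """Reconstruct an image as the sum over m of d_m * x_m, i.e., each
    L x L filter convolved with its (sparse) coefficient map."""
    return sum(fftconvolve(x, d, mode="same") for d, x in zip(filters, coef_maps))

rng = np.random.default_rng(0)
M, L, N = 4, 8, 64
filters = [rng.standard_normal((L, L)) for _ in range(M)]
coef_maps = [np.zeros((N, N)) for _ in range(M)]
for x in coef_maps:
    x[rng.integers(0, N, 3), rng.integers(0, N, 3)] = 1.0  # a few activations
s = csc_reconstruct(filters, coef_maps)
print(s.shape)  # (64, 64)
```

The translation invariance mentioned in the text is visible here: shifting a spike in a coefficient map shifts the corresponding filter copy in the reconstruction, without needing a new atom for every position.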

Anomaly detection in mixed telemetry data using a sparse representation and dictionary learning

b_D Σ_{k=1}^{K_D} ‖e_{D,k}‖_2    (4)

where ‖e_{D,k}‖_2, k = 1, ..., K_D, is the Euclidean norm, e_{D,k} corresponds to the kth time series of e_D associated with the kth parameter, and b_D is a regularization parameter that controls the level of sparsity of e_D. The sparsity constraint for the anomaly signal reflects the fact that anomalies are rare and affect few parameters at the same time. Note that the discrete vector x_D is constrained to belong to B, where B is the canonical or natural basis of R^L, i.e., B = {u_l, l = 1, ..., L}, where u_l is a vector whose lth component equals 1 and whose other components equal 0. In other words, only one atom of the discrete dictionary Φ_D is chosen to represent the discrete signal; this amounts to looking for the nearest neighbour of y_D in the dictionary. This strategy has proved to be an effective method to reconstruct discrete signals (compared to a representation using a linear combination of atoms), which explains this choice. Since x_D belongs to a finite set, its estimation is combinatorial and can be solved for each atom φ_{D,l} (where φ_{D,l} is the lth column of Φ_D) as follows
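The nearest-neighbour coding of the discrete signal can be sketched in a few lines; the tiny dictionary below is a toy placeholder for the discrete dictionary Φ_D of the text:

```python
import numpy as np

def nearest_atom(Phi_D, y_D):
    """Combinatorial sparse coding with x_D constrained to the canonical
    basis: select the single column of Phi_D closest to y_D in Euclidean
    distance and return its index l and the one-hot code x_D."""
    dists = np.linalg.norm(Phi_D - y_D[:, None], axis=0)
    l = int(np.argmin(dists))
    x_D = np.zeros(Phi_D.shape[1])
    x_D[l] = 1.0
    return l, x_D

# four candidate discrete patterns as columns
Phi_D = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.0, 1.0, 0.0, 1.0]])
l, x_D = nearest_atom(Phi_D, np.array([0.1, 0.9]))
print(l, x_D)  # 1 [0. 1. 0. 0.]
```

Because x_D ranges over only L candidates, the combinatorial search is just an argmin over the columns, rather than a linear-combination fit.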

Learning A Tree-Structured Dictionary For Efficient Image Representation With Adaptive Sparse Coding

This paper first describes a new tree-structured dictionary learning method called Tree K-SVD. Inspired by ITD and TSITD, each dictionary at a given level is learned from a subset of residuals of the previous level using the K-SVD algorithm. The tree structure enables the learning of more atoms than in a "flat" dictionary, while keeping the coding cost of the index-coefficient pairs similar. Tests are conducted on facial images, as in [1, 4, 5], compressed at multiple rates in a compression scheme. For a given bit rate, Tree K-SVD is shown to outperform "flat" dictionaries (K-SVD, Sparse K-SVD and the predetermined (over)complete DCT dictionary) in terms of quality of reconstruction for a high sparsity, i.e., when the number of atoms used in the representation of a vector is low. Setting the sparsity constraint to only a few atoms limits the number of levels, and so of atoms, in the tree-structured dictionary. The paper then describes an adaptive sparse coding method applied to the tree-structured dictionary to adapt the sparsity per level, i.e., to allow selecting more than one atom per level. It is shown to improve the quality of reconstruction.
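The one-atom-per-level coding pass through such a tree can be sketched as follows. This is a simplification: a single dictionary per depth stands in for the per-branch child dictionaries of the real Tree K-SVD, and the dictionaries are toy placeholders rather than K-SVD-trained ones:

```python
import numpy as np

def tree_code(level_dicts, y):
    """Greedy one-atom-per-level coding: at each depth, pick the atom most
    correlated with the current residual (one index-coefficient pair per
    level), subtract its contribution, and descend on the residual."""
    residual = y.astype(float).copy()
    pairs = []
    for D in level_dicts:                      # one dictionary per depth
        j = int(np.argmax(np.abs(D.T @ residual)))
        c = float(D[:, j] @ residual)          # atoms assumed unit-norm
        pairs.append((j, c))
        residual = residual - c * D[:, j]
    return pairs, residual

# with orthonormal levels, the scheme peels off components one by one
I4 = np.eye(4)
pairs, r = tree_code([I4, I4], np.array([3.0, 0.0, 1.0, 0.0]))
print(pairs)  # [(0, 3.0), (2, 1.0)]
```

The coding cost is one index-coefficient pair per level, which is what lets the tree hold many more atoms than a flat dictionary at a comparable bit rate.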

Detection of Multiple Sclerosis Lesions using Sparse Representations and Dictionary Learning

Over the past years, various approaches for semi-automatic and automatic segmentation of MS lesions have been proposed. In these methods, different image features, classification methods and models have been tried, but they usually suffer from high sensitivity to the imaging protocols and so usually require tedious parameter tuning or specific normalized protocols [3]. More recently, sparse representation has evolved as a model to represent an important variety of natural signals using few elements of an overcomplete dictionary. Many publications have demonstrated that sparse modeling can achieve state-of-the-art results in image processing applications such as denoising, texture segmentation and face recognition [4, 5]. In [5], given multiple images of individual subjects under varying expressions and illuminations, the images themselves were used as dictionary elements for classification. Such a method uses dictionary learning to analyze the image as a whole. Mairal et al. [6] proposed to learn discriminative dictionaries better suited for local image discrimination tasks. In medical imaging, local image analysis is of prime importance, and it could be interesting to see the performance of sparse representation and dictionary learning based classification methods in the context of disease detection. Some researchers have reported work on segmentation of the endocardium and MS lesions using dictionary learning [7, 8]. Weiss et al. proposed an unsupervised approach for MS lesion segmentation, in which a dictionary learned using healthy brain tissue and lesion patches is used as a basis for classification [7].

Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing

5. Department of Radiology, Nanjing Hospital Affiliated to Nanjing Medical University, 210096, People's Republic of China. Abstract: In abdomen computed tomography (CT), repeated radiation exposures are often inevitable for cancer patients who receive guided surgery or radiotherapy. Low-dose scans should thus be considered in order to avoid excessive cumulative radiation harm. This work aims at improving abdomen tumor CT images from low-dose scans by using a fast dictionary learning (DL) based processing. Stemming from sparse representation theory, the patch-based DL approach proposed in this paper allows effective suppression of both mottled noise and streak artifacts. The experiments carried out on clinical data show that the proposed method brings encouraging improvements in abdomen low-dose CT images with tumors.

Augmented dictionary learning for motion prediction

This work combines the merits of Markovian-based methods and clustering-based methods by finding the local clusters characterized by partial trajectory segments, and making predictions using both the local motion models and the global Markovian transition dynamics. Multi-class inverse reinforcement learning (IRL) algorithms [16], [17], which are related to clustering-based methods (e.g. [7]), have also been applied to modeling motion patterns. Previous work based on clustering partial trajectory segments [18], [19] was limited to modeling local motion patterns as short straight line segments, whereas this work is more flexible since we do not constrain the shape of local motion patterns. Recent work [20] applied sparse coding to an image representation of hand gesture trajectories, but that work models each dimension independently, which would not be suitable for location-based applications as considered in this paper.

Unsupervised change detection for multimodal remote sensing images via coupled dictionary learning and sparse coding

This paper proposes an unsupervised CD method able to deal with images dissimilar in terms of modality and of spatial and/or spectral resolutions. The adopted methodology, similar to [17, 18], learns coupled dictionaries able to conveniently represent multimodal remote sensing images of the same geographical location. The problem is formulated as a joint estimation of the coupled dictionary and sparse code for each observed image. Additionally, appropriate statistical models are used to better fit the modalities of the pair of observed images. Overlapping patches are also taken into account during the estimation process. Finally, to better couple images with different resolutions, additional scaling matrices [19] are jointly estimated within the whole process. The overall estimation process is formulated as an inverse problem. Due to the nonconvex nature of the problem, it is solved iteratively using the proximal alternating linearized minimization (PALM) algorithm [20], which ensures convergence towards a critical point for some nonconvex non-smooth problems. CD is then envisaged through the differences between the sparse codes estimated for each image using the estimated coupled dictionaries. This paper is organized as follows.
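Once the sparse codes have been estimated on the coupled dictionaries, the final CD step reduces to comparing codes patch by patch. A minimal sketch of that last step (the codes below are toy arrays, not outputs of the PALM estimation):

```python
import numpy as np

def change_map(codes_t1, codes_t2, thresh):
    """Per-patch change score: Euclidean distance between the sparse codes
    of the two acquisitions (one column per patch), plus a thresholded
    binary change mask. Assumes the codes were estimated on coupled
    dictionaries, which makes them comparable across modalities."""
    score = np.linalg.norm(codes_t1 - codes_t2, axis=0)
    return score, score > thresh

rng = np.random.default_rng(0)
codes_t1 = rng.standard_normal((5, 6))   # 5 atoms, 6 patches
codes_t2 = codes_t1.copy()
codes_t2[:, 3] += 2.0                    # patch 3 changed between dates
score, mask = change_map(codes_t1, codes_t2, thresh=1.0)
print(mask)                              # only patch 3 is flagged
```

Comparing in the code domain rather than the pixel domain is what sidesteps the modality mismatch: both acquisitions are projected onto representations built to be coherent with each other.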



Fusion of multispectral and hyperspectral images based on sparse representation

F (4) where D is the dictionary, A is the sparse code, and Ū is the approximation of U derived from the dictionary and the code. Generally, an over-complete dictionary is proposed as a basis for the image patches. In many applications, the dictionary D is fixed a priori, and corresponds to various types of bases constructed using atoms such as wavelets [21] or discrete cosine transform coefficients [22]. However, these bases are not necessarily well matched to natural or remote sensing images since they do not necessarily adapt to the nature of the observed images. As a consequence, learning the dictionary from the observed images instead of using predefined bases generally improves signal representation [23]. More precisely, the strategy advocated in this paper consists of learning a dictionary D from the high-resolution MS image to capture most of the spatial information contained in this image. To learn a dictionary from a multi-band image, a popular method consists of searching for a dictionary whose columns (or atoms) result from the lexicographic vectorization of the HS 3D patches [16, 24]. However, this strategy cannot be followed here since the dictionary is learned on the MS image Y_m ∈ R^{n_λ × n} composed of n_λ bands, to approximate the target image U composed of m̃_λ spectral bands. Conversely, to capture most of the spatial details contained in each band of the MS image, we propose to approximate each band of the target image U by a sparse decomposition on a dedicated dictionary. In this case, the regularization term (4) can be written as
