[Panels (a) and (c): plots of ASSD (mm) against the percentage of corrupted shapes for the d- and n-perturbations, respectively, comparing the sPCA, cPCA, wNoEM, wEM and mdEM approaches.]

(b) d-perturbation

% corrup.   sPCA   cPCA        wNoEM       wEM         mdEM
25          1.36   1.41±0.02   1.40±0.01   1.73±0.01   1.36±0.02
50          1.36   1.49±0.02   1.54±0.02   1.83±0.01   1.41±0.03
75          1.36   1.51±0.03   1.60±0.02   1.83±0.01   1.43±0.04
100         1.36   1.76±0.10   1.69±0.04   1.84±0.02   1.65±0.05

(d) n-perturbation

% corrup.   sPCA   cPCA   wNoEM   wEM   mdEM
[table values not recoverable from the extraction]
Table 4.2: Comparison of standard PCA vs. various robust PCA implementations (wEM and mdEM from [SLB07], and wNoEM from [Kie97]) in the presence of d- and n-perturbations. The standard PCA without corrupted shapes is denoted sPCA, while cPCA refers to the standard PCA with corrupted shapes. The error metric is the ASSD in mm.

shapes was high (cPCA had the highest ASSD with 100% of corrupted shapes). Among all robust approaches, wEM was the worst regardless of the perturbation type; it only outperformed cPCA when all shapes were perturbed. The mdEM approach was consistently the most reliable (Fig. 4.22(e)) with respect to the other two robust implementations, although the difference was not statistically significant for low percentages of corrupted shapes. Nevertheless, the error difference between the mdEM and the sPCA results remained below 10% of the optimal value regardless of the percentage of corrupted shapes.

The n-perturbations appeared to be more problematic for the cPCA and the robust PCA variants than the directional perturbation. This discrepancy is difficult to explain clearly since perturbations are randomly located over the shapes. Nevertheless, the mdEM approach still produced satisfactory results, as the error difference remained below 20% of that of sPCA. Finally, no significant differences were observed depending on the bone types or sides.
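For reference, the ASSD used throughout this comparison can be approximated on the vertex sets of two shapes as the symmetric average nearest-neighbour distance. The following is only a sketch (a true surface distance would sample the mesh surface, not just the vertices, and the function name is an assumption):

```python
import numpy as np

def assd(verts_a, verts_b):
    """Symmetric Average Surface Distance, approximated on vertex sets.

    Mean of all nearest-neighbour distances from A to B and from B to A,
    computed brute-force on (n_a, 3) and (n_b, 3) vertex arrays.
    """
    d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=2)
    return (d.min(axis=1).sum() + d.min(axis=0).sum()) / (len(verts_a) + len(verts_b))

# toy check: two copies of a segment offset by 0.1 mm give an ASSD of 0.1 mm
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.0, 0.1, 0.0])
print(assd(a, b))  # ≈ 0.1
```

The brute-force pairwise distance matrix is fine for small vertex counts; a KD-tree would be preferred for full-resolution meshes.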

4.6 Discussion

4.6.1 Generic Model Construction

The creation of generic models is greatly facilitated by the proposed techniques such as tubular modeling (Sec. 4.2.2.4) and automatic inter-penetration removal (Sec. 4.2.4). However, the issue of the initial segmentation of the images remains. Although semi-automatic segmentation tools are available, such as the interpolation of segmentation results between slices or the use of constraint points, the processing of 500-1000 slices is a tedious task. In fact, a lot of time and effort is spent checking the results first in a global manner (i.e., analysis of the global shape of the segmented structure) and subsequently in a local fashion through a slice-by-slice verification.

The accuracy of the segmentation of course depends on the context. We have seen that the evaluation of our modeling by an expert radiologist was satisfactory. Despite some reconstruction errors observed in the images, the 3D models were considered very good from a modeling viewpoint. The accuracy in building generic models is generally not as critical as in the segmentation of medical images for clinical diagnosis. In the clinical context, results presenting too large deviations from the ground truth can indeed mislead the diagnosis and put patients at risk. Similarly, the creation of appearance priors requires an accurate modeling because the appearance around the border of the models is captured. A poor modeling will inevitably yield non-representative priors, which will be useless in a segmentation method.

Instead, what really matters in the creation of generic models is their quality. Models must be smooth, must not inter-penetrate each other and must offer a good trade-off between mesh quality and number of vertices. A mesh quality indicator is the quasi-regularity of the mesh faces and the quasi-uniformity of the vertex distribution. Our construction method fulfills these conditions. They are essential to create good shape priors (correspondence) and to obtain generic models that behave well during a deformable model evolution. Our ultimate goal is the use of deformable generic models coupled with shape priors to segment clinical data.
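The quasi-regularity of the faces can be quantified per triangle. One common shape-quality ratio (an illustrative choice, not necessarily the indicator used in this work) compares the triangle area to its summed squared edge lengths, reaching 1 for an equilateral triangle and tending to 0 for degenerate slivers:

```python
import numpy as np

def triangle_quality(p0, p1, p2):
    """Shape quality in [0, 1]: 4*sqrt(3)*area / (sum of squared edge lengths).

    Equals 1.0 for an equilateral triangle and approaches 0 for slivers, so
    the mean (or minimum) over all faces indicates mesh quasi-regularity.
    """
    area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
    edges2 = sum(np.dot(e, e) for e in (p1 - p0, p2 - p1, p0 - p2))
    return 4.0 * np.sqrt(3.0) * area / edges2

equilateral = [np.array([0.0, 0.0, 0.0]),
               np.array([1.0, 0.0, 0.0]),
               np.array([0.5, np.sqrt(3.0) / 2.0, 0.0])]
print(triangle_quality(*equilateral))  # ≈ 1.0
```

Quasi-uniformity of the vertex distribution could be summarized analogously, e.g. by the coefficient of variation of the edge lengths over the mesh.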

4.6.2 Shape Correspondence

Based on the experimental results reported in Sec. 4.5.3, we can consider that the correspondence method by landmark sliding is beneficial with respect to the initial mesh-to-volume correspondence method. This was confirmed by the performance metrics and the analysis of the modes of variation. Although the SSMs built with hip bone shapes appeared less compact after correspondence by landmark sliding, the Specificity and Generality measures improved. Furthermore, generated instances of hip bones with a refined correspondence appeared more natural and smoother.
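The Compactness measure mentioned above is conventionally the cumulative fraction of total shape variance captured by the leading PCA modes; a minimal sketch (the function name and the matrix layout are assumptions):

```python
import numpy as np

def compactness(shapes):
    """Cumulative variance ratio per mode for an (n_shapes, 3*n_landmarks)
    matrix of corresponding shape vectors: a more compact SSM reaches a
    given fraction of the total variance with fewer modes."""
    Xc = shapes - shapes.mean(axis=0)            # centre the shape vectors
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values
    lam = s**2                                   # proportional to PCA eigenvalues
    return np.cumsum(lam) / lam.sum()

rng = np.random.default_rng(0)
c = compactness(rng.normal(size=(10, 30)))
# c is non-decreasing and its last entry is 1.0
```

Generality and Specificity, in contrast, require leave-one-out reconstruction and sampling of synthetic instances, respectively, and are therefore more expensive to evaluate.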

Our metrics presented some curve crossings, which could lead to ambiguity [KE06]. In the literature, alternative performance metrics were hence proposed [MDW07, MDW08]. Among them, the testing of the SSM in the segmentation context was considered [VADF+06]. This is a good choice as, ultimately, we are interested in the performance of our knowledge-based segmentation approach. This investigation will be conducted in the evaluation section of the next chapter.

The correspondence by landmark sliding approach was particularly appreciated for its capability to refine a training set of shapes with arbitrary topology. In fact, compared to MDL-based approaches, no constraints are imposed on the shape genus. Moreover, the resulting refined shapes remain totally unchanged from a topological viewpoint (e.g., same number of points), which is particularly convenient. By using the lowest shape resolution, which offered a good compromise between shape quality and number of vertices, we were able to compute the correspondence across all resolution levels by using projection, the TPS transform and resolution changes.
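The TPS transform step can be sketched as follows. This is a generic 2D thin-plate spline interpolation (the thesis operates on 3D meshes, where the kernel differs, so treat this purely as an illustration of the idea):

```python
import numpy as np

def tps_warp(src, dst):
    """Return a function mapping 2D points through the thin-plate spline
    that interpolates the landmark pairs src -> dst (both (n, 2) arrays)."""
    def U(r):
        # 2D TPS kernel r^2 log r, with U(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            out = r**2 * np.log(r)
        return np.nan_to_num(out)
    n = len(src)
    K = U(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    # linear system [K P; P^T 0] [w; a] = [dst; 0]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)
    w, a = params[:n], params[n:]
    def f(pts):
        Kq = U(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
        return Kq @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return f

# a pure translation of the landmarks is reproduced exactly everywhere
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([1.0, 0.0])
f = tps_warp(src, dst)
```

The affine part (the `a` coefficients) absorbs rigid and scaling components, while the kernel weights `w` encode the smooth non-rigid deformation.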

The speed of this correspondence method is also an asset. We were able to process 45 right/left femur and hip bone training shapes in approximately 4 hours on a single-core 3.4 GHz Pentium 4 with 2 GB of RAM. This computation time is very satisfactory for a shape correspondence approach, and it was made possible by the use of the lowest resolution, which produced a low number of landmarks, and by the efficient solving of the quadratic problem (Appendix Sec. A.4). Compared to the fast MDL-based correspondence approach of Heimann [Hei08], we reach similar, if not lower, computation times. In fact, Heimann's implementation needed between 30 and 50 hours (depending on whether an optimization process was used) to process 45 shapes with 2562 landmarks. We ran a similar correspondence procedure on our 43 training shapes with the same number of landmarks in approximately 27.7 hours. Since the correspondence scales linearly with the number of training shapes, we can extrapolate a total processing time of 29 hours for 45 shapes.

4.6.3 Robust Statistical Shape Model

The synthetic results of Sec. 4.5.4 stressed the need for robust SSM construction methods. The standard PCA was indeed too sensitive to erroneous vertices. The EM-based robust approach (mdEM), which replaced missing data with reconstructed values, was the most effective. Given the strong artificial perturbations that were applied, the mdEM approach is expected to behave efficiently with real data, in which the amplitude of perturbations tends to be smaller. Real perturbations are likely to be well modeled as a combination of d- and n-perturbations.
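The idea behind a missing-data EM can be sketched as an alternation between a PCA fit and the re-imputation of entries flagged as corrupted. The following toy version is a generic EM-PCA imputation sketch, not the [SLB07] implementation; all names are assumptions:

```python
import numpy as np

def mdem_pca(X, missing_mask, n_modes=2, n_iter=200):
    """EM-style PCA with missing entries (mask True = treat as missing).

    E-step: impute the missing entries from the current low-rank
    reconstruction; M-step: refit the PCA on the completed data.
    """
    X = X.copy()
    # initialize missing entries with the column means of the observed data
    init = np.nanmean(np.where(missing_mask, np.nan, X), axis=0)
    X[missing_mask] = np.take(init, np.where(missing_mask)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = mu + (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[missing_mask] = recon[missing_mask]
    return X

# rank-1 toy data: hiding one entry, the iteration recovers its true value
X = np.outer(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 2.0, 3.0, 4.0]))
mask = np.zeros_like(X, dtype=bool)
mask[0, 1] = True
completed = mdem_pca(X, mask, n_modes=1)
```

Observed entries are never modified, which matches the intuition of the mdEM variant: only the vertices identified as outliers are replaced by model-based reconstructions.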

In fact, d-perturbations model the effect that could be observed if vertices outside the image extent were kept fixed while the deformable model was evolving inside the image FOV. This would result in the "shift" effect visible with the d-perturbation (Fig. 4.22(a)). The n-perturbation attempts to mimic erroneous deformations, due to misleading image artifacts or poor appearance priors, which occur along the vertex normal direction.
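Such a perturbation can be synthesized by displacing a random subset of vertices along their unit normals; the parameter names and the uniform amplitude model below are illustrative assumptions, not the exact protocol of Sec. 4.5.4:

```python
import numpy as np

def n_perturb(vertices, normals, fraction=0.25, amplitude=5.0, seed=0):
    """Displace a random fraction of vertices along their (unit) normals
    by a uniform random offset in [-amplitude, +amplitude] mm."""
    rng = np.random.default_rng(seed)
    out = vertices.copy()
    idx = rng.choice(len(vertices), size=int(fraction * len(vertices)),
                     replace=False)
    offsets = rng.uniform(-amplitude, amplitude, size=(len(idx), 1))
    out[idx] += offsets * normals[idx]
    return out

# flat patch with +z normals: only z-coordinates of the chosen subset change
verts = np.column_stack([np.arange(100.0), np.zeros(100), np.zeros(100)])
norms = np.tile([0.0, 0.0, 1.0], (100, 1))
pert = n_perturb(verts, norms)
```

A d-perturbation would instead apply a common directional offset to a contiguous block of vertices, mimicking the fixed-vertex "shift" described above.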

The robust PCA error was generally close to the reference PCA error, which suggests that an SSM built from perturbed shapes with a robust PCA could be used for segmentation purposes.

However, this experiment illustrated only the good generality aspect of the robust SSM. It is indeed unclear whether mixing complete and corrupted shapes into a robust SSM will preserve its specificity and efficacy in the segmentation context. As a result, this will be assessed in the evaluation section of the next chapter. In the remainder of this thesis, the mdEM approach is chosen as the official robust PCA.
