101 - 54602 Villers lès Nancy Cedex France Unité de recherche INRIA Rennes : IRISA, Campus universitaire de Beaulieu - 35042 Rennes Cedex France Unité de recherche INRIA Rhône-Alpes : 65[r]

It is important to mention other interesting and effective models for tubular structure segmentation applications, including curvilinear enhancement filters (e.g. the steerable filters [13], [20]–[23], the orientation score-based diffusion filters [24]–[26], the path operator-based filter [27]) and graph-based shortest path models (e.g. [28], [29]). For more models relevant to tubular structure segmentation, we refer to the complete reviews in [1]–[3]. In the remainder of this section, we present a non-exhaustive overview of existing minimal path-based tubular structure segmentation approaches. The centerline of a tubular structure can be naturally modelled as a minimal path [30], i.e. a globally optimal curve that minimizes a curve length measured by a suitable metric. The classical Cohen-Kimmel minimal path model [30] has been taken as the basic tool in many tubularity segmentation tasks, owing to its global optimality and to efficient and stable numerical solvers such as the fast marching methods [31]–[33]. In the context of tubularity segmentation, minimal path-based approaches have been studied mainly along two research lines. Firstly, the Cohen-Kimmel model [30] provides an efficient and robust way to perform minimally interactive segmentation, provided that the end points of the target structure have been prescribed. To reduce user intervention, the growing minimal path model [34] was designed to iteratively add new source points, referred to as keypoints, during the geodesic distance computation. The keypoint detection method has been applied to road crack detection [35] and blood vessel segmentation [36], [37] with suitable stopping criteria. The geodesic voting model [38] uses a voting score derived from a set of minimal paths with a common source point, which can detect a vessel tree structure from a single source point. The curves resulting from the geodesic voting method [38] and their respective offset curves can be taken as initialization for the narrowband active contour model [39]. By minimizing an
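The minimal path idea above can be sketched with a discrete stand-in: Dijkstra's algorithm on a 4-connected grid plays the role of the fast marching Eikonal solver, and backtracking recovers the path. This is a toy illustration, not the Cohen-Kimmel model itself; the names `potential`, `source` and `target` are ours, and the real model solves a continuous Eikonal PDE.

```python
import heapq

def minimal_path(potential, source, target):
    """Dijkstra shortest path on a 4-connected grid: a discrete stand-in
    for the fast marching solver used in minimal path models. The cost of
    a step is the potential of the cell being entered."""
    rows, cols = len(potential), len(potential[0])
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + potential[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # backtrack from target to source, analogous to gradient descent on U
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a potential that is cheap along a "vessel" row and expensive elsewhere, the extracted path hugs the cheap row, mimicking centerline extraction.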

A local minimizer for such a criterion including both region and boundary functionals is usually hard to compute. This is mostly due to the fact that the set of image regions, i.e., the set of regular open domains in R^n (whose boundary is a closed C^2 manifold), does not have the structure of a vector space, preventing us from using gradient descent methods in a straightforward fashion. In order to circumvent this difficulty, calculus of variations and shape optimization techniques can be brought to bear on the problem. The basic idea is to use them to derive a PDE that will drive the boundary of an initial region toward a local minimum of the error criterion. The key point is to compute the velocity vector at each point of the boundary at each time instant. In this paper we propose a framework for achieving these goals in a number of practically important cases.
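The principle of driving a boundary toward a minimum of a region criterion can be illustrated with a deliberately simplified one-dimensional analogue, where the "region" is an interval of a signal and exhaustive search stands in for the PDE-driven descent. All names here are ours; this is a sketch of the idea, not the paper's method.

```python
def region_energy_1d(signal, b, c1, c2):
    """Two-region criterion for a boundary at index b:
    E(b) = sum_{i<b} (I_i - c1)^2 + sum_{i>=b} (I_i - c2)^2."""
    return (sum((v - c1) ** 2 for v in signal[:b])
            + sum((v - c2) ** 2 for v in signal[b:]))

def best_boundary(signal, c1, c2):
    """Minimise the 1D region energy over all boundary positions.
    In 1D we can search exhaustively; in 2D/3D the region boundary is
    evolved by a PDE whose velocity is the shape gradient."""
    return min(range(len(signal) + 1),
               key=lambda b: region_energy_1d(signal, b, c1, c2))
```

For a step-like signal, the optimal boundary lands exactly on the step, which is what the velocity field of the corresponding evolution equation would converge to.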

ABSTRACT In this chapter, we concentrate on the search for an optimal domain with respect to a global criterion including region and boundary functionals. A local shape minimizer is obtained through the evolution of a deformable domain in the direction of the shape gradient. Shape derivation tools, coming from shape optimization theory, allow us to easily differentiate region and boundary functionals. We focus in particular on region functionals involving region-dependent features that are globally attached to the region. A general framework is proposed and illustrated by many examples involving functions of parametric or non-parametric probability density functions (pdfs) of image features. Among these functions, we notably study the minimization of information measures such as the entropy for the segmentation of homogeneous regions, and the minimization of the distance between pdfs for tracking or matching regions of interest. Keywords: active contours, active regions, region functionals, boundary functionals, shape optimization, shape gradient, segmentation, tracking, image statistics, non-parametric statistics, Parzen window, entropy, distance between pdfs.

des Quinze-Vingts, Paris, France
Abstract
In this chapter, we give an overview of part of our previous work based on the minimal geodesic path framework and the Eikonal partial differential equation (PDE). We show that by designing adequate Riemannian and Randers geodesic metrics, minimal paths can be utilized to search for solutions to almost all active contour problems and to the Euler-Mumford elastica problem, which makes it possible to blend the advantages of minimal geodesic paths with those of the original approaches, i.e. active contours and elastica curves. The proposed minimal path-based models can be applied to a broad variety of image analysis tasks such as boundary detection, image segmentation and tubular structure extraction. The numerical implementations for the computation of minimal paths are known to be quite efficient thanks to Eikonal solvers such as the Finsler variant of the fast marching method introduced in (Mirebeau, 2014b).

+ λκN (22)
The evolution of the active contour driven by (21) is given in Fig. 3, while the one obtained using (22) is given in Fig. 4. We can remark that when the PDE includes the additional terms, the square is well segmented (Fig. 3(c)), whereas when the PDE does not include these additional terms we obtain a circle instead of the expected square (Fig. 4(c)). These results can be better understood by analyzing the velocity vectors of each evolution equation. We compare the evolutions of the velocity vectors during the propagation of the curve: the evolution of the amplitude using PDE (21) is given in Fig. 3(d–f), while the amplitude using PDE (22) is given in Fig. 4(d–f). We can observe that the velocity using PDE (22) is constant, so that the image features obviously do not appear. On the contrary, the image features clearly appear in the velocity using PDE (21), and so the square can be well segmented.

In this case, as explained in Section 3.2, we aim at maximizing the KLD between the inside and outside pdf estimates. The four images are well segmented and composed of two (or more) textures, where at least one texture in each image is well sparsified by the UDWT (Fig. 2). For the image in Fig. 2(a), the FCR = 2.35%. The two textures composing the image are sparsely represented by wavelets because they are essentially oscillatory patterns with main horizontal and vertical orientations. For Fig. 2(b), the FCR is 3.53%, which is quite low considering the poor sparsification of one of the two textures. The example of Fig. 2(c) gives FCR = 1.60%, given that the UDWT is a good sparse representation of these textures and the partition of the image is simple. We present results for more than two textures on the image in Fig. 2(d), to show that our method is not restricted to the segmentation of only two regions. Our method does a very good job at segmenting the different textures, without the need for a reference, with an FCR = 2.28%. However, one has to keep in mind that this was possible because the outside texture (here diagonally oscillating) is efficiently sparsified by the UDWT, which makes it easily discernible from the other textures.
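The Kullback-Leibler divergence between two discrete pdf estimates, and its symmetrized form, can be computed directly from normalized histograms. A minimal sketch (function names are ours; the paper works with pdf estimates of wavelet coefficients, which we abstract away here):

```python
import math

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete pdfs,
    with a small eps guarding against zero bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def symmetric_kld(p, q):
    """Symmetrized KLD, often used when neither pdf is a fixed reference."""
    return kld(p, q) + kld(q, p)
```

Maximizing this quantity between the inside and outside estimates pushes the contour toward a partition whose two regions have maximally different statistics.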

Index Terms—Segmentation, Alternate Sequential Filtering (ASF), microtomography, active contours
I. INTRODUCTION
WEATHERING of buildings is a widespread problem encountered in most countries around the world. This weathering concerns buildings made with concrete as well as historical buildings made with stones or bricks. Indeed, these porous materials are subject to deterioration due to the action of external environmental (physical, chemical and biological) agents [8], [14]. In all cases, water transfer within the whole volume of the porous medium is the common factor in weathering. Hence, for aesthetic reasons, durability aspects, and historical and cultural interest, architects, restorers and scientists work together to protect and restore these buildings. On this topic, many studies concern building material characterisation, analysing mineral and chemical composition and determining porous characteristics [13], [25]. A complementary approach in this field is to characterize the medium and to simulate some physical processes (e.g. fluid and mass transfer) in a realistic geometry. Such a goal can be achieved by 3D grey-level image analysis obtained by X-ray microtomography. This technique gives a map of the X-ray absorption coefficient of the various phases constituting the material. Indeed, X-ray tomography [17] is a powerful tool to accurately extract the structure of various porous materials: rocks [1], [6], [30], cements [12], and others [7], [24], [16].

Practically, the multicriteria-guided segmentation step proceeds as follows. The component-tree of the 3D PET image is first computed in quasi-linear time. This construction basically consists of thresholding the image for each grey-level value, computing the connected components of each binary (thresholded) image, and organizing these components as the nodes of a tree structured with respect to the standard inclusion relation on sets [16]. Each node of the tree then corresponds to a region of the image, with specific spatial and intensity properties. The criteria of each region can then be computed and stored as a vectorial attribute at the corresponding node. A ternary classification of these attributes, with respect to the classification model trained in the previous step, then allows us to discard the nodes that do not correspond to lesions. The adopted approach only preserves the regions of interest, with an explicit discrimination between active lesions and hyperfixating organs.
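The thresholding step underlying the component-tree can be illustrated naively: for a given grey level, extract the connected components of the thresholded image. Real component-tree algorithms build the whole tree in quasi-linear time rather than re-thresholding per level; this sketch (names ours) only shows where the nodes come from.

```python
def threshold_components(image, level):
    """Connected components (4-adjacency) of the binary image
    {image >= level}. Components across successive levels are nested by
    inclusion, which is exactly what the component-tree encodes."""
    rows, cols = len(image), len(image[0])
    seen = set()
    comps = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= level and (r, c) not in seen:
                # flood-fill one component from this unvisited pixel
                stack, comp = [(r, c)], set()
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= level
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps
```

Each component obtained this way is one candidate node; attributes (size, intensity statistics, etc.) would be attached to it before classification.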

Abstract Minimum cost paths are extensively studied theoretical tools for interactive image segmentation. The existing geodesically linked active contour (GLAC) model, which basically consists of a set of vertices connected by paths of minimal cost, blends the benefits of minimal paths and region-based active contours. This results in a closed piecewise-smooth curve, over which an edge or region energy functional can be formulated. As an important shortcoming, the GLAC in its initial formulation does not guarantee the curve to be simple, as would be consistent with the purpose of segmentation. In this paper, we draw our inspiration from the GLAC and other boundary-based interactive segmentation algorithms, in the sense that we aim to extract a contour given a set of user-provided points, by connecting these points using paths. The key idea is to select a combination among a set of possible paths, such that the resulting structure represents a relevant closed curve. Instead of considering minimal paths only, we switch to a more general formulation, which we refer to as admissible paths. These basically correspond to the roads travelling along the bottom of distinct valleys between given endpoints. We introduce a novel term to

In Table 1, we show the quantitative comparisons between the isotropic and Finsler dual-front models on the images shown in Fig. 5. For each image, we run the dual-front models with different metrics 30 times and compute statistical information including the maximum (Max.), minimum (Min.), average (Avg.) and standard deviation (Std.) of the Jaccard index J over these 30 experiments. In each dual-front run, the initial contour is set as a circle of radius 40 whose centre point is chosen by randomly sampling a point inside the target. We set α = 0.3, β = 10 and ℓ = 16 for both the isotropic and Finsler dual-front models. Specifically, we set ε = 1 for the dual-front model with the proposed Finsler metrics. From Table 1, one can see that over the 30 segmentation experiments on each image, the Finsler dual-front model achieves higher Avg. and Max. Jaccard index values than the isotropic model. Meanwhile, the standard deviation of J from the Finsler dual-front model is lower than that of the isotropic model. This implies that the proposed Finsler dual-front model is more accurate and more robust to initialization than the isotropic case [17], even if the parameter α is set to a small value.
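The evaluation pipeline above reduces to two simple computations: the Jaccard index between a segmentation and the ground truth, and the Max./Min./Avg./Std. summary over repeated runs. A minimal sketch (function names are ours):

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two pixel sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def run_stats(values):
    """Max, min, average and (population) standard deviation of the
    Jaccard indices collected over repeated runs, as in Table 1."""
    n = len(values)
    avg = sum(values) / n
    std = (sum((v - avg) ** 2 for v in values) / n) ** 0.5
    return max(values), min(values), avg, std
```

A low Std. over the 30 runs is what signals robustness to the randomly sampled initial contour.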

Figure 1. λ-flat zones of image "tooth saw" (21 × 21 pixels). (a) Image. (b) Row profile. (c) λ = 9.9 (21 zones). (d) λ = 10 (1 zone).
The purpose of our study is precisely to address this issue. We start with an initial partition into λ-flat zones, with a non-critical high value of λ, which leads to a sub-segmentation (i.e., large classes in the partition). Then, for each class, we define a second segmentation according to a regional criterion. In fact, two new connections are introduced: (1) η-bounded regions, and (2) µ-geodesic balls; the corresponding algorithms are founded on seed-based region growing inside the λ-flat zones. We show that the obtained segmentations are reliable, less sensitive to the choice of parameters, and that these new segmentation approaches are appropriate for hyperspectral images. From a more theoretical viewpoint, Serra's theory of segmentation [10] allows us to explain many of the notions considered in this paper.

2 The Neutrosophic Representation of Information
Neutrosophic information is described by a triplet q = (µ, ω, ν) ∈ [0, 1]^3, where µ represents the degree of truth, ω the degree of neutrality and ν the degree of falsity [8], [9], [10], [11]. The neutrosophic representation is a generalization of the fuzzy representation [12] and at the same time a generalization of the intuitionistic fuzzy representation [1]. For neutrosophic information, the operations of union, intersection and negation can be defined. For two neutrosophic information items a = (µ_a, ω_a, ν_a) and b = (µ_b, ω_b, ν_b) we define in

In the last part, a definition of the mean shape of a sample set of shapes is given, as well as that of the characteristic deformations that convey the shape variability, and then this sha[r]


In a recent study, about 1000 neurons were reconstructed from a mouse retina using 20,000 h of human labor (Helmstaedter et al., 2013). In spite of this great effort, the reconstructed retinal volume was just 0.1 mm on each side, only large enough to encompass the smallest types of retinal neurons. This study employed semiautomated methods, using advances in machine learning to automate most of the reconstruction (Jain et al., 2010b). Without the automation, the reconstruction would have required 10–100× more human effort. To reconstruct larger volumes, it is critical to improve the accuracy of computer algorithms and thereby reduce the amount of human labor required by semiautomated systems. Ideally, the need for human interaction will be progressively eliminated, gradually enabling fully automated tracing with eventual proofreading of its results.

2.2 Geodesic voting for the segmentation of tree structures
In Rouchdy and Cohen (2008), we introduced a new concept to segment a tree structure from only one point given by the user on the tree structure. This method consists in computing the geodesic density from a set of geodesics extracted from the image. Assume we are looking for a tree structure for which a potential cost function has been defined as above, with lower values on the tree structure. First, we provide a starting point x0, roughly at a root of the tree structure, and we propagate a front wave over the whole image with the fast marching method, obtaining the minimal action U. Then we consider an end point anywhere in the image. Backtracking the minimal path from the end point, we will reach the tree structure somewhere and stay on it until the start point is reached. Thus, a part of the minimal path lies on some branches of the tree structure. The idea of this approach is to consider a large number of end points {x_k}_{k=1}^N on the image domain and to analyse the set of minimal paths y_k obtained. For this, we consider a voting scheme along the centrelines. When backtracking each path, we add 1 to each pixel we pass over. At the end of this process, pixels on the tree structure will have a high vote because many paths pass over them. On the contrary, pixels in the background will generally have a low vote because very few paths pass over them. The result of this voting scheme is what we call the geodesic density, or voting score: at each pixel, it measures the density of geodesics passing over that pixel. The tree structure corresponds to the points with high geodesic density.
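Given the backtracked paths, the voting step itself is a one-pass accumulation. A minimal sketch of that step only (it assumes the paths have already been extracted by fast marching and backtracking; the function name is ours):

```python
from collections import defaultdict

def geodesic_voting(paths):
    """Accumulate the voting score: each pixel's vote is the number of
    backtracked paths passing over it, i.e. the 'geodesic density'."""
    votes = defaultdict(int)
    for path in paths:
        for pixel in set(path):  # add 1 per path, even if it revisits a pixel
            votes[pixel] += 1
    return dict(votes)
```

Pixels on the shared trunk of a tree receive votes from many paths, while background pixels receive one or none; thresholding the score recovers the tree.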

2. Data
2.1. Data for active regions
The HMI, an instrument on board the SDO spacecraft, provides continuous full-disk photospheric magnetic field data (Scherrer et al. 2012; Schou et al. 2012). The HMI team developed an automated method that detects active region patches from the full-disk vector magnetogram data and provides derivative data called space-weather HMI active region patches (SHARP) (Bobra et al. 2014). The automatic-detection algorithm operates on the line-of-sight magnetic field image and creates a smooth bounding curve, called a bitmap, which is centered on the flux-weighted centroid. The HMI Stokes I, Q, U, V data were inverted within the smooth bounding curve with the code called very fast inversion of the Stokes vector (VFISV), which is based on the Milne-Eddington model of the solar atmosphere. The 180° ambiguity in the transverse component of the magnetic field was corrected for using the minimum-energy algorithm (Metcalf 1994; Crouch et al. 2009). The inverted and disambiguated magnetic vector field data were remapped to a Lambert cylindrical equal-area projection, which gives decomposed Bx, By, and Bz data. JSOC provides these decomposed data, which we downloaded from the JSOC webpage. We calculated 17 active region magnetic field parameters every 12 min from these SHARP data. These parameters are listed with keywords and formulas in Table 1. We followed the same procedure to calculate the active region magnetic field parameters as defined in Bobra & Couvidat (2015). We considered the pixels that are within the bitmap and above a high-confidence disambiguation threshold (coded value greater than 60) for our magnetic parameter calculation. We used a finite-difference method to calculate the computational derivatives needed for the parameter calculation. We used Green's function technique with a monopole depth of 0.00001 pixels to calculate the potential magnetic field, which is necessary for the calculation of the total photospheric magnetic free-energy density. We neglected active regions near the limb, where it is difficult to see magnetic structures because of the projection effect. The calculated magnetic field parameter data are also not reliable near the limb. We therefore only considered data within ±70° from the disk center. We note

iteration, the geodesic distance for the starting points is set to zero. In the second iteration, the distance for the first neighboring points within the object is set to one. In the third iteration, the distance to the second neighbors is set to two, and so on. The process ends when the whole object has been processed. In this example, the process finishes at the 75th iteration, at the end of the spiral (see Fig. 1(b)). The geodesic distance is only computed for the points belonging to the object; pixels outside the object are thus set to infinity (or to an invalid distance value, -1 in this example). Note that the geodesic propagation can be implemented as iterative geodesic dilations. However, a more efficient implementation is possible using priority queues [22].
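The wavefront propagation just described is a breadth-first search restricted to the object mask: each BFS layer corresponds to one iteration of geodesic dilation. A minimal sketch using a FIFO queue (the function and parameter names are ours; for weighted distances one would switch to the priority queue mentioned above):

```python
from collections import deque

def geodesic_distance(mask, seeds):
    """Geodesic distance inside a binary object via breadth-first
    wavefront propagation (4-adjacency). Pixels outside the object keep
    the invalid value -1, as in the example in the text."""
    rows, cols = len(mask), len(mask[0])
    dist = [[-1] * cols for _ in range(rows)]
    queue = deque()
    for r, c in seeds:
        dist[r][c] = 0          # first iteration: starting points at zero
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and mask[nr][nc] and dist[nr][nc] == -1):
                dist[nr][nc] = dist[r][c] + 1  # next wavefront layer
                queue.append((nr, nc))
    return dist
```

On a U-shaped object, two pixels that are close in the image can be far apart geodesically, since the wavefront must travel around the gap, which is exactly the spiral behaviour of Fig. 1(b).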

Finally, we use a Hilbert-Peano filling curve [19] to transform the images into a unidimensional data structure [20, 21].

4.1. Segmentations of linear and correlated noise

This experiment consists of binary shapes artificially corrupted with a 0-mean, additive and correlated Gaussian noise, taken as the realization of a Gaussian Markov random field (GMRF) with an exponential correlation function [22]. Such noise is then parametrized by a correlation range r (which we fix to 3) and a noise variance σ². The DNNs for the Deep model are set to one hidden layer with 10 neurons. Note that we also give, for comparison purposes, the results of the K-means algorithm.
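A noise field of this kind can be approximated by smoothing white Gaussian noise and rescaling to the target variance. This is a crude stand-in for sampling a GMRF with an exponential correlation function (which would require, e.g., spectral methods); the function name and the box-filter choice are ours.

```python
import random

def correlated_noise(rows, cols, sigma, r, seed=0):
    """Approximately correlated zero-mean Gaussian noise: white noise
    smoothed by a box filter of half-width r, then rescaled so the sample
    standard deviation equals sigma. NOT the exact GMRF sampler of the
    paper, just an illustration of the (r, sigma) parametrization."""
    rng = random.Random(seed)
    white = [[rng.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # local averaging induces correlation over a range ~ r
            vals = [white[y][x]
                    for y in range(max(0, i - r), min(rows, i + r + 1))
                    for x in range(max(0, j - r), min(cols, j + r + 1))]
            out[i][j] = sum(vals) / len(vals)
    # rescale to the target standard deviation sigma
    n = rows * cols
    mean = sum(map(sum, out)) / n
    var = sum((v - mean) ** 2 for row in out for v in row) / n
    scale = sigma / var ** 0.5 if var > 0 else 0.0
    return [[v * scale for v in row] for row in out]
```

Adding such a field to a binary shape reproduces the experimental setting: the smoothing half-width plays the role of the correlation range r, and sigma controls the noise level.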
