Estimation of affine transformations directly from tomographic projections in two and three dimensions

DOI 10.1007/s00138-011-0376-2 · ORIGINAL PAPER

Estimation of affine transformations directly from tomographic

projections in two and three dimensions

René Mooser · Fredrik Forsberg · Erwin Hack · Gábor Székely · Urs Sennhauser

Received: 30 April 2011 / Revised: 5 August 2011 / Accepted: 26 September 2011 / Published online: 18 October 2011 © Springer-Verlag 2011

Abstract This paper presents a new approach to estimate two- and three-dimensional affine transformations from tomographic projections. Instead of estimating the deformation from the reconstructed data, we introduce a method which works directly in the projection domain, using parallel and fan beam projection geometries. We show that any affine deformation can be analytically compensated, and we develop an efficient multiscale estimation framework based on the normalized cross correlation. The accuracy of the approach is verified using simulated and experimental data, and we demonstrate that the new method needs fewer projection angles and has a much lower computational complexity as compared to approaches based on the standard reconstruction techniques.

Keywords Computed tomography · Tomographic projections · Motion estimation

R. Mooser (B) · E. Hack · U. Sennhauser
Electronic/Metrology/Reliability Laboratory, Swiss Federal Laboratories for Materials Testing and Research (EMPA), Ueberlandstr. 129, 8600 Duebendorf, Switzerland
e-mail: rene.mooser@gmail.com

Present Address:
R. Mooser
swissQuant Group AG, Kuttelgasse 7, 8001 Zurich, Switzerland

F. Forsberg
Division of Experimental Mechanics, Luleå University of Technology, 97187 Luleå, Sweden

Present Address:
F. Forsberg
LKAB R&D, P.O. Box 952, 97128 Luleå, Sweden

G. Székely
Computer Vision Laboratory, ETH Zurich, Sternwartstr. 7, 8092 Zurich, Switzerland

1 Introduction

Since its invention in the late 1960s, computed tomography (CT) has become an indispensable imaging technology in medicine, non-destructive testing and materials research. A challenge often encountered in imaging techniques is the need to quantify the changes between two states of the object under inspection. This includes e.g. the deformation induced in a sample by external forces or the misalignment of a specimen between two consecutive CT scans. The estimated deformation can then be used to study mechanical properties like strain [1], which is important e.g. in clinical applications [2,3] or in experimental mechanics [4,5]. Digital motion estimation is a key tool to find and determine deformations from digital images as produced by CT and is still a topic to which much research is devoted. Due to the vast amount of literature concerned with motion estimation, we refer to two reviews, see [6,7].

Conventional approaches for CT imaging use the reconstructed image data to perform the deformation estimation. In this work, we develop an approach to estimate the affine deformation of a sample directly from the tomographic projections. Working directly with the projection data has three main advantages. First, the standard reconstruction techniques, e.g. filtered backprojection [8], are computationally expensive and have a scaling behavior of O(M^(d+1)), where M is the linear number of voxels in the reconstructed d-dimensional image volume. As modern CT systems obtain detector resolutions of up to 2,048 × 2,048 pixels, even state-of-the-art hardware becomes a bottleneck in the reconstruction process. Second, in order to obtain reconstructions with good image quality, the number of projection images should roughly be the same as the linear number of voxels in the reconstructed volume [9]. We will show that our direct approach needs considerably fewer projections, resulting in reduced scanning time and a lower radiation dose. Third, the standard filtered backprojection is prone to reconstruction artifacts, since artifacts which are localized in the projections are smeared out over the reconstructed volume, resulting in streaks, see e.g. [10]. These streaks are no longer localized and can hence seriously disturb any further image processing method, e.g. motion estimation.

Approaches for estimating deformations directly in the projections have recently received a lot of attention. In [11–13], a projection-based registration of two-dimensional datasets using rigid body transformation is discussed. In [14], these ideas are extended to three-dimensional datasets, allowing (non-uniform) scaling, translation and rotation as degrees of freedom. Due to the use of a Fourier phase matching method, this approach is very fast, but has the disadvantage that the maximum rotation angle around the x and y axes must be less than 10°. A modification of this approach was recently applied to problems in the field of nondestructive testing [15]. Another approach for parallel, fan and cone beam geometry is discussed in [16], but again only translation and rotation are allowed. Also, some authors use the Radon transform as a tool to determine affine transformations of images, see [17–22]. As these approaches assume knowledge of the underlying image, they cannot be directly compared to our situation, but give valuable insight into the mathematics of projection based image registration. In contrast to all these approaches, we develop a method which incorporates all degrees of freedom of an affine transformation, that is, rotation, translation, non-uniform scaling and non-uniform shearing. Furthermore, our approach works with two- and three-dimensional parallel and fan beam geometries. Preliminary results of our efforts (using only simulated two-dimensional parallel beam geometry) have already been published in [23].

This paper is organized as follows: In Sect. 2, the Radon transform is introduced as a tool to describe the mathematics behind computed tomography, and the parallel and fan beam scanning geometries are explained. Also, we briefly discuss the simulation of tomographic projections, which is needed for our further arguments. In Sect. 3, the motion estimation approach is presented; the results using simulated and experimental data are shown in Sect. 4. Section 5 concludes the paper with a summary and a short outlook.

2 Radon transform and computed tomography

We start by introducing the two-dimensional Radon transform. Consider a function f(x) defined for x ∈ R²; then its Radon transform is given as [24]

ˇf(p, ξθ) := Rθ[f(x), p] := ∫_Ω f(x) δ(p − ⟨ξθ, x⟩) dx,  (1)

where ξθ ∈ S¹ is a unit vector, Ω ⊆ R², S¹ is the unit circle and ξθ is given by ξθ = (cos(θ), sin(θ))ᵀ, cf. Fig. 1.

Fig. 1 Two-dimensional Radon transform. p is the signed distance from the origin of the coordinate system, θ is the rotation angle

2.1 Computed tomography

In X-ray CT, the investigated sample is placed between an X-ray source and a detector device, which measures the intensity of the transmitted X-rays Itr. By rotating the object in small steps Δθ, an absorption contrast image is generated for every angle θ. For a monochromatic beam with incident intensity I0, the normalized measured intensity Itr/I0 is

Itr/I0 = exp(−∫_L f(s) ds),  (2)

where L is a projection line through the object and f(s) is the material dependent attenuation coefficient.

2.2 Scanning geometries

We introduce two types of scanning geometries, namely the parallel beam and the fan beam geometry.

2.2.1 Parallel beam geometry

In the two-dimensional case, the parallel beam geometry can be directly written as the Radon transform Eq. 1. The variable p in Eq. 1 describes the detector coordinate and θ the projection angle.

2.2.2 Fan beam geometry

In two dimensions, the fan beam geometry is described as

Fβ,D[f(x), s] = ∫_Ω f(x) δ(s·cos(γ) − ⟨ξβ−γ, x⟩) dx,  (3)

with D the source to origin distance, β the projection angle,

ξβ−γ = (cos(β − γ), sin(β − γ))ᵀ

and γ the angle between the central ray (s = 0) and the current ray. Here we use the symbol s instead of p for the detector coordinate, in order to avoid confusion with the parallel beam transform. Note that the dependency of γ on s and D is hidden, i.e. γ = γ(s, D) = cos⁻¹(D/√(D² + s²)). The fan beam geometry is shown in Fig. 2.

Fig. 2 Fan beam geometry. To avoid confusion with the parallel beam geometry, we denote the virtual detector coordinate with s and the rotation angle with β. γ is the angle between the central projection ray and the projection ray L, while D is the distance between the detector and the source. Also shown are p and θ if the ray L would come from the parallel beam geometry

Using interpolation, it is possible to resort the fan beam geometry to the parallel beam geometry, with

γ + α = π/2,  β − θ + α = π/2  ⇒  β − γ = θ  (4)

and

cos(γ) = D/√(D² + s²),  p = s·cos(γ)  ⇒  p = sD/√(D² + s²).  (5)
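The resorting formulas Eqs. 4 and 5 can be sketched directly: every fan beam ray (β, s) corresponds to a parallel beam ray with angle θ = β − γ and detector coordinate p = sD/√(D² + s²). The sign convention for γ below (negative for s < 0) is our assumption; the paper's cos⁻¹ expression gives only the magnitude:

```python
import numpy as np

def fan_to_parallel_coords(beta, s, D):
    """Map fan beam coordinates (beta, s) to parallel beam (theta, p)."""
    gamma = np.arccos(D / np.sqrt(D**2 + s**2))  # magnitude of the ray angle
    gamma = np.sign(s) * gamma                   # assumed sign convention
    theta = beta - gamma                         # Eq. 4
    p = s * D / np.sqrt(D**2 + s**2)             # Eq. 5
    return theta, p
```

In practice the resorted (θ, p) samples fall on an irregular grid, which is why the resorting step needs interpolation.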

It is also possible to formulate the three-dimensional (slice by slice) fan beam equation, but we will not use it for our further arguments, hence it is not discussed here.

2.3 Simulation of tomographic projections

Expression Eq. 2 is used as a starting point for our simulation of the tomographic projections. The integral in Eq. 2 is substituted by a discrete sum, and we account for typical CCD photon counting noise by replacing the deterministic values of Itr by Poisson distributed values, i.e.

Ĩtr ∼ Poisson(I0 · exp(−∫_L f(s) ds)).  (6)

It is well known that the signal-to-noise ratio of a CCD device is proportional to the square root of the number of detected photons, hence scanning an object with a higher I0 will result in a better image quality. We can therefore use I0 as a steering parameter to control the quality of the obtained projections. For a given image f(x) and a given set of projection angles Θ, the simulated sinogram therefore consists of Ĩtr, computed at the angles θ ∈ Θ.
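A minimal sketch of this simulation step, assuming the line integrals have already been discretized; the Beer–Lambert attenuation (Eq. 2) gives the expected intensities and Eq. 6 adds the photon counting noise:

```python
import numpy as np

def simulate_noisy_sinogram(line_integrals, I0, rng=None):
    """line_integrals: array of discretized values of the integral of f along each ray."""
    rng = np.random.default_rng(rng)
    expected = I0 * np.exp(-line_integrals)  # Beer-Lambert law, Eq. 2
    return rng.poisson(expected)             # photon counting noise, Eq. 6

# Example: the relative noise shrinks as I0 grows, which is how I0 steers
# the quality of the simulated projections.
li = np.full(10000, 0.5)
noisy = simulate_noisy_sinogram(li, I0=5000, rng=0)
```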

3 Methods

3.1 Affine transformations and the Radon transform

Affine transformations consist of rotation, translation, scaling and shearing. Using matrices, affine transformations can be constructed in two dimensions with

Ta(x) = Aa·x + u,  (7)

where

Aa = Arot · Ascale · Ashear  (8)

and

u = (ux, uy)ᵀ,
Arot = ( cos(ϕ)  −sin(ϕ) ; sin(ϕ)  cos(ϕ) ),
Ascale = ( sx  0 ; 0  sy ),
Ashear = ( 1  shx ; shy  1 ).  (9)
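The composition Eqs. 7–9, written out for the two-dimensional case; the six parameters (ϕ, sx, sy, shx, shy plus the translation u) are the degrees of freedom the optimizer of Sect. 3.3 searches over:

```python
import numpy as np

def affine_matrix_2d(phi=0.0, sx=1.0, sy=1.0, shx=0.0, shy=0.0):
    """Compose the linear part A_a = A_rot * A_scale * A_shear (Eq. 8)."""
    A_rot = np.array([[np.cos(phi), -np.sin(phi)],
                      [np.sin(phi),  np.cos(phi)]])
    A_scale = np.diag([sx, sy])
    A_shear = np.array([[1.0, shx],
                        [shy, 1.0]])
    return A_rot @ A_scale @ A_shear

def transform(x, A, u):
    """Apply the affine map T_a(x) = A_a x + u (Eq. 7)."""
    return A @ x + u
```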

Similar constructions hold for the three-dimensional case, see e.g. [25]. Inserting the transformation Eq. 7 into the Radon transform Eq. 1 yields

Rθ[f(Aa·x + u), p] = |det(Aa⁻¹)| · ˇf(p + ⟨Aa⁻ᵀξθ, u⟩, Aa⁻ᵀξθ).  (10)

As Aa⁻ᵀξθ is generally not a unit vector anymore, we normalize the arguments of Eq. 10, resulting in

Rθ[f(Aa·x + u), p] = (|det(Aa⁻¹)| / ‖Aa⁻ᵀξθ‖) · ˇf((p + ⟨Aa⁻ᵀξθ, u⟩) / ‖Aa⁻ᵀξθ‖, Aa⁻ᵀξθ / ‖Aa⁻ᵀξθ‖),  (11)

where we have used the identity δ(ax) = (1/|a|)·δ(x) ∀x ∈ R and ∀a ∈ R \ {0}. We therefore notice that any affine mapping acts on the projection data in three different ways, namely by translating the p axis, by rescaling the p axis, and by rotating ξθ → Aa⁻ᵀξθ/‖Aa⁻ᵀξθ‖. The rotation angle is given by

ϕ = arccos(⟨ξθ, Aa⁻ᵀξθ/‖Aa⁻ᵀξθ‖⟩).  (12)

We will later show that the factor |det(Aa⁻¹)|/‖Aa⁻ᵀξθ‖ has no influence on our estimation approach and can therefore be neglected.
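A small numerical illustration of Eqs. 10–12: the affine map acts on the projection geometry entirely through the vector Aa⁻ᵀξθ, whose norm rescales the p axis and whose direction gives the new projection angle. This is a sketch of the geometry only, not of the full estimation:

```python
import numpy as np

def projection_action(A_a, u, theta):
    """Return (p shift, p scale, rotation angle) induced by T_a on the Radon domain."""
    xi = np.array([np.cos(theta), np.sin(theta)])
    v = np.linalg.inv(A_a).T @ xi                         # A_a^{-T} xi_theta
    nv = np.linalg.norm(v)
    p_shift = v @ u / nv                                  # translation of the p axis (Eq. 11)
    p_scale = 1.0 / nv                                    # rescaling of the p axis
    phi = np.arccos(np.clip(xi @ (v / nv), -1.0, 1.0))    # rotation angle, Eq. 12
    return p_shift, p_scale, phi

# For a pure rotation A_a is orthogonal, so A_a^{-T} = A_a: the p axis is
# neither shifted nor rescaled and the projection angle rotates by phi.
R = np.array([[np.cos(0.3), -np.sin(0.3)],
              [np.sin(0.3),  np.cos(0.3)]])
shift, scale, phi = projection_action(R, np.zeros(2), theta=0.7)
```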

3.2 Parameter-estimation procedure

We introduce the notation ˇf(Ťa(p), Ťa(ξθ)) to express the action of the affine transformation Ta on the Radon transform ˇf(p, ξθ). The inverse problem is formulated by

Definition 1 Let Ta be a regular affine mapping and fdef(x) = fref(Ta(x)) two unknown images with known Radon transforms Rθ[fref(x)] and Rθ[fdef(x)], respectively. For the target function

Ξ(Test) = (1/2) Σ_{θ∈Θ} (ψ(Test, θ))²,  (13)

find the affine mapping T*est such that

T*est = arg min_{Test ∈ A} Ξ(Test).  (14)

Here,

ψ(Test, θ) = S(ˇfref(Ťest(p), Ťest(ξθ)), ˇfdef(p, ξθ))  (15)

is a similarity measure, Θ is the set of all projection angles and A is the set of affine mappings.

As similarity measure S(·, ·), we choose the normalized cross correlation [26], hence

S(·, ·) = 1 − NCC(·, ·),  (16)

with

NCC(I1, I2) = Σx (I1(x) − Ī1)(I2(x) − Ī2) / [Σx (I1(x) − Ī1)² · Σx (I2(x) − Ī2)²]^(1/2).  (17)

Here, Ī1 and Ī2 are the arithmetic means of I1 and I2, respectively. An important property of the normalized cross correlation Eq. 17 is its invariance under linear transformations, i.e. NCC(I1, a·I2 + b) = NCC(I1, I2) for a, b ∈ R, a ≠ 0. This allows us to ignore the constant factor |det(Aa⁻¹)|/‖Aa⁻ᵀξθ‖ in Eq. 11, which simplifies the optimization process.
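A direct transcription of Eqs. 16–17; the key property exploited in the paper, invariance under linear intensity transformations, is easy to verify numerically:

```python
import numpy as np

def ncc(i1, i2):
    """Normalized cross correlation, Eq. 17."""
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def similarity(i1, i2):
    """Similarity measure S = 1 - NCC, Eq. 16."""
    return 1.0 - ncc(i1, i2)
```

Because NCC(I1, a·I2 + b) = NCC(I1, I2), the s-independent factor in Eq. 11 drops out of the target function.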

3.3 Optimization algorithm

In order to solve the minimization problem Eq. 13, a trust-region based optimizer is used, see e.g. [27]. Starting with an initial estimate t⁰_est (understand the vector t_est as the parameters of the transformation Test), the algorithm iteratively (iteration index k) computes better approximations on a subspace, the trust region, around the current position, i.e.

t_est^(k+1) = t_est^k + Δt,  (18)

with Δt determined by

min_Δt (1/2)·Δtᵀ Hψ Δt + ⟨Jψᵀψ, Δt⟩  subject to  ‖D·Δt‖ ≤ ρ_k,  (19)

where Hψ is the Hessian matrix, Jψ is the Jacobian matrix of ψ with respect to t, D is a diagonal scaling matrix and ρ_k a positive scalar. The solution of Eq. 19 can approximately be found by computing a Gauss–Newton direction, given as the solution of the least squares problem

Jψ·Δt = −ψ,  (20)

where ψ is the vector of residuals of the target function, i.e.

ψ = (ψ(T_est^k, θ1), ψ(T_est^k, θ2), …)ᵀ.  (21)

Finally, Δt is accepted if

Ξ(t_est^k + Δt) < Ξ(t_est^k),  (22)

and ρ_(k+1) is increased. If Δt is rejected, ρ_(k+1) is decreased and the step is repeated. The iteration is terminated if either the target function is smaller than a user defined threshold Ξtol, or the iteration stagnates, i.e. ‖t^(k+1) − t^k‖ < etol, where etol is again a user defined tolerance. The search for the Gauss–Newton direction needs no second derivatives (i.e. no Hessian matrix), which reduces the memory usage, as only the projection data and the Jacobian must be available. To compute the Jacobian matrix Jψ, we use three point centered finite differences to approximate the derivatives. The optimization procedure therefore directly searches for the optimal parameters of Test, i.e. the rotation, scaling, shearing and translation composition in Eqs. 8 and 9 is not used.
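A stripped-down sketch of one iteration (Eqs. 18–22), omitting the trust-region radius management: the Jacobian is approximated with three point centered finite differences, the Gauss–Newton direction solves Jψ·Δt = −ψ in the least squares sense, and the step is accepted only if the target function decreases. The toy residual below is an assumption for illustration, not the paper's projection residuals:

```python
import numpy as np

def gauss_newton_step(residuals, t, h=1e-6):
    """One Gauss-Newton step with a centered finite-difference Jacobian."""
    psi = residuals(t)
    J = np.empty((psi.size, t.size))
    for j in range(t.size):
        e = np.zeros_like(t)
        e[j] = h
        J[:, j] = (residuals(t + e) - residuals(t - e)) / (2.0 * h)
    dt, *_ = np.linalg.lstsq(J, -psi, rcond=None)       # Eq. 20
    target = lambda tt: 0.5 * np.sum(residuals(tt)**2)  # Eq. 13
    return t + dt if target(t + dt) < target(t) else t  # acceptance test, Eq. 22

# Toy example: fit y = a*x + b; for linear residuals one step is exact.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
res = lambda t: t[0] * x + t[1] - y
t1 = gauss_newton_step(res, np.array([0.0, 0.0]))
```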

3.3.1 Spline interpolation

The evaluation of the target function Eq. 13 for the current estimate Test may need projection data at coordinates (p, θ) which are not available due to the limited detector resolution and the finite number of projection angles. A cubic spline [28] is used to interpolate the projections at arbitrary coordinates. Using cubic splines as interpolation method has the advantage that the resulting function is smooth up to order two. Also, evaluating a given spline interpolation is fast, as the B-spline basis functions have only small support.
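The interpolation step can be sketched with SciPy's `CubicSpline` standing in for the paper's B-spline implementation: the sinogram is only known at discrete detector coordinates p, but the target function needs values at the arbitrary coordinates produced by the current transformation estimate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

p = np.linspace(-1.0, 1.0, 41)     # available detector coordinates
profile = np.exp(-p**2 / 0.1)      # one projection profile (illustrative)
spline = CubicSpline(p, profile)

# Evaluate between the samples, e.g. at coordinates transformed by T_est.
query = np.array([-0.512, 0.033, 0.777])
values = spline(query)
```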

3.3.2 Multiresolution extension

From the definition of the affine transformation Ta, it can be seen that the absolute values of the translational part (i.e. the vector u) are typically larger than the components of the linear part (i.e. the matrix Aa). It is reasonable to assume that −20 ≤ ux, uy ≤ 20, but typically for the components aij ∈ Aa it holds that −1.5 ≤ aij ≤ 1.5. This also means that a given increment Δt in Eq. 18 has more influence on Aa than on u. Although this effect should be somewhat compensated by the smaller derivatives (with respect to ux and uy) of the target function Eq. 13, we prefer to implement a multiscale [29] extension for the translational part. As translations only act on the p coordinate of the Radon transform, the multiscale representation is obtained by downsampling Rθ[f(x), p] with respect to p. We denote the Radon transform at level l by Rθ[f(x), p]_l, with the convention that Rθ[f(x), p]_0 := Rθ[f(x), p]. In every downsampling step l → l + 1, the size of the projection data in p-direction is reduced by a factor of two. Most importantly, the magnitude of a translation p' = p + u is also reduced by a factor of two, i.e. u_l = (1/2^l)·u_0. The modified optimization procedure then reads: Iterate for l = L, L − 1, …, 0:

1. Compute spline interpolation(s) of Rθ[fref,def(x), p]_l.
2. Find the best affine parameters on level l, denoted by t_est,l.
3. Upscale the translational part of t_est,l by multiplication with a factor of 2.
4. Use the upscaled version of t_est,l as initial estimate t⁰_est,l−1 for the next iteration level; go back to 1.

To avoid aliasing due to undersampling, we apply Gaussian smoothing prior to the downsampling, using a filter length of 7 pixels and a standard deviation of σ = 1.
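The pyramid construction above can be sketched as follows; `truncate=3.0` with σ = 1 gives a Gaussian kernel of radius 3, i.e. the 7-pixel filter length mentioned in the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def build_pyramid(sinogram, levels):
    """sinogram: (angles, Np) array; returns levels+1 arrays, coarsest last."""
    pyramid = [sinogram]
    for _ in range(levels):
        # Anti-aliasing smoothing along the detector axis p (sigma = 1,
        # kernel length 7), then keep every second detector sample.
        smoothed = gaussian_filter1d(pyramid[-1], sigma=1.0, axis=1, truncate=3.0)
        pyramid.append(smoothed[:, ::2])
    return pyramid

sino = np.random.default_rng(0).random((45, 256))
pyr = build_pyramid(sino, levels=3)
```

A translation of u pixels at level 0 appears as u/2^l pixels at level l, which is what makes the coarse-to-fine search for the translational part effective.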

3.4 Three-dimensional parallel beam

The definition of the Radon transform Eq. 1 can be extended to three dimensions by choosing x ∈ R³ and ξθ1,θ2 ∈ S². A possible (but not unique) choice for ξθ1,θ2 is ξθ1,θ2 = (cos(θ1)·sin(θ2), sin(θ1)·sin(θ2), cos(θ2))ᵀ, see also Fig. 3. Similar to Eq. 1, we write

ˇf(p, ξθ1,θ2) := Rθ1,θ2[f(x), p] := ∫_Ω f(x) δ(p − ⟨ξθ1,θ2, x⟩) dx.  (23)

Fig. 3 Three-dimensional Radon transform. Integration area is the plane E with signed distance p from the origin of the coordinate system. The vector ξθ1,θ2 depends on the two angles θ1, θ2

Note that in the three-dimensional case, the Radon transform integrates over planes, whereas the integration is over lines in the two-dimensional case. We may understand the three-dimensional parallel beam transform as a slice by slice scanning of a three-dimensional sample; each slice is then described by a two-dimensional Radon transform. Formally, this can be described as

ˇf(p, t, ξ̃θ, e3) = ∫_Ω f(x) δ(p − ⟨ξ̃θ, x⟩) δ(t − ⟨e3, x⟩) dx,  (24)

where ξ̃θ = (cos θ, sin θ, 0)ᵀ and e3 = (0, 0, 1)ᵀ. Thus, the integration line is described by the intersection of two planes through the volume f(x); p and t are the horizontal and vertical detector coordinates, respectively. As a consequence, Eq. 24 cannot be equal to the three-dimensional version of the Radon transform Eq. 23. However, we can compute a sample of the three-dimensional Radon transform by applying a two-dimensional Radon transform to the projection data obtained from Eq. 24. Consider for this



∫_{ΩD} ˇf(p, t, ξ̃θ, e3) δ(p2 − ⟨ξθ2, (t, p)ᵀ⟩) dp dt,  (25)

where the inner term ˇf(p, t, ξ̃θ, e3) is the 3d parallel beam transform and the outer integral is a 2d Radon transform of the projections over p and t, with ξθ2 = (cos(θ2), sin(θ2))ᵀ, p2 the signed distance of the line ⟨ξθ2, (t, p)ᵀ⟩ = 0 to the origin (p = 0, t = 0) and ΩD the detector area. A necessary condition for the non-triviality of the integral Eq. 25 is

t − ⟨e3, x⟩ = 0  ⇒  t = z  (26)

and

p2 − ⟨ξθ2, (t, p)ᵀ⟩ = 0  ⇒  p2 = t·cos(θ2) + p·sin(θ2).  (27)

Hence,

p = p2/sin(θ2) − z·cos(θ2)/sin(θ2).  (28)

Inserting Eqs. 26 and 27 into Eq. 25 then yields

∫_Ω f(x) δ(p2/sin(θ2) − z·cos(θ2)/sin(θ2) − ⟨ξ̃θ, x⟩) dx  (29)
= ∫_Ω f(x) δ(p2 − ⟨ξθ1,θ2, x⟩) dx  (30)
= Rθ1,θ2[f(x), p2],  (31)

which is exactly the Radon transform in R³. Therefore, computation of the three-dimensional Radon transform is possible if three-dimensional parallel beam projection data is available. However, the computation of the Radon transform has a similar computational complexity as the standard image reconstruction process, hence this approach only makes sense if fewer projections must be computed. For the sake of clarity, we rewrite the target function Eq. 13 as

Ξ(Test) = (1/2) Σ_{(θ1,θ2) ∈ Θ1×Θ2} (ψ(Test, θ1, θ2))²,  (32)

where Θ1 is the set of projection angles for the three-dimensional parallel beam transform and Θ2 is the set of projection angles of the two-dimensional Radon transform. We may then apply the same estimation/optimization procedure as in Sects. 3.2 and 3.3 to determine the parameters of the three-dimensional affine transformation.

3.4.1 Sparse evaluation of the target function

To speed up the computation in the main iteration loop, it is possible to use only a limited number of angle pairs (θ1, θ2) for the computation of the target function (Eq. 32). As every profile Rθ1,θ2[f(x), ·] of the Radon transform contains projection data from the full sample in image space, this is not equivalent to a restriction of the image volume to a subvolume. However, in order to keep the spline interpolation reasonably accurate, all profiles of the Radon transform must be used for the computation of the interpolation spline. Hence, only the evaluation of the target function is accelerated, but as this evaluation must be done in every optimization iteration loop, it is computationally far more expensive than the spline interpolation. Let k be the fraction of profiles used for the evaluation of the target function (e.g. k = 0.05 corresponds to 5% of all profiles); the expected speedup is then ≈ 1/k.
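A sketch of this subsampling, under the assumption that profile pairs are drawn uniformly at random (the paper does not specify a selection scheme): only a fraction k of the (θ1, θ2) profile pairs enters the target function, while the spline interpolation still uses all profiles.

```python
import numpy as np

def sparse_profile_indices(n_pairs, k, rng=None):
    """Pick round(k * n_pairs) profile pairs out of n_pairs at random."""
    rng = np.random.default_rng(rng)
    n_used = max(1, int(round(k * n_pairs)))
    return rng.choice(n_pairs, size=n_used, replace=False)

# Example: 180 x 45 angle pairs, k = 0.05, i.e. an expected speedup of ~20.
idx = sparse_profile_indices(180 * 45, k=0.05, rng=0)
```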

3.5 Application to the fan beam geometry

We can directly apply the transformation from Eq. 11 to the image data f(x), resulting in

Fβ,D[f(Aa·x + u), s] = (|det(Aa⁻¹)| / ‖Aξ‖) · ∫_Ω f(y) δ(s·cos(γ) + ⟨Aξ, u⟩/‖Aξ‖ − ⟨Aξ/‖Aξ‖, y⟩) dy,  (33)

where we have set Aξ := Aa⁻ᵀξβ−γ. Although Eq. 33 is conceptually equivalent to Eq. 11, there are three major drawbacks for its practical use. First, the factor |det(Aa⁻¹)|/‖Aa⁻ᵀξβ−γ‖ does not only depend on the transformation Aa and the rotation angle β, but also on the ray angle γ (and hence on s and D). While the dependency on D is not an issue (as the source-origin distance is constant during the scan), the dependency on s means that |det(Aa⁻¹)|/‖Aa⁻ᵀξβ−γ‖ cannot be neglected in the computation of the target function Eq. 13, as the normalized cross correlation similarity measure (Eq. 17) is not invariant under transformations of s. Second, the transformed vector Aa⁻ᵀξβ−γ/‖Aa⁻ᵀξβ−γ‖ must be computed for every β and for every γ. As this computation is performed in every iteration step, this increases the computational load. The third problem is a bit more subtle. Assume, without loss of generality, that Aa = E, u = (ux, 0)ᵀ and β = γ = 0°, hence we are looking at the central projection ray, which hits the detector at s = 0. Looking at the deformed geometry, the corresponding detector coordinate according to Eq. 33 is s = 0 + ⟨ξβ−γ, u⟩ = ux, which equals a scanning geometry translated by ux parallel to the detector. However, for such a geometry, no projection data is available, as the ray with s = ux is not the central projection ray (i.e. γ ≠ 0°). However, an equivalent ray (which must be parallel to the original ray) can be found by choosing β' and γ' of the new ray appropriately. The mapping β → β' and γ → γ' is exactly the transformation from fan beam to parallel beam, and this transformation must be done for every detector coordinate in every iteration of the optimization procedure, see also Fig. 4. Due to these drawbacks, it is preferable to convert the fan beam data to parallel beam data prior to the motion estimation. This can be done simultaneously with the projection acquisition and does not increase the computational burden significantly. After the fan beam to parallel beam mapping, the parallel beam estimation procedure can be used to determine the affine transformation. For the 3d fan beam case, the same approach is advisable, such that the procedures from Sect. 3.4 can be used.

(7)

Fig. 4 Fan beam geometry and deformation. We are considering the ray which passes through the four dots in the reference sample. In the deformed sample (here, deformed means translated parallel to the x axis), the ray which passes through the equivalent four dots does not originate from the same source position but is parallel to the original ray. Hence the transformation is the fan beam to parallel beam mapping

3.6 Computational complexity

We investigate the computational complexity of the three major parts of the two-dimensional estimation approach:

– Spline interpolation;
– Evaluation of the target function;
– Solution of the least squares problem in the trust region optimizer.

Due to the small support of the B-spline basis functions, the complexity of the spline interpolation is linear with respect to the number of interpolation sites. Let |Θ| be the number of projection angles and Np the number of detector coordinates; then the B-spline interpolation (as well as its evaluation) has a complexity of

Cspline = O(|Θ| · Np).  (34)

The evaluation of the target function is dominated by the computation of the normalized cross correlation, i.e.

Ctf = O(|Θ| · Np).  (35)

Solving the least squares problem involves the computation of the Jacobian, using O(|Θ| · dof) operations, where dof is the number of degrees of freedom, i.e. 6 for the two-dimensional affine case. The resulting linear system can then be solved in O(|Θ| · dof²) steps, e.g. by applying a QR factorization. Hence, one step of the trust region optimizer costs

Copt = O(|Θ| · (dof + dof²)).  (36)

Fig. 5 Complexity of the deformation estimation. Time units needed for 10 iterations of the optimization algorithm versus the number of detector coordinates Np (a). Time units needed for 10 iterations of the optimization algorithm versus the number of projection angles |Θ| used (b)

Accounting for the fact that the computation of the target function and the solution of the linear system must be done in every iteration step of the optimization procedure, we finally deduce that the overall complexity is given by

C2d = O(Nit · |Θ| · (2·Np + dof + dof²)),  (37)

where Nit is the number of iterations of the optimization procedure. As usually Np ≫ dof + dof², we can approximate Eq. 37 by

C2d ≈ O(2 · Nit · |Θ| · Np).  (38)

If the additional effort for the multiscale approach is also taken into account, we get

C2d,multiscale ≈ O(4 · Nit · |Θ| · Np),  (39)

as the number of detector coordinates at level l is given by (1/2^l)·Np and lim_{L→∞} Σ_{l=0}^{L} 1/2^l = 2, and this number is not dependent on L. Comparing Eq. 39 to the O(M^(d+1)) complexity of standard reconstruction algorithms, we conclude (using d = 2 and M ≈ Np) that our approach has roughly the same asymptotic complexity. However, our approach already includes the computational costs for the motion estimation and enables us to choose |Θ| < M and Nit < M, hence some advantages with respect to the computational complexity can be expected.
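A back-of-the-envelope reading of Eq. 39 against the O(M^(d+1)) cost of filtered backprojection, for illustrative 2d numbers (the constants hidden in the O-notation are ignored, so only orders of magnitude are meaningful; the parameter values below are assumptions for the example, not the paper's benchmarks):

```python
# Illustrative 2d numbers: M = Np = 2048 detector pixels, 45 projection
# angles, 50 optimizer iterations.
M = Np = 2048
n_theta, n_it, d = 45, 50, 2

fbp_ops = M ** (d + 1)                # standard reconstruction, O(M^(d+1))
direct_ops = 4 * n_it * n_theta * Np  # multiscale direct estimation, Eq. 39
ratio = fbp_ops / direct_ops          # advantage from |Theta| < M, N_it < M
```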

Finally, we mention that similar arguments also hold for the three-dimensional case. An experimental verification of the arguments above is given in Fig. 5, where the linear scaling of the complexity is verified with respect to Np and |Θ|.

4 Results

In this section, we present the results for our estimation approach, using simulated and experimental data.

(8)


Fig. 6 Head dataset (a). Engine dataset (b)

4.1 Test data

4.1.1 Two-dimensional data

As test images for the two-dimensional case, we use two different datasets:

Head: A cross section of a human head (Fig. 6a), 256 × 256 pixels, obtained from http://www9.informatik.uni-erlangen.de/External/vollib/.

Engine: A cross section of a CT scan of an engine block (Fig. 6b), 256 × 256 pixels, obtained from http://www.volvis.org.

4.1.2 Three-dimensional data

For the three-dimensional tests, we use the following datasets:

Engine 3d: CT scan of two cylinders of an engine block (Fig. 7a), 256 × 256 × 128 voxels, obtained from http://www.volvis.org.

Fig. 7 Engine 3d dataset (a). Chest dataset (b)

Chest: A CT scan of a human chest (Fig. 7b), 384 × 384 × 240 voxels, obtained from http://www9.informatik.uni-erlangen.de/External/vollib/.

4.1.3 Experimental data

For our experiments, we use two different test objects. The first is a small beech wood sample, cylindrically shaped with a diameter of roughly 1 mm and a length of roughly 10 mm. The sample is glued to a metallic pin (diameter 3.15 mm) which acts as sample holder. The second sample is a similar cylindrical piece of wood, but the wood type is Scots pine and the wood has been dried at 140 °C prior to its use as a test object. To induce the deformation, both samples are placed in water for several hours (7 and 12 h for the beech and the pine, respectively). The heat treatment of the pine sample is expected to influence the water uptake behavior, but we will not interpret our deformation results in the context of the structural properties of wood. The reader interested in such results is referred to e.g. [30–32]. The measurements are performed at the TOMCAT beamline [33,34] at the Swiss Light Source (SLS), Paul Scherrer Institute, Villigen, Switzerland. Both samples are scanned before and after the water treatment (i.e. in the dry and the wet state, respectively), to obtain the reference and the deformed sample, respectively. The data is acquired using synchrotron radiation with an energy of 9.4 keV. After conversion to visible light by a YAG:Ce (yttrium aluminium garnet doped with cerium) scintillator, an objective with 10x magnification is used. The detector resolution is 2,048 × 901 pixels, corresponding to a field of view of 1.43 × 0.63 mm and a pixel size of 0.7 × 0.7 µm. A total of 1,501 projections in the angular interval 45°–225° are taken. Exposure time is 480 ms for each projection, resulting in a total scan time of

(9)


Fig. 8 “Beech1” dataset (dry state)


Fig. 9 “Pine” dataset (wet state). Note the completely different wood structure compared to the beech datasets

roughly 12 min. For the 2d estimation, we extract two XY planes (2,048 × 2,048 pixels each) from the reconstructed volumes at two different z coordinates for the beech sample (“beech1” and “beech2”). The z coordinates are manually selected such that any deformation in z direction is approximately compensated. The structural variation in z direction is much smaller than in x and y direction, such that the choice of the z coordinate is rather uncritical. Similarly, one XY plane (2,048 × 2,048 pixels) is extracted from the reconstructed pine wood volume, called “pine” from here on. Finally, a 3d dataset is obtained by cutting a 2,048 × 2,048 × 200 volume from the full beech volume dataset. Due to the parallel projection geometry, the corresponding sinogram data can easily be located in the sinogram stack and hence be used for our projection based approach. Examples of reconstructed slices (beech and pine) are shown in Figs. 8 and 9.

Table 1 Test parameters for the simulated data

Parameter                               Value(s)
No. of projections |Θ| (2d)             {180, 90, 45, 23, 12}
No. of projections |Θ1|, |Θ2| (3d)      |Θ1| = 180, |Θ2| = 45
No. of multiscale levels L              L = 3
Max. number of iterations               50
Fraction of profiles used (3d only)     k = 0.05
No. of incident photons I0              {500, 1000, 2000, 3000, 5000}

For the 2d tests, the projection angles are chosen equiangular in [0° … 180°]

Table 2 Affine parameter range

Parameter                         Value(s)
Translation (pixel) tx, ty, tz    [−20 … 20]
Scaling sx, sy, sz                [0.9 … 1.1]
Shearing shx, shy, shz            [−0.1 … 0.1]
Rotation angle ϕx,y,z             [−20° … 20°]

ϕx,y,z denotes rotation around the x, y and z axis, respectively. For the 2d simulations, only the x and y values in every row are used, and the rotation is always perpendicular to the xy-plane

4.2 Error measures

Let fref be the undeformed image, Ta the transformation and fdef = fref(Ta(x)) the deformed image. We then compute an estimated transformation T*est using the procedure described in Sect. 3. Once T*est is estimated for the transformation Ta, we are able to calculate an estimated deformation vector field vest at every pixel (or voxel) of the test image and compare it to the reference deformation field vref. We choose the origin of the coordinate system to be in the center of the image and define the mean relative magnitude error as

Ψmag,rel = (1/N) Σ_{j=1}^{N} ‖vref(xj) − vest(xj)‖ / ‖vref(xj)‖,  (40)

and the mean angular error as

Ψang = (1/N) Σ_{j=1}^{N} arccos(⟨vref(xj), vest(xj)⟩ / (‖vref(xj)‖·‖vest(xj)‖)),  (41)

where N is the number of pixels (or voxels) in the image.

4.3 Results from simulated data
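Before turning to the results, the two error measures of Sect. 4.2 (Eqs. 40–41), used throughout this section, can be transcribed directly; the deformation fields are assumed here to be flattened into arrays with one vector per pixel or voxel:

```python
import numpy as np

def mean_relative_magnitude_error(v_ref, v_est):
    """Eq. 40: mean of ||v_ref - v_est|| / ||v_ref|| over all pixels/voxels."""
    return np.mean(np.linalg.norm(v_ref - v_est, axis=1)
                   / np.linalg.norm(v_ref, axis=1))

def mean_angular_error(v_ref, v_est):
    """Eq. 41: mean angle (radians) between reference and estimated vectors."""
    cos = np.sum(v_ref * v_est, axis=1) / (
        np.linalg.norm(v_ref, axis=1) * np.linalg.norm(v_est, axis=1))
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))

v = np.array([[1.0, 0.0], [0.0, 2.0]])  # tiny illustrative field
w = np.array([[0.0, 1.0], [0.0, 2.0]])
```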

We simulate the tomographic projection according to the procedures described in Sect. 2.3. The test parameters are given in Table1, the range of the deformation parameters in Table2. For each I0, we simulate 25 image pairs (reference and deformed) and evaluate the estimated affine transforma-tions according to the error measures Eqs.40and41. For the


Fig. 10 Errors for the 2d deformation estimation using the datasets head and engine with different numbers of incident photons I0. Relative error (a). Angular error in degrees (b). In all plots, the standard deviation is also shown.

optimization procedure, the identity mapping $T^0_{\mathrm{est}}(x) = x$ was selected as the initial estimate. We also tried to enhance the initial estimate by the use of the simple method proposed in [35], which is able to recover translation and scaling using a very fast center-of-mass approach. However, as our deformation also includes rotation and shearing, the obtained initial estimates did not yield any improvement compared to the identity mapping.
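Since Sect. 2.3 is not reproduced here, the following sketch assumes the usual monochromatic Beer–Lambert/Poisson photon-count model for simulating noisy projections with I0 incident photons; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def noisy_projection(line_integrals, I0, rng):
    """Photon-count noise on ideal line integrals Rf, assuming the
    usual monochromatic Beer-Lambert / Poisson model (an assumption;
    the paper's Sect. 2.3 procedure is not reproduced here)."""
    counts = rng.poisson(I0 * np.exp(-line_integrals))
    counts = np.maximum(counts, 1)        # guard against log(0)
    return -np.log(counts / I0)           # noisy line integrals

rng = np.random.default_rng(1)
ideal = np.full(2048, 0.5)                # a flat test profile
for I0 in (500, 5000):
    print(I0, noisy_projection(ideal, I0, rng).std())
```

Increasing I0 shrinks the noise in the log domain roughly like $1/\sqrt{I_0}$, which is the sense in which the tests below vary the signal to noise ratio.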

4.3.1 2d datasets

From Fig. 10, we see that both test datasets perform well, with a relative error of approximately 10% and an angular error of approximately 5° for most test runs. The influence of the number of photons I0 is only significant for I0 = 500 for the head dataset, demonstrating the robustness of our estimation technique with respect to the signal-to-noise ratio of the tomographic projections.

4.3.2 3d datasets

The estimation results for the 3d datasets are shown in Fig. 11. The estimation accuracy for all datasets is independent of the value of I0 used for testing. The reason for this is clearly the fact that the 2d Radon transform (i.e. the outer

Fig. 11 Errors for the 3d deformation estimation using the datasets chest and engine with different numbers of incident photons I0. Relative error (a). Angular error in degrees (b). In all plots, the standard deviation is also shown.

Fig. 12 Relative error for the deformation estimation using the 3d datasets chest and engine with different I0, allowing only six degrees of freedom (translation and scaling). The standard deviation is also shown.

Radon transform in Eq. 25) results in an averaging of the noisy projection data, hence the standard deviation (which is a measure of the amount of noise) is reduced. Compared to the 2d results in Fig. 10, the error level is generally higher by a factor of two. This is due to the fact that for the 3d case, 12 unknowns must be fitted in the estimation procedure compared to only 6 in the 2d case, which poses a more difficult problem for the optimization algorithm. To support this explanation, we also tested the engine and chest datasets with only six degrees of freedom, namely translation and scaling. From Fig. 12, we see that consequently the estimation errors are decreased to the level of the 2d estimation problem.


4.4 Reference solution for the experimental data

First we present reference solutions for the experimental datasets. Due to the handling of the samples, we expect two different factors causing deformation. First, a rigid body transformation will occur because the sample is dismounted between the scans. Second, the water is expected to induce swelling of the wood. We will see later that the resulting deformation can indeed be very well approximated by an affine model. We compute reference solutions only for the 2d experiments. The manual methods described below are very difficult to apply to a 3d dataset, and automated measures, e.g. normalized cross correlation, are not very meaningful due to the unstable microstructure of the beech sample, cf. Fig. 15.

4.4.1 Reference solutions for the beech samples (2d)

For each of the two beech samples, we asked two independent operators to identify six pairs of points which correspond to the same image features. Together with our own estimate of corresponding points, we then have 18 pairs of points identifying corresponding image features. The reference solution is then computed by least squares, which yields

$$A_{\mathrm{beech1}} = \begin{pmatrix} 1.014 & -0.158 \\ 0.183 & 1.109 \end{pmatrix} \qquad (42)$$
$$u_{\mathrm{beech1}} = \begin{pmatrix} 14.679 \\ 50.551 \end{pmatrix} \qquad (43)$$

for the sample "beech1" and

$$A_{\mathrm{beech2}} = \begin{pmatrix} 1.008 & -0.164 \\ 0.181 & 1.065 \end{pmatrix} \qquad (44)$$
$$u_{\mathrm{beech2}} = \begin{pmatrix} 15.800 \\ 51.162 \end{pmatrix} \qquad (45)$$

for the sample "beech2". One sees that the deformations for "beech1" and "beech2" are very similar, even though the z coordinates are significantly different.
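The least-squares fit from point correspondences can be sketched as follows; `fit_affine_2d` is an illustrative helper (not the authors' code), and the point pairs here are synthetic stand-ins for the operator-picked features.

```python
import numpy as np

def fit_affine_2d(p_ref, p_def):
    """Least-squares affine fit p_def ~= A @ p_ref + u from point
    correspondences; p_ref and p_def are (n, 2) arrays of points."""
    n = len(p_ref)
    M = np.hstack([p_ref, np.ones((n, 1))])        # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(M, p_def, rcond=None)
    return coef[:2].T, coef[2]                     # A (2x2), u (2,)

# synthetic check: recover a known transformation from 18 point pairs
rng = np.random.default_rng(2)
A_true = np.array([[1.014, -0.158], [0.183, 1.109]])
u_true = np.array([14.679, 50.551])
p_ref = rng.uniform(-1024, 1024, size=(18, 2))
p_def = p_ref @ A_true.T + u_true
A_est, u_est = fit_affine_2d(p_ref, p_def)
print(np.allclose(A_est, A_true), np.allclose(u_est, u_true))  # -> True True
```

With noise-free correspondences the fit is exact; with manually picked points, the 18 pairs overdetermine the 6 parameters and least squares averages out the picking error.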

4.4.2 Reference solution for the pine sample

The “pine” sample shows almost no non-rigid deformation (a possible reason for this could be the heat treatment), such that it is easy to identify corresponding points. Again 18 pairs of corresponding points were selected and the associated affine estimate was computed using least squares, yielding

$$A_{\mathrm{pine}} = \begin{pmatrix} 1.002 & 0.050 \\ -0.064 & 1.002 \end{pmatrix} \qquad (46)$$
$$u_{\mathrm{pine}} = \begin{pmatrix} 39.420 \\ -177.340 \end{pmatrix}. \qquad (47)$$

Fig. 13 Errors for the deformation estimation using the datasets "beech1", "beech2" and "pine" with different numbers of projection angles. Relative error (a). Angular error in degrees (b). In all plots, the standard deviation is also shown.

4.5 Results from experimental data

We present deformation estimation results from the experimental data obtained by the procedures described in Sect. 4.1.3. From the 1,501 available projections, we use Θi ∈ {360, 180, 90, 45}, 1 ≤ i ≤ 4, and set the multiresolution parameter L = 5, i.e. 6 multiresolution levels. The identity mapping $T^0_{\mathrm{est}}(x) = x$ was selected as the initial estimate for all tests. Again, we compute the standard deviation by the use of cross validation: for each set of projection angles Θi, we perform the motion estimation by randomly selecting Θi angles and repeat this procedure 60 times for each Θi. The accuracy of the estimated motion is measured using the error measures Eqs. 40 and 41. The results for the 2d estimation are shown in Fig. 13.
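The error measures of Eqs. 40 and 41, which are evaluated for each of the 60 random angle subsets, can be sketched as follows (the deformation field here is synthetic; function names are illustrative):

```python
import numpy as np

def rel_magnitude_error(v_ref, v_est):
    """Mean relative magnitude error, Eq. (40); fields are (N, d) arrays."""
    n_ref = np.linalg.norm(v_ref, axis=1)
    return np.mean(np.linalg.norm(v_ref - v_est, axis=1) / n_ref)

def angular_error_deg(v_ref, v_est):
    """Mean angular error of Eq. (41), reported in degrees."""
    cos = np.sum(v_ref * v_est, axis=1) / (
        np.linalg.norm(v_ref, axis=1) * np.linalg.norm(v_est, axis=1))
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(3)
v = rng.normal(size=(1000, 2))            # a synthetic deformation field
print(rel_magnitude_error(v, v))          # -> 0.0
print(rel_magnitude_error(v, 1.05 * v))   # ~ 0.05 (a 5% magnitude bias)
```

The `clip` guards against cosines marginally above 1 from floating-point round-off; the standard deviation over the 60 repetitions is then simply the sample standard deviation of these per-run error values.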

The results are in good agreement with the simulated data in Fig. 10 for Θ ≥ 180. For smaller numbers of projection angles, the estimation accuracy decreases rapidly. Only the "pine" dataset performs reasonably well even for few projection angles, which is probably due to the simple deformation (almost exclusively translational) and the lack of microstructural change due to water uptake. Also, the random sampling of the projection angles has a strong influence on the estimation accuracy. For example, if 45 equiangular sampled


Fig. 14 Example of a deformation estimation result for the "beech1" dataset. Pointwise absolute difference between the dry state and the wet state, scaled for better visibility (a). Pointwise absolute difference between the dry state deformed with the estimated transformation and the wet state, scaled for better visibility (b). The estimated deformation is $A_{\mathrm{beech1}} = \begin{pmatrix} 1.011 & -0.170 \\ 0.181 & 1.100 \end{pmatrix}$, $u = \begin{pmatrix} 13.265 \\ 50.060 \end{pmatrix}$; the relative error is 7.7%. The goodness of the deformation estimation is demonstrated by the proper alignment of the large cells (the tracheids). The growth ring (the horizontal high density region) is also properly aligned.

projection angles are chosen, the relative error Eq. 40 is 0.245 and 0.346 for the "beech1" and "beech2" datasets, respectively. In Fig. 14, the improvement of the visual correspondence for the dataset "beech1" using the estimated transformation is shown, using 360 projection angles.

As we have no quantitative criteria for the 3d beech dataset, we illustrate the estimation result using the visual correspondence before and after the estimation process. We use Θ1 = 180, Θ2 = 40, the multiresolution parameter L = 5 and the fraction of used profiles k = 0.05, cf. Sect. 3.4.1. A good agreement is found between the dry state deformed with the estimated transformation and the wet state, if the alignment of the large cells (the tracheids) and the growth ring is considered, see Figs. 15 and 16.

Fig. 15 Example of a deformation estimation result for the 3d beech dataset. Pointwise absolute difference between the dry state and the wet state at z = 200 (a). Pointwise absolute difference between the dry state deformed with the estimated transformation and the wet state at z = 200 (b). Although the large cells and the growth ring are properly aligned, the microstructure remains misaligned; this prevents the successful use of cross correlation to quantify the alignment.

It is worth noticing that a similar estimation approach using the reconstructed image volumes would use a tremendous amount of computational resources: even if we kept only the reference and deformed volumes in memory, this alone would need at least 2 × 8 × 200 × 2,048 × 2,048 = 12.8 GB of memory (assuming double precision data). Our approach, however, needs only about 2 × 8 × 180 × 40 × 2,048 = 0.23 GB of memory for the data. Additionally, the computational complexity is considerably lower: the standard approach would reconstruct 200 data slices of size 2,048 × 2,048 using 1,501 projection angles for each volume, whereas our approach only computes the Radon transform for 180 projection images of size 2,048 × 200, using 40 projection angles. This alone results in a speedup of more than 400x.
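The memory figures quoted above amount to simple products of array dimensions. A sketch (double precision, decimal GB; GB-versus-GiB rounding conventions account for small deviations from the printed figures):

```python
# Back-of-the-envelope comparison of the two memory footprints quoted
# in the text: two full reconstructed volumes vs. the sinogram data
# actually touched by the projection-based approach.
bytes_per_value = 8                                       # double precision
volume_based = 2 * bytes_per_value * 200 * 2048 * 2048    # two 2048x2048x200 volumes
projection_based = 2 * bytes_per_value * 180 * 40 * 2048  # sinogram data only
print(volume_based / 1e9, projection_based / 1e9)         # GB for each approach
print(round(volume_based / projection_based))             # -> 57
```

The roughly 57-fold memory reduction is separate from the >400x computational speedup, which comes from replacing 200 filtered back-projection reconstructions with 180 small Radon transforms.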


Fig. 16 Example of a deformation estimation result for the 3d beech dataset. Pointwise absolute difference between the dry state and the wet state at z = 50 (a). Pointwise absolute difference between the dry state deformed with the estimated transformation and the wet state at z = 50 (b).

5 Conclusion

We developed a unified approach for two- and three-dimensional projection based affine motion estimation for parallel and fan beam geometries. The approach has been verified using simulated and experimental data. We have shown that it is possible to achieve a mean relative magnitude error of less than 10% for 6 degrees of freedom and less than 15% for 12 degrees of freedom. Besides the progress in theoretical knowledge about motion estimation from tomographic projections, our work provides multiple improvements in practical applications:

– The amount of computational resources needed is drastically reduced. Despite modern multi-CPU and GPU reconstruction techniques, the enormous memory requirements for high resolution three-dimensional data remain a challenging issue. As shown by the example with the three-dimensional beech dataset, our approach uses only a fraction of the memory needed for a full volume reconstruction, thus no special hardware is required.
– The number of projections needed is considerably reduced compared to the standard reconstruction approach. This results in an accelerated scanning time and minimizes the radiation dose delivered to the sample. The radiation dose in particular is often a concern in applications working with biological samples.

– Working directly with projection data minimizes the influence of artifacts which are often present in volume reconstructions. These artifacts are localized in the projection data, but delocalized (i.e. smeared out) in the reconstructed image data. Often, local artifacts have only a limited influence on image processing techniques (e.g. motion estimation), while globally disturbed image/volume data poses a much more difficult problem.

– For the three-dimensional case, the noise reduction due to the averaging effect of the outer Radon transform in Eq. 25 must also be mentioned. This makes the proposed approach attractive for scans resulting in projection data with a low signal-to-noise ratio, e.g. due to materials with high X-ray absorption coefficients.

In further research, revisiting the choice of the optimization algorithm and the selection of the similarity measure could significantly improve the motion estimation framework.

Acknowledgments The authors are grateful to Samuel McDonald from the Swiss Light Source (SLS) at the Paul Scherrer Institute (PSI), Villigen, Switzerland, for his help while performing the synchrotron experiments at the TOMCAT beamline.

Appendix A: Cone beam geometry

It would be tempting to apply our projection based motion estimation approach to the widely used cone beam geometry. However, there is a fundamental problem due to the Tuy–Smith condition [36,37]. This condition states that the Radon space data is only completely sampled if every plane which intersects the sample also intersects the source trajectory. For the cone beam geometry, only points which belong to the xy-plane (z = 0) satisfy this condition. It is hence not possible to compute the three-dimensional Radon transform from the cone beam projection images. The development of approximate methods for the computation of the three-dimensional Radon transform could, however, help to overcome this limitation.

Appendix B: Computation of the Jacobian for the optimization

We investigate the computation of the Jacobian matrix used to solve Eq. 20 for the 2d case. Let $t_a = (t_1, t_2, \ldots, t_6)$ be the parameters of the affine transformation $T_a$. We identify the first four parameters $t_1, \ldots, t_4$ as parameters of the matrix $A_a$ and $t_5, t_6$ as parameters of the translation $u$, i.e.

$$A_a = \begin{pmatrix} t_1 & t_2 \\ t_3 & t_4 \end{pmatrix}, \quad u = \begin{pmatrix} t_5 \\ t_6 \end{pmatrix}, \qquad (48)$$

cf. Eq. 7. The Jacobi matrix is given by

$$J_\Psi = \begin{pmatrix} \dfrac{\partial \Psi(T_a, \theta_1)}{\partial t_1} & \cdots & \dfrac{\partial \Psi(T_a, \theta_1)}{\partial t_6} \\ \vdots & & \vdots \\ \dfrac{\partial \Psi(T_a, \theta_{|\Theta|})}{\partial t_1} & \cdots & \dfrac{\partial \Psi(T_a, \theta_{|\Theta|})}{\partial t_6} \end{pmatrix} \qquad (49)$$

and

$$\Psi(T_a, \theta) = 1 - \mathrm{NCC}\left( \check f_{\mathrm{ref}}\left( \check T_a(p), \check T_a(\xi_\theta) \right),\; \check f_{\mathrm{def}}(p, \xi_\theta) \right), \qquad (50)$$

cf. Eqs. 15 and 17. In order to ease the notation, set

$$I_1 := \check f_{\mathrm{ref}}\left( \check T_a(p), \check T_a(\xi_\theta) \right), \quad I_2 := \check f_{\mathrm{def}}(p, \xi_\theta), \qquad (51)$$

then

$$\Psi(T_a, \theta) = 1 - \frac{\sum (I_1 - \bar I_1)(I_2 - \bar I_2)}{\left[ \sum (I_1 - \bar I_1)^2 \sum (I_2 - \bar I_2)^2 \right]^{1/2}} \qquad (52)$$
$$:= 1 - \frac{I_{12}}{\sqrt{I_{11} I_{22}}}. \qquad (53)$$

Therefore,

$$\frac{\partial \Psi}{\partial t_i} = - \frac{\left\langle \dfrac{\partial I_1}{\partial t_i} - \dfrac{\partial \bar I_1}{\partial t_i},\; I_{11}(I_2 - \bar I_2) - I_{12}(I_1 - \bar I_1) \right\rangle}{I_{11}\sqrt{I_{11} I_{22}}}, \qquad (54)$$

for $i = 1, \ldots, 6$. To evaluate the derivatives of the type $\partial I_1 / \partial t_i$, we need the spline interpolation as discussed in Sect. 3.3.1. Let $b_{k,l}$ be the spline coefficients and $B^3_k(p)$ and $B^3_l(\theta)$ be the cubic spline basis functions. Then, in the $(p, \theta)$ coordinate system with transformation $\check T_a$, $I_1$ is given by

$$I_1\left(\check T_a(p), \check T_a(\theta)\right) = \sum_{k,l} b_{k,l}\, B^3_k\!\left(\check T_a(p)\right) B^3_l\!\left(\check T_a(\theta)\right) \qquad (55)$$

with

$$\check T_a(p) = \frac{p + \left\langle A_a^{-\top} \xi_\theta,\, u \right\rangle}{\left\| A_a^{-\top} \xi_\theta \right\|} \qquad (56)$$

$$\check T_a(\theta) = \theta + \arccos\left( \frac{\left\langle \xi_\theta,\, A_a^{-\top} \xi_\theta \right\rangle}{\left\| A_a^{-\top} \xi_\theta \right\|} \right) \qquad (57)$$

cf. Eqs. 11 and 12. It follows that

$$\frac{\partial I_1}{\partial t_i} = \sum_{k,l} b_{k,l} \left[ \frac{\partial \check T_a(p)}{\partial t_i}\, B'^3_k\!\left(\check T_a(p)\right) B^3_l\!\left(\check T_a(\theta)\right) + \frac{\partial \check T_a(\theta)}{\partial t_i}\, B'^3_l\!\left(\check T_a(\theta)\right) B^3_k\!\left(\check T_a(p)\right) \right], \qquad (58)$$

where $B'^3_k$ and $B'^3_l$ denote the outer derivatives of $B^3_k(\check T_a(p))$ and $B^3_l(\check T_a(\theta))$, respectively. These outer derivatives can easily be computed because the basis functions are polynomials. It remains to compute $\partial \check T_a(p)/\partial t_i$ and $\partial \check T_a(\theta)/\partial t_i$. For $i = 5, 6$ (i.e. the translational part of $\check T_a$), it immediately follows that

$$\frac{\partial \check T_a(p)}{\partial t_5} = \frac{\left\langle A_a^{-\top}\xi_\theta, \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\rangle}{\left\| A_a^{-\top}\xi_\theta \right\|}, \quad \frac{\partial \check T_a(\theta)}{\partial t_5} = 0, \qquad \frac{\partial \check T_a(p)}{\partial t_6} = \frac{\left\langle A_a^{-\top}\xi_\theta, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\rangle}{\left\| A_a^{-\top}\xi_\theta \right\|}, \quad \frac{\partial \check T_a(\theta)}{\partial t_6} = 0. \qquad (59)$$

For the derivatives w.r.t. $t_i$, $i = 1, \ldots, 4$, we fix the following notation:

$$A_{\xi_\theta} := A^{-\top} \xi_\theta, \quad A_{\partial i} := \frac{\partial A}{\partial t_i}, \quad A^{-1}_{\partial i} := \frac{\partial A^{-1}}{\partial t_i}. \qquad (60)$$

Then,

$$\frac{\partial \check T_a(p)}{\partial t_i} = \frac{1}{\left\| A_{\xi_\theta} \right\|^2} \left[ \xi_\theta^\top A^{-1}_{\partial i} u \left\| A_{\xi_\theta} \right\| - \left( \left\langle A_{\xi_\theta}, u \right\rangle + p \right) \frac{1}{2 \left\| A_{\xi_\theta} \right\|}\, \xi_\theta^\top \left( A^{-1}_{\partial i} A^{-\top} + A^{-1} A^{-\top}_{\partial i} \right) \xi_\theta \right], \qquad (61)$$

for $i = 1, \ldots, 4$. Likewise,

$$\frac{\partial \check T_a(\theta)}{\partial t_i} = - \frac{1}{\sqrt{1 - \dfrac{\left\langle \xi_\theta, A_{\xi_\theta} \right\rangle^2}{\left\| A_{\xi_\theta} \right\|^2}}} \cdot \frac{1}{\left\| A_{\xi_\theta} \right\|^2} \left[ \xi_\theta^\top A^{-\top}_{\partial i} \xi_\theta \left\| A_{\xi_\theta} \right\| - \xi_\theta^\top A^{-\top} \xi_\theta\, \frac{1}{2 \left\| A_{\xi_\theta} \right\|}\, \xi_\theta^\top \left( A^{-1}_{\partial i} A^{-\top} + A^{-1} A^{-\top}_{\partial i} \right) \xi_\theta \right], \qquad (62)$$

for $i = 1, \ldots, 4$. Note that

$$\frac{\partial A^{-1}}{\partial t_i} = -A^{-1} \frac{\partial A}{\partial t_i} A^{-1}, \qquad (63)$$

hence all expressions in Eqs. 61 and 62 are directly computable.
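A minimal numeric sketch of Eqs. 56, 57 and 63 follows. The coordinate map assumes the sign convention of the angle offset given above, and `sinogram_coords` is an illustrative name, not the authors' code; Eq. 63 is checked against a finite difference.

```python
import numpy as np

def sinogram_coords(p, theta, A, u):
    """Transformed sinogram coordinates following Eqs. (56)-(57):
    where the value at (p, theta) maps under the affine pair (A, u)."""
    xi = np.array([np.cos(theta), np.sin(theta)])
    w = np.linalg.inv(A).T @ xi                       # A^{-T} xi_theta
    p_new = (p + w @ u) / np.linalg.norm(w)
    cos = np.clip(xi @ w / np.linalg.norm(w), -1.0, 1.0)  # round-off guard
    return p_new, theta + np.arccos(cos)

# Eq. (63), d(A^{-1})/dt_i = -A^{-1} (dA/dt_i) A^{-1}, verified by a
# forward difference in t_1 (the (0,0) entry of A)
A = np.array([[1.05, -0.15], [0.18, 1.10]])
dA = np.zeros((2, 2)); dA[0, 0] = 1.0
eps = 1e-6
numeric = (np.linalg.inv(A + eps * dA) - np.linalg.inv(A)) / eps
analytic = -np.linalg.inv(A) @ dA @ np.linalg.inv(A)
print(np.max(np.abs(numeric - analytic)))             # small, O(eps)
```

For the identity transformation (A = I, u = 0) the coordinate map reduces to the identity on the sinogram, which is a quick sanity check on the implementation.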


References

1. Geers, M.G.D., DeBorst, R., Brekelmans, W.A.M.: Computing strain fields from discrete displacement fields in 2d-solids. Int. J. Solids Struct. 33(29), 4293–4307 (1996)

2. Bay, B., Smith, T., Fyhrie, D., Saad, M.: Digital volume correlation: three-dimensional strain mapping using X-ray tomography. Exp. Mech. 39(3), 217–226 (1999)

3. Verhulp, E., van Rietbergen, B., Huiskes, R.: A three-dimensional digital image correlation technique for strain measurements in microstructures. J. Biomech. 37(9), 1313–1320 (2004)

4. Zhang, D., Zhang, X., Cheng, G.: Compression strain measurement by digital speckle correlation. Exp. Mech. 39(1), 62–65 (1999)
5. Cheng, P., Sutton, M., Schreier, H., McNeill, S.: Full-field speckle pattern image correlation with b-spline deformation function. Exp. Mech. 42(3), 344–352 (2002)

6. Maintz, J.B., Viergever, M.A.: A survey of medical image registration. Med. Image Anal. 2(1), 1–36 (1998)

7. Zitova, B., Flusser, J.: Image registration methods: a survey. Image Vis. Comput. 21(11), 977–1000 (2003)

8. Natterer, F., Wuebbeling, F.: Mathematical Methods in Image Reconstruction. SIAM, Philadelphia (2001)
9. Kak, A.C., Slaney, M.: Principles of Computerized Tomographic Imaging. IEEE Press, New York (1988)

10. De Man, B., Nuyts, J., Dupont, P., Marchal, G., Suetens, P.: Reduction of metal streak artifacts in X-ray computed tomography using a transmission maximum a posteriori algorithm. IEEE Trans. Nuclear Sci. 47(3), 977–981 (2000)

11. Fitchard, E.E., Aldridge, J.S., Reckwerdt, P.J., Mackie, T.R.: Registration of synthetic tomographic projection data sets using cross-correlation. Phys. Med. Biol. 43(6), 1645–1657 (1998)

12. Fitchard, E.E., Aldridge, J.S., Reckwerdt, P.J., Olivera, G.H., Mackie, T.R., Iosevich, A.: Six parameter patient registration directly from projection data. Nucl. Instrum. Methods Phys. Res. Sect. A 421(1–2), 342–351 (1999)

13. Fitchard, E.E., Aldridge, J.S., Ruchala, K., Fang, G., Balog, J., Pearson, D.W., Olivera, G.H., Schloesser, E.A., Wenman, D., Reckwerdt, P.J., Mackie, T.R.: Registration using tomographic projection files. Phys. Med. Biol. 44(2), 495–507 (1999)

14. Lu, W.G., Fitchard, E.E., Olivera, G.H., You, J., Ruchala, K.J., Aldridge, J.S., Mackie, T.R.: Image/patient registration from (partial) projection data by the Fourier phase matching method. Phys. Med. Biol. 44(8), 2029–2048 (1999)

15. Bingham, P., Arrowood, L.: Projection registration applied to non-destructive testing. J. Electron. Imag. 19(3) (2010)
16. Weihua, M., Tianfang, L., Nicole, W., Lei, X.: CT image registration in sinogram space. Med. Phys. 34(9), 3596–3602 (2007)
17. Milanfar, P.: A model of the effect of image motion in the Radon transform domain. IEEE Trans. Image Process. 8(9), 1276–1281 (1999)

18. Robinson, D., Milanfar, P.: Fast local and global projection-based methods for affine motion estimation. J. Math. Imag. Vis. 18(1), 35–54 (2003)

19. Desbat, L., Roux, S., Grangeat, P.: Compensation of some time dependent deformations in tomography. IEEE Trans. Med. Imag. 26(2), 261–269 (2007)

20. Bartels, C., de Haan, G.: Direct motion estimation in the Radon transform domain using match-profile backprojections. In: Image Processing, 2007. ICIP 2007. IEEE International Conference on, vol. 6, pp. VI-153–VI-156 (2007)
21. Tatsuhiko, T., Shinichi, H.: Detection of planar motion objects using Radon transform and one-dimensional phase-only matched filtering. Syst. Comput. Jpn. 37(5), 56–66 (2006)

22. Traver, J.V., Pla, F.: Motion analysis with the Radon transform on log-polar images. J. Math. Imag. Vis. 30(2), 147–165 (2008)
23. Mooser, R., Hack, E., Sennhauser, U., Székely, G.: Estimation of affine transformations directly from tomographic projections. In: 6th International Symposium on Image and Signal Processing and Analysis, pp. 377–382. Salzburg, Austria (2009)

24. Deans, S.R.: The Radon Transform and Some of its Applications. Wiley, New York (1983)

25. Woods, R.P.: Spatial transformation models. In: Handbook of Medical Imaging, pp. 465–490. Academic Press, San Diego (2000)
26. Lewis, J.P.: Fast normalized cross-correlation. In: Vision Interface, pp. 120–123. Canadian Image Processing and Pattern Recognition Society (1995)

27. Richard, H.B., Robert, B.S., Gerald, A.S.: Approximate solution of the trust region problem by minimization over two-dimensional subspaces. Math. Program. 40(3), 247–263 (1988)

28. De Boor, C.: A Practical Guide to Splines, rev. edn. Springer, New York (2001)

29. Lindeberg, T.: Scale-space theory: a basic tool for analysing structures at different scales. J. Appl. Stat. 21(2), 224–270 (1994)
30. Forsberg, F., Mooser, R., Arnold, M., Hack, E., Wyss, P.: 3d micro-scale deformations of wood in bending: synchrotron radiation μCT data analyzed with digital volume correlation. J. Struct. Biol. 164(3), 255–262 (2008)

31. Forsberg, F., Sjödahl, M., Mooser, R., Hack, E., Wyss, P.: Full three-dimensional strain measurements on wood exposed to three-point bending: analysis by use of digital volume correlation applied to synchrotron radiation micro-computed tomography image data. Strain 46(1), 47–60 (2010)

32. Trtik, P., Dual, J., Keunecke, D., Mannes, D., Niemz, P., Stähli, P., Kaestner, A., Groso, A., Stampanoni, M.: 3d imaging of microstructure of spruce wood. J. Struct. Biol. 159(1), 46–55 (2007)
33. Marone, F., Hintermuller, C., McDonald, S., Abela, R., Mikuljan, G., Isenegger, A., Stampanoni, M.: X-ray tomographic microscopy at TOMCAT. In: Developments in X-Ray Tomography VI, vol. 7078, pp. 707,822–11. SPIE, San Diego (2008)

34. Stampanoni, M., Groso, A., Isenegger, A., Mikuljan, G., Chen, Q., Meister, D., Lange, M., Betemps, R., Henein, S., Abela, R.: Tomcat: A beamline for TOmographic Microscopy and Coherent rAdiology experimenTs. AIP Conf. Proceed. 879(1), 848–851 (2007) 35. Schretter, C., Neukirchen, C., Bertram, M., Rose, G.: Correction

of some time-dependent deformations in parallel-beam computed tomography. In: Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on, pp. 764–767 (2008)

36. Tuy, H.K.: An inversion formula for cone-beam reconstruction. SIAM J. Appl. Math. 43(3), 546–552 (1983)

37. Smith, B.D.: Image reconstruction from cone-beam projections: necessary and sufficient conditions and reconstruction methods. IEEE Trans. Med. Imag. 4(1), 14–25 (1985)


Author Biographies

René Mooser received his diploma degree in Computational Science and Engineering in 2006 and a Ph.D. degree from the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, in 2010. While pursuing his Ph.D. degree, he was an assistant researcher at the Swiss Federal Laboratories for Materials Testing and Research (EMPA) in the fields of computed tomography and image processing. Currently, he is working in the financial industry, developing algorithmic trading strategies.

Fredrik Forsberg received his Masters degree in Engineering Physics in 2002 and his Ph.D. in Experimental Mechanics from Luleå University of Technology, Luleå, Sweden, in 2008. He is currently a researcher at LKAB R&D, Luleå, Sweden. His main areas of research are X-ray microtomography, correlation based techniques and 2D/3D image analysis.

Erwin Hack received the diploma degree in theoretical physics and a Ph.D. degree in physical chemistry from the University of Zurich, Switzerland, in 1986 and 1991, respectively. He is a senior scientist at the Swiss Federal Laboratories for Materials Testing and Research (Empa), where he is working on full-field optical measurement techniques. His research interests are interferometry, infrared imaging and image analysis. He has authored and co-authored 23 papers in peer-reviewed journals and more than 25 conference papers. He is a member of the Editorial Board of Optics and Lasers in Engineering (Elsevier), the European Optical Society and the Optical Society of America.

Gábor Székely received his degree in chemical engineering from the Technical University of Budapest in 1974, and a degree in applied mathematics from the University of Budapest in 1981. He also obtained his Ph.D. degree in analytical chemistry from the Technical University of Budapest in 1985. Currently, he is a professor of medical image analysis and visualization at the Computer Vision Laboratory of ETH Zurich and is developing image analysis, visualization, and simulation methods for computer support of medical diagnosis, therapy, training, and education.

Urs Sennhauser received his Ph.D. degree in Physics from the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, in 1981. He is heading the Electronics/Metrology Laboratory and the Reliability Center at the Swiss Federal Laboratories for Materials Testing and Research (Empa). He is currently lecturing "Reliability of Devices and Systems" and "Physics of Failure and Failure Analysis of Electronic Circuits and Devices" at ETH Zurich. In addition to national affiliations, he is an active member of international professional organizations such as IEEE, APS and SPIE.

