
On a Gradient-based Evolution Strategy for Parametric Illumination Correction


The observed image f(x,y) is modeled as the product of the illumination i(x,y) and the reflectance r(x,y):

f(x,y) = i(x,y) · r(x,y). (5.1)

The logarithm of the image is computed to obtain a linear expression: g(x,y) = ln i(x,y) + ln r(x,y). The Fourier transform of this expression is G(u,v) = I(u,v) + R(u,v). The linear filtering of the image with a filter H(u,v) is given by

S(u,v) = H(u,v) · G(u,v) = H(u,v) · I(u,v) + H(u,v) · R(u,v). (5.2)

From this expression it becomes clear that we can remove the illumination component by linear filtering of the logarithm image, assuming that the logarithm of the illumination is band-limited. A high-pass filter would allow us to recover the reflectance image after exponentiation of the filtered image in Eq. (5.2). If the filter applied is a low-pass filter, we would obtain an estimate of the illumination field. A weakness of the method is the assumption that low-frequency components are due only to illumination effects. This is especially problematic in MRI images, where the tissues are assumed to correspond to constant-intensity regions, which also contribute low-frequency content.
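The homomorphic scheme above can be sketched in a few lines of NumPy. This is an illustration, not the chapter's code; the Gaussian low-pass cutoff is an assumed tuning parameter:

```python
import numpy as np

def homomorphic_correction(image, cutoff=0.05, eps=1e-6):
    """Estimate the illumination field by low-pass filtering the
    log-image in the Fourier domain, then divide it out.
    `cutoff` (a normalized frequency) is an assumed tuning parameter."""
    g = np.log(image + eps)                          # g = ln i + ln r
    G = np.fft.fft2(g)                               # G = I + R
    u = np.fft.fftfreq(image.shape[0])[:, None]
    v = np.fft.fftfreq(image.shape[1])[None, :]
    H = np.exp(-(u**2 + v**2) / (2 * cutoff**2))     # Gaussian low-pass H(u, v)
    illum = np.exp(np.real(np.fft.ifft2(H * G)))     # estimated i(x, y)
    return image / illum, illum
```

Replacing H by 1 − H gives the high-pass variant that recovers the reflectance directly after exponentiation.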

5.3 Evolution Strategies and the Illumination Correction Problem

The first issue in any application of evolutionary algorithms is the function to be minimized. In the case of parametric approaches to illumination inhomogeneity correction, the objective function is related to the approximation error. In [9] the model of the illumination bias is a linear combination of Legendre polynomials, denoted P_i(·), where i is the order of the polynomial. The reason for this choice is that they are orthogonal, and thus constitute a basis of smooth functions, ideal for modeling the smooth variations of the illumination gradient. The 2D bias field is modeled as follows:

b̂(x,y) = Σ_i Σ_j p_ij P_i(x) P_j(y), (5.3)

where the polynomials P_i(·) are orthogonal. In the case of the 3D volumes needed for medical imaging, the generalization is straightforward. The true illumination bias is unknown, so it is impossible to define the approximation error relative to it. The approach is then to assume (1) that we know the classes of objects present in the image, (2) that each region in the image corresponding to an object has the same constant intensity, and (3) that we know this intensity value. This is generally true for medical images, where the tissues have a known response to a given imaging method. A noise term could be needed for model completeness, but it is of no use in the following, so it is neglected. The error is then defined as the approximation to an image whose pixels have exactly the class-defined intensities.
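As an illustration (not the authors' code), the bias model as a linear combination of Legendre polynomials can be evaluated directly with NumPy's Legendre routines; mapping the pixel coordinates onto [-1, 1] is an assumption of this sketch:

```python
import numpy as np
from numpy.polynomial import legendre

def bias_field(params, shape):
    """Evaluate b(x, y) = sum_ij p_ij P_i(x) P_j(y) on a pixel grid.
    `params[i, j]` is the coefficient of P_i(x) P_j(y); coordinates are
    mapped to [-1, 1], the natural domain of the Legendre polynomials."""
    ny, nx = shape
    y = np.linspace(-1.0, 1.0, ny)
    x = np.linspace(-1.0, 1.0, nx)
    xx, yy = np.meshgrid(x, y)                # both arrays have shape (ny, nx)
    return legendre.legval2d(xx, yy, np.asarray(params, dtype=float))
```

A full (μ+λ) individual is then just the flattened `params` matrix, and candidate bias fields are obtained by reshaping and calling `bias_field`.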

e = Σ_{x,y} valley( r̂(x,y) ), (5.4)

where valley(·) is a function with minima at the class intensities μ_k, and r̂(x,y) is the reflectance obtained after correction. Many of the expressions in the original paper [9] refer to the modeling of the illumination as an additive term, which can be explained in the framework of homomorphic filtering: in this case the algorithm would be estimating the filter to be applied to the logarithm image. We have preferred to assume that the Legendre polynomials are multiplicative modulations of the image. Therefore, they must be normalized to the [0,1] range.

The image correction is performed by dividing the observed image by the estimated bias:

r̂(x,y) = f(x,y) / b̂(x,y).

It must be noted that the corrected image will have a greater signal-to-noise ratio (SNR) in the regions of low estimated bias.

The global minima of Eq. (5.4) will be configurations of pixel intensities such that each one belongs to one of the predefined intensity classes μ_k. The expression of the error that we use in the derivation of the gradient is a special case of Eq. (5.4), when valley is a quadratic function:

e = Σ_{x,y} min_k ( f(x,y)/b̂(x,y) − μ_k )². (5.5)
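The quadratic special case of Eq. (5.4) amounts to assigning each corrected pixel to its nearest class intensity and summing the squared residuals. A minimal sketch, with the class means μ_k passed as a list:

```python
import numpy as np

def quadratic_error(corrected, class_means):
    """Sum over pixels of the squared distance to the closest class mean."""
    # Residual of every pixel to every class, shape (ny, nx, n_classes).
    d = corrected[..., None] - np.asarray(class_means)
    # For each pixel keep only the closest class, then sum over pixels.
    return float(np.sum(np.min(d**2, axis=-1)))
```

This is the fitness that the evolutionary algorithms below would evaluate for each candidate set of Legendre coefficients.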

It is easy to deduce an expression of the gradient of the error relative to each parameter of the linear combination of Legendre polynomials:

∂e/∂p_ij = −2 Σ_{x,y} ( f(x,y)/b̂(x,y) − μ_k(x,y) ) f(x,y) P_i(x) P_j(y) / b̂(x,y)², (5.6)

where μ_k(x,y) denotes the class intensity closest to the corrected pixel (x,y).

This expression is the basis for the GradPABIC proposed below. Once the error function has been identified, the formulation of an ES needs the definition of the individuals and the search space. In the present case, the search space is that of the linear combination parameters of the Legendre polynomials, and each individual consists of a set of such parameters. We assume that the typical ES [2] is well known to the reader. We will use a (μ+λ) strategy, which consists of selecting the new population from the joint set of parents and offspring. This strategy is elitist and its convergence is guaranteed. The sensitive parameters of these algorithms are the number of mutations and the population size. We will not use recombination.

Observe that the number of fitness computations is O(μ + λG), with G the number of generations. That is, the population size is not very influential on the computation time (unless it grows exponentially). The PABIC proposed in [9] is basically an ES with a population of a single individual and a restricted version of the self-adaptive mutation variance: a (1+1) ES. We reproduce below the expressions that define the algorithm:

x_{t+1} = x_t + A_t r_t if e(x_t + A_t r_t) < e(x_t), and x_{t+1} = x_t otherwise,

where x_t is the population of the algorithm, given by the set of linear combination parameters of the Legendre polynomials, and r_t is a random vector whose components follow independent normal N(0,1) distributions. A_t is the mutation covariance matrix, which is self-adapted along the evolution: the magnitude of the matrix is increased or decreased depending on whether new optimal solutions are found. We have applied the algorithm with the parameters recommended in the paper. The GradPABIC algorithm has the same outline as the PABIC, but its mutation operator is given by a random sampling along the gradient of the error function with respect to the search parameters:

r_t ∼ N(0, I).
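For concreteness, here is a (1+1) ES skeleton in the PABIC style, with the accept/grow, reject/shrink step-size rule. The isotropic step and the constants are assumptions of this sketch; PABIC adapts a full covariance matrix A_t:

```python
import numpy as np

def one_plus_one_es(error, x0, sigma=0.1, c_grow=1.05, iters=2000, rng=None):
    """(1+1) ES sketch: the mutation magnitude grows by `c_grow` after an
    accepted mutation and shrinks by c_grow**(-1/4) after a rejected one.
    An isotropic step stands in for PABIC's covariance matrix A_t."""
    rng = np.random.default_rng() if rng is None else rng
    x, e = x0.copy(), error(x0)
    c_shrink = c_grow ** (-1 / 4)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(x.size)    # x_t + A_t r_t
        e_cand = error(cand)
        if e_cand < e:
            x, e, sigma = cand, e_cand, sigma * c_grow    # accept, widen search
        else:
            sigma *= c_shrink                             # reject, narrow search
    return x
```

GradPABIC would replace the isotropic mutation with a step sampled along the gradient of Eq. (5.5), at the cost of one gradient evaluation per iteration.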

To test the algorithms we generated several instances of corrupted images from a chessboard image. We know the original uncorrupted image and the illumination bias field, and therefore we can compute the correlation between them and the estimated ones. The goal of the algorithms is to estimate the bias fields and to recover a clean chessboard image. PABIC parameters were set according to the nominal values recommended in [9]: c_grow in the interval [1.01, 1.1] and c_shrink = c_grow^(−1/4). The number of PABIC iterations is 9000. The maximum order of the Legendre polynomials was 3. The ES was tested with μ = 100, λ = 300 and 30 generations, which gives the same number of fitness evaluations as PABIC. The GradPABIC was allowed only 300 fitness evaluations because of the cost of computing the gradient. We computed 30 replications of each algorithm on each image, the correlation between the recovered chessboard images and the original ones, and the correlation between the original illumination bias and the ones estimated by the correction algorithms. The results are summarized in Figures 5.1, 5.2, 5.3, and 5.4. Figure 5.1 plots the average correlation between the original image and the recovered one for each image and algorithm. The results of GradPABIC are better than those of PABIC and the ES in all cases. Figure 5.2 plots the standard deviation of the correlation between the recovered image and the ground truth for each image and algorithm. A high standard deviation implies low confidence in the algorithm results.

Again GradPABIC provides the best results in the sense of lower variance of the results for each image, except for some images where the ES improves on it. The worst results in terms of variance are the ones provided by PABIC. The ES is less variable than PABIC, which is natural because of the improved convergence properties implied by the larger population. PABIC, being a single-solution algorithm, has a poorer ability to escape bad local optima than the ES. The good results of GradPABIC are more surprising, because it is also a single-solution algorithm. It seems that the gradient information improves the convergence of the algorithm.