
3.4 Deterministic AT Optimization


Terzopoulos and Vasilescu [8] use dynamic graphs for triangulations. The authors use a mass-spring system with spring coefficients that are adjusted according to the estimated curvature of the point set to be approximated. The equilibrium state is calculated by solving a differential equation system. In their experiments a point set of 128 × 128 sampling points is approximated by a triangulation with 30 × 30 vertices. Typical for this approach is that it uses (man-made) assumptions about the correlation between the optimal position of the vertices and the curvature estimated from the point cloud.

Hoppe [4] uses a deterministic approximation scheme that utilizes least-squares distances as well as varying numbers of edges for a dynamic mass-spring-based triangulation model. An additional regularization term in the fitness function generates a regular distribution of the vertices. This additional term supports the search for the minimum of an energy function. The strength of the spring coefficients is decreased continuously in order to allow triangles with tapered-off edges. The approach of Hoppe does not make explicit use of a priori assumptions about curvature properties, etc. It just needs the evaluation of an unbiased distance function.

Algorri and Schmitt [1] introduce a different mass-spring model. The relation between the vertices and the sampling points is realized by a connection between each vertex and its next-nearest sampling point. A nonlinear equation system models a dynamically oscillating net structure. The spring values of the system are calculated deterministically after each oscillation step. The damping factor is not easy to adjust: on the one hand, the system should remain in oscillation long enough to find a good global solution; on the other hand, it should anneal fast enough to settle on a solution.
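To illustrate the damping trade-off in such mass-spring nets, the following sketch performs one damped explicit integration step of a generic spring net. It is not the exact equation system of any of the cited approaches; all names and parameters are hypothetical.

```python
def spring_step(pos, vel, edges, rest_len, k, damping, dt):
    """One explicit Euler step of a damped mass-spring net with unit masses.

    pos, vel : per-vertex [x, y, z] lists; edges: (i, j) vertex index pairs;
    rest_len : rest length per edge; k: spring constant; damping in (0, 1].
    """
    forces = [[0.0, 0.0, 0.0] for _ in pos]
    for (i, j), l0 in zip(edges, rest_len):
        d = [pj - pi for pi, pj in zip(pos[i], pos[j])]
        length = max(sum(c * c for c in d) ** 0.5, 1e-12)
        f = k * (length - l0) / length        # Hooke's law along the edge direction
        for a in range(3):
            forces[i][a] += f * d[a]
            forces[j][a] -= f * d[a]
    for p, v, frc in zip(pos, vel, forces):
        for a in range(3):
            v[a] = damping * (v[a] + dt * frc[a])   # damping anneals the oscillation
            p[a] += dt * v[a]
```

A damping factor close to 1 keeps the net oscillating and explores more globally; a small factor anneals the net quickly.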

3.5 Evolutionary AT Optimization

All algorithms described in the preceding section follow deterministic schemes. These approaches assume that the implemented algorithmic solution leads directly and without any deterioration to the optimum solution. This implies that the general criteria for finding a solution are already known and can be implemented algorithmically. An evolutionary algorithm does not make any assumptions about the path to the solution and is thus able to find the desired optimum with only very little a priori knowledge.

The evolution strategy (ES) is a special form of the superset called evolutionary algorithms (EA). EA also comprise methods like genetic algorithms (GA), evolutionary programming (EP), and genetic programming (GP). EA, fuzzy logic, and neural networks belong to the general set called CI (computational intelligence) techniques [12]. Evolutionary algorithms are probabilistic optimization strategies that adopt the paradigms of evolution as introduced by Darwin. The surprisingly robust and beautiful solutions of nature are the motivation to use this sophisticated approach.

The evolution strategy is described in its basic form by the nomenclature (μ/ρ +, λ)-ES [2]. Here, μ denotes the number of parents, λ the number of offspring, and ρ a coefficient that defines the number of parents involved in recombination. In the special case where two parents mate to form an offspring, the two commonly known short forms (μ+λ)-ES and (μ,λ)-ES are used. The + sign denotes that the offspring as well as the parents are used by selection. The , sign denotes that only the offspring will be used to form the next parent population. In the case of a (μ,λ)-selection the relation λ ≥ μ is necessary. An individual consists of a set of n real-valued parameters x_1, …, x_n (so-called objective variables), which can be evaluated by an n-dimensional fitness function f (generally, f: ℝⁿ → ℝ), and of n_σ strategy parameters σ_1, …, σ_{n_σ}, 1 ≤ n_σ ≤ n, of the ES. Two additional global parameters τ and τ' characterize the strength of the normally distributed random mutation, which is applied to the objective variables of each individual. If only one step size is used (n_σ = 1), τ equals zero. A rotation matrix of correlation coefficients as elements of the strategy parameters is introduced by Schwefel [11] but is not used in this context. Each complete individual, consisting of both parameter sets, is adapted by the evolutionary process. Hence, an external deterministic adaptation of the step sizes is not necessary, because the strategy parameters also undergo the selection process. Recombination interchanges the genetic data of two or more parent individuals by intermediate or discrete exchange schemes.
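As a minimal sketch of this representation (names are hypothetical; one step size per objective variable is assumed, i.e., n_σ = n):

```python
from dataclasses import dataclass

@dataclass
class Individual:
    """ES individual: objective variables plus self-adapted step sizes."""
    x: list        # objective variables x_1, ..., x_n
    sigma: list    # strategy parameters sigma_1, ..., sigma_n
    fitness: float = float("inf")   # filled in by the evaluation step
```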

truncation selection: the individuals are ordered by their fitness evaluated by f; the μ best survive.

termination criterion: e.g., a maximum number of generations.

The algorithm iterates the following steps:

(A) initialization: create and evaluate the start population of μ individuals
(B) recombination: generate λ offspring by recombining randomly chosen parents
(C) mutation: mutate the offspring
(D) evaluation: compute the fitness of the offspring
(E) selection: apply truncation selection to the union of the offspring and Q, where Q = ∅ in case of (μ,λ)-selection and Q is the parent population in case of (μ+λ)-selection
(F) test termination criterion: either continue with (B) or end.
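A minimal sketch of one generation of this loop, assuming the Individual class above and externally supplied recombine, mutate, and evaluate operators (all names hypothetical):

```python
import random

def es_generation(parents, lam, recombine, mutate, evaluate, plus_selection=False):
    """One (mu+lambda)- or (mu,lambda)-ES generation with truncation selection."""
    mu = len(parents)
    # (B), (C): create lambda offspring from randomly mated parent pairs
    offspring = [mutate(recombine(random.sample(parents, 2))) for _ in range(lam)]
    # (D): evaluate the offspring
    for child in offspring:
        child.fitness = evaluate(child)
    # (E): Q = parents for '+' selection, Q = empty for ',' selection
    pool = offspring + parents if plus_selection else offspring
    return sorted(pool, key=lambda ind: ind.fitness)[:mu]   # minimization
```

For comma selection, λ ≥ μ must hold so that enough offspring are available to form the next parent population.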

Mutation is realized by first varying the step size parameters:

σ'_i = σ_i · exp(τ' · N(0, 1) + τ · N_i(0, 1))   (3.5)

In a second step the objective variables are changed:

x'_i = x_i + σ'_i · N_i(0, 1)   (3.6)

The vector of step sizes can be initialized arbitrarily with positive values. The factors τ and τ' depend on the dimension n of the problem. A classic choice is τ = (2√n)^(−1/2) and τ' = (2n)^(−1/2), respectively [13]. N(ξ, σ²) denotes the normal distribution with expectation ξ and variance σ²; the index i of N_i(0, 1) indicates that a new random number is drawn for each component.
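Equations (3.5) and (3.6) translate directly into code; the sketch below assumes the Individual class from above.

```python
import math
import random

def mutate(ind, tau, tau_prime):
    """Log-normal step size adaptation (3.5), then Gaussian variation (3.6)."""
    global_draw = tau_prime * random.gauss(0.0, 1.0)   # one draw per individual
    sigma = [s * math.exp(global_draw + tau * random.gauss(0.0, 1.0))
             for s in ind.sigma]                       # N_i: fresh draw per component
    x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(ind.x, sigma)]
    return Individual(x=x, sigma=sigma)

n = 100                                      # problem dimension
tau = (2.0 * math.sqrt(n)) ** -0.5           # classic choice [13]
tau_prime = (2.0 * n) ** -0.5
```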

Typically, either discrete or intermediate recombination is used. This operator is applied separately to the objective and the strategic parameters.

Bisexual discrete recombination generates one new offspring individual by mating two randomly selected parent individuals a and b: each component of the offspring is chosen by chance from either a_i or b_i. Bisexual intermediate recombination generates a new offspring individual by application of the formula x'_i = (a_i + b_i)/2. From empirical results it is known that intermediate recombination of the objective parameters and discrete recombination of the step sizes is a good choice.
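Both operators, combined in the empirically favored mix (intermediate for the objective variables, discrete for the step sizes), as a sketch with hypothetical names:

```python
import random

def discrete(a, b):
    """Each offspring component is copied from one of the two parents by chance."""
    return [random.choice(pair) for pair in zip(a, b)]

def intermediate(a, b):
    """Each offspring component is the average of the parental components."""
    return [(u + v) / 2.0 for u, v in zip(a, b)]

def recombine(pair):
    """Intermediate recombination for x, discrete for the step sizes."""
    a, b = pair
    return Individual(x=intermediate(a.x, b.x), sigma=discrete(a.sigma, b.sigma))
```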

The ES for optimizing an AT was implemented as follows:

The initial population is formed by triangulations (e.g., by Delaunay triangulation [3]) of the point set. The individuals are encoded according to the description of the AT given above. In the beginning, an equidistant point set in the plane is used to build the initial triangulation. A set of sampling points of artificially designed objects is used.

The fitness function described above is applied (a simplified sketch is given after this list).

The termination criterion holds if a limiting maximum number of generations is exceeded.

The population sizes μ and λ are kept fixed. The complete set of step sizes is used, i.e., n_σ = n. Intermediate recombination of the objective parameters and discrete recombination of the step sizes was the best choice during initial experiments and, thus, this setting is kept for all experiments. τ and τ' have been set according to the dimension of the problem.
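To make the fitness evaluation concrete, the following simplified sketch treats the AT as a height field over a regular grid on the unit square and measures vertical rather than true punctiform distances; the grid layout, the diagonal split of the cells, and all names are illustrative assumptions, not the authors' exact encoding.

```python
def sse_fitness(z, grid_n, samples):
    """Sum of squared vertical distances from sample points to the triangulation.

    z       : grid_n * grid_n vertex heights (row-major) over [0, 1]^2
    samples : iterable of (px, py, ph) sampling points
    """
    cell = 1.0 / (grid_n - 1)
    sse = 0.0
    for px, py, ph in samples:
        i = min(int(px / cell), grid_n - 2)      # grid cell containing (px, py)
        j = min(int(py / cell), grid_n - 2)
        u, v = px / cell - i, py / cell - j      # local cell coordinates in [0, 1]
        z00, z10 = z[j * grid_n + i], z[j * grid_n + i + 1]
        z01, z11 = z[(j + 1) * grid_n + i], z[(j + 1) * grid_n + i + 1]
        if u + v <= 1.0:   # lower-left triangle of the cell
            zt = z00 + u * (z10 - z00) + v * (z01 - z00)
        else:              # upper-right triangle of the cell
            zt = z11 + (1.0 - u) * (z01 - z11) + (1.0 - v) * (z10 - z11)
        sse += (zt - ph) ** 2
    return sse
```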

The experiments have been designed to illustrate the behavior of the algorithm regarding the influence of the structure and density of the sampling points, the exactness of the reconstruction, and the time and space complexity of the algorithm.

A hemisphere was sampled with equidistant points. Figure 3.2 shows the reconstruction of the artificial object using a set of vertices.

The initial triangulation started with an equidistant point set. The optimized triangulation shows triangles with nearly equal edge lengths (left-hand side of Figure 3.2). This result is typical for surfaces with homogeneous curvature: the algorithm distributes the vertices equally over the surface. The border edge between the sphere and the plane is interpolated by a regular linear polygon. The best fitness, the minimum punctiform distance, and the maximum punctiform distance were recorded over 5 runs.

A cylinder lying in the plane has a lateral area with one constant zero and one constant nonzero curvature. The front and rear faces stand vertically on the plane and form two sharp circular edges with the lateral area of the cylinder. Again a set of equally spaced sampling points has been used to describe the object surface. A subset of vertices was used for the triangulation.

On the right-hand side of Figure 3.3, the insufficient approximation of the front face as well as of the intersection curve of the cylinder and the plane is obvious.

Figure 3.3. Delaunay triangulation of a half-cylinder using the initial regular grid.

Figure 3.4. Optimized reconstruction of a half-cylinder.

The triangulation algorithm of Delaunay used for the initial individual tries to generate triangles with equal edge lengths. On a regular point grid this yields the typical structures that can be seen on the left-hand side of Figure 3.3.

The optimized approximating triangulation is shown in Figure 3.4. The vertices are distributed in a more efficient way than during the initialization.

Long triangles are generated automatically by the application of the simple SSE fitness function alone. This is especially noteworthy, because interpolating triangulations often generate small triangles. Even optimized interpolating triangulations need special fitness functions to be able to evolve long triangles [9]. The ability of the algorithm to generate nearly vertical triangles should also be noticed.

The special property of the EA is that it forms optimal structures just by evaluating the fitness function and selecting the best solutions. It does not follow fixed deterministic rules.

Skew Steps. Sharp edges that do not exactly follow the orientation of the scanning lines may become problematic for 3D interpolating triangulations. Due to aliasing effects and the fixed vertex structure, these algorithms sometimes tend to generate harmonica-shaped vertical structures. Dropping the restriction to use only fixed vertices, the surface can adapt better to the surface structure.

Figure 3.5. Evolution of a step structure. Generations 100 (upper left), 1 000 (upper right), 5 000 (lower left), and 30 000 (lower right).

One can see from Figure 3.5 that after only a few generations the evolutionary algorithm was able to generate a rough approximation of the object.

In the following generations the heights of the vertices are properly set. Then the structure is modeled by the adaptation of the parameters of the mass-spring system, and the small step in the middle evolves. By the end of the run the maximum punctiform distance and the fitness have reached their final values.

Figure 3.6 exemplarily shows the convergence behavior of the optimization algorithm. The fitness values and the step sizes are displayed. The trend of the curves is representative for the reconstruction of surfaces with a complex topology. The best case, linearly (on a logarithmic scale) decreasing fitness values, can only be achieved for simply structured surfaces. The result of the automatic adaptation process of the step sizes can be seen in the lower part of Figure 3.6.

The density of the digitized points has a direct influence on the calculation time, because the next-nearest triangle to a given point has to be identified. This problem can be solved efficiently by the method of Preparata [6], which has O(n log n) + O(m log m) time and space complexity, where n is the number of inner edges and m the number of triangles; the query has to be performed for each of the sampling points.
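Preparata's point-location structure provides the asymptotic bounds above. As a simpler stand-in, a uniform bucket grid over the triangle bounding boxes already avoids testing every triangle for every sample point; the sketch below (hypothetical names, non-negative coordinates assumed) returns the candidate triangles for a query point.

```python
from collections import defaultdict

def build_grid(triangles, cell):
    """Hash each triangle into every grid cell its 2D bounding box overlaps."""
    grid = defaultdict(list)
    for t_id, tri in enumerate(triangles):      # tri: three (x, y) vertices
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for gx in range(int(min(xs) / cell), int(max(xs) / cell) + 1):
            for gy in range(int(min(ys) / cell), int(max(ys) / cell) + 1):
                grid[(gx, gy)].append(t_id)
    return grid

def candidate_triangles(grid, cell, px, py):
    """Triangles whose bounding boxes overlap the cell of the query point."""
    return grid[(int(px / cell), int(py / cell))]
```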

Figure 3.6. Fitness function and step sizes. Upper part: fitness (logarithmic) over the generations; lower part: minimum, average, and maximum step sizes of the spring and z-component parameters.

Figure 3.7. Influence of the number of sampling points on the fitness and the run time.

Influence of the Sample Point Density. In the experiments the sample points of the half-cylinder have been used.

The number of the equidistant sampling points is varied over a wide range, while the initial triangulation of the AT has a fixed number of vertices. In order to allow a comparison of the triangulations, the distance of the resulting surfaces to a common reference sample point set was calculated. All runs stopped after the same number of generations.

Figure 3.7 shows the average distances of 5 runs and the amount of time (logarithmic scaling). The figure supports the experience that a nonlinear relation between the amount of time and the number of points exists. It should be noted that an increasing number of points does not have a relevant influence on the fitness if the ratio of the number of triangles to the number of sampling points exceeds a certain limit. This oversampling phenomenon is also typical for reconstructions with Non-Uniform Rational B-Splines (NURBS) [10] and gives a hint about the necessary resolution of an AT. For more complex geometries this effect is not so well pronounced, but it still exists.

Figure 3.8. Influence of the vertex density on the fitness and the run time.

Influence of the Vertex Density. Generally, the higher the number of vertices of an AT, the better the approximation. Therefore, high vertex densities are desired. Of course, an oversampling effect appears here, too, if the number of vertices reaches the number of sample points. The space complexity of the data structure itself is not affected, because the number of triangles increases only linearly with the number of vertices.

In the experiment the geometry of the half-cylinder has been used, and the number of vertices has been increased stepwise. Figure 3.8 shows the fitness and the run time in relation to the number of vertices. The experiments show that the number of vertices has a cubic influence on the run time. Therefore, the parameter settings for the AT should be chosen adequately in order to find a compromise between run time and surface quality. Large vertex numbers lead to unreasonable run times.

Furthermore, instability problems may appear: small vertical triangles evolve that fit between the scan lines. These triangles do not contribute directly to the fitness and therefore cannot be adapted. A different encoding of the parameters of an individual may help to handle this problem.

Experiments with large objects show that, because long triangles should remain evolvable, it is generally not possible to split the surface into smaller patches without losses in the reconstruction quality.

The technique of approximating triangulations is an important tool for finding the optimal distribution and density of sampling points: this data can be estimated from the positions of the vertices organized by the evolutionary algorithm.