
Self-Adaptive Software: Evolution Strategies

One of the key features of evolution strategies is that they adapt themselves to the characteristics of the optimization problem to be solved, i.e., they use a feature called self-adaptation to achieve a new level of flexibility. Self-adaptation allows a software system to adapt itself to any problem from a general class of problems, to reconfigure itself accordingly, and to do this without any user interaction. The concept was originally invented in the context of evolution strategies (see, e.g., [6]), but can of course be applied on a more general level [5]. Looking at this from the optimization point of view, most optimization methods can be summarized by the following iterative procedure, which shows how to generate the next vector (x1(t+1),…,xn(t+1)) from the current one:

(x1(t+1),…,xn(t+1)) = (x1(t),…,xn(t)) + st · (v1(t),…,vn(t)).

Here, (v1(t),…,vn(t)) denotes the direction of the next search step at iteration t+1, and st denotes the step size (a scalar value) for the search step length along this direction. Of course, the key to the success of an optimization method consists of finding effective ways to determine, at each time step t, an appropriate direction and step size (and there are hundreds of proposals for how to do this). An evolution strategy does this in a self-adaptive way.
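This generic update scheme can be sketched in a few lines. The random unit direction and the decaying step-size schedule below are placeholder choices for illustration only, not any particular method from the literature:

```python
import numpy as np

def iterative_optimize(f, x0, n_steps=100, seed=0):
    """Generic iterative search skeleton: x(t+1) = x(t) + s_t * v(t).

    The direction v(t) is a random unit vector and the step size s_t a
    fixed decaying schedule -- placeholder choices; concrete methods
    (gradient descent, evolution strategies, ...) differ precisely in
    how they determine v(t) and s_t.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_f = f(x)
    for t in range(n_steps):
        v = rng.standard_normal(x.shape)
        v /= np.linalg.norm(v)            # direction v(t), unit length
        s = 1.0 / (1.0 + t)               # step size s_t
        candidate = x + s * v
        if f(candidate) < best_f:         # keep only improving steps
            x, best_f = candidate, f(candidate)
    return x, best_f

x, fx = iterative_optimize(lambda z: float(np.sum(z**2)), [3.0, -2.0])
```

Gradient-based methods compute v(t) from derivative information; an evolution strategy, as described next, draws both direction and step length from a normal distribution and adapts the distribution online.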

The basic idea of an evolution strategy, like that of other evolutionary algorithms, consists of using the model of organic evolution as a process for adaptation and optimization. Consequently, the algorithms use a “population of individuals” representing candidate solutions to the optimization problem, and apply evolutionary operators such as variation (e.g., recombination and mutation) and selection in an iterative way, as outlined in Figure 1.3. In evolution strategies, populations can be rather small, as, e.g., in the example of a (1,10)-strategy. The notation indicates that 10 offspring solutions are generated by means of mutation from one parent solution, and the best (according to the objective function value) of the offspring individuals is chosen as the parent for the next iteration. It should be noted that discarding the parent is done intentionally, because it allows the algorithm to accept temporary worsenings in quality to overcome locally optimal solutions (cf. Figure 1.2).

For the mutation operator, the basic variants of evolution strategies use normally distributed variations zi = Ni(0,m), where Ni(0,m) denotes a normally distributed random sample with expectation zero and standard deviation m, i.e., the mutation operator modifies a solution candidate (x1(t),…,xn(t)) by setting xi(t+1) = xi(t) + zi, where i = 1,…,n. The mutation is normally distributed with expected value zero and variance m^2, i.e., step size and direction are implicitly defined by means of the normal distribution (here, the direction is random while the step size is approximately m·n^(1/2)). The fundamental approach for self-adaptation is to adapt m itself online while optimizing, by extending the representation of solutions by the step size m, i.e., ((x1(t),…,xn(t)), m), where m is now a component of the individual (and different for each individual of a population), and the mutation operator proceeds according to the rule

m′ = m · exp(o · N(0,1)),
xi(t+1) = xi(t) + m′ · Ni(0,1), i = 1,…,n,

forming the new individual ((x1(t+1),…,xn(t+1)), m′). In other words, m is mutated first, and the mutated step size m′ is then used to generate the offspring. There is no external control of step sizes at all. Instead, they are completely controlled by the algorithm itself, based on an autonomous adaptation process using the implicit feedback of the quality criterion. A theoretical analysis for an analyzable objective function has proven that, for this special case, self-adaptation generates an optimal m at any stage of the search process (see, e.g., [4] for a complete introduction to evolution strategy theory). In the above formulation, the special parameter o denotes a “learning rate”, which defines the speed of adaptation on the level of the standard deviations m. According to the theoretical knowledge about the process, a value of o = 1/(2n)^(1/2) is a robust and generally useful setting.
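A complete (1,10)-strategy with this log-normal step-size self-adaptation can be sketched as follows. This is a minimal illustration, not the implementation used in the applications below; the sphere function serves as a stand-in objective, and the learning rate follows the setting quoted above:

```python
import numpy as np

def one_comma_ten_es(f, x0, sigma0=1.0, generations=200, seed=0):
    """Sketch of a (1,10)-evolution strategy with self-adaptive step size.

    Each offspring first mutates the step size, m' = m * exp(o * N(0,1)),
    then mutates the object variables, x_i' = x_i + m' * N_i(0,1).
    The best of the 10 offspring replaces the parent; comma selection
    means the parent itself is always discarded.
    """
    rng = np.random.default_rng(seed)
    n = len(x0)
    o = 1.0 / (2.0 * n) ** 0.5           # learning rate o = 1/(2n)^(1/2)
    x, m = np.asarray(x0, dtype=float), sigma0
    for _ in range(generations):
        offspring = []
        for _ in range(10):
            m_child = m * np.exp(o * rng.standard_normal())
            x_child = x + m_child * rng.standard_normal(n)
            offspring.append((f(x_child), x_child, m_child))
        _, x, m = min(offspring, key=lambda t: t[0])  # best of 10
    return x, m

x, m = one_comma_ten_es(lambda z: float(np.sum(z**2)), x0=[5.0] * 10)
```

On the sphere function the step size m shrinks automatically as the search approaches the optimum, which is exactly the behavior the theory predicts; no parameter is tuned from outside the loop.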

The method outlined above is only the most basic version of self-adaptation.

Much more elaborate variants are in use, which allow for the self-adaptation of general, n-dimensional normal distributions, including correlations between the variables; self-adaptive population sizes are also currently under investigation.

The resulting algorithms have no external parameters that need to be tuned for a particular application.

Figure 1.3. The evolutionary loop.

1.4 Examples

All adaptive business intelligence solutions provided by NuTech Solutions for its clients are characterized by applying the most suitable combination of traditional and computational intelligence technologies to achieve the best possible improvement of business processes. In all cases, the technical aspects of the implementation and the client’s problem are subject to nondisclosure agreements. Concerning the applications of self-adaptive evolution strategies, the following three examples (see Figure 1.4) illustrate the capabilities of these algorithms: the optimization of traffic light schedules at street intersections to dynamically adapt the traffic light control to the actual traffic situation (executed for the Dutch Ministry of Traffic, Rotterdam, The Netherlands); the optimization of control policies for elevator controllers to dynamically adapt elevator control to the actual traffic situation (executed for Fujitec Ltd., Osaka, Japan); and the optimization of the metal stamping process to improve the quality of the resulting car components while minimizing metal losses (executed for AutoForm Engineering, Zürich, Switzerland).

Figure 1.4. Examples of applications of evolution strategies: traffic light control (top), elevator control optimization (middle), metal stamping process optimization in automobile industry (bottom).

In these examples, the model is implemented by simulation software already at the client’s disposal, i.e., the optimization part (right part) of Figure 1.1 is executed by NuTech Solutions on the basis of existing models. The dimensionality n of the model input is in the small to middle range, i.e., around 20–40 variables, all of them real-valued. The two traffic control problems are dynamic and noisy, and the evolution strategy locates and continuously maintains very high-quality solutions in an effective and flexible way that cannot be achieved by other methods. In the metal stamping simulation, the evolution strategy is the first algorithm at all that makes the process manageable by means of optimization, and the method yields strong improvements when compared to hand-optimized processes.

1.5 Outlook

In this chapter, only very little information about the actual industrial impact of adaptive business intelligence solutions based on computational intelligence technologies can be disclosed. Much more complex applications, implementing the whole scenario outlined in Figure 1.1, are presently in use by clients of NuTech Solutions, with an enormous economic benefit for these companies. In particular, those applications where data mining, model building, knowledge discovery, optimization, and management decision support are combined yield a new quality in business process optimization. Adaptation and self-adaptation capabilities of the corresponding software products play an extremely important role in this context, as many applications require a dynamic response capability of the applicable solution software. The modern business environment clearly demonstrates the growing need for adaptive business intelligence solutions, and computational intelligence has proven to be the ideal technology to fulfill the needs of companies in the new century. Adaptive business intelligence is the realization of structured management technologies (e.g., 6 Sigma, TQM) using technologies of the 21st century.

References

1. Adriaans P., D. Zantinge, Data Mining, Addison-Wesley, 1996.

2. Bäck T., D.B. Fogel, Z. Michalewicz, Handbook of Evolutionary Computation, Institute of Physics, Bristol, UK, 2000.

3. Bäck T., Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York, 1996.

4. Beyer H.-G., The Theory of Evolution Strategies, Series on Natural Computation, Springer, Berlin, 2001.

5. Robertson P., H. Shrobe, R. Laddaga (eds.), Self-Adaptive Software. Lecture Notes in Computer Science, Vol. 1936, Springer, Berlin, 2000.

6. Schwefel H.-P., Collective Phenomena in Evolutionary Systems. In Preprints of the 31st Annual Meeting of the International Society for General System Research, Budapest, Vol. 2, 1025-1033.

7. Schwefel H.-P., Evolution and Optimum Seeking, Wiley, New York, 1995.


Extending the Boundaries of Design Optimization by Integrating Fast Optimization Techniques with Machine Code Based, Linear Genetic Programming

L. M. Deschaine, F.D. Francone

2.1 Introduction

Engineers frequently encounter problems that require them to estimate control or response settings for industrial or business processes that optimize one or more goals. Most optimization problems include two distinct parts: (1) a model of the process to be optimized; and (2) an optimizer that varies the control parameters of the model to derive optimal settings for those parameters.

For example, one of the research and development (R&D) case studies included here involves the control of an incinerator plant to achieve a high probability of environmental compliance and minimal cost. This required predictive models of the incinerator process, environmental regulations, and operating costs. It also required an optimizer that could combine the underlying models to calculate a real-time optimal response that satisfied the underlying constraints. Figure 2.1 shows the relationship of the optimizer and the underlying models for this problem.

The incinerator example discussed above and the other case studies below did not yield to a simple constrained optimization approach or a well-designed neural network approach. The underlying physics of the problem were not well understood, so this problem was best solved by decomposing it into its constituent parts: the three underlying models (Figure 2.1) and the optimizer.

This work is, therefore, concerned with complex optimization problems characterized by either of the following situations.

First: Engineers often understand the underlying processes quite well, but the software simulator they create for the process is slow. Deriving optimal settings for a slow simulator requires many calls to the simulator. This makes optimization inconvenient or completely impractical. Our solution in this situation was to reverse engineer the existing software simulator using Linear Genetic Programming (LGP); in effect, we simulated the simulator. Such “second-order” LGP simulations are frequently very accurate and almost always orders of magnitude faster than the hand-coded simulator. For example, for the Kodak Simulator, described below, LGP reverse engineered that simulator, reducing the time per simulation from hours to less than a second. As a result, an optimizer may be applied to the LGP-derived simulation quickly and conveniently.
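The "simulate the simulator" idea does not depend on LGP specifically; any fast regression model can stand in for the evolved machine-code program. The sketch below uses a least-squares polynomial as a hypothetical stand-in, with a toy quadratic playing the role of the slow simulator:

```python
import numpy as np

# A deliberately "slow" simulator (stand-in for an hours-long run).
def slow_simulator(x):
    return 2.0 * x**2 - 3.0 * x + 1.0   # pretend this takes hours per call

# 1. Sample the slow simulator a modest number of times.
xs = np.linspace(-5, 5, 25)
ys = np.array([slow_simulator(x) for x in xs])

# 2. Fit a fast surrogate to the samples.  LGP would evolve a machine-code
#    program here; a quadratic least-squares fit is a hypothetical stand-in
#    that happens to match because the toy simulator is itself quadratic.
coeffs = np.polyfit(xs, ys, deg=2)
surrogate = np.poly1d(coeffs)

# 3. Hand the cheap surrogate to the optimizer instead of the simulator;
#    for a quadratic we can even minimize it in closed form.
x_opt = surrogate.deriv().roots[0]
```

Each surrogate call costs microseconds instead of hours, so an optimizer can afford the thousands of model evaluations it needs; the accuracy of the final answer then depends entirely on how faithfully the surrogate reproduces the original simulator.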

Figure 2.1. How the optimizer and the various models operate together for the incinerator solution.

Second: In the incinerator example given above, the cost and regulatory models were well understood, but the physics of the incinerator plant were not. However, good-quality plant operating data existed. This example highlights the second situation in which our approach consistently yields excellent results. LGP built a model of plant operation directly from the plant operation data. Combined with the cost and regulatory models to form a meta-model, the LGP model permits real-time optimization to achieve regulatory and cost goals.

For both of the above types of problems, the optimization and modeling tools should possess certain clearly definable characteristics:

• The optimizer should make as few calls to the process model as possible, consistent with producing high-quality solutions,

• The modeling tool should consistently produce high-precision models that execute quickly when called by the optimizer,

• Both the modeling and optimizing tools should be general-purpose tools. That is, they should be applicable to most problem domains with minimal customization and capable of producing good to excellent results across the whole range of problems that might be encountered; and

• By integrating tools with the above characteristics, we have been able to improve problem-solving capabilities very significantly for both problem types above.

This work is organized as follows. We begin by introducing the Evolution Strategies with Completely Derandomized Self-Adaptation (ES-CDSA) algorithm as our optimization algorithm of choice. Next, we describe machine-code-based LGP in detail and describe a three-year study from which we have concluded that machine-code-based LGP is our modeling tool of choice for these types of applications. Finally, we suggest ways in which the integrated optimization and modeling strategy may be applied to design optimization problems.