(1)

HAL Id: hal-01959479

https://hal.inria.fr/hal-01959479

Submitted on 18 Dec 2018


CMA-ES and Advanced Adaptation Mechanisms

Youhei Akimoto, Nikolaus Hansen

To cite this version:

Youhei Akimoto, Nikolaus Hansen. CMA-ES and Advanced Adaptation Mechanisms. GECCO '18 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion, Jul 2018, Kyoto, Japan. ⟨hal-01959479⟩

(2)

CMA-ES and Advanced Adaptation Mechanisms

Youhei Akimoto¹ & Nikolaus Hansen²

1. University of Tsukuba, Japan (akimoto@cs.tsukuba.ac.jp)

2. Inria, Research Centre Saclay, France (nikolaus.hansen@inria.fr)

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

GECCO '18 Companion, July 15–19, 2018, Kyoto, Japan

© 2018 Copyright is held by the owner/author(s).

ACM ISBN 978-1-4503-5764-7/18/07.

https://doi.org/10.1145/3205651.3207854

(3)

Tutorial: Evolution Strategies and CMA-ES (Covariance Matrix Adaptation)

Anne Auger & Nikolaus Hansen

Inria

Project team TAO

Research Centre Saclay – Île-de-France

University Paris-Sud, LRI (UMR 8623), Bât. 660, 91405 Orsay Cedex, France

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Copyright is held by the owner/author(s).

GECCO ’14, Jul 12-16 2014, Vancouver, BC, Canada ACM 978-1-4503-2881-4/14/07.

https://www.lri.fr/~hansen/gecco2014-CMA-ES-tutorial.pdf

We are happy to answer questions at any time.

(4)

Topics

1. What makes the problem difficult to solve?

2. How does the CMA-ES work?

• Normal Distribution, Rank-Based Recombination

• Step-Size Adaptation

• Covariance Matrix Adaptation

3. What can/should the users do for the CMA-ES to work effectively on their problem?

• Choice of problem formulation and encoding (not covered)

• Choice of initial solution and initial step-size

• Restarts, Increasing Population Size

• Restricted Covariance Matrix


(6)

Problem Statement Black Box Optimization and Its Difficulties

Problem Statement

Continuous Domain Search/Optimization

Task: minimize an objective function (fitness function, loss function) in continuous domain

  f : X ⊆ R^n → R,  x ↦ f(x)

Black Box scenario (direct search scenario)

[Figure: black box mapping an input x to an output f(x)]

• gradients are not available or not useful
• problem domain specific knowledge is used only within the black box, e.g. within an appropriate encoding

Search costs: number of function evaluations

(7)

Problem Statement Black Box Optimization and Its Difficulties

Problem Statement

Continuous Domain Search/Optimization

Goal

• fast convergence to the global optimum
  . . . or to a robust solution x
• solution x with small function value f(x) with least search cost

there are two conflicting objectives

Typical Examples

• shape optimization (e.g. using CFD), curve fitting, airfoils
• model calibration: biological, physical
• parameter calibration: controller, plants, images

Problems

• exhaustive search is infeasible
• naive random search takes too long
• deterministic search is not successful / takes too long

(8)

Problem Statement Black Box Optimization and Its Difficulties

What Makes a Function Difficult to Solve?

Why stochastic search?

non-linear, non-quadratic, non-convex
  on linear and quadratic functions much better search policies are available

ruggedness
  non-smooth, discontinuous, multimodal, and/or noisy function

dimensionality (size of search space)
  (considerably) larger than three

non-separability
  dependencies between the objective variables

ill-conditioning

[Figures: a rugged 1-D fitness landscape; gradient direction versus Newton direction on squeezed (ill-conditioned) level sets; non-smooth level sets]

(9)

Problem Statement Black Box Optimization and Its Difficulties

Ruggedness

non-smooth, discontinuous, multimodal, and/or noisy

[Figure: fitness along a rugged 1-D cut from a 5-D example, (easily) solvable with evolution strategies]

(10)

Problem Statement Non-Separable Problems

Separable Problems

Definition (Separable Problem)

A function f is separable if

  argmin_{(x_1,...,x_n)} f(x_1, . . . , x_n) = ( argmin_{x_1} f(x_1, . . .), . . . , argmin_{x_n} f(. . . , x_n) )

⇒ it follows that f can be optimized in a sequence of n independent 1-D optimization processes

Example: additively decomposable functions

  f(x_1, . . . , x_n) = Σ_{i=1}^n f_i(x_i)

e.g. the Rastrigin function

[Figure: level sets of the (separable) Rastrigin function in dimension 2]

(11)

Problem Statement Non-Separable Problems

Non-Separable Problems

Building a non-separable problem from a separable one (1, 2)

Rotating the coordinate system

  f : x ↦ f(x)   separable
  f : x ↦ f(Rx)  non-separable

R rotation matrix

[Figure: level sets of the separable Rastrigin function and of its rotated, non-separable counterpart]

1 Hansen, Ostermeier, Gawelczyk (1995). On the adaptation of arbitrary normal mutation distributions in evolution strategies: The generating set adaptation. Sixth ICGA, pp. 57-64, Morgan Kaufmann
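As a small illustration of this construction, the sketch below rotates a separable Rastrigin function into a non-separable one; the random orthogonal matrix R (built here via a QR decomposition) and all numerical values are illustrative choices, not prescribed by the slide.

```python
# Making a separable function non-separable by composition with a rotation, f(x) -> f(Rx).
import numpy as np

def rastrigin(x):                                    # separable: a sum of 1-D terms
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(8)
n = 5
R, _ = np.linalg.qr(rng.standard_normal((n, n)))     # random orthogonal rotation matrix
rotated_rastrigin = lambda x: rastrigin(R @ x)       # non-separable version of the same landscape

x = rng.standard_normal(n)
print(rastrigin(x), rotated_rastrigin(x))
```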

(12)

Problem Statement Ill-Conditioned Problems

Ill-Conditioned Problems

Curvature of level sets

Consider the convex-quadratic function

  f(x) = 1/2 (x − x*)^T H (x − x*) = 1/2 Σ_i h_{i,i} (x_i − x*_i)² + 1/2 Σ_{i≠j} h_{i,j} (x_i − x*_i)(x_j − x*_j)

H is the Hessian matrix of f and symmetric positive definite

  gradient direction: −f'(x)^T
  Newton direction: −H^{−1} f'(x)^T

Ill-conditioning means squeezed level sets (high curvature). The condition number equals nine in the figure. Condition numbers up to 10^10 are not unusual in real-world problems.

If H ≈ I (small condition number of H), first order information (e.g. the gradient) is sufficient. Otherwise second order information (estimation of H^{−1}) is necessary.
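A quick numerical illustration of the statement above, with an assumed diagonal Hessian of condition number 100: the gradient direction is badly scaled by the curvature, while the Newton direction points straight at the optimum.

```python
# Gradient vs. Newton direction on a convex-quadratic f(x) = 1/2 x^T H x (optimum x* = 0).
import numpy as np

H = np.diag([1.0, 100.0])                  # Hessian; cond(H) = 100 (illustrative)
x = np.array([1.0, 1.0])
grad = H @ x                               # f'(x)^T
newton = np.linalg.solve(H, grad)          # H^{-1} f'(x)^T
print(np.linalg.cond(H))                   # 100.0
print(-grad / np.linalg.norm(grad))        # gradient direction: dominated by the high-curvature axis
print(-newton / np.linalg.norm(newton))    # Newton direction: points towards x* = 0
```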

(13)

Problem Statement Ill-Conditioned Problems

Non-smooth level sets (sharp ridges)

Similar difficulty but worse than ill-conditioning

[Figure: level sets of the 1-norm, a scaled 1-norm, and the 1/2-norm]

(14)

Problem Statement Ill-Conditioned Problems

What Makes a Function Difficult to Solve?
. . . and what can be done

The Problem → Possible Approaches

Dimensionality → exploiting the problem structure
  separability, locality/neighborhood, encoding

Ill-conditioning → second order approach
  changes the neighborhood metric

Ruggedness → non-local policy, large sampling width (step-size)
  as large as possible while preserving a reasonable convergence speed
  population-based method, stochastic, non-elitistic
  recombination operator: serves as repair mechanism
  restarts

. . . metaphors


(16)

Evolution Strategies (ES) A Search Template

Stochastic Search

A black box search template to minimize f : R^n → R

Initialize distribution parameters θ, set population size λ ∈ N
While not terminate
  1. Sample distribution P(x | θ) → x_1, . . . , x_λ ∈ R^n
  2. Evaluate x_1, . . . , x_λ on f
  3. Update parameters θ ← F_θ(θ, x_1, . . . , x_λ, f(x_1), . . . , f(x_λ))

Everything depends on the definition of P and F_θ
  deterministic algorithms are covered as well

In many Evolutionary Algorithms the distribution P is implicitly defined via operators on a population, in particular, selection, recombination and mutation

Natural template for (incremental) Estimation of Distribution Algorithms
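As a concrete, deliberately naive instance of this template, here is a minimal Python sketch; the Gaussian sampling distribution and the placeholder update rule (moving the mean to the best sample) are illustrative assumptions, not part of the template itself.

```python
# A minimal sketch of the black box search template: sample, evaluate, update theta.
import numpy as np

def stochastic_search(f, m, sigma, lam=10, iterations=100):
    rng = np.random.default_rng(42)
    for _ in range(iterations):
        # 1. Sample lambda candidate solutions from P(x | theta), theta = (m, sigma)
        X = m + sigma * rng.standard_normal((lam, len(m)))
        # 2. Evaluate them on f
        fvals = np.array([f(x) for x in X])
        # 3. Update theta; this naive rule stands in for F_theta
        m = X[np.argmin(fvals)]
    return m

# usage: minimize the sphere function in dimension 5
print(stochastic_search(lambda x: np.sum(x**2), np.ones(5), 0.5))
```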


(24)

Evolution Strategies (ES) A Search Template

The CMA-ES

Input: m ∈ R^n; σ ∈ R_+; λ ∈ N, λ ≥ 2, usually λ ≥ 5, default λ = 4 + ⌊3 ln n⌋

Set: c_m = 1; c_1 ≈ 2/n²; c_µ ≈ µ_w/n², c_1 + c_µ ≤ 1; c_c ≈ 4/n; c_σ ≈ 4/n (≈ 1/√n in recent variants); d_σ ≈ 1 + √(µ_w/n); weights w_{i=1...λ} decreasing in i with Σ_{i=1}^µ w_i = 1, w_µ > 0 ≥ w_{µ+1}, and µ_w := 1 / Σ_{i=1}^µ w_i² ≈ 0.3 λ

Initialize: C = I, p_c = 0, p_σ = 0

While not terminate
  x_i = m + σ y_i,  y_i ∼ N_i(0, C),  for i = 1, . . . , λ                         [sampling]
  m ← Σ_{i=1}^µ w_i x_{i:λ} = m + σ y_w,  where y_w = Σ_{i=1}^µ w_i y_{i:λ}          [update mean]
  p_c ← (1 − c_c) p_c + 1I_{‖p_σ‖ < 1.5√n} √(1 − (1 − c_c)²) √µ_w y_w                [cumulation for C]
  p_σ ← (1 − c_σ) p_σ + √(1 − (1 − c_σ)²) √µ_w C^{−1/2} y_w                         [cumulation for σ]
  C ← (1 − c_1 − c_µ) C + c_1 p_c p_c^T + c_µ Σ_{i=1}^µ w_i y_{i:λ} y_{i:λ}^T        [update C]
  σ ← σ × exp( (c_σ/d_σ) ( ‖p_σ‖ / E‖N(0, I)‖ − 1 ) )                               [update σ]

Not covered on this slide: termination, restarts, useful output, search boundaries and encoding; corrections for positive definiteness guaranty, p_c variance loss, c_σ and d_σ for large λ
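In practice one would not re-implement the loop above from scratch; the tutorial's second author maintains the Python package pycma. The sketch below uses its documented ask-and-tell interface; exact option names, defaults, and the sphere test function are assumptions for illustration, not prescribed by the slide.

```python
# Hedged sketch of running CMA-ES via the `cma` package (pip install cma).
import cma
import numpy as np

def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

es = cma.CMAEvolutionStrategy(8 * [1.0], 0.5)   # initial mean m, initial step-size sigma
while not es.stop():
    solutions = es.ask()                         # sample lambda candidate solutions
    es.tell(solutions, [sphere(x) for x in solutions])   # rank-based update of m, sigma, C
    es.disp()
print(es.result.xbest, es.result.fbest)
```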

(25)

Evolution Strategies (ES) A Search Template

Evolution Strategies

New search points are sampled normally distributed

  x_i ∼ m + σ N_i(0, C)  for i = 1, . . . , λ

as perturbations of m, where x_i, m ∈ R^n, σ ∈ R_+, C ∈ R^{n×n}

where
  the mean vector m ∈ R^n represents the favorite solution
  the so-called step-size σ ∈ R_+ controls the step length
  the covariance matrix C ∈ R^{n×n} determines the shape of the distribution ellipsoid

here, all new points are sampled with the same parameters

The question remains how to update m, C, and σ.
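The sampling equation above translates directly into code; the dimension, population size, and parameter values below are illustrative placeholders.

```python
# Sampling a population as above: x_i = m + sigma * N_i(0, C), i = 1, ..., lambda.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 4, 6
m = np.zeros(n)                  # mean vector: current favorite solution
sigma = 0.3                      # step-size: overall step length
C = np.eye(n)                    # covariance matrix: shape of the distribution ellipsoid
X = rng.multivariate_normal(mean=m, cov=sigma**2 * C, size=lam)
print(X.shape)                   # (lam, n): one candidate solution per row
```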

(26)

Evolution Strategies (ES) The Normal Distribution

Why Normal Distributions?

1. widely observed in nature, for example as phenotypic traits

2. only stable distribution with finite variance
   stable means that the sum of normal variates is again normal: N(x, A) + N(y, B) ∼ N(x + y, A + B)
   helpful in design and analysis of algorithms; related to the central limit theorem

3. most convenient way to generate isotropic search points
   the isotropic distribution does not favor any direction, rotational invariant

4. maximum entropy distribution with finite variance
   the least possible assumptions on f in the distribution shape

(27)

Evolution Strategies (ES) The Normal Distribution

Normal Distribution

[Figure: probability density of the 1-D standard normal distribution]

[Figure: probability density of a 2-D normal distribution, with its elliptical level sets]

(28)

Evolution Strategies (ES) The Normal Distribution

The Multi-Variate (n-Dimensional) Normal Distribution

Any multi-variate normal distribution N(m, C) is uniquely determined by its mean value m ∈ R^n and its symmetric positive definite n × n covariance matrix C.

The mean value m
  determines the displacement (translation)
  is the value with the largest density (modal value)
  the distribution is symmetric about the distribution mean

[Figure: probability density of a 2-D normal distribution]

The covariance matrix C determines the shape
  geometrical interpretation: any covariance matrix can be uniquely identified with the iso-density ellipsoid { x ∈ R^n | (x − m)^T C^{−1} (x − m) = n }

(29)

Evolution Strategies (ES) The Normal Distribution

. . . any covariance matrix can be uniquely identified with the iso-density ellipsoid { x ∈ R^n | (x − m)^T C^{−1} (x − m) = 1 }

Lines of Equal Density

N(m, σ²I) ∼ m + σ N(0, I)
  one degree of freedom: σ
  components are independent standard normally distributed

N(m, D²) ∼ m + D N(0, I)
  n degrees of freedom
  components are independent, scaled

N(m, C) ∼ m + C^{1/2} N(0, I)
  (n² + n)/2 degrees of freedom
  components are correlated

where I is the identity matrix (isotropic case) and D is a diagonal matrix (reasonable for separable problems), and A × N(0, I) ∼ N(0, A A^T) holds for all A.

(30)

Evolution Strategies (ES) The Normal Distribution

Multivariate Normal Distribution and Eigenvalues

For any positive definite symmetric C,

  C = d_1² b_1 b_1^T + · · · + d_n² b_n b_n^T

  d_i: square root of the i-th eigenvalue of C
  b_i: eigenvector of C, corresponding to d_i

The multivariate normal distribution N(m, C):

  N(m, C) ∼ m + N(0, d_1²) b_1 + · · · + N(0, d_n²) b_n

[Figure: iso-density ellipse with principal axes along b_1 and b_2, scaled by d_1 and d_2]
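The eigendecomposition above gives one standard way to draw samples from N(m, C): scale an isotropic sample by the d_i along the eigenvectors b_i. A small sketch with an illustrative 2-D covariance matrix:

```python
# Sampling N(m, C) via C = B diag(d^2) B^T, i.e. x = m + B diag(d) z with z ~ N(0, I).
import numpy as np

rng = np.random.default_rng(1)
m = np.array([0.0, 0.0])
C = np.array([[2.0, 1.2],
              [1.2, 1.0]])                   # symmetric positive definite (illustrative)
eigvals, B = np.linalg.eigh(C)               # columns of B are the eigenvectors b_i
d = np.sqrt(eigvals)                         # d_i: square roots of the eigenvalues
z = rng.standard_normal(2)
x = m + B @ (d * z)                          # equivalent to m + C^{1/2} z
print(x)
```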

(31)

Evolution Strategies (ES) The Normal Distribution

The (µ/µ, λ)-ES

Non-elitist selection and intermediate (weighted) recombination

Given the i-th solution point  x_i = m + σ N_i(0, C) = m + σ y_i

Let x_{i:λ} be the i-th ranked solution point, such that f(x_{1:λ}) ≤ · · · ≤ f(x_{λ:λ}).

The new mean reads

  m ← Σ_{i=1}^µ w_i x_{i:λ} = m + σ Σ_{i=1}^µ w_i y_{i:λ} =: m + σ y_w

where

  w_1 ≥ · · · ≥ w_µ > 0,  Σ_{i=1}^µ w_i = 1,  1 / Σ_{i=1}^µ w_i² =: µ_w ≈ λ/4

The best µ points are selected from the new solutions (non-elitistic)
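A short sketch of this weighted recombination step; the log-decreasing weights below are a common choice used only for illustration, not the definition on the slide.

```python
# Rank-based weighted recombination of the (mu/mu_w, lambda)-ES.
import numpy as np

def recombine(X, fvals, mu):
    """Return the new mean computed from the mu best of lambda candidates."""
    idx = np.argsort(fvals)[:mu]                 # indices of x_{1:lambda}, ..., x_{mu:lambda}
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                 # w_1 >= ... >= w_mu > 0, sum = 1
    mu_w = 1.0 / np.sum(w**2)                    # variance effective selection mass
    return w @ X[idx], mu_w

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 3))                 # lambda = 10 candidates in R^3
fvals = np.array([np.sum(x**2) for x in X])
m_new, mu_w = recombine(X, fvals, mu=5)
print(m_new, mu_w)
```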

(32)

Evolution Strategies (ES) Invariance

Invariance Under Monotonically Increasing Functions

Rank-based algorithms

Update of all parameters uses only the ranks

  f(x_{1:λ}) ≤ f(x_{2:λ}) ≤ . . . ≤ f(x_{λ:λ})

  ⟺ g(f(x_{1:λ})) ≤ g(f(x_{2:λ})) ≤ . . . ≤ g(f(x_{λ:λ}))  for all strictly monotonically increasing g

g is strictly monotonically increasing ⇒ g preserves ranks³

3 Whitley 1989. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best, ICGA
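The invariance is easy to check numerically: applying any strictly increasing g to the f-values leaves the ordering, and hence every rank-based update, unchanged. The particular g below is an arbitrary illustrative choice.

```python
# Ranks are preserved under a strictly increasing transformation g of the f-values.
import numpy as np

rng = np.random.default_rng(3)
fvals = rng.standard_normal(8)                  # f(x_1), ..., f(x_lambda)
g = lambda t: np.exp(3 * t) - 5                 # strictly monotonically increasing
assert np.array_equal(np.argsort(fvals), np.argsort(g(fvals)))
print("ranks preserved")
```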

(33)

Evolution Strategies (ES) Invariance

Basic Invariance in Search Space

translation invariance
  is true for most optimization algorithms

  f(x) ↔ f(x − a)

Identical behavior on f and f_a:

  f  : x ↦ f(x),       x^(t=0) = x_0
  f_a: x ↦ f(x − a),   x^(t=0) = x_0 + a

No difference can be observed with respect to the argument of f

(34)

Evolution Strategies (ES) Summary

Summary

On the 20-D sphere function f(x) = Σ_{i=1}^N [x]_i², an ES without adaptation cannot approach the optimum ⇒ adaptation is required

(35)

Step-Size Control

Evolution Strategies

Recalling: new search points are sampled normally distributed

  x_i ∼ m + σ N_i(0, C)  for i = 1, . . . , λ

as perturbations of m, where x_i, m ∈ R^n, σ ∈ R_+, C ∈ R^{n×n}

where
  the mean vector m ∈ R^n represents the favorite solution and m ← Σ_{i=1}^µ w_i x_{i:λ}
  the so-called step-size σ ∈ R_+ controls the step length
  the covariance matrix C ∈ R^{n×n} determines the shape of the distribution ellipsoid

The remaining question is how to update σ and C.

(36)

Step-Size Control Why Step-Size Control

Methods for Step-Size Control

1/5-th success rule^{a,b}, often applied with "+"-selection
  increase the step-size if more than 20% of the new solutions are successful, decrease otherwise
  (a minimal sketch follows after this slide)

σ-self-adaptation^c, applied with ","-selection
  mutation is applied to the step-size and the better one, according to the objective function value, is selected
  simplified "global" self-adaptation

path length control^d (Cumulative Step-size Adaptation, CSA)^e
  self-adaptation derandomized and non-localized

a Rechenberg 1973, Evolutionsstrategie, Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog
b Schumer and Steiglitz 1968. Adaptive step size random search. IEEE TAC
c Schwefel 1981, Numerical Optimization of Computer Models, Wiley
d Hansen & Ostermeier 2001, Completely Derandomized Self-Adaptation in Evolution Strategies, Evol. Comput. 9(2)
e Ostermeier et al 1994, Step-size adaptation based on non-local use of selection information, PPSN IV
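A minimal sketch of the 1/5-th success rule inside a (1+1)-ES, as announced above. The multiplicative factors exp(1/3) and exp(−1/12) are one common choice (a success rate of exactly 1/5 then leaves σ unchanged on average); they are an assumption here, not the only valid implementation.

```python
# (1+1)-ES with the 1/5-th success rule: increase sigma on success, decrease otherwise.
import numpy as np

def one_plus_one_es(f, x, sigma, iterations=2000):
    rng = np.random.default_rng(4)
    fx = f(x)
    for _ in range(iterations):
        y = x + sigma * rng.standard_normal(len(x))
        fy = f(y)
        if fy <= fx:                      # success: offspring replaces parent
            x, fx = y, fy
            sigma *= np.exp(1.0 / 3.0)    # increase step-size
        else:
            sigma *= np.exp(-1.0 / 12.0)  # decrease; balances out at ~1/5 success rate
    return x, fx, sigma

x, fx, sigma = one_plus_one_es(lambda z: np.sum(z**2), np.ones(10), 0.5)
print(fx, sigma)
```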

(37)

Step-Size Control Path Length Control (CSA)

Path Length Control (CSA)

The Concept of Cumulative Step-Size Adaptation

  x_i = m + σ y_i
  m ← m + σ y_w

Measure the length of the evolution path
  the pathway of the mean vector m in the generation sequence

[Figure: two evolution paths of equal total step length: one whose single steps largely cancel each other out (decrease σ) and one whose single steps point into similar directions (increase σ)]

loosely speaking, steps are perpendicular under random selection (in expectation)

(38)

Step-Size Control Path Length Control (CSA)

Path Length Control (CSA)

The Equations

Initialize m ∈ R^n, σ ∈ R_+, evolution path p_σ = 0, set c_σ ≈ 4/n, d_σ ≈ 1.

  m ← m + σ y_w  where y_w = Σ_{i=1}^µ w_i y_{i:λ}                            [update mean]

  p_σ ← (1 − c_σ) p_σ + √(1 − (1 − c_σ)²) √µ_w y_w                            [update path]
        where √(1 − (1 − c_σ)²) accounts for (1 − c_σ) and √µ_w accounts for the w_i

  σ ← σ × exp( (c_σ/d_σ) ( ‖p_σ‖ / E‖N(0, I)‖ − 1 ) )                         [update step-size]
        where the factor is > 1 ⟺ ‖p_σ‖ is greater than its expectation
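A sketch of one CSA iteration implementing the equations above for the isotropic case (C = I); the approximation used for E‖N(0, I)‖ and all numerical settings are illustrative assumptions.

```python
# One iteration of cumulative step-size adaptation (isotropic sampling, C = I).
import numpy as np

def csa_step(f, m, sigma, p_sigma, lam, mu, w, rng):
    n = len(m)
    c_sigma, d_sigma = 4.0 / n, 1.0
    mu_w = 1.0 / np.sum(w**2)
    chi_n = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))      # approximation of E||N(0, I)||
    Y = rng.standard_normal((lam, n))                      # y_i ~ N(0, I)
    fvals = np.array([f(m + sigma * y) for y in Y])
    y_w = w @ Y[np.argsort(fvals)[:mu]]                    # weighted mean of the mu best steps
    m = m + sigma * y_w                                    # update mean
    p_sigma = (1 - c_sigma) * p_sigma + np.sqrt(1 - (1 - c_sigma)**2) * np.sqrt(mu_w) * y_w
    sigma *= np.exp(c_sigma / d_sigma * (np.linalg.norm(p_sigma) / chi_n - 1))
    return m, sigma, p_sigma

rng = np.random.default_rng(5)
n, lam, mu = 10, 10, 5
w = np.full(mu, 1.0 / mu)
m, sigma, p_sigma = np.ones(n), 0.5, np.zeros(n)
for _ in range(300):
    m, sigma, p_sigma = csa_step(lambda x: np.sum(x**2), m, sigma, p_sigma, lam, mu, w, rng)
print(np.sum(m**2), sigma)
```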

(39)

Step-Size Control Path Length Control (CSA)

(5/5, 10)-CSA-ES, default parameters

[Figure: ‖m − x*‖ over time for f(x) = Σ_{i=1}^n x_i², initialized in [0.2, 0.8]^n, for n = 30]

(40)

Step-Size Control Alternatives to CSA

Two-Point Step-Size Adaptation (TPA)

Let ‖x‖_C := √(x^T C^{−1} x).

Sample a pair of symmetric points along the previous mean shift:

  x_{1,2} = m^(g) ± σ^(g) · ( ‖N(0, I)‖ / ‖m^(g) − m^(g−1)‖_{C^(g)} ) · ( m^(g) − m^(g−1) )

Compare the rankings of x_1 and x_2 among the current population:

  s^(g+1) = (1 − c_s) s^(g) + c_s ( rank(x_2) − rank(x_1) ) / (λ − 1)

  which is > 0 if the previous step still produces a promising solution

Update the step-size:

  σ^(g+1) = σ^(g) exp( s^(g+1) / d_σ )

[Hansen, 2008] Hansen, N. (2008). CMA-ES with two-point step-size adaptation. Research report RR-6527, Inria. inria-00276854v5.
[Hansen et al., 2014] Hansen, N., Atamna, A., and Auger, A. (2014). How to assess step-size adaptation mechanisms in randomised search.

(41)

Step-Size Control Alternatives to CSA

On Sphere with Low Effective Dimension

On a function with low effective dimension, f(x) = Σ_{i=1}^M [x]_i², x ∈ R^N, M ≤ N, the remaining N − M variables do not affect the function value.

[Figure: f(x) versus number of function evaluations for CSA (left) and TPA (right), for (N, M) = (10, 10), (100, 100), and (100, 10)]

(42)

Step-Size Control Alternatives to CSA

Alternatives: Success-Based Step-Size Control

Generalizations of the 1/5-th success rule for non-elitist and multi-recombinant ES:

  Median Success Rule [Ait Elhara et al., 2013]
  Population Success Rule [Loshchilov, 2014]

Both control a success probability by comparing the fitness distributions of the current and the previous iteration.

An advantage over CSA and TPA: cheap computation, since the update depends only on λ.
  cf. CSA and TPA require a computation of C^{−1/2} x and C^{−1} x, respectively.

[Ait Elhara et al., 2013] Ait Elhara, O., Auger, A., and Hansen, N. (2013). A median success rule for non-elitist evolution strategies: Study of feasibility. In Proc. of the GECCO, pages 415–422.
[Loshchilov, 2014] Loshchilov, I. (2014). A computationally efficient limited memory CMA-ES for large scale optimization. In Proc. of the GECCO.

(43)

Step-Size Control Summary

Step-Size Control: Summary

Why step-size control?
  to achieve linear convergence at a near-optimal rate

Cumulative Step-Size Adaptation (CSA)
  efficient and robust for λ ≪ N
  inefficient on functions with (many) ineffective axes

Alternative step-size adaptation mechanisms
  Two-Point Step-Size Adaptation (TPA)
  Median Success Rule, Population Success Rule

The effective adaptation of the overall population diversity seems yet to pose open questions, in particular with recombination or without entire control over the realized distribution.^a

a Hansen et al. How to Assess Step-Size Adaptation Mechanisms in Randomised Search. PPSN 2014

(44)

Step-Size Control Summary

Step-Size Control: Summary

On the 20-D TwoAxes function

  f(x) = Σ_{i=1}^{N/2} [Rx]_i² + a² Σ_{i=N/2+1}^{N} [Rx]_i²,  R orthogonal,

the convergence speed of the CSA-ES becomes lower as the function becomes more ill-conditioned (as a² becomes greater) ⇒ covariance matrix adaptation is required

(45)

Covariance Matrix Adaptation (CMA)

Evolution Strategies

Recalling: new search points are sampled normally distributed

  x_i ∼ m + σ N_i(0, C)  for i = 1, . . . , λ

as perturbations of m, where x_i, m ∈ R^n, σ ∈ R_+, C ∈ R^{n×n}

where
  the mean vector m ∈ R^n represents the favorite solution
  the so-called step-size σ ∈ R_+ controls the step length
  the covariance matrix C ∈ R^{n×n} determines the shape of the distribution ellipsoid

The remaining question is how to update C.

(46)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-One Update

Covariance Matrix Adaptation

Rank-One Update

  m ← m + σ y_w,   y_w = Σ_{i=1}^µ w_i y_{i:λ},   y_i ∼ N_i(0, C)

The following sequence of figures illustrates one iteration, starting from the initial distribution with C = I: the movement of the population mean m (disregarding σ), the mixture of the distribution C and the step σ y_w,

  C ← 0.8 × C + 0.2 × y_w y_w^T,

and the resulting new distribution.


(54)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-One Update

Covariance Matrix Adaptation

Rank-One Update

  m ← m + σ y_w,   y_w = Σ_{i=1}^µ w_i y_{i:λ},   y_i ∼ N_i(0, C)

new distribution,

  C ← 0.8 × C + 0.2 × y_w y_w^T

The ruling principle: the adaptation increases the likelihood of successful steps, y_w, to appear again.

Another viewpoint: the adaptation follows a natural gradient approximation of the expected fitness.

(55)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-One Update

Covariance Matrix Adaptation

Rank-One Update

Initialize m ∈ R^n and C = I, set σ = 1, learning rate c_cov ≈ 2/n²
While not terminate

  x_i = m + σ y_i,  y_i ∼ N_i(0, C)
  m ← m + σ y_w  where  y_w = Σ_{i=1}^µ w_i y_{i:λ}
  C ← (1 − c_cov) C + c_cov µ_w y_w y_w^T          [rank-one update]
        where µ_w = 1 / Σ_{i=1}^µ w_i² ≥ 1

The rank-one update has been found independently in several domains^{6,7,8,9}

6 Kjellström & Taxén 1981. Stochastic Optimization in System Design, IEEE TCS
7 Hansen & Ostermeier 1996. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation, ICEC
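The rank-one update line of the loop above is a single outer-product accumulation; the values of µ_w, c_cov, and the step y_w below are illustrative only.

```python
# Rank-one covariance matrix update: C <- (1 - c_cov) C + c_cov * mu_w * y_w y_w^T.
import numpy as np

def rank_one_update(C, y_w, mu_w, c_cov):
    return (1 - c_cov) * C + c_cov * mu_w * np.outer(y_w, y_w)

n = 5
C = np.eye(n)
y_w = np.array([1.0, 0.5, 0.0, 0.0, 0.0])    # illustrative weighted step
C = rank_one_update(C, y_w, mu_w=3.0, c_cov=2.0 / n**2)
print(np.diag(C))                             # variance grows along the step direction
```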

(56)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-One Update

  C ← (1 − c_cov) C + c_cov µ_w y_w y_w^T

The covariance matrix adaptation

  learns all pairwise dependencies between variables
    off-diagonal entries in the covariance matrix reflect the dependencies

  conducts a principal component analysis (PCA) of steps y_w, sequentially in time and space
    eigenvectors of the covariance matrix C are the principal components / the principal axes of the mutation ellipsoid

  learns a new rotated problem representation
    components are independent (only) in the new representation

  learns a new (Mahalanobis) metric
    variable metric method

  approximates the inverse Hessian on quadratic functions
    transformation into the sphere function

  for µ = 1: conducts a natural gradient ascent on the distribution N

  entirely independent of the given coordinate system

. . . cumulation, rank-µ

(57)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-One Update

Invariance Under Rigid Search Space Transformation

for example, invariance under search space rotation (separable . . . non-separable)

[Figure: f-level sets in dimension 2 for f = h_Rast and f = h]

(58)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-One Update

Invariance Under Rigid Search Space Transformation

for example, invariance under search space rotation (separable . . . non-separable)

[Figure: f-level sets in dimension 2 for f = h_Rast ∘ R and f = h ∘ R]

(59)

Covariance Matrix Adaptation (CMA) Cumulation—the Evolution Path

Cumulation

The Evolution Path

Evolution Path: conceptually, the evolution path is the search path the strategy takes over a number of generation steps. It can be expressed as a sum of consecutive steps of the mean m.

An exponentially weighted sum of steps y_w is used,

  p_c ∝ Σ_{i=0}^{g} (1 − c_c)^{g−i} y_w^(i)
        [with exponentially fading weights (1 − c_c)^{g−i}]

The recursive construction of the evolution path (cumulation):

  p_c ← (1 − c_c) p_c + √(1 − (1 − c_c)²) √µ_w y_w
        [(1 − c_c): decay factor;  √(1 − (1 − c_c)²) √µ_w: normalization factor;  input y_w = (m − m_old)/σ]

where µ_w = 1 / Σ w_i² and c_c ≪ 1. History information is accumulated in the evolution path.


(61)

Covariance Matrix Adaptation (CMA) Cumulation—the Evolution Path

"Cumulation" is a widely used technique and is also known as

  exponential smoothing in time series forecasting
  exponentially weighted moving average
  iterate averaging in stochastic approximation
  momentum in the back-propagation algorithm for ANNs
  . . .

"Cumulation" conducts a low-pass filtering, but there is more to it . . .

. . . why?

(62)

Covariance Matrix Adaptation (CMA) Cumulation—the Evolution Path

Cumulation

Utilizing the Evolution Path

We used y_w y_w^T for updating C. Because y_w y_w^T = (−y_w)(−y_w)^T, the sign of y_w is lost.

The sign information (signifying correlation between steps) is (re-)introduced by using the evolution path:

  p_c ← (1 − c_c) p_c + √(1 − (1 − c_c)²) √µ_w y_w
        [(1 − c_c): decay factor;  √(1 − (1 − c_c)²) √µ_w: normalization factor]

  C ← (1 − c_cov) C + c_cov p_c p_c^T          [rank-one update]

where µ_w = 1 / Σ w_i² and c_cov ≪ c_c ≪ 1, such that 1/c_c is the "backward time horizon".
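The two equations above combine into a short loop: accumulate the steps into p_c, then feed p_c into the rank-one update. The constants and the artificially correlated steps below are illustrative assumptions chosen to make the effect visible.

```python
# Evolution path (cumulation) feeding the rank-one covariance matrix update.
import numpy as np

def cumulation_rank_one(C, p_c, y_w, mu_w, c_c, c_cov):
    p_c = (1 - c_c) * p_c + np.sqrt(1 - (1 - c_c)**2) * np.sqrt(mu_w) * y_w
    C = (1 - c_cov) * C + c_cov * np.outer(p_c, p_c)
    return C, p_c

n = 5
C, p_c = np.eye(n), np.zeros(n)
rng = np.random.default_rng(6)
for _ in range(50):
    y_w = np.array([1.0, 0, 0, 0, 0]) + 0.1 * rng.standard_normal(n)   # consistently directed steps
    C, p_c = cumulation_rank_one(C, p_c, y_w, mu_w=3.0, c_c=4.0 / n, c_cov=2.0 / n**2)
print(np.round(np.diag(C), 2))   # the repeatedly taken direction gains variance
```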


(65)

Covariance Matrix Adaptation (CMA) Cumulation—the Evolution Path

Using an evolution path for the rank-one update of the covariance matrix reduces the number of function evaluations to adapt to a straight ridge from about O(n²) to O(n).^a

a Hansen & Auger 2013. Principled design of continuous stochastic search: From theory to practice.

[Figure: number of f-evaluations divided by dimension on the cigar function f(x) = x_1² + 10⁶ Σ_{i=2}^n x_i², plotted against dimension, for c_c = 1 (no cumulation), c_c = 1/√n, and c_c = 1/n]

The overall model complexity is n², but important parts of the model can be learned in time of order n.

(66)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-µ Update

Rank-µ Update

  x_i = m + σ y_i,  y_i ∼ N_i(0, C)
  m ← m + σ y_w,    y_w = Σ_{i=1}^µ w_i y_{i:λ}

The rank-µ update extends the update rule for large population sizes λ, using µ > 1 vectors to update C at each generation step.

The weighted empirical covariance matrix

  C_µ = Σ_{i=1}^µ w_i y_{i:λ} y_{i:λ}^T

computes a weighted mean of the outer products of the best µ steps and has rank min(µ, n) with probability one.

  with µ = λ, weights can be negative^10

The rank-µ update then reads

  C ← (1 − c_cov) C + c_cov C_µ

where c_cov ≈ µ_w/n² and c_cov ≤ 1.

10 Jastrebski and Arnold (2006). Improving evolution strategies through active covariance matrix adaptation. CEC.
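A short sketch of the rank-µ update above; the toy ranking function, the equal weights, and the learning rate used below are illustrative assumptions rather than the tuned defaults.

```python
# Rank-mu update: C <- (1 - c_cov) C + c_cov * sum_i w_i y_{i:lambda} y_{i:lambda}^T.
import numpy as np

def rank_mu_update(C, Y_sel, w, c_cov):
    C_mu = sum(w_i * np.outer(y, y) for w_i, y in zip(w, Y_sel))   # weighted empirical covariance
    return (1 - c_cov) * C + c_cov * C_mu

rng = np.random.default_rng(7)
n, lam, mu = 5, 20, 10
w = np.full(mu, 1.0 / mu)                  # equal weights summing to 1
mu_w = 1.0 / np.sum(w**2)                  # variance effective selection mass
C = np.eye(n)
Y = rng.multivariate_normal(np.zeros(n), C, size=lam)               # lambda sampled steps
fvals = np.array([np.sum((y - np.array([2.0, 0, 0, 0, 0]))**2) for y in Y])   # toy ranking
Y_sel = Y[np.argsort(fvals)[:mu]]          # the mu best steps y_{1:lambda}, ..., y_{mu:lambda}
C = rank_mu_update(C, Y_sel, w, c_cov=min(1.0, mu_w / n**2))
print(np.round(C, 2))
```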

(67)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-µ Update

  x_i = m + σ y_i,  y_i ∼ N(0, C)
      sampling of λ = 150 solutions, where C = I and σ = 1

  C_µ = (1/µ) Σ y_{i:λ} y_{i:λ}^T
  C ← (1 − 1) × C + 1 × C_µ
      calculating C, where µ = 50, w_1 = · · · = w_µ = 1/µ, and c_cov = 1

  m_new ← m + (1/µ) Σ y_{i:λ}
      new distribution

(68)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-µ Update

Rank-µ CMA versus Estimation of Multivariate Normal Algorithm EMNA_global^11

In both cases, λ = 150 solutions are sampled as x_i = m_old + σ y_i, y_i ∼ N(0, C), C is calculated from the µ = 50 best, and the new mean is m_new = m_old + (1/µ) Σ y_{i:λ}.

  rank-µ CMA:   C ← (1/µ) Σ (x_{i:λ} − m_old)(x_{i:λ} − m_old)^T
  EMNA_global:  C ← (1/µ) Σ (x_{i:λ} − m_new)(x_{i:λ} − m_new)^T

rank-µ CMA conducts a PCA of steps, whereas EMNA_global conducts a PCA of points

m_new is the minimizer for the variances when calculating C

11 Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larranga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of distribution algorithms. pp. 75-102

(69)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-µ Update

The rank-µ update

  increases the possible learning rate in large populations
    roughly from 2/n² to µ_w/n²
  can reduce the number of necessary generations roughly from O(n²) to O(n)^12
    given µ_w ∝ λ ∝ n

Therefore the rank-µ update is the primary mechanism whenever a large population size is used
  say λ ≥ 3n + 10

The rank-one update

  uses the evolution path and reduces the number of necessary function evaluations to learn straight ridges from O(n²) to O(n).

Rank-one update and rank-µ update can be combined

. . . all equations

12 Hansen, Müller, and Koumoutsakos 2003. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evolutionary Computation, 11(1), pp. 1-18


(72)

Covariance Matrix Adaptation (CMA) Covariance Matrix Rank-µ Update

[Figure: runs of the rank-one update, the rank-µ update, and the hybrid (combined) update on

  f_TwoAxes(x) = Σ_{i=1}^{5} x_i² + 10⁶ Σ_{i=6}^{10} x_i²,  λ = 10 (default for N = 10),

showing best f-value, mean coordinates, step-size, and principal axis lengths versus function evaluations]
