
2.3 Adaptive Random Search

Adaptive Random Search, ARS, Adaptive Step Size Random Search, ASSRS, Variable Step-Size Random Search.

2.3.1 Taxonomy

The Adaptive Random Search algorithm belongs to the general set of approaches known as Stochastic Optimization and Global Optimization. It is a direct search method in that it does not require derivatives to navigate the search space. Adaptive Random Search is an extension of the Random Search (Section 2.2) and Localized Random Search algorithms.

2.3.2 Strategy

The Adaptive Random Search algorithm was designed to address the limitations of the fixed step size in the Localized Random Search algorithm. The strategy for Adaptive Random Search is to continually approximate the optimal step size required to reach the global optimum in the search space. This is achieved by trialling and adopting smaller or larger step sizes only if they result in an improvement in the search performance.

The strategy of the Adaptive Step Size Random Search algorithm (the specific technique reviewed) is to trial a larger step in each iteration and adopt the larger step if it results in an improvement. Very large step sizes are trialled in the same manner, although with a much lower frequency. This strategy of preferring large moves is intended to allow the technique to escape local optima. Smaller step sizes are adopted if no improvement is made for an extended period.
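As a minimal sketch of this trial-step schedule (illustrative values matching the configuration used later in Listing 2.2: a 1.3× step on most iterations, a 3.0× step every tenth iteration), the fragment below computes the size of the larger trial step for a given iteration. The trial step size is adopted only if the step taken with it improves on the current solution; after an extended run of non-improving iterations the step size is shrunk instead.

def trial_step_size(iter, step_size, s_factor=1.3, l_factor=3.0, iter_mult=10)
  # every iter_mult-th iteration, trial a much larger step to help
  # escape local optima; otherwise trial a slightly larger step
  return step_size * l_factor if iter > 0 and iter % iter_mult == 0
  return step_size * s_factor
end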

2.3.3 Procedure

Algorithm 2.3.1 provides a pseudocode listing of the Adaptive Random Search Algorithm for minimizing a cost function, based on the specification for ‘Adaptive Step-Size Random Search’ by Schumer and Steiglitz [6].

Algorithm 2.3.1: Pseudocode for Adaptive Random Search.

Input: Iter_max, Problem_size, SearchSpace, StepSize_init_factor, StepSize_small_factor, StepSize_large_factor, StepSize_iter_factor, NoChange_max
Output: S

NoChange_count ← 0
StepSize_i ← InitializeStepSize(SearchSpace, StepSize_init_factor)
S ← RandomSolution(Problem_size, SearchSpace)
for i = 0 to Iter_max do
    S1 ← TakeStep(SearchSpace, S, StepSize_i)
    if i mod StepSize_iter_factor = 0 then
        StepSize_large ← StepSize_i × StepSize_large_factor
    else
        StepSize_large ← StepSize_i × StepSize_small_factor
    end
    S2 ← TakeStep(SearchSpace, S, StepSize_large)
    if Cost(S1) ≤ Cost(S) || Cost(S2) ≤ Cost(S) then
        if Cost(S2) < Cost(S1) then
            S ← S2
            StepSize_i ← StepSize_large
        else
            S ← S1
        end
        NoChange_count ← 0
    else
        NoChange_count ← NoChange_count + 1
        if NoChange_count > NoChange_max then
            NoChange_count ← 0
            StepSize_i ← StepSize_i / StepSize_small_factor
        end
    end
end
return S

2.3.4 Heuristics

• Adaptive Random Search was designed for continuous function optimization problem domains.

• Candidates with equal cost should be considered improvements to allow the algorithm to make progress across plateaus in the response surface.

• Adaptive Random Search may adapt the search direction in addition to the step size.

• The step size may be adapted for all parameters, or for each parameter individually (a sketch of the per-parameter variation follows this list).
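The last heuristic can be made concrete with a small variation on the take_step routine of Listing 2.2. The following is a minimal sketch, not part of the original listing: take_step_per_param and its step_sizes array are hypothetical names, and each dimension draws its move from its own step size, which could then be grown or shrunk independently using the same rules as the scalar version.

def take_step_per_param(minmax, current, step_sizes)
  # as take_step in Listing 2.2, but with an independent step size
  # (hypothetical step_sizes array) per parameter
  Array.new(current.size) do |i|
    min = [minmax[i][0], current[i] - step_sizes[i]].max
    max = [minmax[i][1], current[i] + step_sizes[i]].min
    min + ((max - min) * rand())
  end
end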

2.3.5 Code Listing

Listing 2.2 provides an example of the Adaptive Random Search Algorithm implemented in the Ruby Programming Language, based on the specification for ‘Adaptive Step-Size Random Search’ by Schumer and Steiglitz [6].

In the example, the algorithm runs for a fixed number of iterations and returns the best candidate solution discovered. The example problem is an instance of continuous function optimization that seeks min f(x) where f = Σ_{i=1}^{n} x_i², −5.0 ≤ x_i ≤ 5.0 and n = 2. The optimal solution for this basin function is (v_0, …, v_{n−1}) = 0.0.

def objective_function(vector)
  return vector.inject(0) {|sum, x| sum + (x ** 2.0)}
end

def rand_in_bounds(min, max)
  return min + ((max-min) * rand())
end

def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    rand_in_bounds(minmax[i][0], minmax[i][1])
  end
end

# take a uniform-random step of at most step_size in each dimension,
# clamped to the bounds of the search space
def take_step(minmax, current, step_size)
  position = Array.new(current.size)
  position.size.times do |i|
    min = [minmax[i][0], current[i]-step_size].max
    max = [minmax[i][1], current[i]+step_size].min
    position[i] = rand_in_bounds(min, max)
  end
  return position
end

# the larger trial step: much larger every iter_mult-th iteration
def large_step_size(iter, step_size, s_factor, l_factor, iter_mult)
  return step_size * l_factor if iter>0 and iter.modulo(iter_mult) == 0
  return step_size * s_factor
end

# take both the normal and the larger trial step from the current point
def take_steps(bounds, current, step_size, big_stepsize)
  step, big_step = {}, {}
  step[:vector] = take_step(bounds, current[:vector], step_size)
  step[:cost] = objective_function(step[:vector])
  big_step[:vector] = take_step(bounds, current[:vector], big_stepsize)
  big_step[:cost] = objective_function(big_step[:vector])
  return step, big_step
end

def search(max_iter, bounds, init_factor, s_factor, l_factor, iter_mult, max_no_impr)
  step_size = (bounds[0][1]-bounds[0][0]) * init_factor
  current, count = {}, 0
  current[:vector] = random_vector(bounds)
  current[:cost] = objective_function(current[:vector])
  max_iter.times do |iter|
    big_stepsize = large_step_size(iter, step_size, s_factor, l_factor, iter_mult)
    step, big_step = take_steps(bounds, current, step_size, big_stepsize)
    if step[:cost] <= current[:cost] or big_step[:cost] <= current[:cost]
      # adopt the larger step size only if its step was at least as good
      if big_step[:cost] <= step[:cost]
        step_size, current = big_stepsize, big_step
      else
        current = step
      end
      count = 0
    else
      # no improvement: shrink the step size after max_no_impr iterations
      count += 1
      count, step_size = 0, (step_size/s_factor) if count >= max_no_impr
    end
    puts " > iteration #{(iter+1)}, best=#{current[:cost]}"
  end
  return current
end

if __FILE__ == $0
  # problem configuration
  problem_size = 2
  bounds = Array.new(problem_size) {|i| [-5, +5]}
  # algorithm configuration
  max_iter = 1000
  init_factor = 0.05
  s_factor = 1.3
  l_factor = 3.0
  iter_mult = 10
  max_no_impr = 30
  # execute the algorithm
  best = search(max_iter, bounds, init_factor, s_factor, l_factor, iter_mult, max_no_impr)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].inspect}"
end

Listing 2.2: Adaptive Random Search in Ruby
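As a worked example of the configuration above: the initial step size is (5 − (−5)) × 0.05 = 0.5, so a typical iteration also trials a step of size 0.5 × 1.3 = 0.65, every tenth iteration trials 0.5 × 3.0 = 1.5, and after 30 consecutive non-improving iterations the step size is divided by 1.3, shrinking it to approximately 0.38. Note also that the comparisons in search use <= rather than <, implementing the plateau heuristic from Section 2.3.4 by treating equal-cost candidates as improvements.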

2.3.6 References

Primary Sources

Many works in the 1960s and 1970s experimented with variable step sizes for Random Search methods. Schumer and Steiglitz are commonly credited with the adaptive step size procedure, which they called ‘Adaptive Step-Size Random Search’ [6]. Their approach only modifies the step size, based on an approximation of the optimal step size required to reach the global optimum.

Kregting and White review adaptive random search methods and propose an approach called ‘Adaptive Directional Random Search’ that modifies both the algorithm's step size and direction in response to the cost function [2].

Learn More

White reviews extensions to Rastrigin’s ‘Creeping Random Search’ [4] (fixed step size) that use probabilistic step sizes drawn stochastically from uniform and probabilistic distributions [7]. White also reviews works that propose dynamic control strategies for the step size, such as Karnopp [1] who proposes increases and decreases to the step size based on performance over very small numbers of trials. Schrack and Choit review random search methods that modify their step size in order to approximate optimal moves while searching, including the property of reversal [5]. Masri et al. describe an adaptive random search strategy that alternates between periods of fixed and variable step sizes [3].

2.3.7 Bibliography

[1] D. C. Karnopp. Random search techniques for optimization problems. Automatica, 1(2–3):111–121, 1963.

[2] J. Kregting and R. C. White. Adaptive random search. Technical Report TH-Report 71-E-24, Eindhoven University of Technology, Eindhoven, Netherlands, 1971.

[3] S. F. Masri, G. A. Bekey, and F. B. Safford. Global optimization algorithm using adaptive random search. Applied Mathematics and Computation, 7(4):353–376, 1980.

[4] L. A. Rastrigin. The convergence of the random search method in the extremal control of a many parameter system. Automation and Remote Control, 24:1337–1342, 1963.

[5] G. Schrack and M. Choit. Optimized relative step size random searches. Mathematical Programming, 10(1):230–244, 1976.

[6] M. Schumer and K. Steiglitz. Adaptive step size random search. IEEE Transactions on Automatic Control, 13(3):270–276, 1968.

[7] R. C. White. A survey of random methods for parameter optimization. Simulation, 17(1):197–205, 1971.
