
HAL Id: hal-02197808

https://hal.inria.fr/hal-02197808

Submitted on 30 Jul 2019


Distributed under a Creative Commons Attribution 4.0 International License

Elite Opposition-Based Selfish Herd Optimizer

Shengqi Jiang, Yongquan Zhou, Dengyun Wang, Sen Zhang

To cite this version:

Shengqi Jiang, Yongquan Zhou, Dengyun Wang, Sen Zhang. Elite Opposition-Based Selfish Herd Optimizer. 10th International Conference on Intelligent Information Processing (IIP), Oct 2018, Nanning, China. pp. 89-98, doi:10.1007/978-3-030-00828-4_10. hal-02197808


Elite Opposition-based Selfish Herd Optimizer

Shengqi Jiang1, Yongquan Zhou1,2, Dengyun Wang1, Sen Zhang1

1College of Information Science and Engineering, Guangxi University for Nationalities, Nanning 530006, China

2Guangxi High School Key Laboratory of Complex System and Computational Intelligence, Nanning 530006, China

yongquanzhou@126.com

Abstract: The selfish herd optimizer (SHO) is a new metaheuristic optimization algorithm for solving global optimization problems. In this paper, an elite opposition-based selfish herd optimizer (EOSHO) is applied to function optimization. Elite opposition-based learning is a commonly used strategy to improve the performance of metaheuristic algorithms: it enlarges the region of the search space covered by the algorithm and enhances its exploration. The elite opposition-based selfish herd optimizer is validated on six benchmark functions. The results show that the proposed algorithm is able to obtain more precise solutions and also has a high degree of stability.

Keywords: Elite opposition-based learning; selfish herd optimizer; metaheuristic algorithm

1 Introduction

Metaheuristic optimization algorithms simulate various biological behaviors found in nature and have attracted the attention of researchers for many years. Many bio-inspired swarm intelligence optimization algorithms have been proposed, such as particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], the bat algorithm (BA) [3], the grey wolf optimizer (GWO) [4], the firefly algorithm (FA) [5], cuckoo search (CS) [6], and the flower pollination algorithm (FPA) [7].

The selfish herd optimizer (SHO), proposed by Fernando Fausto, Erik Cuevas et al. [8], is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. The algorithm is inspired by Hamilton's selfish herd theory [9]. In this paper, an elite opposition-based selfish herd optimizer (EOSHO) is applied to function optimization: the search ability of the population is improved by applying elite opposition-based learning to the selfish herd group. EOSHO is validated on six benchmark functions. The results show that the proposed algorithm is able to obtain precise solutions and also has a high degree of stability.


The remainder of the paper is organized as follows: Section 2 briefly introduces the original selfish herd optimizer; Section 3 describes the new elite opposition-based selfish herd optimizer (EOSHO); simulation experiments and result analysis are presented in Section 4. Finally, conclusions and future work are discussed in Section 5.

2 Selfish Herd Optimizer (SHO)

2.1 Initializing the population

The algorithm begins by initializing a set A of N individual positions. SHO models two different groups of search agents: a group of prey and a group of predators. The number of prey (N_h) and the number of predators (N_p) are calculated by the following equations:

N_h = \lfloor N \cdot rand(0.7, 0.9) \rfloor    (1)

N_p = N - N_h    (2)

According to the theory of selfish herds, each animal has its own survival value, which is assigned to every individual as follows:

SV_i = \frac{f(a_i) - f_{worst}}{f_{best} - f_{worst}}    (3)

where f(a_i) is the fitness value obtained by evaluating a_i with respect to the objective function f(·), and f_best and f_worst are the fitness values of the best and worst positions in the population.
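As an illustration, the following Python sketch computes the population split of Eqs. (1)-(2) and the survival values of Eq. (3). The function names and array layout are our own rather than part of the original SHO implementation, and a minimization problem is assumed.

```python
import numpy as np

def split_population(N, rng):
    """Split N search agents into prey (herd) and predators, Eqs. (1)-(2)."""
    n_h = int(np.floor(N * rng.uniform(0.7, 0.9)))   # number of prey
    n_p = N - n_h                                     # number of predators
    return n_h, n_p

def survival_values(fitness):
    """Normalized survival values, Eq. (3): best individual -> 1, worst -> 0."""
    f_best, f_worst = fitness.min(), fitness.max()    # minimization assumed
    if f_best == f_worst:                             # degenerate case: identical fitness
        return np.ones_like(fitness)
    return (fitness - f_worst) / (f_best - f_worst)
```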

2.2 Herd movement operator

According to the selfish herd theory, the gravity (attraction) coefficient between a prey i and another herd member j is defined as follows:

\psi_{h_i, h_j} = SV_{h_j} \cdot e^{-\|h_j - h_i\|^2}    (4)

The prey avoid the predators, so there is repulsion between them. The repulsion factor is defined as follows:

\psi_{h_i, p_M} = -SV_{p_M} \cdot e^{-\|p_M - h_i\|^2}    (5)

The herd leader h_L is the member with the highest survival value:

h_L^k = \{ h_i^k \in H \mid SV_{h_i^k} = \max_{j \in \{1,2,\dots,N_h\}} SV_{h_j^k} \}    (6)

The leader's position in the next generation is updated as follows:

h_L^{k+1} = \begin{cases} h_L^k + 2\rho_1 \psi_{h_L, p_M} (p_M^k - h_L^k), & \text{if } SV_{h_L^k} = 1 \\ h_L^k + 2\rho_2 \psi_{h_L, x_{best}} (x_{best}^k - h_L^k), & \text{if } SV_{h_L^k} < 1 \end{cases}    (7)

where \rho_1 and \rho_2 are random numbers in [0, 1] and x_best^k is the best position found so far.
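A minimal Python sketch of the leader update in Eq. (7), under the reconstruction above; the attraction helper and the random factors are our own naming, and the choice of the threatening predator p_M is left to the caller.

```python
import numpy as np

def attraction(sv_target, target, source):
    """Attraction/repulsion magnitude of Eqs. (4)-(5)."""
    return sv_target * np.exp(-np.sum((target - source) ** 2))

def update_leader(h_L, sv_L, x_best, p_M, sv_pM, rng):
    """Leader movement, Eq. (7): flee the predator when SV_L = 1,
    otherwise move toward the best position found so far."""
    if sv_L >= 1.0:
        psi = -attraction(sv_pM, p_M, h_L)                # repulsion, Eq. (5)
        return h_L + 2 * rng.random() * psi * (p_M - h_L)
    psi = attraction(1.0, x_best, h_L)                     # attraction toward x_best
    return h_L + 2 * rng.random() * psi * (x_best - h_L)
```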


In the herd, the positions of followers and deserters in the next generation are updated as follows:

h_i^{k+1} = \begin{cases} h_i^k + f_i^k, & \text{if } SV_{h_i^k} \geq rand(0,1) \\ h_i^k + d_i^k, & \text{if } SV_{h_i^k} < rand(0,1) \end{cases}    (8)

f_i^k = \begin{cases} 2 \left( \psi_{h_i, h_L}(h_L^k - h_i^k) + \psi_{h_i, h_{c_i}}(h_{c_i}^k - h_i^k) \right), & \text{if } SV_{h_i^k} \geq \overline{SV}_h^k \\ 2\,\psi_{h_i, h_M}(h_M^k - h_i^k), & \text{if } SV_{h_i^k} < \overline{SV}_h^k \end{cases}    (9)

where \overline{SV}_h^k represents the mean survival value of the herd's aggregation.

d_i^k = 2 \left( \psi_{h_i, x_{best}}(x_{best}^k - h_i^k) + \hat{r}\,(1 - SV_{h_i^k}) \right)    (10)

where \hat{r} denotes a unit vector pointing in a random direction within the given n-dimensional solution space.
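The following Python sketch combines Eqs. (8)-(10) for a non-leader herd member. The interpretation of h_c as the nearest neighbour with a higher survival value and of h_M as the survival-weighted herd centre follows the original SHO description and is an assumption here, since the text above does not define them explicitly.

```python
import numpy as np

def herd_member_move(h_i, sv_i, herd, sv_herd, h_L, x_best, rng):
    """Follower/deserter update, Eqs. (8)-(10), as reconstructed above."""
    def psi(sv_t, target, source):                     # attraction factor, Eq. (4)
        return sv_t * np.exp(-np.sum((target - source) ** 2))

    if sv_i >= rng.random():                           # follower branch of Eq. (8)
        if sv_i >= sv_herd.mean():                     # upper case of Eq. (9)
            better = [j for j in range(len(herd)) if sv_herd[j] > sv_i]
            c = min(better, key=lambda j: np.linalg.norm(herd[j] - h_i), default=None)
            h_c = herd[c] if c is not None else h_L
            sv_c = sv_herd[c] if c is not None else 1.0
            move = 2 * (psi(1.0, h_L, h_i) * (h_L - h_i)
                        + psi(sv_c, h_c, h_i) * (h_c - h_i))
        else:                                          # lower case of Eq. (9)
            h_M = np.average(herd, axis=0, weights=sv_herd + 1e-12)
            move = 2 * psi(sv_herd.mean(), h_M, h_i) * (h_M - h_i)
    else:                                              # deserter branch, Eq. (10)
        r = rng.normal(size=h_i.shape)
        r = r / np.linalg.norm(r)                      # random unit direction
        move = 2 * (psi(1.0, x_best, h_i) * (x_best - h_i) + r * (1 - sv_i))
    return h_i + move
```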

2.3 Predators movement operators

The probability that predator p_i pursues the herd member h_j is calculated as follows:

P_{p_i, h_j} = \frac{\psi_{p_i, h_j}}{\sum_{m=1}^{N_h} \psi_{p_i, h_m}}    (11)

\psi_{p_i, h_j} = (1 - SV_{h_j}) \cdot e^{-\|h_j - p_i\|^2}    (12)

The predator's position updating formula can then be obtained as follows:

p_i^{k+1} = p_i^k + 2\rho\,(h_r^k - p_i^k)    (13)

where h_r^k is a prey selected by roulette according to the probabilities of Eq. (11) and \rho is a random number in [0, 1].
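A compact Python sketch of the predator movement of Eqs. (11)-(13); the function name is our own and `rng` is a NumPy random generator.

```python
import numpy as np

def predator_step(p_i, herd, sv_herd, rng):
    """Predator movement, Eqs. (11)-(13): prey with low survival values are
    more likely to be chased; the target is drawn by roulette selection."""
    dists2 = np.sum((herd - p_i) ** 2, axis=1)
    psi = (1.0 - sv_herd) * np.exp(-dists2)                      # Eq. (12)
    probs = psi / psi.sum() if psi.sum() > 0 else np.full(len(herd), 1.0 / len(herd))
    target = herd[rng.choice(len(herd), p=probs)]                # roulette, Eq. (11)
    return p_i + 2 * rng.random() * (target - p_i)               # Eq. (13)
```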

2.4 Predation phase

For each predator p_i, a set of threatened prey is defined as follows:

T_{p_i} = \{ h_j \in H \mid SV_{h_j} < SV_{p_i},\ \|p_i - h_j\| \leq R,\ h_j \notin K \}    (14)

where R denotes the attack radius and K the set of prey that have already been killed. When several prey fall within the predator's attack radius, the predator chooses which prey to kill by roulette selection.
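A Python sketch of the predation phase built on Eq. (14). The roulette weighting by (1 − SV) is our assumption; the paper only states that the victim is chosen by roulette.

```python
import numpy as np

def predation(predator, sv_pred, herd, sv_herd, killed, radius, rng):
    """Predation phase, Eq. (14): collect threatened prey inside the attack
    radius and pick one victim by roulette. Returns the victim index or None."""
    threatened = [j for j in range(len(herd))
                  if sv_herd[j] < sv_pred
                  and np.linalg.norm(predator - herd[j]) <= radius
                  and j not in killed]
    if not threatened:
        return None
    weights = np.array([1.0 - sv_herd[j] for j in threatened])  # weaker prey more likely (assumption)
    probs = weights / weights.sum() if weights.sum() > 0 else None
    return int(rng.choice(threatened, p=probs))
```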

2.5 Restoration phase

In the restoration (mating) phase, new individuals are generated to replace the prey killed during predation. The probability that a member h_j of the set M of mating candidates contributes to a new individual is proportional to its survival value:

P_{h_j} = \frac{SV_{h_j}}{\sum_{h_m \in M} SV_{h_m}}    (15)

To obtain a new individual, n individuals are first drawn at random from the set of mating candidates, one per dimension, and the hunted individual is replaced by the new one:

h_{new} = mix([h_{r_1,1}, h_{r_2,2}, \dots, h_{r_n,n}])    (16)

Specific implementation steps of the standard selfish herd optimizer (SHO) can be summarized in the pseudo-code shown in Algorithm 1.

Algorithm 1. SHO pseudo-code
1. BEGIN
2. Initialize the animal population A;
3. Define the numbers of herd members and predators within A by equations (1) and (2);
4. While (t < iterMax)
5. Calculate the fitness and survival value of each member by equation (3);
6. Update the positions of the herd members by equations (7) and (8);
7. Update the positions of the predators by equation (13);
8. Perform the predation phase: each predator kills a threatened prey chosen by roulette, equation (14);
9. Perform the restoration phase: generate new individuals from the mating candidates by equation (16);
10. End While
11. Memorize the best solution achieved;
12. END
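The following Python skeleton mirrors Algorithm 1. It relies on the helper functions sketched in the previous subsections (split_population, survival_values, update_leader, herd_member_move, predator_step, predation) and on a user-supplied objective f; several details, such as which predator threatens the leader and the uniform choice of mating donors, are simplifications rather than the authors' implementation.

```python
import numpy as np

def sho(f, dim, bounds, N=50, iter_max=1000, radius=1.0, seed=0):
    """Skeleton of the standard SHO loop (Algorithm 1), with hedged operators."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    n_h, n_p = split_population(N, rng)
    herd = rng.uniform(low, high, size=(n_h, dim))
    predators = rng.uniform(low, high, size=(n_p, dim))
    x_best, f_best = herd[0].copy(), np.inf

    for _ in range(iter_max):
        fit_h = np.array([f(h) for h in herd])
        fit_p = np.array([f(p) for p in predators])
        sv_all = survival_values(np.concatenate([fit_h, fit_p]))   # Eq. (3)
        sv, sv_p = sv_all[:n_h], sv_all[n_h:]
        if fit_h.min() < f_best:
            f_best, x_best = float(fit_h.min()), herd[fit_h.argmin()].copy()

        leader = int(np.argmax(sv))                                 # Eq. (6)
        herd[leader] = update_leader(herd[leader], sv[leader], x_best,
                                     predators[0], sv_p[0], rng)    # Eq. (7)
        for i in range(n_h):                                        # Eqs. (8)-(10)
            if i != leader:
                herd[i] = herd_member_move(herd[i], sv[i], herd, sv,
                                           herd[leader], x_best, rng)
        for j in range(n_p):                                        # Eqs. (11)-(13)
            predators[j] = predator_step(predators[j], herd, sv, rng)

        killed = set()                                              # Eq. (14)
        for j in range(n_p):
            victim = predation(predators[j], sv_p[j], herd, sv, killed, radius, rng)
            if victim is not None:
                killed.add(victim)
        for v in killed:                                            # Eqs. (15)-(16), simplified
            donors = rng.integers(0, n_h, size=dim)                 # uniform donors instead of Eq. (15)
            herd[v] = herd[donors, np.arange(dim)]
        herd = np.clip(herd, low, high)
    return x_best, f_best

# Example usage: x, fx = sho(lambda v: float(np.sum(v ** 2)), dim=30, bounds=(-100, 100))
```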

3 Elite opposition-based selfish herd optimizer (EOSHO)

Opposition-based learning is a technique proposed by Tizhoosh [10] that has been applied to a variety of optimization algorithms [11, 12, 13]. Its main purpose is to find candidate solutions that are closer to the global optimal solution. Elite opposition-based learning builds on the observation that an opposite candidate solution can have a greater chance of approaching the global optimum than the original (forward) candidate solution, and it has successfully improved the performance of many optimization algorithms. It has also been applied to many research fields, such as reinforcement learning, morphological algorithms, window memory, and image processing with opposite fuzzy sets.

In this paper, we apply elite opposition-based learning to the movement of the prey group H: the individual with the best fitness value in H is selected as the elite prey, in the hope that this elite individual can guide the movement of the whole group. To explain the concept of elite opposition-based learning, we introduce an example. Assume that the elite individual in H is H_e = (h_{e,1}, h_{e,2}, \dots, h_{e,n}). Then the elite opposition-based candidate solution for any other individual H_i = (h_{i,1}, h_{i,2}, \dots, h_{i,n}) in H is defined as H_i' = (h'_{i,1}, h'_{i,2}, \dots, h'_{i,n}), obtained from the following equation:

h'_{i,j} = \alpha\,(da_j + db_j) - h_{e,j}, \quad i = 1, 2, \dots, N;\ j = 1, 2, \dots, n    (17)

where N is the size of H, n is its dimension, and \alpha \in (0, 1) is a random coefficient. da_j and db_j are the dynamic boundaries of the j-th dimension, defined as follows:

da_j = \min_i(h_{i,j}), \quad db_j = \max_i(h_{i,j})    (18)

To prevent an elite opposition-based point from jumping outside the dynamic boundaries, any point that falls outside the range [da_j, db_j] is reset as follows:

h'_{i,j} = rand(da_j, db_j), \quad \text{if } h'_{i,j} < da_j \text{ or } h'_{i,j} > db_j    (19)

By this method, the global search capability of the algorithm is enhanced and the diversity of the population is improved.
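A minimal Python sketch of the elite opposition-based learning step of Eqs. (17)-(19), under the reconstruction above; the function and variable names are our own and a minimization problem is assumed. In practice the opposition-based candidates would then be compared with the original individuals and the fitter of each pair kept, a selection step the text does not spell out.

```python
import numpy as np

def elite_opposition(herd, fitness, rng):
    """Elite opposition-based candidates, Eqs. (17)-(19): reflect the elite
    (best-fitness) prey within the herd's dynamic per-dimension bounds."""
    elite = herd[np.argmin(fitness)]                   # elite prey
    da, db = herd.min(axis=0), herd.max(axis=0)        # dynamic bounds, Eq. (18)
    alpha = rng.random(size=(herd.shape[0], 1))        # one random coefficient per individual
    opposites = alpha * (da + db) - elite              # Eq. (17)
    out = (opposites < da) | (opposites > db)          # points outside the bounds
    resets = rng.uniform(np.broadcast_to(da, herd.shape),
                         np.broadcast_to(db, herd.shape))
    opposites[out] = resets[out]                       # reset, Eq. (19)
    return opposites
```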

4 Simulation experiments and result analysis

In this section, we use 6 standard test functions [14, 15] to evaluate the performance of EOSHO. The rest of this section is organized as follows: the experimental setup is given in Section 4.1, and the performance of each algorithm is compared in Section 4.2. The dimension, search range, and optimal value of the 6 functions are shown in Table 1.

Table 1 Benchmark test functions.

Benchmark test function                                                    Dim   Range           f_min
f_1(x) = \sum_{i=1}^{n} x_i^2                                               30    [-100, 100]     0
f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|                       30    [-10, 10]       0
f_3(x) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]            30    [-30, 30]       0
f_4(x) = \sum_{i=1}^{n} i\,x_i^4 + random[0, 1)                             30    [-1.28, 1.28]   0
f_5(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]                     30    [-5.12, 5.12]   0
f_6(x) = -20\exp(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}) - \exp(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)) + 20 + e    30    [-32, 32]    0
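For reference, the first and last benchmarks in Table 1 (the sphere function f1 and the Ackley function f6) can be written in Python as follows; this is a plain transcription of the formulas, not code from the paper.

```python
import numpy as np

def f1(x):
    """Sphere function: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def f6(x):
    """Ackley function as given in Table 1: global minimum 0 at the origin."""
    n = len(x)
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)
```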

4.1 Experimental setup

All of the algorithms were programmed in MATLAB R2016a, and the numerical experiments were run on an AMD Athlon(tm) II X4 640 processor with 2 GB of memory.

4.2 Comparison of each algorithm performance

The proposed elite opposition-based selfish herd optimizer is compared with several swarm intelligence optimization algorithms, namely CS [6], PSO [1], MVO [16], ABC [17], and SHO [8]; their performance is compared by means of the mean and the standard deviation of the results. The control parameter settings of the algorithms are given in Table 2.

For the benchmark functions in Table 1, the comparison of the test results is shown in Tables 3-4. In this paper, the population size is 50, the maximum number of iterations is 1000, and the results are obtained over 30 independent trials. Best, Mean and Std denote the optimal fitness value, the average fitness value, and the standard deviation, respectively. We compare the results of EOSHO and the other algorithms on each test function and list the ranking of EOSHO in the rightmost column of Tables 3-4.

Table 2 The initial parameters of the algorithms

Algorithm   Parameter                                   Value
CS          Probability of discovery (pa)               0.25
PSO         Cognitive constant (C1)                     1.5
            Social constant (C2)                        2
            Inertia constant (w)                        1
MVO         Wormhole existence probability (WEP)        linearly increased from 0.2 to 1
ABC         Limit                                       D

where D denotes the dimension of the problem and trial denotes an internal counter in ABC.

According to the results of Table 3, the unimodal functions are suitable for benchmarking exploitation. Compared with the original SHO algorithm, the calculation accuracy of the EOSHO algorithm is greatly improved. For f1~f3, the optimal fitness value, mean fitness value and standard deviation of the EOSHO algorithm are better than those of the other five algorithms, so EOSHO ranks first among all the algorithms. Figs. 1-6 show the convergence curves of EOSHO and the other algorithms together with the ANOVA tests of the global minimum. As can be seen from Fig. 1, EOSHO can obtain higher optimization precision. The results show that for these 3 test functions the EOSHO algorithm provides good results with smaller variance, which means that EOSHO has more advantages than the MVO, PSO, CS, ABC and SHO algorithms in solving unimodal problems, and its convergence speed and accuracy are relatively high. Overall, the EOSHO algorithm accelerates the convergence speed and enhances the calculation accuracy.

Table 3 Simulation results for f1~f3 in low dimension

Benchmark     Result   MVO        PSO        CS         ABC        SHO        EOSHO       Rank
f1 (D=30)     Best     0.1091     9.9E-53    0.05397    3.7E-08    5.2E-11    3.9E-257    1
              Mean     0.17877    6.9E-39    0.08540    5.1E-07    2.9E-10    3.7E-235
              Std      0.039      3.8E-38    0.01535    5.0E-07    3.1E-10    0
f2 (D=30)     Best     0.1091     1.2E-05    1.12153    1.1E-05    5.3E-11    1.5E-134    1
              Mean     0.17877    0.01125    2.27540    3.1E-05    2.9E-10    6.5E-122
              Std      0.039      0.03003    0.66911    1.8E-05    3.2E-10    3.4E-121
f3 (D=30)     Best     0.0031     0.0039     0.17700    0.39195    1.4E-05    3.14E-06    1
              Mean     0.01188    0.01164    0.39681    0.72350    0.00030    9.18E-05
              Std      0.00597    0.00367    0.11289    0.20553    0.00041    6.73E-05

Fig. 1 The convergence curves of f1. Fig. 2 The convergence curves of f2.
Fig. 3 The convergence curves of f3. Fig. 4 Standard deviation for f1.
Fig. 5 Standard deviation for f2. Fig. 6 Standard deviation for f3.

According to the results of Table 4, the performance of the EOSHO algorithm on the multimodal functions is much better than that of the other comparison algorithms.

Table 4 Simulation results for f4~f5 in low dimension

Benchmark     Result   MVO        PSO        CS         ABC        SHO        EOSHO      Rank
f4 (D=30)     Best     51.8575    21.8890    30.7915    1.0967     12.9344    0          1
              Mean     99.9845    48.8524    44.6965    5.5057     31.9771    0
              Std      25.8392    14.7027    8.0742     3.1272     10.3539    0
f5 (D=30)     Best     0.0998     8.3E-14    2.1600     0.0001     4.7E-14    8.9E-16    1
              Mean     0.6275     0.5604     2.9345     0.0006     0.3424     8.8E-16
              Std      0.8018     0.6797     0.4305     0.00081    0.8058     0

Compared with the unimodal functions, the multimodal functions have many local optima, and their number increases exponentially with the dimension. This makes them suitable for benchmarking the exploration capability of the algorithms. The results show that the advantages of the EOSHO algorithm in exploration are also very competitive. For f4~f5, compared with the results obtained by the SHO algorithm, the calculation accuracy of the optimal value of the EOSHO algorithm is improved, and EOSHO ranks first in optimal fitness value, which indicates that the EOSHO algorithm is substantially improved.

Fig. 9 The convergence curves of f5. Fig. 10 Standard deviation for f5.
Fig. 11 The convergence curves of f6. Fig. 12 Standard deviation for f6.

According to Fig. 9 and Fig. 11, EOSHO has a faster convergence speed and higher optimization accuracy; according to Fig. 10 and Fig. 12, EOSHO has strong stability. It can be seen that EOSHO has higher convergence accuracy and stronger robustness. On the whole, combining the SHO algorithm with the elite opposition-based strategy improves the accuracy of the SHO algorithm, which helps to find the global optimal solution.

5 Conclusions and future works

In this paper, to improve the convergence speed and calculation accuracy of the SHO algorithm, we add an elite opposition-based learning strategy to the prey movement operators. The resulting EOSHO algorithm can better balance exploration and exploitation, and the elite opposition-based learning strategy increases the diversity of the population, which helps to avoid search stagnation. The EOSHO algorithm is tested on six benchmark functions. Its optimization performance is greatly improved compared with the basic SHO algorithm, and the optimal fitness values it obtains are the smallest among the six compared algorithms.

For EOSHO, various ideas still deserve future study. Firstly, there exist many NP-hard problems in the literature, such as the planar graph coloring problem and the training of radial basis probabilistic neural networks, to which EOSHO could be applied; secondly, it should be applied to more engineering examples.

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grants No. 61463007 and 61563008, and by the Project of Guangxi Natural Science Foundation under Grant No. 2016GXNSFAA380264.


References

[1] Kennedy J, Eberhart R. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks. IEEE, 1995: 1942-1948.

[2] Dorigo M, Birattari M, Stützle T. Ant colony optimization. IEEE Computational Intelligence Magazine, 2006, 1(4): 28-39.

[3] Yang X S. A New Metaheuristic Bat-Inspired Algorithm. Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Studies in Computational Intelligence, vol. 284. Springer, 2010: 65-74.

[4] Mirjalili S, Mirjalili S M, Lewis A. Grey Wolf Optimizer. Advances in Engineering Software, 2014, 69: 46-61.

[5] Yang X S. Firefly Algorithm. Engineering Optimization. 2010: 221-230.

[6] Yang X S, Deb S. Cuckoo Search via Lévy Flights. Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009). IEEE, 2009: 210-214.

[7] Yang X S. Flower Pollination Algorithm for Global Optimization. International Conference on Unconventional Computation and Natural Computation. Springer, 2012: 240-249.

[8] Fausto F, Cuevas E, Valdivia A, et al. A global optimization algorithm inspired in the behavior of selfish herds. Biosystems, 2017, 160: 39-55.

[9] Hamilton W D. Geometry for the selfish herd. Journal of Theoretical Biology, 1971, 31(2): 295-311.

[10] Tizhoosh H R. Opposition-Based Learning: A New Scheme for Machine Intelligence. International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'05). IEEE, 2005: 695-701.

[11] Rahnamayan S, Tizhoosh H R, Salama M M A. Opposition-Based Differential Evolution. Advances in Differential Evolution. Springer Berlin Heidelberg, 2008: 64-79.

[12] Zhao R, Luo Q, Zhou Y. Elite Opposition-Based Social Spider Optimization Algorithm for Global Function Optimization. Algorithms, 2017, 10(1): 9.

[13] Zhou Y, Wang R, Luo Q. Elite opposition-based flower pollination algorithm. Neurocomputing, 2016, 188: 294-310.

[14] Tang K, Li X, Suganthan P N, et al. Benchmark Functions for the CEC'2010 Special Session and Competition on Large-Scale Global Optimization. Nature Inspired Computation and Applications Laboratory, 2013.

[15] Hansen N, Auger A, Finck S, Ros R. Real-Parameter Black-Box Optimization Benchmarking 2009: Experimental Setup. Institut National de Recherche en Informatique et en Automatique (INRIA), Rapport de Recherche RR-6828, 2009. http://hal.inria.fr/inria-00362649/en/

[16] Mirjalili S, Mirjalili S M, Hatamlou A. Multi-Verse Optimizer: a nature-inspired algorithm for global optimization. Neural Computing & Applications, 2016, 27(2): 495-513.

[17] Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 2007, 39(3): 459-471.
