Identification of Jiles-Atherton model parameters using Particle Swarm Optimization

HAL Id: hal-00179710

https://hal.archives-ouvertes.fr/hal-00179710

Submitted on 16 Oct 2007


Identification of Jiles-Atherton model parameters using Particle Swarm Optimization

Romain Marion, Riccardo Scorretti, Nicolas Siauve, Marie-Ange Raulet, Laurent Krähenbühl

To cite this version:

Romain Marion, Riccardo Scorretti, Nicolas Siauve, Marie-Ange Raulet, Laurent Krähenbühl. Identification of Jiles-Atherton model parameters using Particle Swarm Optimization. Compumag 2007, Jun 2007, Aachen, Germany. pp.1003. hal-00179710


Identification of Jiles-Atherton model parameters using Particle Swarm Optimization

Romain Marion, Student Member, IEEE, Riccardo Scorretti, Nicolas Siauve, Marie-Ange Raulet and Laurent Krähenbühl

Abstract—This paper presents the use of a multi-objective Particle Swarm Optimization (PSO) technique for the identification of Jiles-Atherton model parameters. This approach, implemented for the first time for this kind of problem, is tested on two magnetic materials: NO 3% SiFe and NiFe 20-80. The results are compared with those obtained with a Direct Search Method and a GA procedure. Experimental measurements performed on samples of both materials complete the validation of the PSO method.

Index Terms—Magnetic hysteresis, Optimization methods, Genetic algorithms, Magnetic materials, Modeling, Magnetic field measurement.

I. INTRODUCTION

The estimation of ferromagnetic losses in electromagnetic devices by field calculations requires accurate material laws. These laws must account for the dynamic effects induced in the circuits (such as eddy currents, wall motion or pinning) and for the hysteretic behaviour of the materials. Dynamic material models generally build on a static hysteresis model, so an accurate static hysteresis model is essential.

The description of magnetization based on the Jiles-Atherton (J-A) theory [1] is often used because it can be easily implemented. Moreover, the J-A model requires little memory storage, as its state is completely described by only five parameters. However, convergence problems may be encountered in the identification of these parameters by iterative procedures [2] [3].

Recently, building on optimization theory and algorithms, many researchers have proposed new stochastic optimization methods and "intelligent" algorithms, such as the genetic algorithm (GA) [4] [5], artificial neural networks [6], the chaos optimization algorithm [7], the ant colony algorithm [8], the line-up competition algorithm [9] and various hybrid methods [10] [11] [12]. However, each method has its own applicability domain and constraints. Even worse, the problem of finding the global optimum of a nonlinear function may be NP-complete [13].

In the case of the optimization of J-A’s parameters, GA [14] and simulated annealing method [15] have been recently introduced. Like these evolutionary computation techniques, particle swarm optimization (PSO) is a population-based search algorithm.

Manuscript received June 24, 2007.

Authors are with the AMPERE Laboratory, UMR CNRS 5005, Université de Lyon, F-69003, France, http://ampere-lab.fr

: Université Lyon 1, Lyon, F-69003, France

: Ecole Centrale de Lyon, Ecully, F-69134, France. email: romain.marion@univ-lyon1.fr

After a reminder of the J-A model, this paper explains the idea and the procedure of the basic PSO. Then some improvements are described (multiobjective and constrained problem, swarm mutation). Finally, an experimental validation is carried out, with a comparison between PSO and the Direct Search Method (DSM) or GA. A hybrid algorithm is also discussed as a perspective.

II. J-A MODEL

Let us recall the J-A model. The following form of the J-A equations is considered [16]:

$$\frac{dM}{dH} = \frac{(1-c)\,\frac{dM_{irr}}{dH_e} + c\,\frac{dM_{an}}{dH_e}}{1 - \alpha c\,\frac{dM_{an}}{dH_e} - \alpha(1-c)\,\frac{dM_{irr}}{dH_e}} \qquad (1)$$

where:

M_an is the anhysteretic magnetization provided by Langevin's equation

$$M_{an}(H_e) = M_s\left(\coth\left(\frac{H_e}{a}\right) - \frac{a}{H_e}\right) \qquad (2)$$

H_e is the Weiss effective field: H_e = H + αM

M_irr is the irreversible magnetization component defined by:

$$\frac{dM_{irr}}{dH_e} = \frac{M_{an} - M_{irr}}{k\delta} \quad \text{with} \quad \delta = \mathrm{sign}\!\left(\frac{dH}{dt}\right) \qquad (3)$$

α, a, c, k and M_s are the parameters of the model, where a is a form factor, c the coefficient of reversibility of wall motion, and M_s the saturation magnetization; k and α represent the hysteresis losses and the interaction between the domains, respectively.
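For illustration, equations (1)–(3) can be sketched in code. The following is a minimal, hypothetical implementation (the function names, the series guard near He = 0, and the approximation Mirr ≈ M on the right-hand side are our own choices, not the paper's):

```python
import numpy as np

def m_anhysteretic(h_e, m_s, a):
    """Langevin anhysteretic magnetization, eq. (2): Man = Ms*(coth(He/a) - a/He)."""
    x = h_e / a
    # Near He = 0 the Langevin function behaves like x/3; this avoids the coth singularity.
    return np.where(np.abs(x) < 1e-4, m_s * x / 3.0,
                    m_s * (1.0 / np.tanh(x) - 1.0 / x))

def dm_dh(h, m, dh, params):
    """Differential susceptibility dM/dH of the J-A model, eq. (1).

    params = (alpha, a, c, k, Ms); dh is the sign-carrying field increment.
    """
    alpha, a, c, k, m_s = params
    delta = np.sign(dh)                      # eq. (3): delta = sign(dH/dt)
    h_e = h + alpha * m                      # Weiss effective field
    man = m_anhysteretic(h_e, m_s, a)
    # dMan/dHe, derivative of eq. (2): (Ms/a) * (1 - coth^2(x) + 1/x^2)
    x = h_e / a
    dman = (m_s / a) * np.where(np.abs(x) < 1e-4, 1.0 / 3.0,
                                1.0 - 1.0 / np.tanh(x) ** 2 + 1.0 / x ** 2)
    # eq. (3), with Mirr approximated by the total magnetization M (simplification)
    dmirr = (man - m) / (k * delta)
    num = (1.0 - c) * dmirr + c * dman
    den = 1.0 - alpha * c * dman - alpha * (1.0 - c) * dmirr
    return num / den
```

Integrating dm_dh along a prescribed H(t) waveform (e.g. with a simple Euler step) then traces the hysteresis loop for a given parameter set.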

III. BASIC PSO

A. Idea

PSO is an evolutionary computation technique developed by Kennedy and Eberhart in 1995 [17] [18]. PSO is initialized with a population of random solutions called particles. Each particle is also associated with a velocity. Particles fly through the search space with velocities that are dynamically adjusted in a collaborative way; therefore, particles tend to fly towards optimal solution(s).


Each particle i of the swarm is defined as a potential solution of the identification problem in a five-dimensional space. Particle i is associated to a position x_i = (α_i, a_i, c_i, k_i, M_Si), and has its own speed (these values are initially randomized within a defined interval).

The fitness function for a particle i is defined as the normalized error between the measured values and the calculated ones (obtained by considering the associated position) over a static hysteresis major loop:

$$fitness_1 = \frac{1}{N}\sqrt{\sum_{i=1}^{N}\left(\frac{B_{exp}(i)-B_{sim}(i)}{\max(B_{exp})}\right)^{2}} \qquad (4)$$

where N, B_sim and B_exp represent respectively the number of measurement points, the calculated values and the measured values.
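As an illustration, the fitness (4) can be computed as follows (a minimal sketch; the array names are ours):

```python
import numpy as np

def fitness_1(b_exp, b_sim):
    """Normalized loop error of eq. (4) between measured and simulated inductions."""
    n = len(b_exp)
    residual = (b_exp - b_sim) / np.max(b_exp)   # point-wise error, scaled by max(Bexp)
    return np.sqrt(np.sum(residual ** 2)) / n
```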

The position with the lowest fitness score in each iteration is defined to be the entire swarm’s global best (gbest) position.

In addition, each particle keeps trace of its own best position that it has visited, known as the particle’s personal best (pbest).

The particle motions are governed by the following rules, which update the particle positions x_i using a variation step (velocity) for each parameter, v_i = (v_{αi}, v_{ai}, v_{ci}, v_{ki}, v_{MSi}):

$$v_i^{t} = \omega \times v_i^{t-1} + p_1 \times rd_1 \times (pbest_i - x_i^{t}) + p_2 \times rd_2 \times (gbest - x_i^{t}) \qquad (5)$$

$$x_i^{t+1} = x_i^{t} + v_i^{t} \qquad (6)$$

where x_i is the current position of particle i, v_i is the velocity of the i-th particle, ω is an inertia weight, p1 and p2 are the cognitive and social parameters, rd1 and rd2 are two random numbers between 0 and 1, and t is the current iteration. In addition, the value of the inertia weight ω is gradually decreased in order to improve the accuracy during the final steps of the optimisation:

$$\omega = \frac{(\omega_{start}-\omega_{end}) \times (Maxiter - Iter)}{Maxiter} + \omega_{end} \qquad (7)$$

where ω_start and ω_end are the initial and final values of the inertia weight.

In order to avoid convergence problems, velocities are restricted to a maximum value V_max. This ensures that a maximum scope of the search space is covered.
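Equations (5)–(7), together with the velocity clamp, translate into the following update step (a minimal sketch; the values p1 = p2 = 2 and V_max = 1 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, it, max_iter,
             w_start=0.9, w_end=0.4, p1=2.0, p2=2.0, v_max=1.0):
    """One PSO iteration implementing eqs. (5)-(7) plus velocity clamping.

    x, v, pbest: arrays of shape (n_particles, n_dims); gbest: shape (n_dims,).
    Returns the updated (x, v).
    """
    # Eq. (7): linearly decreasing inertia weight
    w = (w_start - w_end) * (max_iter - it) / max_iter + w_end
    rd1 = rng.random(x.shape)
    rd2 = rng.random(x.shape)
    # Eq. (5): inertia + cognitive term (pbest) + social term (gbest)
    v = w * v + p1 * rd1 * (pbest - x) + p2 * rd2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)        # restrict velocities to Vmax
    return x + v, v                      # eq. (6): position update
```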

IV. IMPROVEMENT (PSO+)

A. Multiobjective problem

It appears that the fitness explained previously is not a sufficient criterion for any magnetic material optimization.

In order to improve the convergence, we introduced another fitness function (8) which represents the area error per cycle between measurement and simulation (that is, the discrepancy between the measured and computed losses during a single cycle) :

$$fitness_2 = \frac{|Area_{simu} - Area_{meas}|}{Area_{meas}} \qquad (8)$$

These two fitnesses define a Pareto front. However, the appearance of this front means that the global and personal best position concepts no longer apply directly: a single leader can no longer be designated for the entire swarm. Therefore, we had to revise the algorithm core.

To solve this difficulty we replaced the global best position gbest (which in the former version was unique for the whole population) with the nearest non-dominated particle, by using the following norm in the space of the fitness values :

$$Norm = \sqrt{\Delta fitness_1^2 + \Delta fitness_2^2} \qquad (9)$$

In this way each particle has its own gbest, which depends on its position in the fitness space (Fig. 1).

Fig. 1. Example of the use of the multi-objective criterion. Each particle of the front has its own dominance region.
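The per-particle leader selection can be sketched as follows: each particle takes as its gbest the Pareto-front member nearest to it under the norm (9). The array layout and function name are our own assumptions:

```python
import numpy as np

def choose_leaders(fitness, front_idx):
    """Pick, for each particle, the nearest non-dominated particle as its gbest.

    fitness: array (n_particles, 2) of (fitness1, fitness2) values;
    front_idx: indices of the particles on the current Pareto front.
    Returns an array giving each particle's leader index (eq. (9) norm).
    """
    front = fitness[front_idx]                            # (n_front, 2)
    # Euclidean distance in fitness space between every particle and every front member
    d = np.sqrt(((fitness[:, None, :] - front[None, :, :]) ** 2).sum(axis=2))
    return front_idx[np.argmin(d, axis=1)]
```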

B. Constrained problem and swarm’s modification

In order to ease convergence and to eliminate non-physical solutions, the search domain has been bounded (table I).

TABLE I
PARAMETER RANGES

Parameter   Range
α           [1·10^-12; 1·10^-2]
a           [0.01; 10000]
c           [0.01; 0.99]
k           [0.01; 10000]
Ms          [10^5; 10^7]

Moreover, several sets of parameters do not produce a hysteresis curve at all, and their fitness values are huge. We therefore introduce a swarm modification: these "crazy" particles (whose fitness exceeds 10^6 × Fitness_gbest) are deleted and replaced by new, randomly initialized ones.
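This replacement rule can be sketched as follows (a minimal illustration; the uniform re-initialization inside the table I bounds and all names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Search bounds of table I: alpha, a, c, k, Ms
LOW  = np.array([1e-12, 0.01, 0.01, 0.01, 1e5])
HIGH = np.array([1e-2, 1e4, 0.99, 1e4, 1e7])

def replace_crazy(x, fit, fit_gbest, factor=1e6):
    """Re-initialize 'crazy' particles whose fitness exceeds factor * gbest fitness."""
    crazy = fit > factor * fit_gbest
    n = int(crazy.sum())
    if n:
        # Draw fresh positions uniformly inside the bounded search domain
        x[crazy] = LOW + rng.random((n, 5)) * (HIGH - LOW)
    return x, crazy
```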


V. VALIDATION– DISCUSSION

As a first step of validation, we fed the basic and improved PSO with artificial data generated by the J-A model. The purpose of this step is to check the capability of our PSO algorithm to retrieve (known) J-A parameters in the ideal case where the provided data are perfectly consistent with the model to fit. Two different materials have been used. For comparison, two other optimization methods (DSM and GA) have been used to solve the same problem. The foundations and implementations of the DSM and GA techniques are developed in several works [5] [19]. The GA method has already been implemented for J-A parameter identification [20]; the same parameters for this method (mutation, selection and crossover probabilities) as the ones specified there are considered. In practice, the Matlab Optimization Toolbox [21] has been used.

The improved PSO and GA methods are carried out 50 times from different initial seeds of the random number generator to ensure the repeatability of convergence. It has been observed that the final solutions obtained with these two algorithms don't differ much (standard deviations are less than 1% of the mean value), so the presented parameters are the mean of the 50 runs. The number of individuals was set to 50.

The convergence criterion is reached if one of the following conditions (10) is satisfied:

$$\sqrt{fitness_1^2 + fitness_2^2} < 10^{-3} \quad \text{or} \quad IterationNumber > 250 \qquad (10)$$

The next step has been to test the PSO with true measurements. Again, we considered two materials.
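The stopping rule (10) amounts to a simple check (a sketch; names are ours):

```python
import math

def converged(f1, f2, iteration, tol=1e-3, max_iter=250):
    """Stopping rule of eq. (10): small combined fitness, or iteration budget spent."""
    return math.hypot(f1, f2) < tol or iteration > max_iter
```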

A. NO 3% SiFe material

The material sample is built of a stack of rings made of NO 3% SiFe. The static first magnetization curve and the static major loop of the sample are measured at 1 Hz.

The current excitation waveform is sinusoidal. The curve used during the different optimizations is a major loop with a saturation point Hmax = 1500 A/m; Bmax = 1.37 T, a coercive field Hc = 42 A/m and a remanent induction Br = 0.86 T. Table II compares the values of the parameters obtained with PSO, PSO+, DSM and the GA algorithm. The four methods lead to close solutions.

TABLE II
OPTIMIZATION RESULTS

Parameter    PSO        PSO+       Direct Search   GA
α            8.8448e-5  8.8163e-5  7.755e-5        8.746e-5
a            38.3704    38.5632    35.4831         38.6395
c            0.13568    0.14238    0.22365         0.14189
k            50.7865    51.6492    56.9687         52.7493
Ms           1.1163e6   1.1158e6   1.1129e6        1.1148e6
Iterations   134        46         226             53

The PSO and GA methods require a similar number of iterations to converge, conversely to the DSM, which needs five times more iterations. The modifications performed on the PSO technique allow the convergence to be obtained more quickly, while the accuracy of the optimized parameters remains correct.

With the aim of analysing and comparing the efficiency of each method, the discrepancy between the measured data and the values calculated by the J-A model with each of the four sets of parameters is computed. In table III, the error is evaluated at several characteristic points: B_1/2 (respectively B_-1/2) is a point on the descending part of the B-H major loop whose H-coordinate is equal to 0.5 Hmax (respectively −0.5 Hmax), and B1M is a point on the first magnetization B-H curve whose H-coordinate is equal to 0.25 Hmax.

TABLE III
ESTIMATION ERRORS FOR A MAJOR HYSTERESIS LOOP

Characteristic point   PSO    PSO+   Direct Search   GA
Hc                     0.3%   0.3%   0.8%            0.3%
Br                     0.6%   0.6%   6.2%            0.7%
B_1/2                  0.1%   0.1%   0.52%           0.1%
B_-1/2                 0.3%   0.2%   1.8%            0.2%
B1M                    4.1%   3.2%   42%             3.5%

PSO and GA provide an accurate determination of the first magnetization point (B1M), contrary to the DSM.

B. NiFe 20-80 material

The sample is a stack of rings made of NiFe. Because of the thickness of each ring, a very low frequency (0.05 Hz) is used to measure the static characteristic of the material.

The curve used during the different optimizations is a major loop with a saturation point Hmax = 20 A/m; Bmax = 0.81 T, a coercive field Hc = 0.91 A/m and a remanent induction Br = 0.35 T. For this material, the DSM leads to negative (non-physical) values of α and c, whereas PSO, PSO+ and GA succeed. Results obtained with PSO+ and GA are reported in table IV.

TABLE IV
OPTIMIZATION RESULTS AND ESTIMATION ERRORS FOR A MAJOR HYSTERESIS LOOP

Parameter    PSO+       GA         |  Point    PSO+ error   GA error
α            6.3452e-5  6.8493e-5  |  Hc       25%          42%
a            14.6830    13.6392    |  Br       0.3%         1.3%
c            0.8326     0.8730     |  B_1/2    0.9%         1.1%
k            5.6289     5.8289     |  B_-1/2   0.8%         0.8%
Ms           9.2367e5   9.2689e5   |  B1M      6.3%         12.7%
Iterations   85         139        |

The high relative error obtained for Hc is not significant because the material has a very small coercive field (less than 1 A/m).

In order to gain more insight into the performance of our optimization methods, the estimated parameters have been used to simulate a minor loop with the J-A model. Figure 2 shows the comparison between measurements and simulations using the sets of parameters provided by PSO+ and GA.

Fig. 2. Dissymmetric minor loop using parameters identified with the major loop (curves: measurement, genetic algorithm, particle swarm).

Although the parameters were identified on a major loop, they prove suitable to recreate a dissymmetric minor loop, with PSO+ being more accurate than GA.

VI. COMPARISON GA / PSO+

In order to compare GA and PSO+, we made different tests on the second material (NiFe 20-80). A summary is given in table V; CPU time is expressed in seconds.

TABLE V
COMPARISON BETWEEN PSO+ AND GA

Method   Pop size   Nb iter   CPU time   Fitness 1    Fitness 2
GA       10         250       No convergence
PSO+     10         250       234        2.4783e-2    6.9368e-3
GA       20         250       No convergence
PSO+     20         42        168        5.8946e-5    5.2789e-4
GA       30         94        268        4.9824e-4    7.8393e-5
PSO+     30         54        216        7.8932e-4    1.9930e-4
GA       40         112       467        6.6832e-5    7.6389e-4
PSO+     40         68        268        1.6893e-4    3.6892e-5
GA       50         139       624        4.8021e-4    5.3789e-4
PSO+     50         85        348        4.6892e-5    5.6830e-4

We notice that for a population of 10 individuals, GA doesn't converge, while PSO+ nearly converges within the maximum number of iterations ($\sqrt{fitness_1^2 + fitness_2^2} = 2.6\%$). This trend is confirmed by the simulations with 20 particles: GA fails to converge whereas PSO+ provides good results. For all the other simulations we observe that both GA and PSO+ converge, but PSO+ requires less time.

VII. CONCLUSION

The PSO method has been implemented for the first time to solve the optimization of the J-A model parameters. The classical and the improved PSO methods have been successfully tested on several materials, and experimental results validate these algorithms. The results obtained by both methods are also compared with those obtained with DSM and GA. Both GA and PSO+ are suitable for this kind of identification; however, with our tuning, PSO+ is faster than GA. In fact, the GA tuning (mutation, crossover and selection probabilities) is difficult to choose, whereas PSO algorithms are generally simpler to tune. Moreover, this algorithm is easier to implement than GA.

In future work it should be possible to create a hybrid PSO-GA algorithm which incorporates GA operations into the PSO scheme. The PSO method is also being applied to other kinds of optimization problems in our laboratory.

REFERENCES

[1] D. Jiles and D. Atherton, “Theory of ferromagnetic hysteresis,” Journal of Magnetism and Magnetic Materials, pp. 48–60, 1986.

[2] D. Jiles and J. Thoelke, "Theory of ferromagnetic hysteresis: determination of model parameters from experimental hysteresis loops," IEEE Transactions on Magnetics, pp. 3928–3930, 1989.

[3] D. Jiles, J. Thoelke, and M. Devine, "Numerical determination of hysteresis parameters for the modeling of magnetic properties using the theory of ferromagnetic hysteresis," IEEE Transactions on Magnetics, pp. 27–35, 1992.

[4] J. H. Holland, Adaptation In Natural And Artificial Systems. University of Michigan Press, 1975.

[5] J. V. Leite, S. L. Avila, N. J. Batistela, W. P. Carpes, N. Sadowski, P. Kuo-Peng, and J. P. A. Bastos, "Real coded genetic algorithm for Jiles-Atherton model parameters identification," IEEE Transactions on Magnetics, vol. 40, pp. 888–891, 2004.

[6] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences, vol. 79, pp. 2554–2558, 1982.

[7] F. A. Acosta, “On some chaos techniques and the modelling of nonlinear time series,” Signal Processing, vol. 55, pp. 269–283, 1996.

[8] M. Dorigo and L. M. Gambardella, “Ant colonies for the travelling salesman problem,” BioSystems, vol. 43, pp. 73–81, 1997.

[9] L. Yan and D. Ma, “Global optimization of non-convex nonlinear pro- grams using line-up competition algorithm,” Computers and Chemical Engineering, vol. 25, pp. 1601–1610, 2001.

[10] F. G. et al., "Genetic algorithm and tabu search hybrid for optimizations," Computers and Operations Research, vol. 22, pp. 111–134, 1995.

[11] F.-T. Lin, C.-Y. Kao, and C.-C. Hsu, “Applying the genetic approach to simulated annealing in solving some np-hard problems,” IEEE Transac- tions on Systems, Man, and Cybernetics, vol. 23, pp. 1752–1767, 1993.

[12] R. Ostermark, “Solving a nonlinear non-convex trim loss problem with a genetic hybrid algorithm,” Computers and Operations Research, vol. 26, pp. 623–635, 1999.

[13] A.A.V.V., Handbook of Global Optimization, P. M. Pardalos and H. E. Romeijn, Eds. Kluwer Academic Publishers, 2002, vol. 2.

[14] K. Chwastek and J. Szczyglowski, “Identification of a hysteresis model parameters with genetic algorithms,” Mathematics and Computers in Simulation, pp. 206–211, 2006.

[15] E. D. M. Hernandez, C. Muranaka, and J. Cardoso, “Identification of the Jiles-Atherton model parameters using random and deterministic searches,” Physica B, pp. 212–215, 2000.

[16] A. Benabou, S. Clenet, and F. Piriou, "Comparison of Preisach and Jiles-Atherton models to take into account hysteresis phenomenon for finite element analysis," Journal of Magnetism and Magnetic Materials, pp. 139–160, 2003.

[17] J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proceedings of IEEE International Conference on Neural Network, pp. 1942–1948, 1995.

[18] R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–45, 1995.

[19] J. Nelder and R. Mead, “A simplex method for function minimization,”

The Computer Journal, vol. 7, pp. 308–313, 1965.

[20] K. Chwastek and J. Szczyglowski, “Identification of a hysteresis model parameters with genetic algorithms,” Mathematics and Computers in Simulation, vol. 71, pp. 206–211, 2006.

[21] http://www.ie.ncsu.edu/mirage/GAToolBox/gaot/.
