Determination of Electron Spin Resonance Static and Dynamic Parameters by Automated Fitting of the Spectra


HAL Id: jpa-00249428

https://hal.archives-ouvertes.fr/jpa-00249428

Submitted on 1 Jan 1995

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Determination of Electron Spin Resonance Static and Dynamic Parameters by Automated Fitting of the Spectra
Claude Chachaty, Edgar Soulié

To cite this version:

Claude Chachaty, Edgar Soulié. Determination of Electron Spin Resonance Static and Dynamic Parameters by Automated Fitting of the Spectra. Journal de Physique III, EDP Sciences, 1995, 5 (12), pp.1927-1952. 10.1051/jp3:1995240. jpa-00249428.

Classification Physics Abstracts

82.80Ch 33.35+ 02.60Pn

Overview Article

Determination of Electron Spin Resonance Static and Dynamic

Parameters by Automated Fitting of the Spectra

Claude Chachaty and Edgar J. Soulié

CEA/DSM/DRECAM/Service de Chimie Moléculaire, Centre d'Études de Saclay, 91191 Gif-sur-Yvette cedex, France

(Received 19 June 1995, accepted 22 September 1995)

Abstract. When measurements on single crystals are not feasible, approximate values of the static spin Hamiltonian parameters are in many cases obtained from the ESR spectra of randomly oriented paramagnetic species. The accuracy of such determinations is considerably improved by optimizing these parameters by means of automated simulation programs resorting to the nonlinear least-squares fit of experimental spectra. The principles of the simplex method of Nelder and Mead and of the method of Levenberg-Marquardt, generally used for that purpose, are reported. Examples are given of the applications of the latter, which has the advantage of converging rapidly, to S = 1/2 paramagnetic species in rigid matrices. Optimization procedures based on the Levenberg-Marquardt algorithm are extended to the determination of dynamic parameters of nitroxide spin-probes, namely their tumbling correlation times in fluid and viscous isotropic media as well as in liquid crystalline phases, or their exchange rates between inequivalent sites. Lastly, it is shown on the example of a nitroxide biradical that similar methods can be applied to the study of the dynamics of multiple conformational changes in a paramagnetic flexible molecule.

1. Introduction

ESR spectroscopy has long been used to identify free radicals in solution or in frozen media by analysis of the hyperfine structure of the spectra, which results from the scalar and dipolar couplings of the unpaired electron spins with nearby nuclei (see for instance Ref. [1]).

ESR has also significantly contributed to the understanding of the electronic structure of transition metal ions [2,3] as well as of rare earth and actinide ions in a variety of matrices [2].

The accumulated experience over more than forty years has revealed a variety of effects intervening in the so-called spin Hamiltonian which determine the positions and intensities of

resonance lines.

© Les Éditions de Physique 1995

1928 JOURNAL DE PHYSIQUE III N°12

When ESR spectra are observable in solutions, the analysis of their patterns generally provides the g factor and the hyperfine coupling constants. In the case of single crystals, one can follow the positions of resonance lines with the orientation relative to the static magnetic field and accurately determine the components of magnetic tensors. The theory of the spin Hamiltonian has essentially been developed on the basis of such measurements on single crystals.

In many cases, the compound being investigated cannot be obtained as a single crystal and

is available only as a polycrystalline or amorphous solid, so that the preceding information is not directly available. However, a wealth of information is contained in the ESR spectrum of a polyoriented system, where the observed resonances are the sums of a quasi-infinite number of lines corresponding to randomly distributed orientations. In this case, a first estimate of the ESR parameters can be provided by simulation of the experimental spectrum, introducing approximate values deduced from the positions of absorption and dispersion-like singularities of the absorption first derivative [4]. If a reasonable agreement is achieved, one can then proceed

to an automated adjustment of the parameters by means of nonlinear optimization [5].

This approach, which has proven successful for rigid polyoriented samples [6-8], may be extended to systems where the paramagnetic species undergoes a motion at a rate comparable to or larger than the anisotropy of the magnetic tensors expressed in frequency units [9,10].

In the present review, we will first describe in Section 2 a few of the mathematical tools pertaining to nonlinear optimization, which have been used in the adjustment of the parameters involved in automated computer simulation. Section 3 provides examples of parameter adjustment applied to rigid polyoriented samples. The rigid character generally prevails at low temperature and may be lost when the temperature increases, with a progressive release of molecular motions, revealed by spectral changes. This is discussed in Section 4, which treats the determination of reorientation correlation times in the slow and fast tumbling regimes

and gives some examples of spectral simulations in isotropic viscous or fluid media as well as in liquid crystalline membranes. While Sections 2-4 deal with S = 1/2 species, Section 5 gives an example of automated computer simulation of the spectra of a flexible biradical at different temperatures, taking into account the conformer populations as well as the rates of internal and overall tumbling motions.

2. Parameter Adjustment from an Observed ESR Spectrum

2.1. THE CRITERION. In contrast with optical spectroscopies (e.g. infrared or Raman), the ESR theory predicts not only the position of the lines but also their widths and relative intensities as a function of the control variable, i.e. the magnetic field strength in the present case. This is true also for NMR and Mössbauer spectroscopies as well as for neutron inelastic scattering.

In other words, given a set of adjustable parameters, the value of an ESR signal for each sampled value of the magnetic field may be calculated and compared to the experimental one, normalized to the same scale. The obvious assumption underlying parameter optimization is that when the parameters of the simulation approach those having a true physical meaning, the

calculated values of the signal come closer to the observed ones. However, a given modification of a parameter may improve the agreement in some part of the spectrum and degrade it in

another one. It is therefore necessary to define a function which quantitatively assesses the overall quality of the agreement between the observed and computed spectra. This function should depend on all deviations between the observed and calculated values, be positive and

decrease towards zero as this agreement improves.

Among many others, two functions satisfy these requirements: the sum of squares of differences and the sum of the absolute values of differences. The almost universal adoption of the first one results from the following reasons:

the sum of squares of deviations carries the properties of continuity and derivability of the signal with respect to the adjustable parameters, a condition not fulfilled by the sum of the absolute values of differences;

if the errors on the observed values obey a Gaussian distribution law, the most likely parameters, in the statistical sense, correspond to the minimum of the sum of squares of differences [11].

The criterion of the least sum of absolute values will in general result in different parameters. How different they are from those corresponding to the least squares provides a reliable estimate of the knowledge of the parameters which has been acquired (see for instance Ref. [6]). The application of this latter criterion, however, requires a much more elaborate optimization method [12] than the least-squares criterion, because it involves an objective function which is not derivable. Once the criterion or objective function has been selected, we can turn our attention to its minimization.

2.2. THE KEY FEATURES OF OPTIMIZATION. Given the sampled values B_i (1 ≤ i ≤ N) of the magnetic field, the corresponding values of the observed signal O_i, a set of adjustable parameters x_j (1 ≤ j ≤ k) and the functional dependence f(B_i, x_1, ..., x_k) of the simulated signal, the function to be minimized has the form:

\[ F(x_1, \ldots, x_k) = \sum_{i=1}^{N} \left[ f(B_i, x_1, \ldots, x_k) - O_i \right]^2 \qquad (1) \]
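In code, the criterion of equation (1) can be sketched as follows. This is a minimal illustration in Python: the model function `lorentzian_derivative` is a hypothetical stand-in for an ESR line shape, not the authors' simulation program.

```python
import numpy as np

def objective(x, B, O, f):
    """Sum-of-squares criterion of equation (1).

    x : array of k adjustable parameters
    B : array of N sampled magnetic-field values
    O : array of N observed signal values
    f : model function f(B, x) returning the simulated signal
    """
    residuals = f(B, x) - O
    return np.sum(residuals ** 2)

# Toy model standing in for an ESR absorption first derivative (hypothetical):
def lorentzian_derivative(B, x):
    B0, width, amp = x
    u = (B - B0) / width
    return -amp * 2 * u / (1 + u ** 2) ** 2

B = np.linspace(330.0, 340.0, 200)      # field sampling in mT (arbitrary)
true_x = np.array([335.0, 0.5, 1.0])
O = lorentzian_derivative(B, true_x)    # noise-free "observed" spectrum
# At the true parameters the criterion vanishes exactly:
F_min = objective(true_x, B, O, lorentzian_derivative)
```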

In this review, we will briefly sketch the framework of function minimization. The above function being nonlinear, any algorithm to minimize it is iterative and requires a set of starting values X^(0) of the adjustable parameters. Thus, the minimization will lead from the set X^(0) to a new set X^(1) of parameters, from X^(1) to X^(2), etc. The process will stop when a convergence condition is fulfilled. This condition may be that:

the value of the function has decreased below a given value;

the decrease of the function between two successive iterations has fallen below a given value;

the norm of the gradient of the function has fallen below a given value;

the distance in R^k between two successive points has decreased below a given value;

or a combination of the above criteria.
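A combination of these stopping conditions can be sketched as below; the thresholds are illustrative choices, not values prescribed by the article.

```python
def converged(F_prev, F_curr, grad_norm, step_norm,
              f_tol=1e-10, g_tol=1e-8, x_tol=1e-10):
    """Combination of the stopping conditions listed above
    (illustrative tolerances)."""
    return (abs(F_prev - F_curr) < f_tol   # decrease between two iterations
            or grad_norm < g_tol           # norm of the gradient
            or step_norm < x_tol)          # distance between successive points
```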

2.3. LOCAL AND GLOBAL ALGORITHMS. In the part of the R^k space where it is defined, an objective function may have a single minimum or several ones. In the latter case, the smallest minimum of the function is called the global minimum, whereas the other minima are called local minima. In most cases, especially in spectroscopy, a sum of squares of differences has many minima. Several of these minima may correspond to a partial matching between the positions of the observed and the calculated lines in an ESR spectrum. If the minimization has led to a local minimum, the objective function must first increase before another minimum can be reached. Therefore, the search for a global minimum is much more difficult than that for a local minimum [13].

Practically, it is advisable to draw on all available knowledge in order to make a good initial choice of the adjustable parameters, this choice being guided by a non-automated preliminary fit of the experimental spectrum. In the determination of the principal values of the A and g


tensors, a condition of convergence applies to the optimization process: this initial choice must result at least in a partial overlap of all the computed lines with the experimental ones. On the other hand, we have observed that the initial choice of the intrinsic linewidths and of the

motional or exchange parameters is much less critical.

2.4. CHOICE OF THE ALGORITHM. In the course of the last fifty years, many algorithms to minimize a function of k variables have been devised. We will only describe two algorithms which have been used for minimizing the sum of the squares of differences in the case of ESR spectroscopy, namely the nonlinear simplex of Nelder and Mead [14] and the algorithm of Levenberg-Marquardt [15].

The algorithms for minimizing a general function F of k real variables belong to three

categories:

those which only use the values of the function F, such as the nonlinear simplex algorithm of Nelder and Mead described below,

those which also use the first partial derivatives ∂F/∂x_1, ..., ∂F/∂x_k, such as the gradient or steepest-descent algorithm of Cauchy,

those which furthermore use the second partial derivatives ∂²F/∂x_i∂x_j, such as the Newton algorithm, based on a Taylor expansion of the function F to the second order.

It is easily understandable that an algorithm of the first category may be less efficient than

an algorithm of the second category, itself less efficient than an algorithm of the third one.

However, it very often occurs that a function is derivable but that the closed-form expressions of its derivatives are so complicated that they are not established. In such a case, the partial derivatives may be approximated by finite differences. In the space R^k, the approximate determination of the gradient requires k + 1 values of the function F. Even though the determination of an approximate gradient by finite differences requires k + 1 times more computation than the mere calculation of the objective function, it has proven advantageous in many optimization problems to resort to an algorithm which makes use of the gradient.
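The finite-difference approximation just described, requiring k + 1 evaluations of F, can be sketched as follows (a forward-difference version; the step size `h` is an illustrative choice):

```python
import numpy as np

def fd_gradient(F, x, h=1e-6):
    """Forward-difference gradient: k + 1 evaluations of F for k parameters."""
    F0 = F(x)                       # 1 evaluation at the current point
    g = np.empty_like(x, dtype=float)
    for j in range(len(x)):         # k further evaluations, one per parameter
        xh = x.copy()
        xh[j] += h
        g[j] = (F(xh) - F0) / h
    return g

# For F(x) = x1^2 + 3 x2^2 the exact gradient at (1, 1) is (2, 6):
g = fd_gradient(lambda x: x[0]**2 + 3*x[1]**2, np.array([1.0, 1.0]))
```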

2.5. THE NONLINEAR SIMPLEX ALGORITHM OF NELDER AND MEAD. The common judgment that the lengthy calculation of partial derivatives would not result in a practical benefit explains the success met by the nonlinear simplex algorithm of Nelder and Mead [14] in many scientific and engineering applications. Chemical physicists, in particular in ESR spectroscopy [7,9], have used it for many years. This algorithm is applicable to a real function of k real variables having no particular property. Thus, it does not make use of the fact that the function to be minimized is a sum of squares.

Given a starting point X^(0) and the unit vectors e_i along the k directions of space, one defines k other points by moving from X^(0) to X^(i) = X^(0) + ξ_i e_i along each unit vector, ξ_i being a scaling factor for the i-th parameter. The k + 1 points define a polyhedron in the R^k space, or "simplex", which is a triangle in the R² space, a tetrahedron in the R³ space, etc. The objective function is calculated at all vertices of the simplex. Let X^(h) be the vertex at which the objective function is maximum. The k other points define a plane or hyperplane in the R^k space. The exploration of the space will continue in a region away from that where the objective function is maximum. For that purpose, the point symmetric to X^(h) with respect to the hyperplane of the k other points, called X^(k+1), is selected. If the value of the objective function at this point is smaller than at X^(h), then the point X^(k+1) will replace X^(h) and a new simplex is defined. Otherwise, another point will be selected on the line defined by these points, such that the objective function is smaller than at X^(h). The volume of the new simplex in the R^k space may be larger (expansion) or smaller (contraction) than that of the original simplex. The selected point will replace X^(h) in the simplex.

If the search of this point along the line between X^(h) and X^(k+1) fails, the point X^(s) at which the objective function has the second largest value will be chosen instead of X^(h), and the same procedure will be applied. After the optimization has progressed well, a contraction can finally occur in all directions. The minimization is stopped when the residual simplex is small enough.

While this algorithm is very robust, i.e. capable of finding a minimum in a large variety of situations, it requires many iterations.
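The reflection/expansion/contraction cycle described above can be sketched as follows. This is a minimal implementation with textbook Nelder-Mead coefficients, not the authors' code; production work would normally use a library routine.

```python
import numpy as np

def nelder_mead(F, x0, scale=0.1, max_iter=500, tol=1e-10):
    """Minimal sketch of the Nelder-Mead simplex algorithm."""
    k = len(x0)
    # k + 1 starting vertices: x0 plus a scaled step along each unit vector
    simplex = [np.asarray(x0, float)]
    for i in range(k):
        v = simplex[0].copy()
        v[i] += scale
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=F)                      # best vertex first, worst last
        best, worst = simplex[0], simplex[-1]
        if F(worst) - F(best) < tol:             # residual simplex small enough
            break
        centroid = np.mean(simplex[:-1], axis=0) # hyperplane of the k best points
        refl = centroid + (centroid - worst)     # point symmetric to the worst
        if F(refl) < F(best):                    # very good: try an expansion
            exp = centroid + 2.0 * (centroid - worst)
            simplex[-1] = exp if F(exp) < F(refl) else refl
        elif F(refl) < F(simplex[-2]):           # acceptable reflection
            simplex[-1] = refl
        else:                                    # contraction toward the centroid
            contr = centroid + 0.5 * (worst - centroid)
            if F(contr) < F(worst):
                simplex[-1] = contr
            else:                                # shrink all vertices toward best
                simplex = [best + 0.5 * (v - best) for v in simplex]
    return min(simplex, key=F)

# Minimum of (x1 - 3)^2 + (x2 + 1)^2 is at (3, -1):
x_min = nelder_mead(lambda x: (x[0] - 3.0)**2 + (x[1] + 1.0)**2, [0.0, 0.0])
```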

2.6. THE LEVENBERG-MARQUARDT ALGORITHM. For this reason, it is advantageous to replace the algorithm described above by the more efficient Levenberg-Marquardt algorithm [15], which takes advantage of two essential features of the objective function:

the objective function is twice continuously derivable;

the objective function is a sum of squares.

For twice-derivable functions whose partial derivatives are to be calculated by finite-difference approximations, several algorithms have been devised which are more efficient than the algorithm of Nelder and Mead. They will not be reviewed here because the algorithm of Levenberg-Marquardt, described below, achieves a good efficiency without involving the many subtle and complex calculations encountered in those algorithms.

When the function to be minimized has the form of equation (1), the values of the first partial derivatives of the function f make it possible to calculate an approximation of the second partial derivatives of the function F. The minimization of F by the Newton method may then be applied, and proves advantageous even if the first partial derivatives of f have to be approximated by finite differences.

Let us first remark that the function to be minimized is dimensionless. An adjustable parameter x_i may have the dimension of a length, a time, a wavenumber, etc. The partial derivatives ∂f/∂x_i thus have the dimension of the inverse of x_i. As a consequence, a scaling has to be applied so that the parameters to be adjusted by the minimization algorithm are dimensionless.

The partial derivative of the function F with respect to x_j is expressed as:

\[ \frac{\partial F}{\partial x_j} = 2 \sum_{i=1}^{N} \left[ f(t_i; x_1, \ldots, x_k) - O_i \right] \frac{\partial f(t_i; x_1, \ldots, x_k)}{\partial x_j} \qquad (2) \]

where t_i is a sampled value of the control variable, here the magnetic field, or, in shortened form:

\[ \frac{\partial F}{\partial x_j} = 2 \sum_{i=1}^{N} \left( f_i - O_i \right) \frac{\partial f_i}{\partial x_j} \qquad (3) \]

Let J stand for the Jacobian matrix, namely the matrix of the partial derivatives of the f function:

\[ J_{ij} = \frac{\partial f(t_i)}{\partial x_j} \qquad (4) \]

Let M stand for the Hessian matrix, namely the matrix of the second derivatives of the function F with respect to x_j and x_m:

\[ M_{jm} = \frac{\partial^2 F}{\partial x_j \partial x_m} \qquad (5) \]

The second derivative of the function F with respect to the variables x_j and x_m has the expression:

\[ \frac{\partial^2 F}{\partial x_j \partial x_m} = 2 \sum_{i=1}^{N} \left[ \frac{\partial f_i}{\partial x_j} \frac{\partial f_i}{\partial x_m} + \left( f_i - O_i \right) \frac{\partial^2 f_i}{\partial x_j \partial x_m} \right] \qquad (6) \]

Marquardt noted that if the function f is not too irregular, its second derivative varies smoothly as a function of t_i. For a set of parameters which is not too bad, the differences f(t_i) − O_i vary in an almost random manner, so the summation over the second term of the above expression is likely to be small. If the parameters are close to the solution, the individual differences themselves will be small. On that basis, this second term is skipped to obtain an approximate value of ∂²F/∂x_j∂x_m. The approximate Hessian matrix has the form M = Jᵀ·J. The displacement of the current point in the parameter space R^k may be obtained by solving the linear system M·ΔX = Jᵀ·V, where V is the vector of differences.

Marquardt had the intuition that it would be efficient to combine the steepest-descent (or gradient) method with that of Newton. The steepest-descent algorithm always converges but may be slow, whereas the Newton method may converge rapidly close to the solution, but may fail because the approximate Hessian matrix is not invertible. To avoid this difficulty, Marquardt replaces the Hessian matrix M by M + λD, where D is an appropriate diagonal matrix and λ a factor. An initial value of λ and a real number r are selected; then, for each iteration, two displacements of the current point X^(n) in the space R^k, corresponding to λ and λ/r, are calculated, and the values of the objective function F are determined at the trial points. If F(λ/r) < F(λ), λ is replaced by λ/r. In this manner, the bias introduced by the addition of the diagonal matrix is kept as small as possible. One checks that between two successive iterations the function F does not increase; otherwise, λ is repeatedly replaced by λr until F decreases.

The Levenberg-Marquardt algorithm succeeds in finding a meaningful optimum in many real problems, provided that the starting values of the adjustable parameters are reasonable

enough.

In the spectral simulations reported below, we have resorted to the Levenberg-Marquardt algorithm. The reasons given above explain why this algorithm should be more efficient than the simplex method of Nelder and Mead. This greater efficiency has been tested in a case relevant to ESR: for the same simulation program of a single-crystal spectrum and the same starting values of the adjustable parameters, optimization proceeds about 30 times faster with the Levenberg-Marquardt algorithm than with the nonlinear simplex algorithm [16].

For the programs written in Fortran, we have used the subroutines BSOLVE [17] and N2FB [18]. For those written in APL, we have used the unpublished function MARQ of Jenkins [19].
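The Levenberg-Marquardt iteration described in this section can be sketched as follows. This is a simplified illustration, not the BSOLVE or N2FB routines: the diagonal matrix D is taken here as the identity (an assumption made for brevity; Marquardt's original choice was the diagonal of JᵀJ), and the test problem is a hypothetical two-parameter exponential rather than an ESR spectrum.

```python
import numpy as np

def levenberg_marquardt(f, jac, O, x0, lam=1e-3, r=10.0, max_iter=100):
    """Sketch of the Levenberg-Marquardt iteration, with D = identity."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        V = O - f(x)                  # vector of differences
        J = jac(x)                    # Jacobian matrix of f
        M = J.T @ J                   # approximate Hessian (up to a factor 2)
        g = J.T @ V
        # Try the two damping factors lam/r and lam, keep the better step:
        best = None
        for trial in (lam / r, lam):
            dx = np.linalg.solve(M + trial * np.eye(len(x)), g)
            F_trial = np.sum((O - f(x + dx)) ** 2)
            if best is None or F_trial < best[0]:
                best = (F_trial, dx, trial)
        F_old = np.sum(V ** 2)
        while best[0] > F_old:        # F must not increase: raise the damping
            lam *= r
            dx = np.linalg.solve(M + lam * np.eye(len(x)), g)
            best = (np.sum((O - f(x + dx)) ** 2), dx, lam)
            if lam > 1e12:
                break
        x = x + best[1]
        lam = best[2]
        if np.linalg.norm(best[1]) < 1e-12:
            break
    return x

# Fit y = a * exp(-b t) to noise-free data generated with a = 2, b = 1.5:
t = np.linspace(0.0, 2.0, 50)
O = 2.0 * np.exp(-1.5 * t)
f = lambda x: x[0] * np.exp(-x[1] * t)
jac = lambda x: np.column_stack([np.exp(-x[1] * t),
                                 -x[0] * t * np.exp(-x[1] * t)])
x_fit = levenberg_marquardt(f, jac, O, [1.0, 1.0])
```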

3. The ESR Spectra of Polyoriented Species in Rigid Media

3.1. THE PRINCIPLES OF SIMULATIONS. The ESR spectrum of a paramagnetic species in macroscopically disordered media (powders, glasses and other amorphous materials) is the sum of spectra corresponding to randomly distributed orientations with respect to a laboratory framework defined by the static field B0 and the microwave field B1.

For each of these orientations, the resonance conditions are expressed by hν = E_i(B) − E_j(B), where ν is the spectrometer frequency, E_i(B) and E_j(B) being two eigenvalues of the Hamiltonian matrix for a value B of the static field (see for instance Refs. [1-3]). The Hamiltonian operator is given by the expression:

\[ \mathcal{H} = \beta_e\, \mathbf{B}_0 \cdot \mathbf{g} \cdot \mathbf{S} + \mathbf{S} \cdot \mathbf{D} \cdot \mathbf{S} + \mathbf{S} \cdot \mathbf{A} \cdot \mathbf{I} - g_n \beta_n\, \mathbf{B}_0 \cdot \mathbf{I} + \mathbf{I} \cdot \mathbf{Q} \cdot \mathbf{I} \qquad (7) \]

where g, D, A and Q are second-rank tensors representing the electron Zeeman interaction, the dipolar coupling between electron spins (for S ≥ 1), the hyperfine coupling and the nuclear quadrupolar coupling (for I ≥ 1), respectively. The other symbols have their usual meaning.

The nuclear Zeeman term is often ignored but may have an important contribution to the
