on the occasion of his 70th birthday

ADAPTIVE CONTROL FOR A CLASS OF STOCHASTIC PASSIVE HOPFIELD NETWORKS

ADRIAN-MIHAIL STOICA and ISAAC YAESH

The paper presents passivity conditions for a class of stochastic Hopfield neural networks with state-dependent noise and with Markovian jumps. Our contributions are mainly based on the stability analysis of the class of stochastic neural networks considered, using infinitesimal generators of appropriate stochastic Lyapunov-type functions. The passivity conditions derived are expressed in terms of the solutions of some specific systems of linear matrix inequalities. The theoretical results are illustrated by a simplified adaptive control problem for a dynamic system with chaotic behaviour.

AMS 2000 Subject Classification: 93E15, 93E35.

Key words: stochastic stability control, Hopfield neural network, recurrent neural network, S-procedure, linear matrix inequality, adaptive control.

1. INTRODUCTION

Hopfield networks are symmetric recurrent neural networks whose state-space trajectories converge to minima of an energy function.

Symmetric Hopfield networks can be used to solve complex practical problems such as implementing associative memories, solving linear programs and optimal guidance problems. Recurrent networks which are non-symmetric versions of Hopfield networks play an important role in understanding human motor tasks involving visual feedback (see [3] and [2] and the references therein). Such networks are subject to the effects of state-multiplicative noise, pure time delay (see [7], [10] and [15]) and even multiple attractors which can be caused by Markov jumps. Even without Markov jumps, a non-symmetric class of Hopfield networks is able to generate chaos [9]. Therefore, Hopfield networks can be used [14] as identifiers of unknown chaotic dynamic systems.

The resulting identifier neural networks have been used in [14] to derive a locally optimal robust controller to remove the chaos in the system.


In this paper, the robust controller of [14] is replaced by a direct adaptive controller. More specifically, we consider the so-called Simplified Adaptive Control (SAC) method [8], which applies a simple proportional controller whose gain is adapted according to the squared tracking error. Since the stability proof of such controllers involves a passivity condition, a passivity result for the Hopfield network is derived. The results presented in this paper are, in fact, developed for a generalized version of non-symmetric Hopfield networks including Markov jumps of the parameters and state-multiplicative noise, thus allowing a wider stochastic class of chaos-generating systems to be considered. They also represent an extension of the problem presented in [17] to the case where the stochastic Hopfield network is passified via non-constant state-feedback gains.

The paper is organized as follows. In Section 2 the problem is formulated while in Section 3 we derive Linear Matrix Inequalities (LMIs) based conditions for passivity analysis. Section 4 deals with an adaptive control application, Section 5 includes a chaos control example and Section 6 the conclusions.

Throughout the paper, $\mathbf{R}^n$ denotes the $n$-dimensional Euclidean space, $\mathbf{R}^{n\times m}$ is the set of all $n\times m$ real matrices, and the notation $P > 0$ (respectively, $P \ge 0$) for $P \in \mathbf{R}^{n\times n}$ means that $P$ is symmetric and positive definite (respectively, positive semi-definite). Throughout the paper $(\Omega, \mathcal{F}, P)$ is a given probability space; the argument $\theta \in \Omega$ will be suppressed. Expectation is denoted by $E\{\cdot\}$ and the conditional expectation of $x$ on the event $\theta(t) = i$ is denoted by $E[x \mid \theta(t) = i]$.

2. PROBLEM FORMULATION

The neural network proposed by Hopfield can be described by a system of ordinary differential equations of the form

(2.1) $\dot v_i(t) = a_i v_i(t) + \sum_{j=1}^{n} b_{ij}\,g_j(v_j(t)) + \bar c_i =: \kappa_i(v), \quad 1 \le i \le n,$

where $v_i$ is the voltage on the input of the $i$th neuron, $a_i < 0$, $1 \le i \le n$, $b_{ij} = b_{ji}$, and the activations $g_i(\cdot)$, $i = 1, \ldots, n$, are $C^1$-bounded and strictly increasing functions.

This network is usually analyzed by defining the network energy functional

(2.2) $E(v) = -\sum_{i=1}^{n} a_i \int_0^{v_i} u\,\frac{dg_i(u)}{du}\,du - \frac{1}{2}\sum_{i,j=1}^{n} b_{ij}\,g_i(v_i)\,g_j(v_j) - \sum_{i=1}^{n} \bar c_i\,g_i(v_i).$


It can be seen that $\frac{dE}{dt} = -\sum_{i=1}^{n} \frac{dg_i(v_i)}{dv_i}\,\kappa_i(v)^2 \le 0$, where the zero rate of the energy is obtained only at the equilibrium points, also referred to as attractors, where

(2.3) $\kappa_i(v_0) = 0, \quad 1 \le i \le n.$

The network is then described in matrix form as

(2.4) $\dot v(t) = A v(t) + B g(v) + \bar C,$

where

$A := \mathrm{diag}(a_1, \ldots, a_n)$, $B := [b_{ij}]_{i,j=1,\ldots,n}$, $\bar C := \begin{bmatrix} \bar c_1 & \bar c_2 & \ldots & \bar c_n \end{bmatrix}^T$, $v := \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix}^T$ and $g(v) := \begin{bmatrix} g_1(v_1) & g_2(v_2) & \ldots & g_n(v_n) \end{bmatrix}^T$.
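As a quick illustration of the dynamics in (2.4), the following minimal sketch integrates the deterministic network with a forward-Euler scheme; the matrices $A$, $B$, $\bar C$, the $\tanh$ activations, the step size and the initial state are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: forward-Euler integration of the deterministic Hopfield
# model (2.4), v_dot = A v + B g(v) + Cbar, with g = tanh elementwise.
import numpy as np

def simulate_hopfield(A, B, Cbar, v0, dt=1e-3, n_steps=10_000):
    """Return the state trajectory of (2.4) under forward-Euler integration."""
    v = np.array(v0, dtype=float)
    trajectory = [v.copy()]
    for _ in range(n_steps):
        v = v + dt * (A @ v + B @ np.tanh(v) + Cbar)
        trajectory.append(v.copy())
    return np.array(trajectory)

# Illustrative two-neuron example with a_i < 0 and symmetric weights b_ij = b_ji.
A = np.diag([-1.0, -1.5])
B = np.array([[0.0, 0.8],
              [0.8, 0.0]])
Cbar = np.array([0.1, -0.2])
traj = simulate_hopfield(A, B, Cbar, v0=[0.5, -0.3])
```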

The stochastic version of this network driven by white noise has been considered in [7], where the stochastic stability of (2.1) was analyzed and it was shown that the network is almost surely stable when the condition $\frac{dE}{dt} \le 0$ is replaced by $\mathcal{L}E \le 0$, where $\mathcal{L}$ is the infinitesimal generator associated with the Itô-type stochastic differential equation (2.4). This condition has been shown in [7] to be satisfied only in cases where the driving noise in (2.1) is not persistent. This non-persistent white noise can be interpreted as a white-noise type uncertainty in $A$ and $B$, namely, state-multiplicative noise. In [16] and [15], Hopfield networks with Markov jump parameters have been considered in order to also represent nonzero-mean uncertainties in these matrices. Encouraged by the insight gained in [3] and [2] regarding the role of state-multiplicative noise and time delay (see also [13]) in visuo-motor control loops, we generalize the results of [16] to include this effect. The Lur'e-Postnikov systems approach ([12], [1]) is invoked to analyze stability and disturbance attenuation (in the $H_\infty$ norm sense), and the results are given in terms of Linear Matrix Inequalities (LMIs).

To analyze input-output properties we first define the error of the Hopfield network output with respect to its equilibrium points by

(2.5) $x(t) = v(t) - v_0,$

and assume that the error vector $x(t)$ satisfies

(2.6) $dx(t) = \big[A_0(\theta(t))x(t) + B_0(\theta(t))f(y(t)) + D(\theta(t))u(t)\big]dt + A_1(\theta(t))x(t)\,d\eta(t) + B_1(\theta(t))f(y(t))\,d\xi(t),$

where

(2.7) $y(t) = C(\theta(t))x(t)$


and the measured system output is

(2.8) $z(t) = L(\theta(t))x(t) + N(\theta(t))u(t).$

Note that (2.6) was obtained from (2.4) by replacing $A\,dt$ by $A_0\,dt + A_1\,d\eta$ and $B\,dt$ by $B_0\,dt + B_1\,d\xi$, and by setting $f(x) = g(x + v_0) - g(v_0)$. The control input $u(t)$ is introduced to provide a stochastic version of [14], allowing the considered Hopfield network to also serve as a chaotic system identifier. In [14] the control signal is of the form $\phi(r)u$ rather than just $u$, where $\phi(r)$ is a diagonal matrix having $f_i(r_i)$ on its diagonal, with $r = H(\theta(t))x$. For simplicity, here $\phi = I$, which is also motivated by the example presented in Section 4.

Note also that the matrices $A_0(\theta(t))$, $A_1(\theta(t))$, $B_0(\theta(t))$, $B_1(\theta(t))$, $D(\theta(t))$, $C(\theta(t))$ and $L(\theta(t))$ are piecewise constant matrices of appropriate dimensions whose entries depend on the mode $\theta(t) \in \{1, \ldots, r\}$, where $r$ is a positive integer denoting the number of possible modes between which the Hopfield network parameters can jump. Namely, $A_0(\theta(t))$ attains the values $A_{01}, A_{02}, \ldots, A_{0r}$, etc. It is assumed that $\theta(t)$, $t \ge 0$, is a right-continuous homogeneous Markov chain on $\mathcal{D} = \{1, \ldots, r\}$ with probability transition matrix

(2.9) $P(t) = e^{Qt}, \qquad Q = [q_{ij}], \quad q_{ij} \ge 0 \ \ (i \ne j), \quad \sum_{j=1}^{r} q_{ij} = 0, \quad i = 1, 2, \ldots, r.$
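To make the role of the generator $Q$ in (2.9) concrete, a minimal sketch for sampling a path of the mode process $\theta(t)$ on a uniform time grid is given below; the two-mode generator, the step size and the function name are illustrative assumptions.

```python
# Hedged sketch: sampling the jump process theta(t) of (2.9) on a grid of
# step dt, using the one-step transition matrix P(dt) = expm(Q dt).
import numpy as np
from scipy.linalg import expm

def sample_markov_modes(Q, dt, n_steps, theta0=0, rng=None):
    """Return an array of n_steps + 1 modes in {0, ..., r-1}."""
    rng = np.random.default_rng() if rng is None else rng
    P_step = expm(Q * dt)                  # rows of P(dt) sum to one
    r = Q.shape[0]
    theta = np.empty(n_steps + 1, dtype=int)
    theta[0] = theta0
    for k in range(n_steps):
        theta[k + 1] = rng.choice(r, p=P_step[theta[k]])
    return theta

# Illustrative two-mode generator: mean holding times of 2 s and 1 s.
Q = np.array([[-0.5, 0.5],
              [1.0, -1.0]])
modes = sample_markov_modes(Q, dt=0.01, n_steps=1000)
```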

Given the initial condition $\theta(0) = i$, at each time instant $t$ the mode may maintain its current state or jump to another mode $j \ne i$. The transitions between the $r$ possible states $i \in \mathcal{D}$ may be the result of random fluctuations in the characteristics of the actual network components (i.e., resistors, capacitors), or they can be used to artificially model deliberate jumps resulting from parameter changes in an optimization problem the network is used to solve. In visuo-motor tasks, one may conjecture that proportional and derivative feedbacks are applied on a time-sharing basis, where the transition probabilities define the statistics of switching between the two modes. Although there is no evidence for this conjecture, the model analyzed in the present paper can be used to check its stability and $L_2$ gain.

In the forthcoming analysis, it is assumed that the components $f_i$, $i = 1, \ldots, n$, of $f(\zeta)$ satisfy the sector conditions

(2.10) $0 \le \zeta_i f_i(\zeta_i) \le \zeta_i^2 \sigma_i,$

which are equivalent to

(2.11) $-F_i(\zeta_i, f_i) := f_i(\zeta_i)\big(f_i(\zeta_i) - \sigma_i \zeta_i\big) \le 0.$

It is further assumed that

(2.12) $\frac{df_i}{d\zeta_i} \le \sigma_i, \quad i = 1, \ldots, n,$


which is not restrictive since it is fulfilled by the usual nonlinearities, such as saturation and sigmoid functions, used in neural networks.

As mentioned above, the passivity (in the stochastic sense) condition for system (2.6)–(2.8), which is expressed as

(2.13) $J = E\left\{\int_0^{\infty} z^T(t)\,u(t)\,dt\right\} > 0, \qquad x(0) = 0,$

will be further analyzed.

3. PASSIVITY ANALYSIS

Introduce the Lyapunov-type function

(3.14) $V(x(t), \theta(t)) = x^T(t)P(\theta(t))x(t) + 2\sum_{k=1}^{n} \lambda_k \int_0^{C^{(k)}(\theta(t))x} f_k(s)\,ds$

depending on the nonlinearities $f_k(y_k) = f_k(C_i^{(k)}x)$ via the constants $\lambda_k$, $k = 1, \ldots, n$, where $C_i^{(k)}$ is the $k$th row of $C_i$, $i = 1, \ldots, r$. As mentioned in [1], $V$ of (3.14) defines a parameter-dependent Lyapunov function. Indeed, in the special case $f_i(x_i) = \sigma_i x_i$ one gets $V(x, \sigma_1, \sigma_2, \ldots, \sigma_n) = x^T\big(P + S^{1/2}\Lambda S^{1/2}\big)x$, where the notation

(3.15) $S := \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n)$ and $\Lambda := \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$

has been introduced.

Using the Itô-type formula (see [4], [5] and [6]) for $V(x, \theta)$ we get

$E\{V(x, \theta(t)) \mid \theta(0)\} - E\{V(0, \theta(0)) \mid \theta(0)\} = E\left\{\int_0^t \mathcal{L}V(x, \theta(s))\,ds\right\}$

with the infinitesimal operator $\mathcal{L}$ given by

(3.16) $\mathcal{L}V(x, \theta) := \big(A_0(\theta)x + B_0(\theta)f(y) + D(\theta)u\big)^T\frac{\partial V}{\partial x} + x^T A_1^T(\theta)\bar P A_1(\theta)x + f^T B_1^T(\theta)\bar P B_1(\theta)f + \sum_{j=1}^{r} q_{ij}\,x^T P_j x,$

where

$\bar P(\theta, \lambda_1, \lambda_2, \ldots, \lambda_n) := P(\theta) + \mathrm{diag}\left(\lambda_1 \frac{\partial f_1}{\partial x_1}, \ldots, \lambda_n \frac{\partial f_n}{\partial x_n}\right)$ and $f := f(y(t))$.

Here, the dependence on the argument $t$ was omitted for simplicity. Then condition (2.13) is fulfilled if

(3.17) $\mathcal{L}V < 2 z^T u,$


which becomes

(3.18) $\big(x^T A_{0i}^T + f^T B_{0i}^T + u^T D_i^T\big)\big(P_i x + C_i^T\Lambda f\big) + \big(x^T P_i + f^T\Lambda C_i\big)\big(A_{0i}x + B_{0i}f + D_i u\big) + x^T A_{1i}^T\bar P_i A_{1i}x + f^T B_{1i}^T\bar P_i B_{1i}f + \sum_{j=1}^{r} q_{ij}\,x^T P_j x - u^T L_i x - x^T L_i^T u - u^T\big(N_i + N_i^T\big)u < 0, \quad i = 1, \ldots, r,$

where $\bar P_i$ denotes $\bar P(\theta = i, \lambda_1, \lambda_2, \ldots, \lambda_n)$.

In order to explicitly express (3.18), assumption (2.12) will be used.

Indeed, using inequalities (2.12) we see that conditions (3.18) are fulfilled if

(3.19) $-F_i^0(x, f) := x^T\Big(A_{0i}^T P_i + P_i A_{0i} + A_{1i}^T\big(P_i + C_i^T S^{1/2}\Lambda S^{1/2}C_i\big)A_{1i} + \sum_{j=1}^{r} q_{ij}P_j\Big)x + u^T D_i^T C_i^T\Lambda f + f^T\Lambda C_i D_i u + f^T\Big(B_{0i}^T C_i^T\Lambda + \Lambda C_i B_{0i} + B_{1i}^T\big(P_i + C_i^T S^{1/2}\Lambda S^{1/2}C_i\big)B_{1i}\Big)f + f^T\big(B_{0i}^T P_i + \Lambda C_i A_{0i}\big)x + x^T\big(P_i B_{0i} + A_{0i}^T C_i^T\Lambda\big)f - u^T\big(L_i - D_i^T P_i\big)x - x^T\big(L_i^T - P_i D_i\big)u - u^T\big(N_i + N_i^T\big)u < 0.$

Using the S-procedure (e.g., [1]), we deduce that (3.17) subject to (2.11) is satisfied if there exist $\tau_i \ge 0$, $i = 1, 2, \ldots, n$, such that

(3.20) $F_i^0(x, f) - \sum_{k=1}^{n} \tau_k F_k(x, f) \ge 0.$

Denoting

(3.21) $T := \mathrm{diag}(\tau_1, \tau_2, \ldots, \tau_n)$

and noticing that

$\sum_{k=1}^{n} \tau_k F_k(x, f) = \sum_{k=1}^{n}\big(\tau_k f_k^2 - \tau_k\sigma_k f_k y_k\big) = f^T T f - \frac{1}{2}f^T T C_i S x - \frac{1}{2}x^T S C_i^T T f,$

one gets from (3.20) that

$x^T Z_{i11}x + f^T Z_{i12}^T x + x^T Z_{i12}f + f^T Z_{i22}f - u^T\big(L_i - D_i^T P_i\big)x + u^T D_i^T C_i^T\Lambda f + f^T\Lambda C_i D_i u - x^T\big(L_i^T - P_i D_i\big)u - u^T\big(N_i + N_i^T\big)u < 0, \quad i = 1, \ldots, r,$


where

(3.22) $Z_{i11} := A_{0i}^T P_i + P_i A_{0i} + A_{1i}^T\hat P_i A_{1i} + \sum_{j=1}^{r} q_{ij}P_j, \qquad Z_{i12} := P_i B_{0i} + A_{0i}^T C_i^T\Lambda + \frac{1}{2}S C_i^T T, \qquad Z_{i22} := B_{0i}^T C_i^T\Lambda + \Lambda C_i B_{0i} + B_{1i}^T\hat P_i B_{1i} - T,$

with

(3.23) $\hat P_i := P_i + C_i^T S^{1/2}\Lambda S^{1/2}C_i.$

These conditions are fulfilled if

(3.24) $\begin{bmatrix} Z_{i11} & Z_{i12} & P_i D_i - L_i^T \\ Z_{i12}^T & Z_{i22} & \Lambda C_i D_i \\ D_i^T P_i - L_i & D_i^T C_i^T\Lambda & -(N_i^T + N_i) \end{bmatrix} < 0, \quad i = 1, \ldots, r,$

with the unknown variables $P_i$, $\Lambda$ and $T$.

The above developments can be stated as

Theorem 1. System (2.6)–(2.8) is stochastically stable and strictly passive if there exist symmetric matrices $P_i > 0$, $i = 1, \ldots, r$, and diagonal matrices $\Lambda > 0$ and $T > 0$ satisfying the system (3.24) of LMIs.
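Since (3.24) is affine in the unknowns $P_i$, $\Lambda$ and $T$, Theorem 1 can be checked numerically with any semidefinite-programming front end. The sketch below uses CVXPY (the paper itself relies on YALMIP); the function name, the data layout and the small margins used to emulate strict inequalities are assumptions made for illustration.

```python
# Hedged sketch: feasibility of the LMI system (3.24). A0, A1, B0, B1, C, D,
# L, N are lists of per-mode matrices, Q is the generator from (2.9), sigma
# holds the sector bounds of (2.10); eps emulates the strict inequalities.
import numpy as np
import cvxpy as cp

def check_passivity_lmi(A0, A1, B0, B1, C, D, L, N, Q, sigma, eps=1e-6):
    r, n = len(A0), A0[0].shape[0]
    m = D[0].shape[1]
    S = np.diag(sigma)
    S_half = np.diag(np.sqrt(sigma))
    P = [cp.Variable((n, n), symmetric=True) for _ in range(r)]
    lam = cp.Variable(n)                      # diagonal of Lambda
    tau = cp.Variable(n)                      # diagonal of T
    Lam, T = cp.diag(lam), cp.diag(tau)
    cons = [lam >= eps, tau >= eps]
    cons += [P[i] >> eps * np.eye(n) for i in range(r)]
    for i in range(r):
        Phat = P[i] + C[i].T @ S_half @ Lam @ S_half @ C[i]          # (3.23)
        Z11 = (A0[i].T @ P[i] + P[i] @ A0[i] + A1[i].T @ Phat @ A1[i]
               + sum(Q[i, j] * P[j] for j in range(r)))
        Z12 = P[i] @ B0[i] + A0[i].T @ C[i].T @ Lam + 0.5 * S @ C[i].T @ T
        Z22 = (B0[i].T @ C[i].T @ Lam + Lam @ C[i] @ B0[i]
               + B1[i].T @ Phat @ B1[i] - T)
        M = cp.bmat([
            [Z11, Z12, P[i] @ D[i] - L[i].T],
            [Z12.T, Z22, Lam @ C[i] @ D[i]],
            [D[i].T @ P[i] - L[i], D[i].T @ C[i].T @ Lam, -(N[i] + N[i].T)],
        ])
        # symmetrize explicitly before imposing the strict LMI of (3.24)
        cons.append(M + M.T << -2 * eps * np.eye(2 * n + m))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status == cp.OPTIMAL
```

A feasible point $(P_i, \Lambda, T)$ returned by such a solver would certify, for the given data, the stochastic stability and strict passivity asserted by Theorem 1.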

4. SIMPLIFIED ADAPTIVE CONTROL

In this section it is shown how a stochastic system of the form (2.6)–(2.7) can be regulated using a direct adaptive controller of the type

(4.25) $u = -Kz,$

where

(4.26) $\dot K = z z^T.$
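A one-step discretization may clarify how the adaptation law (4.25)–(4.26) is implemented in practice; the forward-Euler step and the names below are illustrative assumptions.

```python
# Hedged sketch: one forward-Euler step of the SAC law (4.25)-(4.26).
import numpy as np

def sac_step(K, z, dt):
    """Return the control u = -K z and the updated gain K + z z^T dt."""
    u = -K @ z
    K_next = K + np.outer(z, z) * dt
    return u, K_next
```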

This type of adaptive control is well known in the deterministic case (see, e.g., [8]) and in the present section we analyze the particularities arising in the stochastic framework (see also [18]). Throughout this section it is assumed that $C_i = 0$, which corresponds to the situation when the nonlinearity $f$ is missing, or that $C_i D_i = 0$, $i = 1, \ldots, r$. Under this assumption, system (2.6)–(2.7) with $N_i = \epsilon I$ for $\epsilon$ tending to zero satisfies the strict passivity condition (3.24) if there exist symmetric matrices $P_i > 0$, $i = 1, 2, \ldots, r$, satisfying the conditions

"

Zi11 Zi12 Zi12T Zi22

#

<0 (4.27)


and

(4.28) $P_i D_i = L_i^T, \quad i = 1, 2, \ldots, r,$

(see, e.g., [18] and [8]). The stochastic closed-loop system obtained from (2.6) and (2.7) with $u = -Kz$ can be written as

(4.29) $dx = \big[\big(A_0(\theta) - D(\theta)K_e L(\theta)\big)x + B_0(\theta)f(y) + D(\theta)\bar u\big]dt + A_1(\theta)x\,d\eta + B_1(\theta)f(y)\,d\xi, \qquad z = L(\theta)x,$

where $\bar u := -(K - K_e)z$. The above equations hold for any $K_e$ of appropriate dimensions, but it is assumed that $K_e$ is such that system (4.29) is stochastically passive (in this case some authors call the open-loop system almost passive, AP). Note that the existence of $K_e$ is needed only for the stability analysis; its value is not used in the implementation. In the present context, the stochastic stability of the direct adaptive controller (4.25), (4.26) (usually referred to as simplified adaptive control, SAC) will be guaranteed by the stochastic version of the AP property. Two cases will be emphasized in what follows.

Case a): $K_e$ is constant. As in [8], to prove the closed-loop stability we choose a generalization of the Lyapunov function of (3.14), namely

(4.30) $V(x(t), K(t), \theta(t)) = V(x(t), \theta(t)) + \mathrm{tr}\big[(K(t) - K_e)^T(K(t) - K_e)\big],$

where $\mathrm{tr}$ denotes the trace and $V$ is given by (3.14) with $P_i$, $i = 1, \ldots, r$, satisfying conditions of the form (4.27) and (4.28) written for the passive system (4.29) relating $\bar u$ and $z$. Then, denoting $\tilde A_{0i} := A_{0i} - D_i K_e L_i$, the infinitesimal generator of $V$ of the form (4.30) along the trajectories of (4.29) has the expression

(4.31) $\mathcal{L}V(x(t), K(t), \theta(t) = i) = 2\big[\tilde A_{0i}x + B_{0i}f + D_i\bar u\big]^T\big(P_i x + C_i^T\Lambda f\big) + x^T A_{1i}^T P_i A_{1i}x + \sum_{j=1}^{r} q_{ij}\,x^T P_j x + f^T B_{1i}^T P_i B_{1i}f + 2\,\mathrm{tr}\big[\dot K(K - K_e)^T\big]$

$= x^T\Big(\tilde A_{0i}^T P_i + P_i\tilde A_{0i} + A_{1i}^T P_i A_{1i} + \sum_{j=1}^{r} q_{ij}P_j\Big)x + f^T\big(B_{0i}^T P_i + \Lambda C_i\tilde A_{0i}\big)x + x^T\big(P_i B_{0i} + \tilde A_{0i}^T C_i^T\Lambda\big)f + \bar u^T D_i^T P_i x + x^T P_i D_i\bar u + \bar u^T D_i^T C_i^T\Lambda f + f^T\Lambda C_i D_i\bar u + f^T\big(B_{0i}^T C_i^T\Lambda + \Lambda C_i B_{0i} + B_{1i}^T P_i B_{1i}\big)f + 2\,\mathrm{tr}\big[\dot K(K - K_e)^T\big]$

$= \begin{bmatrix} x^T & f^T \end{bmatrix}\begin{bmatrix} \tilde Z_{i11} & \tilde Z_{i12} \\ \tilde Z_{i12}^T & \tilde Z_{i22} \end{bmatrix}\begin{bmatrix} x \\ f \end{bmatrix} + \bar u^T D_i^T P_i x + x^T P_i D_i\bar u + \bar u^T D_i^T C_i^T\Lambda f + f^T\Lambda C_i D_i\bar u + 2\,\mathrm{tr}\big[\dot K(K - K_e)^T\big],$


where

(4.32) $\tilde Z_{i11} := \tilde A_{0i}^T P_i + P_i\tilde A_{0i} + A_{1i}^T\hat P_i A_{1i} + \sum_{j=1}^{r} q_{ij}P_j, \qquad \tilde Z_{i12} := P_i B_{0i} + \tilde A_{0i}^T C_i^T\Lambda, \qquad \tilde Z_{i22} := B_{0i}^T C_i^T\Lambda + \Lambda C_i B_{0i} + B_{1i}^T\hat P_i B_{1i},$

with $\hat P_i$ defined by (3.23). Taking into account that $P_i D_i = L_i^T$, $\bar u = -(K - K_e)z$ and the assumption $C_i D_i = 0$, it follows from (4.31) that

(4.33) $\mathcal{L}V(x(t), K(t), \theta(t) = i) = \begin{bmatrix} x^T & f^T \end{bmatrix}\begin{bmatrix} \tilde Z_{i11} & \tilde Z_{i12} \\ \tilde Z_{i12}^T & \tilde Z_{i22} \end{bmatrix}\begin{bmatrix} x \\ f \end{bmatrix} + 2\,\mathrm{tr}\big[\big(\dot K - zz^T\big)(K - K_e)^T\big].$

For $\dot K = zz^T$ we have

(4.34) $\mathcal{L}V(x(t), K(t), \theta(t) = i) = \begin{bmatrix} x^T & f^T \end{bmatrix}\begin{bmatrix} \tilde Z_{i11} & \tilde Z_{i12} \\ \tilde Z_{i12}^T & \tilde Z_{i22} \end{bmatrix}\begin{bmatrix} x \\ f \end{bmatrix}.$

Since system (4.29) satisfies the passivity condition (3.24), we get

(4.35) $\begin{bmatrix} Z_{i11} & Z_{i12} \\ Z_{i12}^T & Z_{i22} \end{bmatrix} < 0.$

By repeating the S-procedure arguments that follow (3.20), we deduce that if (4.35) is fulfilled then

$\begin{bmatrix} x^T & f^T \end{bmatrix}\begin{bmatrix} \tilde Z_{i11} & \tilde Z_{i12} \\ \tilde Z_{i12}^T & \tilde Z_{i22} \end{bmatrix}\begin{bmatrix} x \\ f \end{bmatrix} < 0$

for $x$ and $f$ satisfying (2.11). Thus, $\mathcal{L}V(x(t), K(t), \theta(t) = i) < 0$, which proves the stochastic stability of the resulting closed-loop system.

Case b): $K_e$ is time-varying. In this case one can use a control law of the form (4.25) with the modified adaptive gain

(4.36) $\dot K = zz^T + \dot K_e,$

but instead of the constant positive definite matrices in the expression (3.14) of the Lyapunov function, one must use uniformly positive definite matrices $P_i(t)$, $t \ge 0$, $i = 1, \ldots, r$, satisfying the system

(4.37) $\begin{bmatrix} \dot P_i + \tilde Z_{i11} & \tilde Z_{i12} \\ \tilde Z_{i12}^T & \tilde Z_{i22} \end{bmatrix} < 0, \quad i = 1, \ldots, r,$


where $\tilde Z_{i11}$, $\tilde Z_{i12}$ and $\tilde Z_{i22}$ are given by (4.32). The existence of such positive definite matrices $P_i(t)$, $i = 1, \ldots, r$, together with the diagonal matrix $\Lambda > 0$, is guaranteed by the fact that $K_e(t)$ was assumed to be such that the closed-loop system (4.29) is passive. The implementation of the adaptive control law with $K$ given by (4.36) requires in this case the knowledge of the function $K_e(t)$, $t \ge 0$.

We next apply this result to a chaos control problem.

5. EXAMPLE – CHAOS CONTROL

Consider a slightly modified version of the third-order chaos generator model of [9], described by (2.6)–(2.7), where

(5.1) $A_0 = \begin{bmatrix} -\epsilon & 1 & 0 \\ 0 & -\epsilon & 1 \\ a_1 & a_2 & a_3 \end{bmatrix}, \quad B_0 = D = L^T = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \sigma \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad C^T = \begin{bmatrix} \beta \\ 0 \\ 0 \end{bmatrix},$

with $a_1 = -2$, $a_2 = -1.48$, $a_3 = -1$, $\sigma = 0.1$, $\epsilon = 0.01$ and $\beta = 10$. The nonlinearity is $f(y) = \alpha\tanh(y)$, where $\alpha = 1$.

To establish stability, we verify (4.27)–(4.28) with $K_e = 10^5$ using YALMIP [11], where $A_0$ is replaced by $A_0 - D K_e L$. Therefore, by the results of Section 4, the closed-loop system with controller (4.25), (4.26) is expected to be stochastically stable.

Next, we simulate the above system for 500 s with an integration step of 0.001 s, with $u = 0$ for $t \le 250$ s and with the SAC controller $u = -Kz$, where $\dot K = z^2$, for the rest of the time. The results are given in Figures 1–3: the phase-plane (i.e., $x_1$ versus $x_2$) trajectories are depicted in Figure 1, the components $x_i$, $i = 1, 2, 3$, of the state vector and the control input are depicted in Figure 2, while the adaptive gain $K$ is depicted in Figure 3. It is seen from these figures that the chaotic behavior characterizing the system in open loop is replaced by a stable trajectory for $t \ge 250$ s, where the SAC is applied.
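For concreteness, a minimal Euler–Maruyama sketch of this experiment is given below. It uses the nominal parameters of (5.1) in a single mode (no Markov jumps), neglects the small $\epsilon I$ feedthrough in $z$, and assumes an initial condition and noise seed; all of these choices are illustrative rather than taken from the paper.

```python
# Hedged sketch: Euler-Maruyama simulation of example (5.1) with the SAC
# law u = -K z, dK/dt = z^2, switched on at t = 250 s (single mode assumed).
import numpy as np

eps, sigma, beta, alpha = 0.01, 0.1, 10.0, 1.0
a1, a2, a3 = -2.0, -1.48, -1.0

A0 = np.array([[-eps, 1.0, 0.0],
               [0.0, -eps, 1.0],
               [a1,   a2,  a3]])
A1 = np.diag([0.0, 0.0, sigma])        # multiplicative noise on x3 only
B0 = np.array([0.0, 0.0, 1.0])         # B0 = D = L^T
C  = np.array([beta, 0.0, 0.0])

dt, T_final, T_switch = 1e-3, 500.0, 250.0
rng = np.random.default_rng(0)

x = np.array([0.1, 0.0, 0.0])          # assumed initial condition
K = 0.0                                # scalar adaptive gain (z is scalar here)
states = []
for k in range(int(T_final / dt)):
    t = k * dt
    z = x[2]                           # z = L x, small eps*I feedthrough ignored
    u = -K * z if t >= T_switch else 0.0
    f = alpha * np.tanh(C @ x)         # f(y) with y = C x
    dW = rng.normal(0.0, np.sqrt(dt))  # increment of the Wiener process eta
    x = x + (A0 @ x + B0 * f + B0 * u) * dt + (A1 @ x) * dW
    if t >= T_switch:
        K += z ** 2 * dt               # adaptation law (4.26) in scalar form
    states.append(x.copy())
```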

6. CONCLUSIONS

A class of stochastic Hopfield networks subject to state-multiplicative noise, where the network weights jump according to a Markov chain, has been considered. Stochastic passivity conditions for such systems have been derived in terms of Linear Matrix Inequalities. The results have been illustrated via simplified adaptive control of a dynamic system which exhibits chaotic behavior when it is not controlled. The control efficiency in stabilizing the chaotic process has been demonstrated by simulations. The results of this paper should encourage further study of attempts to control chaotic systems with simplified adaptive controllers.

Fig. 1. Simulation results.

Fig. 2. Simulation results.


Fig. 3. Simulation results.

REFERENCES

[1] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, PA, 1994.

[2] J.L. Cabrera, R. Bormann, C. Eurich, T. Ohira and J. Milton, State-dependent noise and human balance control. Fluctuation and Noise Lett. 4 (2001), L107–L118.

[3] J.L. Cabrera and J.G. Milton, Human stick balancing: tuning Lévy flights to improve balance control. Chaos 14 (2004), 691–698.

[4] V. Drăgan and T. Morozan, Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise. Preprint 17/1999, "Simion Stoilow" Institute of Mathematics of the Romanian Academy.

[5] V. Drăgan and T. Morozan, The linear quadratic optimization problems for a class of linear stochastic systems with multiplicative white noise and Markovian jumping. IEEE Trans. Automat. Control 49 (2004), 665–675.

[6] X. Feng, K.A. Loparo, Y. Ji and H.J. Chizeck, Stochastic stability properties of jump linear systems. IEEE Trans. Automat. Control 37 (1992), 38–53.

[7] S. Hu, X. Liao and X. Mao, Stochastic Hopfield neural networks. J. Phys. A 36 (2003), 2235–2249.

[8] H. Kaufman, I. Barkana and K. Sobel, Direct Adaptive Control Algorithms – Theory and Applications, 2nd Edition. Springer, New York, 1998.

[9] H.S. Kwok, G.Q. Zhong and W.K.S. Tang, Use of Neurons in Chaos Generation. ICONS 2003, Portugal, 2003.

[10] X. Liao, G. Chen and E.N. Sanchez, LMI-Based Approach to Asymptotic Stability Analysis of Delayed Neural Networks. IEEE Trans. Circuits Systems I: Fundamental Theory Appl. 49 (2002), 1033–1039.

[11] J. Löfberg, YALMIP: A toolbox for Modeling and Optimization in MATLAB. Proc. CACSD Conf., Taipei, Taiwan, 2004.

[12] A.I. Lur'e and V.N. Postnikov, On the theory of stability of control systems. Appl. Math. Mechanics 8 (1944), 3, 245–251. (Russian)

[13] F. Mazenc and S.-I. Niculescu, Lyapunov stability analysis for nonlinear delay systems. Systems and Control Lett. 42 (2001), 245–251.

[14] A.S. Poznyak, W. Yu and E.N. Sanchez, Identification and control of unknown chaotic systems via dynamic neural networks. IEEE Trans. Circuits Systems I: Fundamental Theory Appl. 46 (1999), 1491–1495.

[15] A.-M. Stoica and I. Yaesh, Delayed Hopfield networks with multiplicative noise and jump Markov parameters. Proc. MTNS 2006, Kyoto, Japan.

[16] A.-M. Stoica and I. Yaesh, Hopfield networks with jump Markov parameters. WSEAS Trans. Systems 4 (2005), 4, 301–307.

[17] A.-M. Stoica and I. Yaesh, Passivity of a class of Hopfield networks. Application to chaos control. ICINCO, Portugal, 2008.

[18] I. Yaesh and U. Shaked, Stochastic passivity and its application in adaptive control. Proc. CDC 2005, Kyoto, Japan, 2006.

Received 3 May 2009

"Politehnica" University of Bucharest
Faculty of Aerospace Engineering
Str. Polizu, No. 1, 011061, Bucharest, Romania
amstoica@rdslink.ro

and

Control Department, IMI Advanced Systems Div.
P.O.B. 1044/77, Ramat-Hasharon, 47100, Israel
iyaesh@imi-israel.com
