Short-term memory in a sparse clock neural network


HAL Id: jpa-00246756

https://hal.archives-ouvertes.fr/jpa-00246756

Submitted on 1 Jan 1993


Short-term memory in a sparse clock neural network

S. Semenov, A. Plakhov

To cite this version:

S. Semenov, A. Plakhov. Short-term memory in a sparse clock neural network. Journal de Physique I, EDP Sciences, 1993, 3 (3), pp.767-776. 10.1051/jp1:1993161. jpa-00246756.


J. Phys. I France 3 (1993) 767-776    MARCH 1993, PAGE 767

Classification Physics Abstracts: 05.90, 64.60, 87.10

Short-term memory in a sparse clock neural network

S. A. Semenov and A. Yu. Plakhov

Institute of Physics and Technology, Prechistenka Str. 13/7, Moscow 119034, Russia

(Received 7 September 1992, accepted 21 October 1992)

Abstract. We propose a model of short-term (working) memory in a clock neural network. The retrieval dynamics of a strongly diluted version of the model is solved. The related phase diagrams, reflecting the fixed-point retrieval behaviour of the system, are also obtained and discussed.

1. Introduction.

A direct and intuitively evident way to avoid overloading in Hopfield-type neural networks is to introduce non-additivity into the learning rules; this saturates the neuronal couplings and is, in fact, responsible for the palimpsestic behaviour of associative memory models [1-3]. In this regard several learning schemes for binary networks were proposed [2, 4, 5], analysed [3-6] and discussed from a biological viewpoint [7]. Newly developed models have even closer relations with psychological phenomena [8].

It is therefore of interest to investigate the manifestation of short-term memory effects in other important attractor-like neural network models, such as, for instance, clock or Potts models. In recent years a clock model which exploits a Hebb-like learning prescription has been treated [9, 10]. As was found, the system, when overloaded with learnt patterns, falls into a state of total memory confusion. It thus seems valuable to invent a tractable learning procedure that prevents memory overloading in a clock neural network.

In this paper we present and analyse a model of a clock neural network exhibiting properties of short-term (working) memory. In our model interneuronal couplings are given by complex numbers of fixed modulus, and the learning of any new pattern consists of a Hebb-like modification followed by a renormalization of each coupling. This effectively leads to memorization of new patterns with simultaneous forgetting of the most ancient ones, so that the total length of the stored pattern list remains constant. We conclude that the main features inherent to working-memory models designed for binary neural networks, such as the existence of critical and optimal embedding strengths, are preserved in our model.

The paper is organized as follows. In the next section we describe the general model, which is solved in the third section in the limit of extreme dilution. In the fourth section a stability analysis is carried out, providing phase diagrams in the parameter space of the model. Then, in section 5, we analyse in detail the deterministic dynamics limit. The paper finishes with concluding remarks.


2. Definition of the model.

Let us consider a network of N planar spins (neurons). The state of each neuron is given by a complex variable s_k, |s_k| = 1 (k = 1, ..., N). The state of the system is then described by the vector s = (s_1, ..., s_N). Spin dynamics is governed by a local field of the form

    h_k(s) = \sum_{l=1}^{N} T_{kl} s_l ,

with complex couplings T_{kl}.

We imply that the stochastic evolution of the system is given by either parallel or sequential heat-bath dynamics with time step Δt, assuming that the transition of neuron k to the state z at the next time moment t + Δt is given by the probability density

    P(s_k(t + Δt) = z) = p(z | v_k(t)) ,

where

    p(z | v) = \frac{\exp(\beta \, \mathrm{Re}(z^* v)) \, \delta(|z| - 1)}{\int \exp(\beta \, \mathrm{Re}(w^* v)) \, \delta(|w| - 1) \, dw} ,                (1)

    v_k(t) = \sum_l T_{kl} s_l(t) .                (2)

Here the asterisk marks complex conjugation and the couplings T_{kl} = K_{kl} J_{kl} combine the effects of learning (J_{kl}) and dilution (K_{kl}). The factors K_{kl} are chosen to be independent random variables with distribution

    \rho(K_{kl}) = \frac{C}{N} \, \delta(K_{kl} - 1/\sqrt{C}) + \left(1 - \frac{C}{N}\right) \delta(K_{kl}) ,

where C is the mean number of couplings of each neuron. Since K_{kl} and K_{lk} do not correlate with each other, the dilution is asymmetric.

The parameter β in (1) is the inverse temperature determining the level of stochastic noise. For parallel (synchronous) updating of the whole system a time increment Δt = 1 is implied. Sequential dynamics consists in choosing a neuron of the network at random and updating it in a time interval Δt = 1/N according to the stochastic dynamics (1)-(2).

In the deterministic limit, β = ∞, the dynamics becomes simply

    s_k(t + Δt) = v_k(t) / |v_k(t)| .
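As an illustration, here is a minimal numerical sketch of one updating step (Python with NumPy; the array names and the use of the von Mises sampler are our own choices, not taken from the paper). For a fixed local field, the density (1) on the unit circle is a von Mises law in the phase of z, with mean direction arg v_k and concentration β|v_k|; in the deterministic limit the spin simply aligns with its field.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_field(T, s):
    # v_k(t) = sum_l T_kl s_l(t), Eq. (2); T is the (possibly diluted) coupling matrix K*J
    return T @ s

def heat_bath_update(T, s, beta):
    """One parallel updating step of all N phasor neurons.
    At finite beta the new phase of neuron k is drawn from the density (1),
    which is a von Mises law with mean arg(v_k) and concentration beta*|v_k|.
    Assumes all local fields are nonzero."""
    v = local_field(T, s)
    if np.isinf(beta):
        return v / np.abs(v)                       # deterministic limit: s_k = v_k / |v_k|
    phases = rng.vonmises(np.angle(v), beta * np.abs(v))
    return np.exp(1j * phases)
```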

During the learning process, random patterns, which are nominated configurations s^ν, ν = ..., -1, 0, 1, ..., p, chosen independently with the uniform distribution

    \rho(s) = \prod_{k=1}^{N} g(s_k) ,   g(s) = (2\pi)^{-1} \delta(|s| - 1) ,

are sequentially presented to the network. If one tries to store the pattern sequence using the Hebb-like prescription

    J_{kl} = \sum_{\nu} s_k^{\nu} (s_l^{\nu})^* ,

the memory deteriorates at some critical loading and the system loses the ability to retrieve any pattern taught [9, 10].

To overcome the state of memory overloading, it seems reasonable to restrict the range of possible values of the complex couplings in a quite natural way, by assuming the J_{kl} to take values on the unit circle, i.e. |J_{kl}| = 1. Below we propose an embedding scheme enabling us to memorize the most recent patterns in the sequence learnt. This is achieved by a step-by-step «embedding and renormalization» of the couplings. That is, at the time of acquisition of the ν-th pattern, the couplings are changed according to

    J_{kl}^{\nu+1} = \frac{J_{kl}^{\nu} + \varepsilon \, s_k^{\nu} (s_l^{\nu})^*}{\left| J_{kl}^{\nu} + \varepsilon \, s_k^{\nu} (s_l^{\nu})^* \right|} ,                (3)


where ε is the embedding strength. In contrast to the Hebb-like additive learning proposed by Noest [9], in which both the amplitudes and the phases of the couplings J_{kl} are modified, the rule (3) implies that the information is stored in the phases φ_{kl} = arg J_{kl} only. Note that the learning procedure (3) and the network dynamics (1)-(2) are invariant with respect to a global rotation of the system, s_k → c s_k, |c| = 1.
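A short sketch of the «embedding and renormalization» rule (3), again in NumPy (names such as `embed` and the particular parameter values are illustrative only): each presentation adds a Hebb-like rank-one term with strength ε and then projects every coupling back onto the unit circle, so that only the phases retain information.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pattern(N):
    # i.i.d. phases uniform on the unit circle, cf. the distribution g(s)
    return np.exp(1j * rng.uniform(-np.pi, np.pi, N))

def embed(J, xi, eps):
    """One step of rule (3): Hebb-like increment, then renormalization to |J_kl| = 1."""
    J_new = J + eps * np.outer(xi, xi.conj())
    return J_new / np.abs(J_new)

# present a sequence of patterns; the most recent ones dominate the phases of J
N, eps, n_patterns = 200, 0.2, 50
J = np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))   # arbitrary unit-modulus initial couplings
for xi in (random_pattern(N) for _ in range(n_patterns)):
    J = embed(J, xi, eps)
```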

Following Noest [9], let us define an «overlap» between network configurations s and s' by

    m(s, s') = \frac{1}{N} \sum_k s_k (s_k')^* .

For random uncorrelated configurations we have m ~ N^{-1/2}, while configurations which coincide with each other after some global rotation give |m| = 1.
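In code the overlap is a one-liner (a sketch; the modulus |m| is the rotation-invariant quantity used below):

```python
import numpy as np

def overlap(s, s_prime):
    # m(s, s') = (1/N) sum_k s_k (s'_k)*; for a rotated copy s' = c*s with |c| = 1, |m| = 1
    return np.mean(s * np.conj(s_prime))
```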

3. Extremely diluted network.

If the mean number of unbroken bonds per neuron, C, becomes of order log N as N → ∞, one may neglect correlations in the «ancestor tree» and an analytical treatment of the dynamics can be used (Derrida et al. [11]). In the following we solve the dynamics of such an extremely diluted network (1)-(3), with the embedding strength ε in the learning rule (3) going to zero.

We will be interested in the storage and retrieval of the (p + 1)-th pattern from the end of the pattern sequence ..., s^{-1}, s^0, s^1, ..., s^p, i.e. the pattern s^0.

First of all, we fix the indices k and l and concentrate on the contribution of the patterns to the formation of the coupling J_{kl}. It is easy to show that, in view of the independence and the uniform distribution of (s_k^ν (s_l^ν)*, ν = ..., -1, 0, 1, ..., p) on the unit circle, every given increment Δφ_{kl}^ν ≡ φ_{kl}^{ν+1} - φ_{kl}^ν does not correlate with the previous ones. (We remind that φ_{kl}^ν = arg J_{kl}^ν.) Indeed, the quantity (J_{kl}^ν)* s_k^ν (s_l^ν)* is uniformly distributed on the unit circle and does not correlate with any of the quantities s_k^μ (s_l^μ)*, μ < ν, and hence with J_{kl}^μ. From this, according to (3), it follows that

    J_{kl}^{\nu+1} (J_{kl}^{\nu})^* = \frac{1 + \varepsilon \, (J_{kl}^{\nu})^* s_k^{\nu} (s_l^{\nu})^*}{\left| 1 + \varepsilon \, (J_{kl}^{\nu})^* s_k^{\nu} (s_l^{\nu})^* \right|}

does not correlate with all J_{kl}^μ, μ ≤ ν. In terms of the arguments this means the above-stated property. From this fact one immediately gets the mutual independence of all the increments (Δφ_{kl}^μ, μ = ..., -1, 0, 1, ..., p).

As ε → 0, by taking the linear part of expression (3) we get

    J_{kl}^{\nu+1} (J_{kl}^{\nu})^* = 1 + i \varepsilon \, \mathrm{Im}\!\left( (J_{kl}^{\nu})^* s_k^{\nu} (s_l^{\nu})^* \right) + O(\varepsilon^2) ,                (4)

or, in terms of the arguments,

    \Delta\varphi_{kl}^{\nu} = \varepsilon \, \mathrm{Im}\!\left( (J_{kl}^{\nu})^* s_k^{\nu} (s_l^{\nu})^* \right) + O(\varepsilon^2) .

The variance of Δφ_{kl}^ν is therefore estimated as ε²/2 + O(ε³). The resulting change φ_{kl}^{p+1} - φ_{kl}^1, obtained after storing the p most recent patterns s^1, ..., s^p, is thus given by the sum of p independent random quantities φ_{kl}^{μ+1} - φ_{kl}^μ, μ = 1, ..., p, with total variance pε²/2 + o(pε²). Consequently, for large p the quantity ζ_{kl} = (pε²/2)^{-1/2} (φ_{kl}^{p+1} - φ_{kl}^1) is approximately Gaussian with zero mean and unit variance.

Introducing the notation γ ≡ p/C for the short-term storage capacity and ε̃ ≡ ε C^{1/2} for the reduced embedding strength (further on we will call ε̃ simply the embedding strength and denote it again by ε, to be more concise), we now impose the constraint that both γ and ε are kept fixed in the limit p, C → ∞. This implies for the bare embedding strength the scaling ~ C^{-1/2}. In these terms one gets

    \varphi_{kl}^{p+1} - \varphi_{kl}^{1} = (\gamma \varepsilon^2 / 2)^{1/2} \, \zeta_{kl} .
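The Gaussian character of the accumulated phase change is easy to check numerically. The sketch below (illustrative, with a single coupling tracked in isolation) embeds p random patterns through rule (3) and compares the empirical variance of φ^{p+1} - φ^1 with the predicted value pε²/2.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_drift_variance(eps, p, trials=2000):
    """Embed p random patterns into one coupling via rule (3) and
    return the empirical variance of the total phase change."""
    drift = np.empty(trials)
    for t in range(trials):
        J = np.exp(1j * rng.uniform(-np.pi, np.pi))
        total = 0.0
        for _ in range(p):
            sk, sl = np.exp(1j * rng.uniform(-np.pi, np.pi, 2))
            J_new = J + eps * sk * np.conj(sl)
            J_new /= abs(J_new)
            total += np.angle(J_new / J)   # phase increment of this embedding step
            J = J_new
        drift[t] = total
    return drift.var()

eps, p = 0.05, 200
print(phase_drift_variance(eps, p), p * eps**2 / 2)   # the two values should be close
```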

Returning to complex notation and using formula (4), we can write down

    J_{kl} \equiv J_{kl}^{p+1} = J_{kl}^{1} \exp\!\left( i (\gamma\varepsilon^2/2)^{1/2} \zeta_{kl} \right)
          = \left[ J_{kl}^{0} + \frac{\varepsilon}{2\sqrt{C}} \left( s_k^{0} (s_l^{0})^* - (J_{kl}^{0})^{2} (s_k^{0})^* s_l^{0} \right) + O(\varepsilon^{2}/C) \right] \exp\!\left( i (\gamma\varepsilon^2/2)^{1/2} \zeta_{kl} \right) .

We suppose next that the initial network state s(0) has a macroscopic overlap only with the pattern s^0. Studying the retrieval process of the pattern s^0, we represent the local field at the moment t in the form

    v_k(t) = \left[ B_k(t) + \xi_k(t) \right] s_k^{0} ,                (5)

where

    B_k(t) = \frac{\varepsilon}{2\sqrt{C}} \sum_l K_{kl} (s_l^{0})^* s_l(t) \exp\!\left( i (\gamma\varepsilon^2/2)^{1/2} \zeta_{kl} \right)

and

    \xi_k(t) = (s_k^{0})^* \sum_l K_{kl} \left[ J_{kl}^{0} - \frac{\varepsilon}{2\sqrt{C}} (J_{kl}^{0})^{2} (s_k^{0})^* s_l^{0} + O(\varepsilon^{2}/C) \right] \exp\!\left( i (\gamma\varepsilon^2/2)^{1/2} \zeta_{kl} \right) s_l(t) ,                (6)

where the sums \sum_l run over all l connected to neuron k, i.e. with K_{kl} ≠ 0. The term B_k(t) in (5) can be interpreted as a signal from the pattern s^0. It is represented by a self-averaging quantity with mean equal to B m(t); here m(t) is the time-dependent overlap between the configurations s(t) and s^0,

    m(t) = \frac{1}{N} \sum_{k=1}^{N} s_k(t) (s_k^{0})^* ,

and

    B = \frac{\varepsilon}{2} \left\langle \exp\!\left( i (\gamma\varepsilon^2/2)^{1/2} \zeta \right) \right\rangle = \frac{\varepsilon}{2} \exp(-\gamma\varepsilon^{2}/4) ,

where ⟨...⟩ denotes the average over the normalized Gaussian variable ζ. The term ξ_k(t) in (5) plays the role of noise which, as can be shown using standard arguments [11], is represented by a sum of uncorrelated items and approaches, as C ~ log N → ∞, a complex spherically distributed Gaussian quantity with zero mean and unit variance, i.e. ⟨|ξ_k(t)|²⟩ = 1.

Let us consider the case of parallel dynamics with Δt = 1. Using the expression for the conditional transition probability (1), the thermal average of s_k(t + 1) s_k^{0*} under fixed ξ_k(t) equals ∮_c s_k^{0*} z p(z | s_k^0 (B m(t) + ξ_k(t))) dz, where c denotes the unit circle. Due to the rotational invariance of the function p, p(z | v) = p(s̄z | s̄v) for any |s̄| = 1, so after the substitution w = s_k^{0*} z we can rewrite the expression for the thermal average as

    \langle s_k(t+1) \rangle \, s_k^{0*} = \oint_c w \, p(w \mid B m(t) + \xi_k(t)) \, dw .

Taking the average over ξ_k(t), we immediately get

    \langle s_k(t+1) \, s_k^{0*} \rangle = F(m(t))                (7)


with

    F(m) = \left\langle \oint_c w \, p(w \mid B m + \zeta) \, dw \right\rangle_{\zeta} ,

where ⟨...⟩_ζ means averaging over the normalized complex spherically distributed Gaussian variable ζ.

Without any loss of generality we may assume the initial overlap m(0) to be real and positive (this can always be achieved by some rotation of the whole system). Obviously, the function F(m) takes real positive values for positive m, and hence the overlap dynamics occurs on the positive half-axis. As a result, taking into account that the overlap is a self-averaging quantity, it follows straightforwardly from (7) that the evolution in terms of the overlap obeys

    m(t + 1) = F(m(t)) .                (8)

In the case of sequential dynamics a similar consideration gives an evolution of the form

    \frac{dm(t)}{dt} = F(m(t)) - m(t) .                (9)

Evidently, the iterative equation (8) and the differential equation (9) have the same fixed-point solution m* = F(m*), which defines the retrieval overlap. In principle, this solution can be obtained by numerical integration, which appears to be time consuming for nonzero temperatures. We therefore confine ourselves to the zero-temperature limit only, which is relegated to section 5. For arbitrary temperatures, the stability of the trivial solution m = 0 will be examined (Sect. 4).
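For a quick numerical look at the map (8) one can evaluate F(m) by Monte Carlo over the Gaussian noise ζ. The sketch below relies on the standard identity that, for the circular density (1), the thermal average of a spin in a field v equals (v/|v|) I_1(β|v|)/I_0(β|v|); this closed form is our own shortcut and is not spelled out in the paper, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import i0, i1   # modified Bessel functions I_0, I_1

rng = np.random.default_rng(3)

def F(m, eps, gamma, beta, n_noise=200_000):
    """Monte Carlo estimate of F(m) for the parallel map (8)."""
    B = 0.5 * eps * np.exp(-gamma * eps**2 / 4)
    # complex spherical Gaussian noise with unit variance, <|zeta|^2> = 1
    zeta = (rng.standard_normal(n_noise) + 1j * rng.standard_normal(n_noise)) / np.sqrt(2)
    v = B * m + zeta
    r = np.abs(v)
    return np.mean((v / r) * i1(beta * r) / i0(beta * r)).real

# iterate m(t+1) = F(m(t)) towards the retrieval fixed point m*
m, eps, gamma, beta = 0.5, 3.7, 0.05, 10.0
for _ in range(30):
    m = F(m, eps, gamma, beta)
print(m)   # m* > 0 here; it drops to 0 once gamma exceeds its critical value
```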

4. Stability analysis and phase diagrams.

Since F(m) is a strictly concave function (as can be checked through a routine calculation), the condition for the existence of a unique nontrivial solution m* of the evolution dynamics (8) or (9) is F'(m = 0) > 1. Thus, the critical relation

    F'(m = 0) = B \left\langle \frac{\partial \bar{z}}{\partial x} \right\rangle_{\zeta} = 1                (10)

(here \bar{z}(v) = \int z \, p(z \mid v) \, dz denotes the thermal average of a single spin in a local field v, the derivative is evaluated at v = ζ, and x ≡ Re ζ) defines a critical surface in the parameter space (ε, γ, β) at which the second-order transition to memory deterioration takes place.

To obtain this relation in an explicit form, we first introduce the notation ζ = x + iy, z = exp(iφ), in which the expression for ∂z̄/∂x in (10) takes the form

    \frac{\partial \bar{z}}{\partial x}
      = \frac{ \int_{-\pi}^{\pi}\! d\phi \int_{-\pi}^{\pi}\! d\psi \; e^{i\phi} \exp[\beta(x\cos\phi + y\sin\phi)] \, \exp[\beta(x\cos\psi + y\sin\psi)] \, \beta(\cos\phi - \cos\psi) }
             { \left( \int_{-\pi}^{\pi} \exp[\beta(x\cos\psi + y\sin\psi)] \, d\psi \right)^{2} } .                (11)

Then, using polar coordinates x + iy = r exp(iθ) and taking into account that

    \langle \cdots \rangle_{\zeta} = (2\pi)^{-1} \int_{0}^{\infty} dr^{2}\, \exp(-r^{2}) \int_{-\pi}^{\pi} d\theta \, (\cdots) ,

after substitution of (11) into (10) one obtains

    \frac{\beta\varepsilon}{4\pi} \, e^{-\gamma\varepsilon^{2}/4} \int_{0}^{\infty} dr^{2}\, e^{-r^{2}} \int_{-\pi}^{\pi} d\theta \int_{-\pi}^{\pi} d\phi \; R(r, \theta, \phi) \cos\phi = 1 ,

where

    R(r, \theta, \phi) = \exp[\beta r \cos(\phi - \theta)] \int_{-\pi}^{\pi} \exp[\beta r \cos(\psi - \theta)] \, (\cos\phi - \cos\psi) \, d\psi \;
                         \times \left( \int_{-\pi}^{\pi} \exp[\beta r \cos(\psi - \theta)] \, d\psi \right)^{-2} .

Replacing φ - θ by χ and performing the integration over φ and χ, we finally get

    \frac{\beta\varepsilon}{4} \, e^{-\gamma\varepsilon^{2}/4} \int_{0}^{\infty} dr^{2}\, e^{-r^{2}} \left( 1 - \frac{I_{1}^{2}(\beta r)}{I_{0}^{2}(\beta r)} \right) = 1 ,                (12)

where I_n(x) = \pi^{-1} \int_{0}^{\pi} \exp(x \cos\phi) \cos(n\phi) \, d\phi is the modified Bessel function of n-th order.
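Equation (12) can be inverted for γ at fixed ε and β, which is how curves like those in figure 1 can be reproduced. A sketch with SciPy (function names are ours; β = ∞ is not handled here, since that case reduces to the closed form given in section 5):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive   # exponentially scaled I_n, safe for large beta*r

def G(beta):
    """G(beta) = int_0^inf du e^{-u} [1 - (I_1(beta*sqrt(u))/I_0(beta*sqrt(u)))^2], with u = r^2."""
    f = lambda u: np.exp(-u) * (1.0 - (ive(1, beta * np.sqrt(u)) / ive(0, beta * np.sqrt(u)))**2)
    return quad(f, 0.0, np.inf)[0]

def gamma_c(eps, beta):
    """Critical storage capacity from Eq. (12):
    (beta*eps/4) exp(-gamma*eps^2/4) G(beta) = 1."""
    return max(0.0, (4.0 / eps**2) * np.log(beta * eps * G(beta) / 4.0))

for eps in (2.5, 3.7, 6.0):          # a few points on the beta = 10 critical line of figure 1
    print(eps, gamma_c(eps, 10.0))
```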

The solutions of equation (12) can be obtained numerically, providing the critical surface in the parameter space. In figures 1-3 we present phase diagrams corresponding to sections of this surface by planes on which one of the parameters is kept fixed.

The critical lines γ_c(ε; β = const) are plotted in figure 1 for several values of the noise parameter β. The function γ_c(ε; β = const) reaches its maximum at a point ε_max(β), which shifts towards larger values of ε as the temperature increases, and becomes zero at some point ε_c(β) (the critical embedding strength at the given temperature). ε_c(β) increases with temperature from ε_c = ε_c(β = ∞) = 4/√π, behaving as β^{-1} as β → 0 (as can be straightforwardly deduced from (12)). In the limit ε → ∞, at any temperature, the maximal storage capacity vanishes as γ_c(ε; β = const) ~ ε^{-2} ln ε. In fact, the storage capacity decreases considerably as the temperature increases (see Fig. 1), indicating a noticeable sensitivity of the memory functioning to dynamical noise.


Fig. 1. Critical lines in the (ε^{-1}, γ)-plane at fixed noise parameter β (from top to bottom β = ∞, 10, 4, 2, 1, 0.5). Nonzero retrieval regions are below the corresponding curves.


A family of critical curves in the (ε^{-1}, β^{-1})-plane is plotted for several fixed γ in figure 2. For the choice γ = 0, the critical temperature goes to infinity as ε → ∞. This implies that, at arbitrarily high temperatures, a number of patterns p = o(C) can be stored for sufficiently high values of ε. Otherwise, if γ > 0 is assumed, there exists a critical temperature above which the pattern list of length γC cannot be retrieved for any ε. As γ increases, the area of nonzero retrieval shrinks and vanishes at γ = γ_0 = 0.145.


Fig. 2. Critical lines in the (ε^{-1}, β^{-1})-plane plotted for several values of the storage capacity γ (from top to bottom γ = 0, 0.01, 0.05, 0.1). Nonzero retrieval regions are below the corresponding curves.

Critical lines below which nonzero retrieval appears are drawn for certain values of ε in the (γ, β^{-1})-plane (Fig. 3). When the embedding strength exceeds ε_c, a finite area of nonzero retrieval arises near the origin. Figure 3 displays the characteristic change of the shape of the retrieval regions with an increase of ε. As ε → ∞, the slope of the critical curve grows infinitely.

5. Zero-temperature limit.

In the most important case of the absence of dynamical noise, β = ∞, the distribution density p becomes a delta function,

    p(z \mid v) = \delta\!\left( z - \frac{v}{|v|} \right) ,

and F(m) takes the form

    F(m) = \left\langle \frac{B m + \zeta}{|B m + \zeta|} \right\rangle_{\zeta} .                (13)



Fig. 3. Critical lines in the (γ, β^{-1})-plane pictured for a set of ε values (from top to bottom ε = 6.0, ε_0 = 3.7, 2.5). Nonzero retrieval regions are below the corresponding curves.

Using polar coordinates (r, φ), B m + ζ = r exp(iφ), we can rewrite (13) as

    F(m) = (2\pi)^{-1} \int_{0}^{\infty} dr^{2} \int_{-\pi}^{\pi} d\phi \; \cos\phi \, \exp\!\left( -r^{2} + 2 B m r \cos\phi - B^{2} m^{2} \right) ,

and after integrating over r we get the expression for F(m):

    F(m) = (2\pi)^{-1} e^{-B^{2} m^{2}} \int_{-\pi}^{\pi} d\phi \left[ 1 + \pi^{1/2} Q \, e^{Q^{2}} \left( 1 + W(Q) \right) \right] \cos\phi ,                (14)

where Q = B m \cos\phi and W(x) = 2\pi^{-1/2} \int_{0}^{x} \exp(-t^{2}) \, dt.
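At zero temperature the retrieval overlap can be found by iterating (8) with F given by (14). A sketch (with W identified with the error function, and the Gaussian prefactor folded into the integrand for numerical safety; both choices are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def F_zero_T(m, eps, gamma):
    """F(m) from Eq. (14) at beta = infinity."""
    B = 0.5 * eps * np.exp(-gamma * eps**2 / 4)
    a = B * m
    def integrand(phi):
        Q = a * np.cos(phi)
        # exp(-a^2) from (14) is combined with exp(Q^2) to avoid large intermediate values
        return np.cos(phi) * (np.exp(-a**2) + np.sqrt(np.pi) * Q * np.exp(Q**2 - a**2) * (1.0 + erf(Q)))
    return quad(integrand, -np.pi, np.pi)[0] / (2.0 * np.pi)

def retrieval_overlap(eps, gamma, m0=0.9, n_iter=200):
    m = m0
    for _ in range(n_iter):
        m = F_zero_T(m, eps, gamma)   # direct iteration of the map (8)
    return m

print(retrieval_overlap(3.721, 0.0))   # expected to be close to the quoted m* = 0.889
```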

The critical relation (12) is greatly simplified at zero temperature, giving a critical line of the form

    \gamma_c(\varepsilon) = 2 \varepsilon^{-2} \ln(\pi \varepsilon^{2} / 16) .

The system is thus able to store the p = γ_c(ε) C > 0 last patterns if the embedding strength exceeds the threshold value ε_c = 4/√π = 2.257. The function γ_c(ε) reaches its maximum γ_0 = γ_c(ε_0) = π/(8e) = 0.145 (the optimal short-term storage capacity) at the optimal embedding strength ε_0 = 4 (e/π)^{1/2} = 3.721, and γ_c vanishes (p_c = o(C)) as ε → ∞ (see Fig. 1). Evidently, the qualitative picture of memory performance bears a strong resemblance to the diluted versions of the Hopfield-Parisi model and similar ones [5].
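The quoted numbers follow directly from the critical line; a few lines suffice to check them (a sketch):

```python
import numpy as np

gamma_c = lambda eps: (2.0 / eps**2) * np.log(np.pi * eps**2 / 16.0)   # zero-temperature critical line

eps_c = 4.0 / np.sqrt(np.pi)           # gamma_c(eps_c) = 0            -> 2.2568
eps_0 = 4.0 * np.sqrt(np.e / np.pi)    # maximum of gamma_c(eps)       -> 3.7213
print(eps_c, eps_0, gamma_c(eps_0), np.pi / (8.0 * np.e))   # last two coincide: 0.1445
```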

It is interesting to note that the ratio between the doubled optimal short-term storage capacity and the critical storage capacity of the sparse phasor model by Noest [9] (the coupling J_{kl} defined by (3) has only one degree of freedom, φ_{kl}, instead of two in the phasor model) is exactly e^{-1}, coinciding with the analogous ratio for the strongly diluted version of the binary models [5].


Fixed-point solutions m* = F(m*) of the retrieval dynamics at zero temperature have been obtained numerically for varied choices of the storage capacity γ and the embedding strength ε. The retrieval overlap m* as a function of γ is displayed in figure 4. The system exhibits a continuous transition to the zero-retrieval phase when the short-term storage capacity reaches its critical value γ_c(ε) = γ_c(ε; β = ∞).

The optimal overlap value is m*_0 = m*(γ = 0; ε_0) = 0.889. If ε grows from ε_c to ε_0, the maximal storage capacity γ_c(ε; β = ∞) increases (see also Fig. 1). Otherwise, for high intensities of learning, ε > ε_0, it decreases but, simultaneously, the retrieval quality improves greatly at low storage capacities, so that m*(γ, ε) → 1 when ε → ∞, γ = O(ε^{-2}). Thus, an increase of the embedding strength improves the recalling precision of the newest patterns at the expense of the maximal storage capacity.


Fig. 4. Fixed-point solution m* of the retrieval dynamics (8), (14) versus storage capacity γ for certain values of the embedding strength ε at β = ∞ (from top to bottom ε = 6.0, ε_0 = 3.7, 2.5).

The retrieval overlap versus the inverse embedding strength is drawn for selected values of the storage capacity γ in figure 5. For γ > 0, the retrieval overlap becomes zero at sufficiently large ε, because the progressive increase in embedding strength produces a more intensive forgetting of the most ancient patterns, so that the set of p = γC patterns can no longer be retrieved as a whole.

6. Concluding remarks.

We have presented a model of working memory in a clock neural network which has been solved in the limit of extreme dilution. Our analysis shows that the system possesses a nonzero short-term storage capacity when the embedding strength exceeds some threshold value, and demonstrates a continuous transition to memory loss at a critical surface in the parameter space.



Fig. 5. Retrieval overlap m* as a function of the inverse embedding strength ε^{-1} for several values of the storage capacity γ at β = ∞ (from top to bottom γ = 0, 0.01, 0.05, 0.1).

It turns out that the qualitative picture of memory performance is essentially the same as was found for binary nets [5].

It is worth mentioning that the proposed learning algorithm seems to have the simplicity necessary for a technological implementation based on coherent-optics techniques [12].

Finally, it should be noted that neural network models using n-vector spins as neurons can be developed and treated in close analogy with the present consideration.

References

[1] HOPFIELD J. J., Proc. Natl. Acad. Sci. USA 79 (1982) 2554.
[2] PARISI G., J. Phys. A 19 (1986) L617.
[3] VAN HEMMEN J. L., KELLER G. and KUHN R., Europhys. Lett. 5 (1988) 663.
[4] NADAL J. P., TOULOUSE G., MEZARD M., CHANGEUX J. P. and DEHAENE S., Europhys. Lett. 1 (1986) 535.
[5] DERRIDA B. and NADAL J. P., J. Stat. Phys. 49 (1987) 993.
[6] MEZARD M., NADAL J. P. and TOULOUSE G., J. Phys. France 47 (1986) 1457.
[7] NADAL J. P., TOULOUSE G., MEZARD M., CHANGEUX J. P. and DEHAENE S., in Computer Simulation in Brain Science (Cambridge: Cambridge University Press, 1988) p. 221.
[8] WONG K. Y. M., KAHN P. E. and SHERRINGTON D., J. Phys. A 24 (1991) 1119.
[9] NOEST A. J., Europhys. Lett. 6 (1988) 469.
[10] COOK J., J. Phys. A 22 (1989) 2057.
[11] DERRIDA B., GARDNER E. and ZIPPELIUS A., Europhys. Lett. 4 (1987) 167.
[12] ANDERSON D. Z., in Proceedings of the Conference on Neural Network Models for Optical Computing, 13-14 January 1988, Los Angeles, California, I. Athale, A. Ravindra Eds., p. 417, and references therein.
