NRC Publications Archive / Archives des publications du CNRC

Time Series Models Discovery with Similarity-Based Neuro-Fuzzy Networks and Genetic Algorithms: A Parallel Implementation.
Valdés, Julio; Mateescu, G.

This publication could be one of several versions: author's original, accepted manuscript or the publisher's version.

Access and use of this website and the material on it are subject to the Terms and Conditions set forth at
https://nrc-publications.canada.ca/eng/copyright
https://publications-cnrc.canada.ca/fra/droits
READ THESE TERMS AND CONDITIONS CAREFULLY BEFORE USING THIS WEBSITE.

Questions? Contact the NRC Publications Archive team at PublicationsArchive-ArchivesPublications@nrc-cnrc.gc.ca. If you wish to email the authors directly, please see the first page of the publication for their contact information.

NRC Publications Record / Notice d'Archives des publications de CNRC:
https://nrc-publications.canada.ca/eng/view/object/?id=7ec04322-d92d-42cb-8447-e0171e7443c7
https://publications-cnrc.canada.ca/fra/voir/objet/?id=7ec04322-d92d-42cb-8447-e0171e7443c7
National Research Council Canada
Institute for Information Technology

Time series model mining with similarity-based neuro-fuzzy networks and genetic algorithms: a parallel implementation.*

Valdés, J., and Mateescu, G.

October 2002

* Published in: Proceedings of RSCTC'02, Special Session on Distributed and Collaborative Data Mining, Pennsylvania, October 2002. NRC 44933.

Copyright 2002 by National Research Council of Canada

Permission is granted to quote short excerpts and to reproduce figures and tables from this report, provided that the source of such material is fully acknowledged.
Time Series Model Mining with Similarity-Based Neuro-Fuzzy Networks and Genetic Algorithms: A Parallel Implementation
Julio J. Valdés (1) and Gabriel Mateescu (2)

(1) National Research Council of Canada, Institute for Information Technology, 1200 Montreal Road, Ottawa ON K1A 0R6, Canada. julio.valdes@nrc.ca
(2) National Research Council of Canada, Information Management Services Branch, 100 Sussex Drive, Ottawa ON K1A 0R6, Canada. gabriel.mateescu@nrc.ca
Abstract. This paper presents a parallel implementation of a hybrid data mining technique for multivariate heterogeneous time varying processes based on a combination of neuro-fuzzy techniques and genetic algorithms. The purpose is to discover patterns of dependency in general multivariate time-varying systems, and to construct a suitable representation for the function expressing those dependencies. The patterns of dependency are represented by multivariate, non-linear, autoregressive models. Given a set of time series, the models relate future values of one target series with past values of all such series, including itself. The model space is explored with a genetic algorithm, whereas the functional approximation is constructed with a similarity based neuro-fuzzy heterogeneous network. This approach allows rapid prototyping of interesting interdependencies, especially in poorly known complex multivariate processes. This method contains a high degree of parallelism at different levels of granularity, which can be exploited when designing distributed implementations, such as workcrew computation in a master-slave paradigm. In the present paper, a first implementation at the highest granularity level is presented. The implementation was tested for performance and portability in different homogeneous and heterogeneous Beowulf clusters with satisfactory results. An application example with a known time series problem is presented.
1 Introduction
Multivariate time-varying processes are common in a wide variety of important domains like medicine, economics, industry, communications, environmental sciences, etc. Developments in sensor and communication technology enable the simultaneous monitoring and recording of large sets of variables quickly, therefore generating large sets of data. Processes of this kind are usually described by sets of variables, sometimes of heterogeneous nature. Some are numeric, others are non-numeric, for example, describing discrete state transitions. In real world situations, it is practically impossible to record all variables at all time frames, which leads to incomplete information. In practice, the degree of accuracy associated with the observed variables is irregular, resulting in data sets with different kinds and degrees of imprecision. All of these problems severely limit the applicability of most classical methods. Many techniques have been developed for time series prediction from a variety of conceptual approaches ([3], [6]), but the problem of finding models of internal dependencies has received much less attention. However, in real world multivariate processes, the patterns of internal dependencies are usually unknown and their discovery is crucial in order to understand and predict them. In the present approach, the space of possible models of a given kind is explored with genetic algorithms and their quality evaluated by constructing a similarity-based neuro-fuzzy network representing a functional approximation for a prediction operator. This approach to model mining is compute-intensive, but it is well suited for supercomputers and distributed computing systems. In the parallel implementation of this soft-computing approach to model discovery, several hierarchical levels can be identified, all involving intrinsically parallel operations. Therefore, a variety of implementations exploiting different degrees of granularity in the evolutionary algorithm and in the neuro-fuzzy network is possible. Here, following a parsimonious principle, the highest level is chosen for a first parallel implementation: that of population evaluation within a genetic algorithm.
2 Problem Formulation
The pattern of mutual dependencies is an essential element of this methodology. The purpose is to explore multivariate time series data for plausible dependency models expressing the relationship between future values of a previously selected series (the target) with past values of itself and other time series. Some of the variables composing the process may be numeric (ratio or interval scales), and some qualitative (ordinal or nominal scales). Also, they might contain missing values. Many different families of functional models describing the dependency of future values of a target series on the previous values can be considered, and the classical linear models AR, MA, ARMA and ARIMA [3] have been extensively studied. The choice of the functional family will influence the overall result. The methodology proposed here does not require a particular model. Because the generalized nonlinear AR model expressed by relation (1) is a simple model which makes the presentation easier to follow, we use this basic model:
$$S_T(t) = F\left(\begin{array}{l}
S_1(t-\tau_{1,1}), S_1(t-\tau_{1,2}), \ldots, S_1(t-\tau_{1,p_1}),\\
S_2(t-\tau_{2,1}), S_2(t-\tau_{2,2}), \ldots, S_2(t-\tau_{2,p_2}),\\
\ldots\\
S_n(t-\tau_{n,1}), S_n(t-\tau_{n,2}), \ldots, S_n(t-\tau_{n,p_n})
\end{array}\right) \qquad (1)$$

where $S_T(t)$ is the target signal at time $t$, $S_i$ is the $i$-th time series, $n$ is the total number of signals, $p_i$ is the number of time lag terms from signal $i$ influencing $S_T(t)$, $\tau_{i,k}$ is the $k$-th lag term corresponding to signal $i$ ($k \in [1, p_i]$), and $F$ is the unknown function describing the process.
The goal is the simultaneous determination of: i) the number of required lags for each series, ii) the sets of particular lags within each series carrying dependency information, and iii) the prediction function, in some optimal sense. The size of the space of possible models is immense (even for only a few series and a limited number of time lags), and the lack of assumptions about the prediction function makes the set of candidates unlimited. A natural requirement on function $F$ is the minimization of a suitable prediction error, and the idea is to find a reasonably small subset with the best models in the above mentioned sense.
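As a concrete illustration of model (1), the sketch below (names are ours, not from the paper) assembles training pairs from a set of series, given a chosen set of lags per series: each sample pairs the selected past values of all series with the target's current value.

```python
# Illustrative sketch: build (X, y) pairs for the nonlinear AR model of
# Eq. (1). `lags[i]` lists the lag terms tau_{i,k} used from series i.
def make_lagged_dataset(series, target_idx, lags):
    """series: list of equal-length sequences; lags: one lag list per series."""
    max_lag = max(l for lagset in lags for l in lagset)
    X, y = [], []
    for t in range(max_lag, len(series[target_idx])):
        # Row: S_1(t - tau_{1,1}), ..., S_n(t - tau_{n,p_n})
        row = [series[i][t - l] for i, lagset in enumerate(lags) for l in lagset]
        X.append(row)
        y.append(series[target_idx][t])   # target value S_T(t)
    return X, y

# Toy example: two series, target is the first, lags {1,2} and {1}.
s1 = [1, 2, 3, 4, 5, 6]
s2 = [10, 20, 30, 40, 50, 60]
X, y = make_lagged_dataset([s1, s2], target_idx=0, lags=[[1, 2], [1]])
# First row uses S1(t-1), S1(t-2), S2(t-1) to explain S1(t).
```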
2.1 A Soft Computing Model Mining Strategy
A soft computing approach to the model mining problem can be: i) exploration of the model space with evolutionary algorithms, and ii) representation of the unknown function with a neural network (or a fuzzy system). The use of a neural network allows a flexible, robust and accurate predictor function approximator operator. Feed-forward networks and radial basis functions are typical choices. However, the use of these classical network paradigms might be difficult or even prohibitive, since for each candidate model in the search process, a network of the corresponding type has to be constructed and trained. Issues like choosing the number of neurons in the hidden layer, mixing of numeric and non-numeric information (discussed above), and working with imprecise values add even more complexity. Moreover, in general, these networks require long and unpredictable training times. The proposed method uses a heterogeneous neuron model [9], [10]. It considers a neuron as a general mapping from a heterogeneous multidimensional space composed by cartesian products of the so called extended sets, to another heterogeneous space. These are formed by the union of real, ordinal, nominal, fuzzy sets, or others (e.g. graphs), with the missing value (e.g. for the reals, $\hat{R}$ is $R$ extended with the missing value). Their cartesian product forms the heterogeneous space, which in the present case is given by $\hat{H}^n = \hat{R}^{n_r} \times \hat{O}^{n_o} \times \hat{N}^{n_n} \times \hat{F}^{n_f}$. In the h-neuron, the inputs and the weights are elements of the n-dimensional heterogeneous input space. Among the many kinds of possible mappings, the one using a similarity function [4] as the aggregation function and the identity mapping as the activation function is used here. Its image is the real interval [0,1] and gives the degree of similarity between the input pattern and the neuron weights. See Fig. 1 (left).
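A minimal sketch of such an h-neuron's similarity aggregation over a heterogeneous input follows. The function and component-similarity choices here are ours (a Gower-style average of per-component partial similarities, with `None` standing in for the missing value), one of the many admissible mappings the paper alludes to.

```python
# Illustrative sketch of an h-neuron over a heterogeneous input space:
# numeric components use 1 - |a - b| / range, nominal components match
# exactly, and components where either value is missing are skipped.
def h_neuron_similarity(x, w, kinds, ranges):
    """x, w: input and weight vectors; kinds: 'numeric'/'nominal' per
    component; ranges: numeric range per component (unused for nominal)."""
    sims = []
    for a, b, kind, r in zip(x, w, kinds, ranges):
        if a is None or b is None:        # missing value: skip component
            continue
        if kind == "numeric":
            sims.append(1.0 - abs(a - b) / r)
        else:                             # nominal: exact match
            sims.append(1.0 if a == b else 0.0)
    return sum(sims) / len(sims) if sims else 0.0

# Heterogeneous input like the one pictured in Fig. 1: <3.8, ?, yellow>.
s = h_neuron_similarity([3.8, None, "yellow"], [1.8, 0.5, "yellow"],
                        ["numeric", "numeric", "nominal"], [10.0, 1.0, None])
# numeric part: 1 - 2/10 = 0.8; nominal part: 1.0; mean = 0.9
```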
The h-neuron can be used in conjunction with the classical one (dot product as aggregation and sigmoid or hyperbolic tangent as activation), forming hybrid network architectures. They have general function approximation properties [1], and are trained with evolutionary algorithms in the case of heterogeneous inputs and missing values, due to the lack of continuity in the variable's space. The hybrid network used here has a hidden layer of h-neurons and an output layer of classical neurons. In the special case of predicting a single real-valued target time series, the architecture is shown in Fig. 1 (right).
Fig. 1. Left: A heterogeneous neuron. Right: A hybrid neuro-fuzzy network.

This network works like a k-best interpolator algorithm: each neuron in the hidden layer computes its similarity with the input vector and the k best responses are retained (k is a pre-set number of h-neurons to select). Using as activation a linear function with a single coefficient equal to the inverse of the sum of the k similarities coming from the hidden layer, the output is given by (2).

$$\mathrm{output} = \frac{1}{\sigma}\sum_{i \in K} h_i W_i, \qquad \sigma = \sum_{i \in K} h_i \qquad (2)$$
where $K$ is the set of the $k$ best h-neurons of the hidden layer and $h_i$ is the similarity value of the $i$-th best h-neuron w.r.t. the input vector. These similarities represent the fuzzy memberships of the input vector to the set of classes defined by the neurons in the hidden layer. Thus, (2) represents a fuzzy estimate for the predicted value. Assuming that a similarity function $S$ has been chosen and that the target is a single time series, this case-based neuro-fuzzy network is built and trained as follows: define a similarity threshold $T \in [0,1]$ and extract a subset $L$ of the set of input patterns such that for every input pattern $x$ there exists an $l \in L$ such that $S(x,l) \geq T$. Several algorithms for extracting subsets with this property can be constructed in a single cycle through the input pattern set (note that if $T = 1$, the hidden layer becomes the whole training set). The hidden layer is constructed by using the elements of $L$ as h-neurons, while the output layer is built by using the corresponding target outputs as the weights of the neuron(s). This training procedure is very fast and allows construction and testing of many hybrid neuro-fuzzy networks in a short time. Different sets of individual lags selected from each time series will define different training sets, and therefore different hybrid neuro-fuzzy networks. This one-to-one correspondence between dependency models and neuro-fuzzy networks makes the search in the model space equivalent to the search in the space of networks. Thus, given a model describing the dependencies and a set of time series, a hybrid network can be constructed according to the outlined procedure, and tested for its prediction error on a segment of the target series not used for training (building) the network. The Root Mean Squared (RMS) error is a typical goodness of fit measure and is the one used here. For each model the quality indicator is given by the prediction error on the test set of its equivalent similarity-based neuro-fuzzy network (the prediction function). The search for optimal models can be made with an evolutionary algorithm minimizing the prediction error measure. Genetic Algorithms and Evolution Strategies are typical for this task and many problem representations are possible. Genetic algorithms were used with a simple model coding given by binary chromosomes of length equal to the sum of the maximal number of lags considered for each of the time series (the time window depth). Within each chromosome segment corresponding to a given series, the non-zero values indicate which time lags should be included in the model, as shown in Fig. 2.
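The training and prediction procedure just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the names are ours, and the similarity $S = 1/(1+d)$ anticipates the particular choice reported later for the experiments.

```python
# Sketch of the case-based neuro-fuzzy network: training extracts a subset L
# of the patterns so that every pattern is at least T-similar to some element
# of L; prediction is the k-best fuzzy estimate of Eq. (2).
import math

def similarity(x, l):
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, l)))
    return 1.0 / (1.0 + d)              # S = 1/(1+d)

def train(patterns, targets, T):
    hidden, weights = [], []
    for x, y in zip(patterns, targets):
        if not any(similarity(x, l) >= T for l in hidden):
            hidden.append(x)            # pattern becomes an h-neuron
            weights.append(y)           # its target becomes an output weight
    return hidden, weights

def predict(x, hidden, weights, k):
    best = sorted(((similarity(x, l), w) for l, w in zip(hidden, weights)),
                  reverse=True)[:k]     # k best responses
    sigma = sum(h for h, _ in best)
    return sum(h * w for h, w in best) / sigma   # Eq. (2)

# With T = 1 the hidden layer is the whole training set:
H, W = train([[0.0], [1.0], [2.0]], [0.0, 10.0, 20.0], T=1.0)
estimate = predict([1.0], H, W, k=2)
```

Note how training is a single pass over the patterns, which is what makes evaluating many candidate models cheap.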
Fig. 2. Chromosome decodification.
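The decodification of Fig. 2 admits a short sketch (function name is ours; reading each segment left to right as lags 1, 2, ... is one plausible bit-ordering convention):

```python
# Sketch: split a binary chromosome into one segment per series; the
# positions of the 1-bits in segment i (counting from 1) give the lags
# S_i(t - k) included in the model.
def decode(chromosome, window_depth, n_series):
    lags = []
    for i in range(n_series):
        seg = chromosome[i * window_depth:(i + 1) * window_depth]
        lags.append([k + 1 for k, bit in enumerate(seg) if bit == "1"])
    return lags

# Two series with window depth 6, as in Fig. 2:
model = decode("011011" + "100100", window_depth=6, n_series=2)
# -> signal 1 uses lags {2, 3, 5, 6}; signal 2 uses lags {1, 4}
```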
The system architecture is illustrated in Fig. 3. The series are divided into training and test sets. A model is obtained from a binary chromosome by decodification. With the model and the series, a hybrid neuro-fuzzy network is built and trained, representing a prediction function. It is applied to the test set and a prediction error is obtained, which is used by the genetic algorithm's internal operators. Models with smaller errors are the fittest.

At the end of the evolutionary process, the best model(s) are obtained and, if the test errors are acceptable, they represent meaningful dependencies within the multivariate process. Evolutionary algorithms can't guarantee the global optimum; thus, the models found can be seen only as plausible descriptors of important relationships present in the data set. Other neural networks based on the same model may have better approximation capabilities. In this sense, the proposed scheme should be seen as giving a coarse prediction operator. The advantage is the speed with which hundreds of thousands of models can be explored and tested (not possible with other neural networks). Once the best models are found, more powerful function approximators can be obtained with other types of neural networks, fuzzy systems, or other techniques. This method depends
Fig. 3. System architecture.

on different parameters which must be defined in advance (the similarity function, the similarity threshold, etc.). In order to account for an optimal selection of these parameters, meta-evolutionary paradigms like the one outlined in Fig. 4 are relevant. The outermost genetic structure explores the space of problem parameters.
3 Parallel Implementation
The hybrid nature of the soft-computing method described in the previous section allows several different approaches for constructing parallel and distributed computer implementations. Hierarchically, several levels can be distinguished: the evolutionary algorithm (the genetic algorithm in this case) operates on a least squares type functional containing a neuro-fuzzy network. In turn, inside the network there are neuron layers, which themselves involve the work of individual neurons (h-neurons and classical). Finally, inside each neuron, a similarity function is evaluated on the input and the weight vector in a componentwise operation (e.g. a correlation, a distance metric, or other function). All of these are typical cases of the workcrew computation paradigm at different levels of granularity. Clearly, a completely parallel algorithm could ultimately be constructed by parallelizing all levels, traversing the entire hierarchy. This approach, however, would impose a large amount of communication overhead between the physical computation elements, especially in the case of Beowulf clusters (the most affordable supercomputer platform). Following the principle of parsimony, the implementation was done at the highest granularity level in the genetic algorithm, namely at the population evaluation level (clearly, other operations can be parallelized within the other steps of the genetic algorithm, like selection, etc.). The classical
Fig. 4. Meta-genetic algorithm architecture. The outermost genetic structure explores the space of problem parameters and the inner process finds the best model(s) for a given set of them.
type of genetic algorithm chosen (with binary chromosomes, simple crossover and mutation, etc.) makes the population initialization and evaluation steps a natural first choice. The evaluation of a population involves the evaluation of all its individuals (models), which can be done in parallel. At this level, there is a high degree of parallelism. A master-slave computation structure is employed, where the master initializes and controls the overall process, collecting the partial results, and the slaves construct the neuro-fuzzy network based on the decoded chromosome and evaluate it on the time series. In our implementation, the workload is managed dynamically, so that the load is well balanced in a heterogeneous cluster environment. The communication overhead was reduced by replicating the data set on all the machines in the cluster. Thus, the master program is relieved from sending a copy of the entire data set to each slave program each time a model has to be evaluated. Messages sent to the slaves are binary chromosomes, while messages received back by the master contain only a single floating point number with the RMS error associated with the chromosome (model).
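The master-slave pattern can be sketched with Python's multiprocessing as a stand-in for the PVM message passing actually used: chromosomes go out, a single float (the model's RMS error) comes back, and the data set is replicated in each worker process rather than shipped with every task. The fitness function below is a toy placeholder, not the paper's network evaluation.

```python
# Sketch of master-slave population evaluation (multiprocessing stands in
# for PVM; the fitness is a placeholder for the neuro-fuzzy RMS test error).
from multiprocessing import Pool

DATA = list(range(100))          # series data, replicated per worker

def evaluate(chromosome):
    # A real slave would decode the chromosome, build the neuro-fuzzy
    # network on DATA and return its RMS error on the test segment.
    return float(sum(int(b) for b in chromosome))

if __name__ == "__main__":
    population = ["0110", "1111", "0000"]
    with Pool(processes=2) as pool:
        # Tasks are dispatched to whichever worker is free; only one float
        # per chromosome travels back to the master.
        errors = pool.map(evaluate, population)
    print(errors)
```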
The Parallel Virtual Machine (PVM) message passing system [5] (version 3.4) has been used, with GAlib 2.4 [12] as the general genetic algorithm library. The same source code corresponding to the previously described parallel implementation was compiled with the g++ compiler in two different distributed environments, both being Beowulf clusters running RedHat Linux 7.2 and connected with a Linksys EtherFast-100 Ethernet switch:
- a two-node cluster (ht-cluster), with a Pentium III processor (1000 MHz, 512 MB RAM) and an AMD Athlon processor (750 MHz, 256 MB RAM);
- a 4-CPU homogeneous cluster (HG-cluster), with two dual Xeon processor (2 GHz, 1 GB RAM) DELL workstations.
Both clusters were benchmarked with an off-the-shelf Poisson solver, giving the following results: (i) ht-cluster: 68.59 MFlops for the Pentium III, 111.58 MFlops for the Athlon, 180.17 MFlops total; (ii) HG-cluster: 218.5 MFlops/CPU, 874 MFlops total.
3.1 Example

The parallel implementation was tested using the one-dimensional sunspot prediction problem [7]. This univariate process describes the American relative sunspot numbers (mean number of sunspots for the corresponding months in the period 1/1945-12/1994), from AAVSO - Solar Division [12]. It contains 600 observations, and in this case the first 400 were used for training and the remaining 200 for testing. A maximum time lag of 30 years was pre-set, defining a search space size of $2^{30}$ models.
No preprocessing was applied to the time series. This is not the usual way to analyze time series data, but by eliminating additional effects, the properties of the proposed procedure in terms of approximation capacity and robustness are better exposed. The similarity function used was $S = 1/(1+d)$, where $d$ is a normalized Euclidean distance. The number of responsive h-neurons in the hidden layer was set to $k = 7$, and the similarity threshold for the h-neurons was $T = 1$. No attempt to optimize these parameters was made, but meta-algorithms can be used for this purpose.
The experiments have been conducted with the following set of genetic algorithm parameters: number of generations = 2, population size = 100, mutation probability = 0.01, crossover probability = 0.9. Single-point crossover and single-bit mutation were used as genetic operators, with roulette selection and elitism being allowed.
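The GA configuration above (roulette selection, single-point crossover, single-bit mutation, elitism) can be mirrored in a compact sketch. The actual experiments used GAlib in C++; this Python version only illustrates the operators, with a toy fitness standing in for the network's RMS test error.

```python
# Sketch of the GA loop with the operators named in the text. Smaller
# error = fitter, so roulette weights use the inverted error.
import random

def ga(fitness, length, pop_size=20, generations=10, pc=0.9, pm=0.01, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        errs = [fitness(c) for c in pop]
        best = pop[errs.index(min(errs))][:]          # elitism
        weights = [1.0 / (1e-9 + e) for e in errs]    # roulette weights
        new = [best]
        while len(new) < pop_size:
            a, b = rng.choices(pop, weights=weights, k=2)
            if rng.random() < pc:                     # single-point crossover
                cut = rng.randrange(1, length)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            if rng.random() < pm:                     # single-bit mutation
                child[rng.randrange(length)] ^= 1
            new.append(child)
        pop = new
    errs = [fitness(c) for c in pop]
    return pop[errs.index(min(errs))]

# Toy fitness: error = number of zero bits, so all-ones is optimal.
best = ga(lambda c: c.count(0), length=10)
```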
The performance of the two clusters w.r.t. the parallel algorithm is illustrated in Table 1.
As an illustration of the effectiveness of the method, a run with 2000 generations and 50 individuals per population was made. The best model found contained 10 time lags, namely: (t-1), (t-2), (t-4), (t-10), (t-12), (t-14), (t-16), (t-20), (t-28), (t-29). Its RMS prediction error on the test set was 20.45, and the real and predicted values are shown in Fig. 5.
4 Conclusions

Time series model mining using evolutionary algorithms and similarity-based neuro-fuzzy networks with h-neurons is flexible, robust and fast. Its parallel implementation runs well on inexpensive Beowulf clusters, making intensive data mining in time series affordable. This method is appropriate for the exploratory stages in the study of multivariate time varying processes for quickly finding
Table 1. ht-cluster and HG-cluster performance

               ht-cluster                            HG-cluster
  No.     Time (s)  Time (s)    Ratio        Time (s)  Time (s)    Ratio
  slaves  1 CPU     2 CPUs      (2CPU/1CPU)  2 CPUs    4 CPUs      (4CPU/2CPU)
  1       117       116         0.991        60        60          1
  2       120        77         0.642        60        27          0.45
  3       116        80         0.689        61        23          0.377
  4       120        79         0.658        60        28          0.467
  5       116        78         0.672        60        27          0.45
Fig. 5. Comparison of the real and predicted values for sunspot data (test set).
plausible dependency models and hidden interactions between time dependent heterogeneous sets of variables, possibly with missing data. The dependency structure is approximated or narrowed down to a manageable set of plausible models. These models can be used by other methods such as neural networks or non-soft-computing approaches for constructing more accurate prediction operators.
Many parallel implementations of this methodology are possible. Among the many elements to be considered are: deeper granularity in the parallelization, use of other chromosome schemes for model representation, variations in the type of genetic algorithm used (steady state, mixed populations, different crossover, mutation and selection operators, etc.), use of other kinds of evolutionary algorithms (evolution strategies, ant colony methods, etc.), variations in the neuro-fuzzy paradigm (kind of h-neuron used, its parameters, the network architecture, etc.), and the sizes of the training and test sets and the maximum exploration time-depth window. These considerations make meta-evolutionary algorithms an attractive approach, introducing a higher hierarchical level of granularity. Furthermore, the intrinsic parallelism of the algorithms allows for efficient parallel implementations. The results presented are promising but should be considered preliminary. Further experiments, research and comparisons with other approaches are required.
5 Acknowledgments

The authors are very grateful to Alan Barton and Robert Orchard from the Integrated Reasoning Group, Institute for Information Technology, National Research Council of Canada. A. Barton made insightful comments and suggestions, while R. Orchard gave his great understanding of, and support for, this project.
References

1. Belanche, Ll.: Heterogeneous neural networks: Theory and applications. PhD Thesis, Department of Languages and Informatic Systems, Polytechnic University of Catalonia, Barcelona, Spain, July (2000)
2. Birx, D., Pipenberg, S.: Chaotic oscillators and complex mapping feedforward networks for signal detection in noisy environment. Int. Joint Conf. on Neural Networks (1992)
3. Box, G., Jenkins, G.: Time Series Analysis, Forecasting and Control. Holden-Day (1976)
4. Chandon, J.L., Pinson, S.: Analyse Typologique. Théorie et Applications. Masson, Paris (1981)
5. Geist, A., et al.: PVM. Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press (1994)
6. Lapedes, A., Farber, R.: Nonlinear signal processing using neural networks: prediction and system modeling. Tech. Rep. LA-UR-87-2662, Los Alamos National Laboratory, NM (1987)
7. Masters, T.: Neural, Novel & Hybrid Algorithms for Time Series Prediction. John Wiley & Sons (1995)
8. Specht, D.: Probabilistic Neural Networks. Neural Networks 3 (1990), 109-118
9. Valdés, J.J., García, R.: A model for heterogeneous neurons and its use in configuring neural networks for classification problems. Proc. IWANN'97, Int. Conf. on Artificial and Natural Neural Networks. Lecture Notes in Computer Science 1240, Springer Verlag (1997), 237-246
10. Valdés, J.J., Belanche, Ll., Alquézar, R.: Fuzzy heterogeneous neurons for imprecise classification problems. Int. Jour. of Intelligent Systems 15 (3) (2000), 265-276
11. Valdés, J.J.: Similarity-based Neuro-Fuzzy Networks and Genetic Algorithms in Time Series Models Discovery. NRC/ERB-1093, 9 pp. NRC 44919 (2002)
12. Wall, M.: GAlib: A C++ Library of Genetic Algorithm Components. Mechanical Engineering Dept., MIT (http://lancet.mit.edu/ga/) (1996)
13. Zadeh, L.: The role of soft computing and fuzzy logic in the conception, design and deployment of intelligent systems. Proc. Sixth IEEE Int. Conf. on Fuzzy Systems, Barcelona, July 1-5 (1997)