Neural Network Based Incipient Fault Detection of Induction Motors

By

Mohd. Rokonuzzaman, B.Sc. Eng.

A THESIS SUBMITTED TO THE SCHOOL OF GRADUATE STUDIES IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ENGINEERING

FACULTY OF ENGINEERING AND APPLIED SCIENCE
MEMORIAL UNIVERSITY OF NEWFOUNDLAND

MARCH, 1995

ST. JOHN'S, NEWFOUNDLAND, CANADA

National Library of Canada
Acquisitions and Bibliographic Services Branch
395 Wellington Street
Ottawa, Ontario K1A 0N4

Bibliothèque nationale du Canada
Direction des acquisitions et des services bibliographiques
395, rue Wellington
Ottawa (Ontario) K1A 0N4

The author has granted an irrevocable non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of his/her thesis by any means and in any form or format, making this thesis available to interested persons.

The author retains ownership of the copyright in his/her thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without his/her permission.

ISBN 0-612-01914-4

Canada



Abstract

An incipient fault detection scheme for induction motors through the recognition of frequency spectra of the stator current has been developed in this thesis. It is based on the adaptive resonance theory of neural networks. This fault diagnosis scheme is not only capable of detecting a fault but also can report if it cannot identify a particular fault, so that necessary preventive steps can be taken to update the underlying neural network to adapt to this undetected fault. Moreover, it can update itself to cope with this dynamic situation, retaining already acquired knowledge without the need of retraining with the old patterns.

A laboratory experimental set-up using a digital signal processing (DSP) technique has been employed to collect the frequency spectra of the stator current at different fault conditions. A wound-rotor induction motor has been used as the test motor to create different types of faults making unbalance in the stator and rotor circuits. A 24-bit high speed DSP board has been used with a personal computer to develop a real-time interactive software to collect the spectra. A driver for the HP-plotter has also been developed to directly plot the frequency spectra of the stator current.

Adaptive resonance theory (ART) based networks are a recent addition to the neural network family. A new software has been successfully developed and implemented in the laboratory experiment using the ART neural network. Its performances in training, recalling and dynamic updating have been studied with a set of example patterns. The incipient faults of a 3-phase wound-rotor induction motor have been successfully diagnosed by this neural network.


Acknowledgements

I would like to express my sincere gratitude to my thesis supervisor, Professor M. A. Rahman, for his important contribution to this work. It is through his patience, understanding and advice that this work has been done.

Special thanks are due to the technical staff at the Faculty of Engineering and Applied Science for their assistance in the experimental part of the thesis. I would like to extend my thanks to other researchers working in the Power Research Laboratory at the Memorial University of Newfoundland for their kind co-operation.

I am indebted to the Government of Canada as well as the Government of Bangladesh for providing financial and other technical assistance to carry out my graduate study. I wish to thank the officials of the Canadian International Development Agency (CIDA), Memorial University of Newfoundland (MUN) and Bangladesh Institute of Technology (BIT), Rajshahi for their respective assistance. Particular thanks to Professor M. A. Rahman, Director, CIDA/MUN/BIT project.

Finally, I owe it to my dear family for their patience, encouragement and blessings when I am thousands of miles away from them.

Contents

1 Introduction
  1.1 Incipient Fault Detection of Induction Motors
    1.1.1 A Brief Description
    1.1.2 Mathematical Analysis of Induction Motor
  1.2 Artificial Neural Network for Fault Detection
    1.2.1 Expert Approach for Fault Detection
    1.2.2 Learning Skills of Artificial Neural Networks
  1.3 Literature Review
  1.4 Objective of the Present Work
  1.5 Overview of Thesis

2 An Overview of Artificial Neural Networks
  2.1 Introduction
    2.1.1 Models of a Neuron
    2.1.2 Network Architecture
    2.1.3 Artificial Intelligence and Neural Networks
  2.2 Backpropagation
  2.3 The Binary Associative Memory (BAM) and the Hopfield Memory
    2.3.1 The BAM
    2.3.2 The Hopfield Memory
  2.4 Simulated Annealing
    2.4.1 The Boltzmann Machine
  2.5 The Counterpropagation Network
    2.5.1 CPN Building Blocks
    2.5.2 Training the CPN
    2.5.3 Forward Mapping
  2.6 Self-Organizing Maps
    2.6.1 Unit Activations
    2.6.2 The SOM Learning Algorithm
  2.7 Spatiotemporal Pattern Classification
    2.7.1 The Formal Avalanche
    2.7.2 Architectures of Spatiotemporal Networks (STNS)
  2.8 The Neocognitron
    2.8.1 Neocognitron Architecture
    2.8.2 Neocognitron Data Processing
  2.9 Adaptive Resonance Theory (ART)
    2.9.1 ART Network Description
    2.9.2 ART1
    2.9.3 ART2
  2.10 Comparative Analysis and Selection of a Suitable Network

3 ART2 Neural Network
  3.1 Introduction
  3.2 ART2 Architecture
    3.2.1 The Attentional Subsystem
    3.2.2 The Orienting Subsystem
    3.2.3 Gain Control in ART2
    3.2.4 Least-Mean-Square Equations
    3.2.5 Bottom-Up Least-Mean-Square Initialization
    3.2.6 ART2 Processing Summary
  3.3 ART2 Simulator
    3.3.1 Model of ART2 as an Object
    3.3.2 Modified Structure of Training and Recalling Pattern in the Network
    3.3.3 Dynamic Updating
  3.4 Experimental Verification of Performance
    3.4.1 Training of the Neural Network with Test Pattern
    3.4.2 Dynamic Neuron Addition
    3.4.3 Pattern Recall from the Network

4 Fault Related Information Collection
  4.1 Introduction
  4.2 Model of Spectra Collection
  4.3 Discrete-Time Signals and Systems
    4.3.1 Discrete-Time Signals: Sequences
    4.3.2 Discrete-Time Systems
    4.3.3 Sampling of Continuous-Time Signals
    4.3.4 The Discrete Fourier Transform (DFT)
    4.3.5 Computation of the Discrete Fourier Transform
    4.3.6 Fourier Analysis of Signals Using the Discrete Fourier Transform
  4.4 Motorola's DSP56000 DSP Family
  4.5 Ariel's Interface and DSP Library
    4.5.1 Interface Library
    4.5.2 DSP Library
  4.6 Application Software for Spectra Collection
  4.7 Experimental Setup
  4.8 Spectra for Different Fault Conditions
    4.8.1 No fault condition
    4.8.2 Stator phase 1 is open
    4.8.3 Stator phase 2 is open
    4.8.4 Stator phase 3 is open
    4.8.5 Short circuit fault through resistance in stator phase 1
    4.8.6 Short circuit fault through resistance in stator phase 2
    4.8.7 Short circuit fault through resistance in stator phase 3
    4.8.8 Short circuit fault in stator phase 1
    4.8.9 Short circuit fault in stator phase 2
    4.8.10 Short circuit fault in stator phase 3
    4.8.11 Rotor open circuit fault in phase 1
    4.8.12 Rotor open circuit fault in phase 2
    4.8.13 Rotor open circuit fault in phase 3
    4.8.14 Rotor phase 1 is shorted to neutral
    4.8.15 Rotor phase 2 is shorted to neutral
    4.8.16 Rotor phase 3 is shorted to neutral
    4.8.17 Rotor short circuit fault through resistance in phase 1
    4.8.18 Rotor short circuit fault through resistance in phase 2
    4.8.19 Rotor short circuit fault through resistance in phase 3
    4.8.20 Rotor phase 1 is unbalanced through an external resistor
    4.8.21 Rotor phase 2 is unbalanced through an external resistor
    4.8.22 Rotor phase 3 is unbalanced through an external resistor
    4.8.23 Simultaneous open circuit fault in rotor phase 2 as well as stator phase 1
    4.8.24 Simultaneous open circuit fault in rotor phase 2 as well as stator phase 2
    4.8.25 Simultaneous open circuit fault in rotor phase 2 as well as stator phase 3
    4.8.26 Remarks on Fault Related Frequency Spectra of the Stator Current

5 Fault Recognition by ART2 Neural Network
  5.1 Introduction
  5.2 Training Data Set
    5.2.1 Data Reduction
  5.3 Structure of the Network
  5.4 Training of the Network
  5.5 Fault Recognition by the Trained Network
  5.6 Model of ART2 Neural Network Based Incipient Fault Detection System

6 Conclusions and Recommendations for Future Work
  6.1 Conclusions
  6.2 Recommendations for Future Work

Appendix A: Program listing for ART2 Neural Network
Appendix B: Program listing for real-time frequency spectra acquisition
Appendix C: Program listing of HP-plotter driver

List of Figures

1.1 (a) Frequency spectra of input current of a healthy machine. (b) Frequency spectra of input current of a faulty machine. (c) Model of neural network based fault related spectra identification system
2.1 Model of a neuron
2.2 Activation functions: (a) threshold function, (b) piecewise-linear function, (c) sigmoidal function
2.3 The three-layer BPN architecture
2.4 BAM architecture
2.5 Discrete Hopfield memory
2.6 A simple energy landscape with two minima, a local minimum and a global minimum
2.7 The Boltzmann completion architecture
2.8 Forward-mapping CPN
2.9 Layer of input units of a CPN
2.10 (a) The general form of the processing elements; (b) the instar form of processing elements
2.11 A layer of instars arranged in a CPN
2.12 Outstar and its relationship to the CPN architecture: (a) outstar structures in a CPN network; (b) a single outstar unit
2.13 Grossberg's formal avalanche structure
2.14 Power spectra generated from speech
2.15 Model of an ART system
2.16 A pattern-matching cycle in an ART network: (a) pattern-matching attempt, (b) reset, (c) final recognition, (d) end of matching cycle
3.1 The overall structure of ART2
3.2 Structure of processing element on the F1 layer
3.3 Structure of processing element on the F2 layer
3.4 Model of ART2 as an object
3.5 Flowchart of the training algorithm of ART2
3.6 Modified training and recall algorithm of ART2
4.1 Model of fault related spectra collection
4.2 Representation of a discrete-time system
4.3 Processing steps in the discrete-time Fourier analysis of a continuous-time signal
4.4 Illustration of the Fourier transforms in the system: (a) Fourier transform of continuous-time input signal; (b) frequency response of anti-aliasing filter; (c) Fourier transform of output of anti-aliasing filter; (d) Fourier transform of sampled signal; (e) Fourier transform of window sequence; (f) Fourier transform of windowed signal segment and frequency samples obtained using DFT samples
4.5 DSP56000 block diagram
4.6 PC-56 block diagram
4.7 Flowchart of the application program
4.8 Block diagram of the experimental setup
4.9 Frequency spectra at no fault condition
4.10 Frequency spectra when stator phase 1 is open
4.11 Frequency spectra when stator phase 2 is open
4.12 Frequency spectra when stator one coil of phase 1 has been externally replaced by a resistor
4.13 Frequency spectra when stator one coil of phase 2 has been externally replaced by a resistor
4.14 Frequency spectra when stator one coil of phase 3 has been externally replaced by a resistor
4.15 Frequency spectra when stator one coil of phase 1 was short
4.16 Frequency spectra when stator one coil of phase 2 was short
4.17 Frequency spectra when stator one coil of phase 3 was short
4.18 Frequency spectra of the stator current when rotor one coil M1 was open circuit
4.19 Frequency spectra of the stator current when rotor one coil M2 was open circuit
4.20 Frequency spectra of the stator current when rotor one coil M3 was open circuit
4.21 Frequency spectra of the stator current when rotor phase M1 is shorted to neutral
4.22 Frequency spectra of the stator current when rotor phase M2 is shorted to neutral
4.23 Frequency spectra of the stator current when rotor phase M3 is shorted to neutral
4.24 Frequency spectra of the stator current at rotor short circuit fault through resistance in phase M1
4.25 Frequency spectra of the stator current at rotor short circuit fault through resistance in phase M2
4.26 Frequency spectra of the stator current at rotor short circuit fault through resistance in phase M3
4.27 Rotor phase 1 is unbalanced through an external resistance
4.28 Rotor phase 2 is unbalanced through an external resistance
4.29 Rotor phase 3 is unbalanced through an external resistance
4.30 Simultaneous open circuit fault in rotor phase 2 as well as stator phase 1
4.31 Simultaneous open circuit fault in rotor phase 2 as well as stator phase 2
5.1 Model of ART2 neural network based on-line incipient fault diagnosis system for induction motors

List of Tables

2.1 Comparative performance of different ANN techniques
3.1 Values of parameters on the F1 layer
3.2 Set of training examples used to train ART2
3.3 Bottom-up weight matrix after training
3.4 Top-down weight matrix after training
3.5 New training vector to train after neuron addition
3.6 New bottom-up weight matrix after training
3.7 New top-down weight matrix after training
3.8 Pattern matching result of the trained network
5.1 Table of faults with unique number
5.2 Fault related spectral components in matrix form
5.3 Training matrix of fault related current spectra
5.4 Faults mapping of the trained network in high precision domain
5.5 Top-down weight matrix of the trained network
5.6 Bottom-up weight matrix of the trained network
5.7 Fault related noisy current spectra
5.8 Diagnostic test result of the trained network in noisy situation
5.9 Faults mapping in training phase in low precision domain
5.10 Fault diagnostic performance of the network trained in low precision domain in noisy situation

Abbreviations and Symbols:

The following lists of terms and symbols appear throughout the body of this document. They are defined here approximately in the order in which they appear in the text:

Abbreviations:

ANN       Artificial neural network
FNN       Feedforward neural network
DSP       Digital signal processing
BPN       Backpropagation neural network
BAM       Binary associative memory
HM        Hopfield memory
VLSI      Very large scale integration
CPN       Counterpropagation network
SOM       Self-organizing maps
STNS      Spatiotemporal networks
ART       Adaptive resonance theory
STM       Short term memory
LTM       Long term memory
DFT       Discrete Fourier transform
FFT       Fast Fourier transform
RAM       Random access memory
ROM       Read-only memory
ALU       Arithmetic logic unit
AGU       Address generation unit
MCU       Microcontroller unit
MPU       Microprocessor unit
MIPS      Million instructions per second
IIR       Infinite impulse response
SCI       Serial communication interface
SSI       Synchronous serial interface
PC        Personal computer
ARIELMON  Ariel monitor
DEGMON    Debug monitor
IBM       International Business Machines
HP        Hewlett-Packard
AS        Attentional subsystem
PE        Processing element

Symbols:

i_s       Stator current
φ_r       Rotor flux
v_s       Stator voltage per phase
R_s       Stator resistance per phase
R_r       Rotor resistance per phase
L_s       Stator self inductance per phase
L_r       Rotor self inductance per phase
M         Mutual inductance
σ         Leakage coefficient
ω_r       Angular velocity
v_k       Linear combiner output
w_kj      Synaptic weight of synapse j belonging to neuron k
x_j       Input signal j
y_k       The output signal of neuron k
φ(·)      The activation function
θ_k       The threshold
v_k       The activity level
x         Input vector
net       Net input values to the hidden layer units
O_k       Calculated output
E         Measure of learning
w         Weight vector
E         System energy
ΔE_k      The energy difference
b_i       The reflectance pattern
x_n       Discrete input sequence
T         Discrete operator
y_n       Discrete output sequence
s_c(t)    Band-unlimited continuous-time signal
x_c(t)    Band-limited continuous-time signal
X_k       Discrete Fourier transform
Ω_N       Nyquist sampling frequency
J         Excitatory input
J̄         Inhibitory input
z         Weight vector in ART network
M         Number of units in input of ART network
N         Number of output units of ART network

Chapter 1

Introduction

Fault diagnosis has been an active area of research in both the engineering and computer science communities. In spite of the many advances in this area, fault diagnosis still remains challenging, with many questions unanswered. With advances made in technology, complexity is a major factor that we have to deal with. Complexity confronts us, perhaps, when something breaks down. We are forced to come up with more effective techniques and analysis for detecting and diagnosing such anomalies. Diagnostic analysis is not always quantitative. In fact, much detection and diagnostic analysis is heuristic, and results from repeated and long-time cognitive experiences.

In the last decades, a.c. drive installation has shown tremendous growth both in size and complexity. The interruption of service due to faults in a motor drive installation is often costly and could interfere with public safety in some installations.

These motors are exposed to a wide variety of environments and conditions which make the motor subject to incipient faults. These incipient faults, if left undetected, contribute to the degradation and eventual failure of the motors. With proper monitoring and fault detection schemes, the incipient faults can be detected in their early stages, and maintenance and downtime expenses can be reduced while also improving safety. Thus, motor incipient fault detection can be used in motor preventive maintenance programs also.

1.1 Incipient Fault Detection of Induction Motors

1.1.1 A Brief Description

Although rotating machines are usually well constructed and robust, the possibility of incipient faults is inherent in the machines due to the stresses involved in the conversion of electrical energy to mechanical energy and vice versa [1,2]. Incipient faults within a machine will affect the performance of the machine before major failures occur. With proper system monitoring and fault detection schemes, maintenance costs can be reduced and reliability of the machines can be improved significantly.

An experienced engineer may detect and diagnose the motor faults by observing the motor's operating performances. However, experienced engineers are expensive and difficult to train. It is, therefore, desirable to automate the system monitoring and fault detection schemes rather than to rely on an expert to perform continuous on-line monitoring. Several fault detection methods have been developed, each with their own prospects and constraints.

Some techniques require expensive diagnostic equipment and/or off-line fault analysis to determine the motor condition. For instance, the radio frequency monitoring scheme injects radio frequency signals into the stator winding of a machine and measures the changes of the signal waveform to determine whether the winding insulation contains faults [2]. This technique requires expensive equipment and is justified only for use with large and expensive machines. Other popular techniques, such as particle analysis, which requires bringing the motor oil samples to a laboratory for analysis [2] to determine the motor condition, are more suitable for overhaul or routine check-up rather than on-line monitoring and fault detection.

The parameter estimation approach [3] is a non-invasive fault detection scheme. Non-invasive fault detection schemes are based on easily accessible and inexpensive measurements to predict the motor condition without disintegrating the motor structure. These schemes are suitable for on-line monitoring and fault detection purposes. Due to their economical and non-destructive features, non-invasive techniques are often preferred by many engineers. However, the parameter estimation approach requires an accurate mathematical model and an elaborate understanding of the system dynamics based on a set of system parameters. The parameters are usually chosen to reflect the motor conditions. For example, the bearing condition will affect the damping coefficient of the motor's mechanical equation. As the bearing wears out, the damping coefficient increases. Thus, the parameter estimation approach can be based on the motor's mechanical equation and measurements to estimate the value of the damping coefficients. After estimating the numerical value of the chosen parameter, a means to translate the estimated numerical values to a qualitative description is required. The major difficulty with the parameter estimation approach is that an accurate mathematical model is required, and it is usually difficult to obtain. Other techniques like the non-parametric surface fitting method also require case-by-case specific mathematical analysis. In addition, the interpretation of the fault conditions, which is a fuzzy concept, using rigorous mathematical formulations is generally impractical and inaccurate.
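The damping-coefficient example above can be sketched numerically. Assuming a simple mechanical equation J·dω/dt + B·ω = T, the snippet below simulates speed samples and recovers B from them; the inertia J, the true B and the torque value are illustrative assumptions, not data from the thesis experiment.

```python
# Hypothetical sketch of parameter estimation: recover the damping
# coefficient B from the mechanical equation  J*dw/dt + B*w = T.
# All parameter values are assumed round numbers for illustration.
import numpy as np

J = 0.02        # rotor inertia, kg*m^2 (assumed)
B_true = 0.005  # true damping coefficient, N*m*s (assumed)
T_load = 1.0    # constant applied torque, N*m (assumed)

# Simulate speed samples by stepping the mechanical equation forward.
dt, n = 1e-3, 2000
w = np.zeros(n)
for k in range(n - 1):
    w[k + 1] = w[k] + dt * (T_load - B_true * w[k]) / J

# Estimate B from the samples: B = (T - J*dw/dt) / w, averaged.
dw_dt = np.gradient(w, dt)
mask = w > 1.0                      # skip near-zero speeds
B_est = np.mean((T_load - J * dw_dt[mask]) / w[mask])
```

In a real scheme, the estimated value would then still have to be translated into a qualitative bearing condition, which is exactly the step the thesis argues is difficult.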

On the other hand, use of an artificial neural network for fault detection is also a non-invasive technique [3, 4]. But, unlike the parameter estimation approach, neural networks can perform fault detection based on measurements and training without the need of complex and rigorous mathematical models. In addition, heuristic interpretation of the motor conditions, which sometimes only humans are capable of doing, can be easily implemented in the neural network through supervised training.

1.1.2 Mathematical Analysis of Induction Motor

In order to successfully perform fault detection, different sets of criteria are needed to define a motor's status at different operating conditions. The fault detection of a 3-phase induction motor has been used for illustration purposes. It is worthwhile to describe the fault detection problem in mathematical terms to facilitate future discussions on the subject.

Mathematical Description of Motor Dynamics

An induction motor can be described by the following state equations in the stationary reference frame:

$$\frac{d}{dt}\begin{bmatrix} i_s \\ \varphi_r \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} i_s \\ \varphi_r \end{bmatrix} + \begin{bmatrix} B_1 \\ 0 \end{bmatrix} v_s \qquad (1.1)$$

$$\dot{x} = Ax + Bu \qquad (1.2)$$

where

stator current $i_s = [\,i_{qs}\;\; i_{ds}\,]^T$,

rotor flux $\varphi_r = [\,\varphi_{qr}\;\; \varphi_{dr}\,]^T$,

stator voltage $v_s = [\,v_{qs}\;\; v_{ds}\,]^T$,

$A_{11} = -\left[\dfrac{R_s}{\sigma L_s} + \dfrac{1-\sigma}{\sigma T_r}\right] I = a_{11} I$,

$B_1 = \dfrac{1}{\sigma L_s}\, I = b_1 I$,

$R_s$ and $R_r$ are stator and rotor resistances, respectively,
$L_s$ and $L_r$ are stator and rotor self inductances, respectively,
$M$ is mutual inductance,
leakage coefficient $\sigma = 1 - M^2/(L_s L_r)$,
rotor time constant $T_r = L_r/R_r$, and
$\omega_r$ is motor angular velocity in radians/second.

The input current depends on the motor parameters $R_s$, $R_r$, $L_s$, $L_r$, $M$ and $\sigma$. An internal fault in the machine will be reflected in the stator current of the machine. It is possible, therefore, to detect the fault from the analysis of the stator current.

1.2 Artificial Neural Network for Fault Detection

1.2.1 Expert Approach for Fault Detection

As stated previously, the interpretation of a motor's condition based on numerical values is usually a difficult task because fault detection is a fuzzy concept and usually requires experience [5]. Therefore, in many cases, heuristic interpretation of the results, which only humans are capable of doing, becomes necessary. An experienced engineer can diagnose the motor's condition based on its operating conditions and measurements without knowing the exact mathematical model of the motor. The approach is simple and reliable, and the complicated mathematical relation is embedded in the engineer's knowledge about the motor. However, an experienced engineer may not be able to give detailed explanations regarding his/her reasoning and logic used to make the decisions, simply because experience belongs to the fuzzy logic realm and is difficult to describe accurately in exact mathematical terms.

As it turns out, this human expertise approach has many advantages over the parameter estimation approach. However, the major drawback of the human expertise approach is that experience is difficult to transfer and automate. Both researchers and engineers usually transfer experience and knowledge through languages and mathematics, which are sometimes time consuming and inaccurate. In practice, the experience and the knowledge used by expert engineers to perform motor fault detection and/or diagnosis are largely drawn from historical fault detection data gathered by the experts.

1.2.2 Learning Skills of Artificial Neural Networks

With the emerging technology of artificial neural networks, the human expertise approach can be mimicked and automated [6, 7, 8]. Artificial neural networks (ANN) can be trained to perform motor fault detection by learning an expert's knowledge using a representative set of motor data [9]. In the case of an induction motor, incipient faults can be detected by analyzing the frequency spectrum of the stator current as shown in Fig. 1.1(a)-(b).

Now it is clear that the stator current spectrum carries the signature of an internal fault within the machine. So by training an ANN with the values of spectral components related to particular faults, without the need of mathematical models, the complexity of the parameter estimation approach can be avoided. Once the ANN is trained appropriately, the network weights contain the knowledge needed to perform fault detection, which is equivalent to the expertise gained by an engineer over the years in machine fault diagnosis.
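The spectral-signature idea of Fig. 1.1 can be illustrated with a small simulation. The 50 Hz and 70 Hz sidebands below are hypothetical fault components added to a 60 Hz supply current; an FFT makes them visible only in the faulty spectrum.

```python
# Illustration of the idea in Fig. 1.1: a fault leaves extra components
# in the frequency spectrum of the stator current. The sideband
# frequencies and amplitudes here are assumed, purely for illustration.
import numpy as np

fs, dur = 1000.0, 2.0                     # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1.0 / fs)
healthy = np.sin(2 * np.pi * 60 * t)      # pure 60 Hz supply current
faulty = healthy + 0.2 * np.sin(2 * np.pi * 50 * t) \
                 + 0.2 * np.sin(2 * np.pi * 70 * t)

def spectrum(x):
    """Magnitude spectrum, normalized by the record length."""
    mag = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, mag

freqs, mag_h = spectrum(healthy)
_, mag_f = spectrum(faulty)

# The sideband bin stands out only in the faulty spectrum.
idx50 = np.argmin(np.abs(freqs - 50))
idx60 = np.argmin(np.abs(freqs - 60))
```

Feeding such spectral components (rather than raw time samples) to the network is what lets the training data carry the fault signature directly.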

1.3 Literature Review

ANNs have been proven to be capable of successfully performing motor fault detection [9, 10]. One of the advantages of this type of pattern recognition technique is that it can save time in information processing at run time, where all the computational complexities are done off-line in the training period of the network.

Figure 1.1: (a) Frequency spectra of input current of a healthy machine. (b) Frequency spectra of input current of a faulty machine. (c) Model of neural network based fault related spectra identification system.

When developing an ANN based system to perform a particular pattern classification problem, typically the process is to gather a set of examples or training patterns, then use these examples to train the underlying ANN. During the training, the information is coded in the system by the adjustments of the weight values. Once the training is deemed to be adequate, the system is ready to be used in real-time situations, and usually no additional weight modification is required.

This operational scenario is acceptable provided the problem domain has well-defined boundaries and is stable. Under such conditions it is possible to define an adequate set of training inputs for whatever problem is being solved. Unfortunately, as in many realistic situations involving incipient fault detection of induction motors, the environment is neither bounded nor stable. In handling this dynamic behaviour the conventional feed-forward neural network (FNN) suffers a major setback.

Mo-Yuen Chow and others [3]-[5], [9]-[11] have done significant works in neural network based incipient fault diagnosis of induction motors. But they have used the FNN as the fault diagnosis tool, and in their research work they have neglected this dynamic operational scenario, which is an indispensable part of the real-world environment. M. F. Abdel Mageed and his colleagues [12] have used a hierarchical neural network, but this neural network also suffers the same limitation as the FNN. The same limitation also prevails in the research works of F. Filippetti [13, 14], Chin-Teng [6] and their colleagues. Moreover, the reporting ability of the neural network, if it cannot diagnose a particular fault, has not been considered by them. So, there is a need to carry out a research work to find a suitable neural network which is not only capable of diagnosing a fault but also can report if it cannot, so that preventive steps can be taken to update the neural network to adapt to this new fault, while retaining the already acquired knowledge without retraining of the already trained patterns.

The mathematical analysis mentioned in section 1.1.2 makes it clear that the wave shape of the stator current carries the signature of the internal condition of the machine. R. Natarajan [15] has used the stator current to diagnose the fault by only measuring its value, not through the spectral analysis, which is necessary for a neural network based fault detection scheme to get better results. F. Filippetti and M. Martelli [13] have considered the frequency spectra of the stator current as the key fault related information carrier of the fault diagnosis scheme, but they have not reported a detailed study of frequency spectra of the stator current at different fault conditions. B. C. Papadias [16] and others have given an outline to develop an expert system for troubleshooting of electrical machines, but the collection of fault related information is not mentioned. While the focus of the research work of Mo-Yuen Chow and others [3]-[5], [9]-[11] is towards the applicability of neural networks in the incipient fault diagnosis of induction motors, especially in bearing faults, they have not considered the spectral analysis of the stator current. At this present state, it is evident that it is important to pay attention to collecting the frequency spectra of the stator current at different fault conditions.

1.4 Objective of the Present Work

The long-term objective of this work is to improve the state-of-the-art of incipient fault diagnosis of induction motors. The specific short-term objectives are summarized in the following three points:

• To find a neural network suitable for incipient fault detection of electric machines. This fault diagnosis scheme is not only capable of detecting a fault but also can report if it cannot identify a particular fault, so that necessary preventive steps can be taken to update the underlying neural network to adapt to this undetected fault.

• To develop a laboratory set-up to collect frequency spectra of the stator current of an induction motor in real time using digital signal processing techniques, and to collect frequency spectra of the stator current at different fault conditions for the experimental induction motor.

• To train the selected neural network with fault related frequency spectra of the stator current and to study the performance of the trained network to diagnose faults in noise-free as well as noisy conditions.

1.5 Overview of Thesis

The contents of thetheses can be summarizedin thefoll owing chapters:

12

(36)

Chapter2 coversa surveyof....v....i1....bleneuralnetworks.A comparative studyhas been done to select a neural network to satisfy one of the objectives.

Cha pt er 3discusses the softwareimplementationoftheselected neur....lnet- work. It'sperformance has been testedwith anumber of examples.

Cha pte r "explains therelated algorithmsandtechniques tocollectthe faultrelated frequency spectraofthestatorcurrent of a three phase inductionmotor.This chapter....lso gives ....brief outline of the digital signal processing (DSP) board as wellas the DSPlibrary. Salient featuresofthe softwaredevelopedas part of this workto acquirereal timefrequency spectra ofthestator currenthasbeen also described in thischapter.

Chapter 5 explains the performance of the selected neural network to classify faults based on fault-related frequency spectra of the stator current. A model based on the selected neural network for an on-line incipient fault diagnosis system for the induction motor has also been reported in this chapter.

Chapter 6 contains the conclusions and recommendations for future work. It has been explained that there is a good prospect for further work on the fault diagnosis system, irrespective of design parameters, type of machine, and operating conditions. Moreover, the scope of development of a multi-machine fault diagnosis system to make it cost-effective and user-friendly has also been emphasized.


Chapter 2

An Overview of Artificial Neural Networks

2.1 Introduction

A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use [17]. It mimics the brain in two respects:

a) Knowledge is acquired by the network through a learning process.

b) Inter-neuron connection strengths, usually known as synaptic weights, are used to store the knowledge.

The procedure used to perform the learning process is called a "learning algorithm", the function of which is to modify the synaptic weights of the network in an orderly fashion so as to attain a desired design objective.

"

(38)

Figure 2.1: Model of a neuron

The use of neural networks offers a number of benefits; among them, nonlinearity, input-output mapping, and adaptivity are the most important.

2.1.1 Model of a Neuron

Fig. 2.1 shows the model of a neuron. Three basic elements of the neuron are explained as follows:

1. A set of synapses or connecting links, each of which is characterized by a weight or strength of its own.

2. An adder for summing the input signals.

3. An activation function for limiting the amplitude of the output of the neuron.


A neuron k can be explained by the following equations:

u_k = Σ_{j=1}^{p} w_{kj} x_j   (2.1)

y_k = φ(u_k − θ_k)   (2.2)

where x_1, x_2, ..., x_p are the input signals; w_{k1}, w_{k2}, ..., w_{kp} are the synaptic weights of neuron k; u_k is the linear combiner output; θ_k is the threshold; φ is the activation function; and y_k is the output signal of the neuron. The use of the threshold θ_k has the effect of applying an affine transformation to the output u_k of the linear combiner in the model, as shown by

v_k = u_k − θ_k   (2.3)

The output can then be represented by the following equation:

y_k = φ(v_k)   (2.4)
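Equations (2.1)-(2.4) translate directly into code. The following is a minimal sketch, not part of the thesis itself: the input, weight, and threshold values are invented, and the sigmoidal activation of eq. (2.9) is assumed for φ.

```python
import numpy as np

def neuron_output(x, w, theta, a=1.0):
    """y_k = phi(v_k): linear combiner plus sigmoidal activation (eqs. 2.1-2.4)."""
    u = np.dot(w, x)                      # linear combiner output u_k (eq. 2.1)
    v = u - theta                         # affine transformation v_k  (eq. 2.3)
    return 1.0 / (1.0 + np.exp(-a * v))  # sigmoidal phi(v)           (eq. 2.9)

x = np.array([0.5, -1.0, 2.0])   # input signals x_1 ... x_p (invented values)
w = np.array([0.4, 0.1, 0.7])    # synaptic weights w_k1 ... w_kp (invented values)
y = neuron_output(x, w, theta=0.5)
```

Here u_k = 1.5 and v_k = 1.0, so the neuron output is φ(1.0) ≈ 0.731.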

Types of Activation Functions

Three generally used types of activation functions are described below [17]:

1. Threshold Function:
For this type of function, as shown in Fig. 2.2(a),

φ(v) = 1 if v ≥ 0; 0 if v < 0   (2.5)

Correspondingly, the output of neuron k employing such a threshold function is

y_k = 1 if v_k ≥ 0; 0 if v_k < 0   (2.6)

where v_k is given by

v_k = Σ_{j=1}^{p} w_{kj} x_j − θ_k   (2.7)

2. Piecewise-Linear Function:
A piecewise-linear function, as shown in Fig. 2.2(b), can be explained by the following equation:

φ(v) = 1, if v ≥ 1/2; v, if 1/2 > v > −1/2; 0, if v ≤ −1/2   (2.8)

3. Sigmoidal Function:
The sigmoidal function, as shown in Fig. 2.2(c), is by far the most common form of activation function used in the construction of artificial neural networks, as explained by the following equation:

φ(v) = 1 / (1 + exp(−av))   (2.9)

where a is the slope parameter.

2.1.2 Network Architectures

The learning algorithm used to train a neural network depends on the way the network is structured. In general, there are four different classes of network architectures:

1. Single-Layer Feedforward Networks.
2. Multilayer Feedforward Networks.
3. Recurrent Networks.
4. Lattice Structures.


Figure 2.2: Activation Functions: (a) Threshold Function (b) Piecewise-Linear Function (c) Sigmoidal Function

2.1.3 Artificial Intelligence and Neural Networks

The aim of artificial intelligence (AI) is the development of paradigms or algorithms that require machines to perform tasks that apparently require cognition when performed by humans. An AI system must be capable of:

1. storing knowledge;
2. applying the knowledge stored to solve problems;
3. acquiring new knowledge through experience.


An AI system has three key components: representation, reasoning, and learning. AI can be described as the formal manipulation of a language of algorithms and data representations in a top-down fashion. On the other hand, neural networks can be described as parallel distributed processors with a natural learning capability, which usually operate in a bottom-up fashion. For the implementation of cognitive tasks, it therefore appears that, rather than seeking solutions based on AI or neural networks alone, a more potentially useful approach would be to build structured connectionist models that incorporate both of them.

Some important neural networks are explained briefly in the following sections, ending with a comparative analysis for the selection of a proper neural network for incipient fault detection of an induction motor.

2.2 Backpropagation

The backpropagation neural network (BPN), as shown in Fig. 2.3, learns a predefined set of input-output example pairs by using a two-phase propagate-adapt cycle [18]. After an input pattern has been applied as a stimulus to the first layer of the network units, it is propagated through each upper layer until an output is generated. This output pattern is then compared to the desired output, and an error signal is computed for each output unit. The error signals are then transmitted backward from the output layer to each node in the intermediate layer that contributes directly to the output. However, each node in the intermediate layer receives only a portion of the total error signal, based roughly on the relative contribution the unit made to the


original output. This process repeats, layer by layer, until each node in the network has received an error signal that describes its relative contribution to the total error.

Figure 2.3: The three-layer BPN architecture

The training procedure of a BPN can be summarized in the following points:

1. The input vector x_p = (x_p1, x_p2, ..., x_pN)^t is applied to the input units.

2. The net-input values to the hidden-layer units are calculated:

net_pj^h = Σ_{i=1}^{N} w_ji^h x_pi + θ_j^h   (2.10)

3. The outputs from the hidden layer are calculated:

i_pj = f_j^h(net_pj^h)   (2.11)

4. For the output layer, the net-input values to each unit are calculated:

net_pk^o = Σ_{j=1}^{L} w_kj^o i_pj + θ_k^o   (2.12)

5. The outputs are calculated:

o_pk = f_k^o(net_pk^o)   (2.13)

6. The error terms of the output units are calculated:

δ_pk^o = (y_pk − o_pk) f_k^o'(net_pk^o)   (2.14)

and, for the hidden units,

δ_pj^h = f_j^h'(net_pj^h) Σ_k δ_pk^o w_kj^o

7. The weights on the output layer are updated:

w_kj^o(t + 1) = w_kj^o(t) + η δ_pk^o i_pj   (2.15)

8. The weights in the hidden layer are updated:

w_ji^h(t + 1) = w_ji^h(t) + η δ_pj^h x_i   (2.16)

The following equation is the measure of how well the network is learning:

E_p = (1/2) Σ_k δ_pk^2   (2.17)

When the error is acceptably small for each of the training-vector pairs, training can be discontinued.
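The eight steps and the error measure (2.17) can be collected into a short training loop. The following is a minimal sketch, not the thesis's own code: the layer sizes, learning rate, and toy OR training set are invented for illustration, and sigmoidal activations (eq. 2.9) are assumed throughout.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N, L, M = 2, 3, 1                           # input, hidden, output sizes (invented)
Wh = rng.normal(scale=0.5, size=(L, N))     # hidden-layer weights w^h
bh = np.zeros(L)                            # hidden thresholds
Wo = rng.normal(scale=0.5, size=(M, L))     # output-layer weights w^o
bo = np.zeros(M)                            # output thresholds
eta = 0.5                                   # learning rate

# Toy training set: logical OR (invented for the example)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [1]], float)

def forward(x):
    i = sigmoid(Wh @ x + bh)                # steps 2-3: hidden outputs
    o = sigmoid(Wo @ i + bo)                # steps 4-5: network outputs
    return i, o

def total_error():                          # eq. (2.17), summed over the set
    return sum(0.5 * float(np.sum((y - forward(x)[1]) ** 2)) for x, y in zip(X, Y))

E0 = total_error()
for epoch in range(2000):
    for x, y in zip(X, Y):
        i, o = forward(x)
        d_o = (y - o) * o * (1 - o)         # step 6: output error terms
        d_h = i * (1 - i) * (Wo.T @ d_o)    # back-propagated hidden error terms
        Wo += eta * np.outer(d_o, i); bo += eta * d_o   # step 7
        Wh += eta * np.outer(d_h, x); bh += eta * d_h   # step 8
E1 = total_error()
```

After training, the error E1 is much smaller than the initial E0, at which point training can be discontinued as stated above.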

2.3 The Bidirectional Associative Memory (BAM) and the Hopfield Memory

A type of memory called an associative memory is the subject matter of this section. In fact, the concept of associative memory is a fairly intuitive one: associative memory appears to be one of the primary functions of the brain [18].


Figure 2.4: BAM architecture

2.3.1 The BAM

The BAM consists of two layers of processing elements that are fully interconnected between the layers. The units may, or may not, have feedback connections to themselves. The general case is shown in Fig. 2.4. For the L vector pairs that constitute the set of exemplars to be stored, the following weight matrix can be constructed:

w = y_1 x_1^t + y_2 x_2^t + ... + y_L x_L^t   (2.18)

This equation gives the weights on the connections from the x layer to the y layer. To construct the weights for the x layer units, it is necessary simply to take the transpose of the weight matrix, w^t.


BAM Mathematics

On the y layer:

net^y = wx   (2.19)

where net^y is the vector of net-input values on the y layer. In terms of the individual units, y_i:

net_i^y = Σ_{j=1}^{n} w_ij x_j   (2.20)

On the x layer:

net^x = w^t y   (2.21)

net_i^x = Σ_{j=1}^{m} y_j w_ji   (2.22)

The quantities n and m are the dimensions of the x and y layers, respectively.

The output value for each processing element depends on the net-input value and on the current output value on the layer. The new value of y at time step t + 1, y(t + 1), is related to the value of y at time step t, y(t), by

y_i(t + 1) = +1 if net_i^y > 0; y_i(t) if net_i^y = 0; −1 if net_i^y < 0   (2.23)

Similarly, x(t + 1) is related to x(t) by

x_i(t + 1) = +1 if net_i^x > 0; x_i(t) if net_i^x = 0; −1 if net_i^x < 0   (2.24)

BAM Processing

To recall information using the BAM, the following steps should be performed:


1. The initial vector pair (x_0, y_0) is applied to the input elements of the BAM.

2. Information is propagated from the x layer to the y layer, and the values on the y layer are updated.

3. The updated y information is propagated back to the x layer and the units there are updated.

4. Steps 2 and 3 are repeated until there is no further change in the units on each layer.
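The weight construction (2.18) and the four-step recall cycle can be sketched directly; the two bipolar exemplar pairs below are invented for the illustration.

```python
import numpy as np

def threshold(net, prev):
    """Bipolar update rule of eqs. (2.23)-(2.24): keep the old value when net = 0."""
    return np.where(net > 0, 1, np.where(net < 0, -1, prev))

# Two exemplar pairs (invented); w = sum of y_l x_l^t (eq. 2.18)
x1 = np.array([ 1, -1,  1, -1]); y1 = np.array([ 1,  1, -1])
x2 = np.array([ 1,  1, -1, -1]); y2 = np.array([ 1, -1,  1])
w = np.outer(y1, x1) + np.outer(y2, x2)

def bam_recall(x):
    y = np.ones(w.shape[0], dtype=int)      # step 1: initial y on the y layer
    for _ in range(10):                     # step 4: repeat until stable
        y_new = threshold(w @ x, y)         # step 2: x layer -> y layer
        x_new = threshold(w.T @ y_new, x)   # step 3: y layer -> x layer
        if np.array_equal(y_new, y) and np.array_equal(x_new, x):
            break
        x, y = x_new, y_new
    return x, y
```

Starting recall from x1 settles on the stored pair (x1, y1), since the exemplar x vectors are orthogonal here.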

2.3.2 The Hopfield Memory

The Hopfield memory (HM) can be described as a derivative of the BAM [18]. There are two types of Hopfield memory, as described below.

Discrete Hopfield Memory

Fig. 2.5 illustrates the structure of the discrete Hopfield memory.

Continuous Hopfield Memory

The continuous Hopfield memory has the same useful properties of an associative memory, but it can accept analog input, making it closer to the natural neuron. Moreover, it can be represented by an analog electronic circuit, making it suitable for VLSI implementation.


Figure 2.5: Discrete Hopfield Memory

2.4 Simulated Annealing

It is possible to extend the analogy between information theory and statistical mechanics in order to place a neural network [18] in contact with a heat reservoir at some, as yet undefined, temperature. If so, then it is possible to perform a simulated annealing process whereby the temperature is gradually lowered while processing takes place in the network, in the hope of avoiding a local minimum on the energy landscape, as shown in Fig. 2.6. This situation can be better explained in the neural network known as the Boltzmann machine.

2.4.1 The Boltzmann Machine

The basic architecture of this type of neural network can be explained by Fig. 2.7 [18].


Figure 2.6: A simple energy landscape with two minima, a local minimum and a global minimum

Figure 2.7: The Boltzmann Completion Architecture


As in the discrete BAM, the system energy can be calculated from

E = −(1/2) Σ_j Σ_k w_jk x_j x_k   (2.25)

where n is the total number of units in the network and x_k is the output of the kth unit. The energy difference between the system with x_k = 0 and x_k = 1 is given by

ΔE_k = Σ_j w_kj x_j   (2.26)

The recall procedure is done by the simulated annealing procedure with x' as the starting vector on the visible units. The procedure is described by the following algorithm:

1. All the outputs of all known visible units are forced to the values specified by the initial input vector, x'.

2. All unknown visible units and all hidden units are assigned random output values from the set {1, 0}.

3. A unit, x_k, is selected at random and its net-input, net_k, is calculated.

4. Steps 3 and 4 are repeated until all units have had some probability of being selected for update. This number of unit-updates defines a processing cycle.

5. Step 5 is repeated for several processing cycles, until thermal equilibrium has been reached at the given temperature, T.

6. The temperature T is lowered and steps 3 through 5 are repeated.
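The recall loop above can be sketched as follows. The probabilistic rule P(x_k = 1) = 1/(1 + exp(−ΔE_k/T)) is the standard Boltzmann acceptance rule implied by steps 3-5 (it is not written out in the text), and the weight matrix and cooling schedule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([[ 0.0,  2.0, -1.0],       # symmetric weights, zero diagonal (invented)
              [ 2.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

x = rng.integers(0, 2, size=3)          # step 2: random initial outputs from {1, 0}
for T in [4.0, 2.0, 1.0, 0.5, 0.25]:    # step 6: gradually lower the temperature
    for _ in range(50):                 # steps 3-5: processing cycles at fixed T
        k = int(rng.integers(0, 3))     # step 3: select a unit at random
        net_k = float(w[k] @ x)         # its net input; Delta-E_k = net_k (eq. 2.26)
        p_on = 1.0 / (1.0 + np.exp(-net_k / T))
        x[k] = 1 if rng.random() < p_on else 0

E = -0.5 * float(x @ w @ x)             # final system energy (eq. 2.25)
```

With these invented weights, the global minimum is x = (1, 1, 0) with E = −2, and the annealed network settles there with high probability.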


Learning in Boltzmann Machines

A reasonable approach to train a Boltzmann machine can be summarized in the following way:

1. The temperature of the Boltzmann machine is artificially raised to some finite value.

2. The system is annealed until equilibrium is reached at some low temperature.

3. The weights of the network are adjusted so that the difference between the observed probability distribution and the canonical distribution is reduced.

4. Steps 1 through 3 are repeated until the weights no longer change.

2.5 The Counterpropagation Network

For a given set of vector pairs, (x_1, y_1), (x_2, y_2), ..., (x_L, y_L), the counterpropagation network (CPN) can learn to associate a vector x on the input layer with a vector y at the output layer [18]. If the relationship between x and y can be described by a continuous function Φ, such that y = Φ(x), then the CPN will learn to approximate this mapping for any value of x in the range specified by the set of training vectors. This situation is known as forward mapping of the CPN, and its structure is shown in Fig. 2.8.


Figure 2.8: Forward-mapping CPN

2.5.1 CPN Building Blocks

The building blocks of the CPN are explained in the following sections:

The Input Layer

The input layer of processing elements is shown in Fig. 2.9. The total input pattern intensity is given by I = Σ_i I_i. Corresponding to each I_i, a quantity can be defined as

Θ_i = I_i / Σ_j I_j   (2.27)

The vector (Θ_1, Θ_2, ..., Θ_n)^t is called a reflectance pattern. It should be noted that this pattern is normalized in the sense that Σ_i Θ_i = 1.


"

'''.1 Jr"

Fi~ure2.9;Layerofinputunite of..ePN

The Instar

The instar is a single processing element, as shown in Fig. 2.10. Assuming the initial output is zero, and that a nonzero input vector is present from time t = 0 until time t, the output can be defined as

y(t) = (b/a) net (1 − e^{−at})   (2.28)

The equilibrium value of y(t) is defined by

y^eq = (b/a) net   (2.29)


Figure 2.10: This figure shows (a) the general form of the processing element and (b) the instar form of the processing element

Competitive Networks

Fig. 2.11 illustrates the interconnection that implements competition among the instars. The unit activations are determined by differential equations, the simplest form of which is defined by

ẋ_i = −A x_i + (B − x_i)[f(x_i) + net_i] − x_i [Σ_{k≠i} f(x_k) + Σ_{j≠i} net_j]   (2.30)

The Outstar

Fig. 2.12 shows an outstar. It is composed of all of the units in the CPN outer layer and a single hidden-layer unit. During the training process, the output values of the outstar can be calculated from

ẏ_i = −a y_i + b y_i^I + c net_i   (2.31)

where y_i^I is the corresponding component of the training input.


Figure 2.11: A layer of instars arranged in a CPN

2.5.2 Training the CPN

The training procedure of the CPN can be summarized in the following points:

1. An input vector is selected from among all the input vectors to be used for the training.

2. The input vector is normalized and applied to the CPN competitive layer.

3. The winner should be determined.

4. For the winning unit only, α(x − w) should be calculated, and the unit's weights should be updated according to the following equation:

w(t + 1) = w(t) + α(x − w)   (2.32)


Figure 2.12: Outstar and its relationship to the CPN architecture: (a) outstar structures in a CPN network; (b) a single outstar unit

5. Steps 1 through 4 should be repeated until all input vectors have been processed once.

6. Step 5 should be repeated until all input vectors have been classified properly.

7. The network should be tested to verify its effectiveness.
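Steps 1 through 6 can be sketched for the competitive layer alone (the outstar layer is omitted). Everything here is invented for the illustration: three clusters of normalized two-dimensional vectors and hand-picked initial weights, so that each unit starts nearest one cluster.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Invented training vectors forming three clusters on the unit circle
data = [normalize(np.array(p, float)) for p in
        [(1.0, 0.1), (1.0, -0.1), (0.1, 1.0), (-0.1, 1.0), (-1.0, 0.1), (-1.0, -0.1)]]

# Hand-picked initial weight vectors, one near each cluster
W = np.array([normalize(np.array(p, float)) for p in
              [(0.9, 0.3), (0.3, 0.9), (-0.9, 0.3)]])
alpha = 0.3

for epoch in range(20):                       # steps 5-6: repeat over the set
    for x in data:                            # steps 1-2: normalized input applied
        winner = int(np.argmax(W @ x))        # step 3: winner-take-all
        W[winner] += alpha * (x - W[winner])  # step 4: eq. (2.32), winner only
        W[winner] = normalize(W[winner])      # keep the weight vector normalized
```

After training, each weight vector has migrated into "its" cluster, so the winning unit's weight vector nearly coincides with each training vector it classifies.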

2.5.3 Forward Mapping

It has been assumed that all training has occurred and that the network is now in production mode. For an input vector x, it is necessary to find the corresponding y vector. The required processing can be done by the following algorithm:


1. The input vector should be normalized: x_i' = x_i / sqrt(Σ_j x_j^2).

2. The input vector should be applied to the x-vector portion of layer 1, and a zero vector should be applied to the y-vector portion of the same layer.

3. Since the input vector is already normalized, the input layer only distributes it to the units on layer 2.

4. Layer 2 is a winner-take-all competitive layer. The output of each unit can be calculated as follows:

z_i = 1 if net_i > net_j for all j ≠ i; 0 otherwise   (2.33)

5. The single winner on layer 2 excites an outstar.

2.6 Self-Organizing Maps

In Self-Organizing Maps (SOM), the CPN network is modified such that, during the learning process, the positive feedback will extend from the central (the winning) unit to the other units in some finite neighborhood around the central unit [18]. In the competitive layer of the CPN, only the winning unit was allowed to learn; in the SOM, all the units in the neighborhood that receive positive feedback from the winning unit participate in the learning process.


2.6.1 Unit Activations

The following equation defines the activation of the processing elements:

ẏ_i = −r_i(y_i) + net_i + Σ_j z_ij y_j   (2.34)

The function r_i(y_i) is a general form of a loss term. If z_ij takes the form of the Mexican-hat function, then the network will exhibit a bubble of activity around the unit with the largest value of net input.

2.6.2 The SOM Learning Algorithm

The learning process can be defined by the following equation:

ẇ_i = α(t)(x − w_i) U(y_i)   (2.35)

where w_i is the weight vector of the ith unit and x is the input vector. For an input vector x, the winning unit can be determined by

||x − w_c|| = min_i ||x − w_i||   (2.36)

where the index c refers to the winning unit. This can be explained as

w_i(t + 1) = w_i(t) + α(t)(x − w_i(t)) if i ∈ N_c; w_i(t) otherwise   (2.37)
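Equations (2.36)-(2.37) can be sketched with a one-dimensional map. The map size, the schedules for α(t) and the neighborhood radius, and the uniformly random training inputs are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
units = 10                                    # a 1-D map of 10 units (invented)
W = rng.random((units, 2))                    # weight vectors w_i

def winner(x):
    """Index c of the winning unit, eq. (2.36)."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

for t in range(200):
    alpha = 0.5 * (1.0 - t / 200)             # decreasing learning rate alpha(t)
    radius = max(1, int(3 * (1.0 - t / 200))) # shrinking neighborhood N_c
    x = rng.random(2)                         # training input
    c = winner(x)
    for i in range(units):
        if abs(i - c) <= radius:              # units inside N_c learn, eq. (2.37)
            W[i] += alpha * (x - W[i])        # all other units keep their weights
```

Because each update is a convex combination of the old weight and the input, the weight vectors remain inside the input region while the map organizes.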

2.7 Spatiotemporal Pattern Classification

Neural networks as described previously are suitable for the recognition of spatial information patterns. A spatiotemporal pattern classifier can classify time-correlated sequences of spatial patterns [18].

Figure 2.13: Grossberg's formal avalanche structure

2.7.1 The Formal Avalanche

The foundation for the development of the network architectures discussed in this section is the formal avalanche structure by Grossberg, as shown in Fig. 2.13.

2.7.2 Architectures of Spatiotemporal Networks (STNs)

Fig. 2.14 shows an arrangement that generates spatiotemporal patterns (STPs) from a spoken word. At each instant of time, the output of the spectrum analyzer consists of a vector whose components are the powers in the various channels.

Figure 2.14: Power spectra generated from speech

2.8 The Neocognitron

This is a special type of neural network tailored for the recognition of handwritten characters [18]. The main pathways for neurons lead from the retina back to an area of the brain known as the visual, or striate, cortex.

2.8.1 Neocognitron Architecture

The processing elements (PEs) of the neocognitron are organized into modules that we shall refer to as levels. Each level consists of two layers: a layer of simple cells, or S-cells, followed by a layer of complex cells, or C-cells.


2.8.2 Neocognitron Data Processing

S-cell Processing

Here it has been considered that the index k_l refers to the kth plane on level l. Each cell on a plane can be labeled with a two-dimensional vector n indicating its position on the plane, with v referring to the relative position of a cell in the previous layer lying in the receptive field of the unit n. The S-cell output is computed from the weighted inputs over its receptive field, where the function φ is a linear function given by

φ(x) = x if x > 0; 0 if x ≤ 0   (2.39)

C-cell Processing

Usually, units on a given C-plane receive input connections from one, or at most a small number, of S-planes on the preceding layer. The output of a C-cell is given by

U_Cl(k_l, n) = ψ[ (1 + Σ_{k_{l-1}} j_l(k_{l-1}, k_l) Σ_v d_l(v) U_Sl(k_{l-1}, n + v)) / (1 + V_Sl(n)) − 1 ]   (2.40)


The function ψ is defined by

ψ(x) = φ(x) / (β + φ(x))   (2.41)

where β is a constant.

2.9 Adaptive Resonance Theory (ART)

Adaptive resonance theory (ART) is an extension of the competitive-learning schemes (CPN) [19]. A key to solving the stability-plasticity dilemma is to add a feedback mechanism between the competitive layer and the input layer of a network. This feedback mechanism facilitates the learning of new information without destroying old information, automatic switching between stable and plastic modes, and stabilization of the encoding of the classes done by the nodes. The results from this approach are two neural network architectures that are particularly suited for the pattern classification problem in realistic environments. These network architectures are ART1 and ART2. ART1 and ART2 differ in the nature of their input patterns. ART1 networks require that the input vectors be binary. ART2 networks are suitable for processing analog or gray-scale patterns [20].

2.9.1 ART Network Description

Fig. 2.15 shows the basic features of the ART architecture. There are two types of memory:


Figure 2.15: Model of ART system

1. Short-term memory (STM), which develops over the nodes in the two layers.

2. Long-term memory (LTM), the top-down and bottom-up weights between the F1 and F2 layers.

Pattern Matching in ART

The pattern matching cycle in ART can be defined by Fig. 2.16 [20].


Figure 2.16: A pattern matching cycle in an ART network: (a) pattern-matching attempt; (b) reset; (c) final recognition; (d) end of matching cycle

Figure2.16:A patternmatching cycleinAnART.Ca)pattem.m;ltching attempt(b)reset.in(e)finu recognition(d)end of matching cycle 2.9.2 ARTl

Theinput vecLorto ARTlisbinaryand it sharesthe common architectu re oftheART.It.'.processing cansummarizedin thefollowingpoinu :

1. The input vector I is applied to F1. The F1 activities are calculated as follows:

x_1i = I_i / (1 + A_1(I_i + B_1) + C_1)   (2.42)

2. The output vector for F1 is calculated as

s_i = h(x_1i) = 1 if x_1i > 0; 0 if x_1i ≤ 0   (2.43)

3. S is propagated forward to F2 and the activities are calculated as

T_j = Σ_{i=1}^{M} s_i z_ji   (2.44)


4. Only the winning F2 node has a nonzero output:

u_j = 1 if T_j = max_k T_k; 0 otherwise   (2.45)

5. The output from F2 is propagated back to F1. The net inputs from F2 to all the units of F1 are calculated as

V_i = Σ_{j=1}^{N} u_j z_ij   (2.46)

6. New F1 activities are calculated as

x_1i = (I_i + D_1 V_i − B_1) / (1 + A_1(I_i + D_1 V_i) + C_1)   (2.47)

7. As in step 2, the output values s_i are calculated.

8. The degree of match between the input pattern and the top-down template is given by the following equation:

|S| / |I| = Σ_{i=1}^{M} s_i / Σ_{i=1}^{M} I_i   (2.48)

9. If |S| / |I| < ρ, then v_J is marked as inactive, the outputs of F2 are zeroed, and it is necessary to return to step 1. If not, then we continue.

10. The bottom-up weights are updated on v_J only:

z_Ji = L / (L − 1 + |S|) if v_i is active; 0 if v_i is inactive   (2.49)

11. The top-down weights coming from v_J to all F1 units are updated:

z_iJ = 1 if v_i is active; 0 if v_i is inactive   (2.50)

12. The input pattern should be removed, and all inactive F2 units should be restored.


2.9.3 ART2

This neural network is similar to ART1. It differs only in the sense that its input pattern is an analog signal.

2.10 Comparative Analysis and Selection of a Suitable Network

In the case of incipient fault detection of an induction motor based on spectral recognition, an ANN should have the following properties:

1. Low training time.

2. Ability to learn new knowledge while retaining the old without any retraining on past patterns.

3. The training process should have certainty of reaching the global minimum.

4. It should accept analog input patterns.

5. The input pattern should be in the spatial domain.

6. It should have the ability to report if it cannot classify a particular pattern.

7. It should be a general-purpose ANN, so that necessary modifications can be made to make it suitable to the present problem domain.


To select a proper ANN for this purpose, the salient features of different ANNs have been summarized in the following table. From the previous discussion it can be concluded that ART2 satisfies the necessary characteristics to be accepted as a suitable neural network for incipient fault detection of an induction motor. Detailed software implementation and performance of ART2 are explained in the next chapter.

Table 2.1: Comparative Performances of Different ANN Techniques


Chapter 3

ART2 Neural Network

3.1 Introduction

In response to the questions of dynamic updating and training time of conventional neural networks, adaptive resonance theory (ART) has been proposed by Grossberg, Carpenter, and others [19]. ART2 is a special version of ART having the property of analog input. In the implementation phase it has been modified to have the additional property of reporting if it cannot find a match for an input pattern.

3.2 ART2 Architecture

The structure of ART2 can be represented by Fig. 3.1. It consists of two subsystems, known as the attentional subsystem (AS) and the orienting subsystem (OS). The AS consists of two layers of processing elements (PEs), F1 and F2, and


a gain-control system G.

Figure 3.1: The overall structure of ART2

3.2.1 The Attentional Subsystem

The activities of the processing elements on the layers F1 and F2 can be defined by the following dynamic equation:

ε ẋ_k = −A x_k + (1 − B x_k) J_k^+ − (C + D x_k) J_k^−   (3.1)

where J_k^+ and J_k^− are the excitatory and inhibitory inputs to the kth unit, respectively. The precise definitions of A, B, and C depend upon the layer, and for this case B and C have been considered to be zero. Here it has been considered that x_1i and x_2i refer to the activities on the F1 and F2 layers, respectively; v_i represents the nodes on F1 and y_j those on F2.


Figure 3.2: Structure of a processing element on the F1 layer

The constant ε determines how fast x_k reaches equilibrium.

Processing on F1

A single processing element on F1, with its various inputs and weight vectors, can be represented by Fig. 3.2 [19]. The unit calculates a net-input value coming from F2 in the usual way:

net_i = Σ_j g(y_j) z_ij   (3.2)

The values of the individual quantities in the defining equations of F1 and F2 vary according to the sublayer being considered. For the sake of convenience, the appropriate values of the parameters for layer F1 have been summarized in Table 3.1. Based on the table, the activities on each of the six sublayers on F1 can be summarized by the following equations:

Table 3.1: Values of parameters on the F1 layer

Sublayer | J_k^+ | J_k^−
w | I_i + a u_i | 0
x | w_i | ||w||
v | f(x_i) + b f(q_i) | 0
u | v_i | ||v||
p | u_i + Σ_j g(y_j) z_ij | 0
q | p_i | ||p||

w_i = I_i + a u_i   (3.3)

x_i = w_i / (e + ||w||)   (3.4)

v_i = f(x_i) + b f(q_i)   (3.5)

u_i = v_i / (e + ||v||)   (3.6)

p_i = u_i + Σ_j g(y_j) z_ij   (3.7)

q_i = p_i / (e + ||p||)   (3.8)
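The six sublayer equations (3.3)-(3.8) form a loop that is iterated until the F1 activities reach equilibrium. A direct transcription follows; the parameter values are merely illustrative, no F2 node is assumed active (so the sum in (3.7) vanishes), and the input vector is invented.

```python
import numpy as np

a, b, theta = 10.0, 10.0, 0.2               # illustrative ART2 parameters
e = 1e-9                                    # small constant avoiding division by zero

def f(x):
    return np.where(x >= theta, x, 0.0)     # contrast enhancement, eq. (3.9)

def f1_activities(I, iterations=10):
    """Iterate the six sublayer equations (3.3)-(3.8) toward equilibrium."""
    u = np.zeros(len(I))
    q = np.zeros(len(I))
    for _ in range(iterations):
        w = I + a * u                        # (3.3)
        x = w / (e + np.linalg.norm(w))      # (3.4)
        v = f(x) + b * f(q)                  # (3.5)
        u = v / (e + np.linalg.norm(v))      # (3.6)
        p = u                                # (3.7), no active F2 node
        q = p / (e + np.linalg.norm(p))      # (3.8)
    return u, p

u, p = f1_activities(np.array([0.2, 0.0, 0.1, 0.7]))
```

The gain-control normalization shows up as ||u|| = 1, and the contrast enhancement drives the small third component of the input to exactly zero.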

The form of the function f(x) determines the nature of the contrast enhancement that takes place on F1. A sigmoid might be the logical choice for this function, but Carpenter's [20] choice is

f(x) = x if x ≥ θ; 0 if 0 ≤ x < θ   (3.9)

where θ is a positive constant less than one.

Processing on F2

Fig. 3.3 shows a typical PE on the F2 layer. Net-input values are calculated through the bottom-up weights according to the following equation:

T_j = Σ_i p_i z_ji   (3.10)

The output on F2 is given by the function

g(y_j) = d if T_j = max_k T_k; 0 otherwise   (3.11)

3.2.2 The Orienting Subsystem

From the parameter table and the defining equation of ART2, the activities on the layer r of the orienting subsystem can be defined by

r_i = (u_i + c p_i) / (||u|| + ||c p||)   (3.12)


Figure 3.3: Structure of a processing element on the F2 layer

Here it has been assumed that e = 0, and the condition for reset is

ρ / (e + ||r||) > 1   (3.13)

It should be noted that two sublayers, p and u, participate in the matching process. As the top-down weights change on the p layer during learning, the activity of the units on the p layer also changes. The u layer remains stable during this process, so including it in the matching process prevents reset from occurring while learning of a new pattern is taking place.

3.2.3 Gain Control in ART2

The three gain-control units on F1 nonspecifically inhibit the x, u, and q sublayers. The inhibitory signal is equal to the magnitude of the input vector to those layers. The effect is that the activities of these three layers are normalized to unity by the gain-control signals.

3.2.4 Least-Mean-Square Equations

Both the bottom-up and top-down least-mean-square equations have the same form, as shown below:

ż_ji = g(y_j)(p_i − z_ji)   (3.14)

for the bottom-up weights from v_i on F1 to v_j on F2, and

ż_ij = g(y_j)(p_i − z_ij)   (3.15)

for the top-down weights from v_j on F2 to v_i on F1. If v_J is the winning node, then from the previous equations it can be shown that

ż_Ji = d(u_i + d z_iJ − z_Ji)   (3.16)

and similarly

ż_iJ = d(u_i + d z_iJ − z_iJ)   (3.17)

with all other ż_ji = ż_ij = 0 for j ≠ J. For the fast-learning case, the equilibrium values of the weights are

z_Ji = z_iJ = u_i / (1 − d)   (3.18)
