HAL Id: hal-01652194

https://hal.archives-ouvertes.fr/hal-01652194

Submitted on 30 Nov 2017

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Weighted-feature and cost-sensitive regression model for component continuous degradation assessment

Jie Liu, Enrico Zio

To cite this version:

Jie Liu, Enrico Zio. Weighted-feature and cost-sensitive regression model for component continuous degradation assessment. Reliability Engineering and System Safety, Elsevier, 2017, 168, pp.210 - 217.

doi:10.1016/j.ress.2017.03.012. hal-01652194


Contents lists available at ScienceDirect

Reliability Engineering and System Safety

journal homepage: www.elsevier.com/locate/ress

Weighted-feature and cost-sensitive regression model for component continuous degradation assessment

Jie Liu a, Enrico Zio a,b,∗

a Chair on System Science and the Energetic Challenge, EDF Foundation, Laboratoire Genie Industriel, CentraleSupélec, Université Paris-Saclay, Grande voie des Vignes, 92290 Chatenay-Malabry, France
b Energy Department, Politecnico di Milano, Milano, Italy

Article info

Keywords:
Condition-based maintenance
Cost-sensitive
Continuous degradation assessment
Feature vector regression
Feature vector selection
Weighted-feature

Abstract

Conventional data-driven models for component degradation assessment try to minimize the average estimation error on the entire available dataset. However, an imbalance may exist among different degradation states, because of the specific data sizes and/or the interest of the practitioners in the different degradation states.

Specifically, reliable equipment may experience long periods in low-level degradation states and short times in high-level ones. Then, the conventionally trained models may overfit the low-level degradation states, as their data sizes overwhelm those of the high-level degradation states. In practice, it is usually more interesting to have accurate results on the high-level degradation states, as they are closer to the equipment failure. Thus, during the training of a data-driven model, larger error costs should be assigned to data points in high-level degradation states when the training objective minimizes the total cost on the training dataset. In this paper, an efficient method is proposed for calculating the costs for continuous degradation data. Considering the different influence of the features on the output, a weighted-feature strategy is integrated in the development of the data-driven model. Real data of leakage of a reactor coolant pump are used to illustrate the application and effectiveness of the proposed approach.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Condition-Based Maintenance (CBM) has gained much attention recently [23,32,8]. Advanced sensors implemented in production systems measure physical variables related to the equipment degradation, and proper maintenance is recommended based on the assessed degradation of the equipment. Compared to corrective and scheduled maintenance, CBM can reduce the direct and indirect costs of maintenance and prevent catastrophic failure [1].

One of the cornerstones of CBM is the precise assessment of the current degradation state of the equipment of interest. If it is monitored directly by sensors, it is relatively easy to identify the system degradation state. Otherwise, the degradation state of the equipment of interest needs to be inferred from the related monitored variables. For the latter case, physical [13,25], knowledge-based [12,26] or data-driven models [16,33] can be integrated depending on the available knowledge, information and data on the degradation [36,46]. Data-driven models, in particular, benefit from high computation speed and advanced sensors.

These models can also be classified based on the type of the underlying degradation process, i.e. proportional hazard models [30], discrete-state degradation [24] and continuous-state degradation [1].

∗ Corresponding author at: Chair on System Science and the Energetic Challenge, EDF Foundation, Laboratoire Genie Industriel, CentraleSupélec, Université Paris-Saclay, Grande voie des Vignes, 92290 Chatenay-Malabry, France.
E-mail address: enrico.zio@polimi.it (E. Zio).

In this work, we focus on data-driven models with supervised learning methods for continuous degradation assessment. Assuming that a pool of degradation patterns of similar equipment is available, a data-driven model is trained off-line with the recorded degradation data and, then, it is used to assess on-line the degradation state of an equipment under operation. Shen et al. [34] adopt a fuzzy support vector data description to construct a fuzzy-monitoring coefficient, which serves as a monotonic damage index of bearing degradation. Logistic regression models and incremental rough support vector data description are used separately in Caesarendra et al. [6] and Zhu et al. [45] for bearing degradation assessment. Principal component analysis is used in Gómez et al. [10] to obtain soil degradation indexes for distinguishing between olive farms with low soil degradation and those in a serious state of degradation. Peng et al. [28] use an inverse Gaussian process model for degradation modelling. A nonhomogeneous continuous-time hidden semi-Markov process is proposed in Moghaddass and Zuo [24] for modelling the degradation of devices undergoing discrete, multistate degradation. Vale and Lurdes [37] employ a stochastic model for characterizing the track geometry degradation process in the Portuguese railway Northern Line.

http://dx.doi.org/10.1016/j.ress.2017.03.012
Available online 22 March 2017
0951-8320/© 2017 Elsevier Ltd. All rights reserved.

For the development of the previous data-driven models, it is assumed that the pool of available degradation data is large and representative of the different degradation states of the equipment. However, in practical cases, especially for highly reliable equipment, a problem of imbalance often exists between different degradation states with respect to the knowledge, information and data available to characterize them.

For highly reliable equipment, typically there is a long period without degradation or with low-level degradation states, and the data representative of high-level degradation states are relatively limited. Training a data-driven model on the entire set of recorded data with the objective of minimizing the average estimation error on the whole degradation process may lead to a relatively worse performance of the trained model for the data of high-level degradation states, as the data on the low-level degradation states outnumber those on the high-level degradation states and the trained model overfits on the low-level degradation states. On the other hand, if the data on the high-level degradation states are not sufficiently large and representative, training a data-driven model only with the data of high-level degradation states may lead to overfitting the recorded data and low generalizability on test data. Furthermore, even if the data on different degradation states are comparable and representative, the industrial practitioner may be more interested in some degradation states, e.g. the high-level degradation states and the peak values during the degradation process, as they are more critical for the functioning of the equipment and, thus, require more accuracy. So, the data-driven model should be trained on the whole dataset, but the objective cannot be that of minimizing the average estimation error on the whole dataset. In this work, an approach is proposed based on cost-sensitive regression models in combination with a weighted-feature strategy for the imbalance problem in continuous degradation assessment.

To the authors' knowledge, no work has been reported on this problem for continuous degradation assessment.

Cost-sensitive models are very popular for solving classification problems with different costs for different misclassification errors [9]. They have been successfully applied in medical diagnosis [39], object detection [5], intrusion detection [15], face recognition [42], software defect prediction [17], etc. The objective of training a cost-sensitive model is to minimize the total cost of the misclassification errors, assigning a larger cost to the errors on the minority class than to those on the majority class. Cost-sensitive models have been integrated in neural networks [14], decision trees [41], support vector machines [38], Bayesian networks [11], etc. Different methods can be used to assign the costs for different misclassification errors, and the most popular one is to set the costs based on expertise on the problem [40]. Different cost-assignment methods can be proposed considering the specific requirements and characteristics of the application. In the mentioned previous works, discrete costs are assigned to different classes, assuming that the cost of the misclassification errors on the data of the same class is the same. This is reasonable for classification problems with finite classes. On the contrary, for regression of continuous degradation, it is infeasible to assign discrete costs for infinite degradation states.

One way is to discretize the continuous degradation states, but this may cause loss of information on the degradation states. This work proposes a method for assigning continuous costs to degradation states.

The proposed method assigns larger costs to the regression errors on the high-level degradation states than to those on the low-level degradation states, and also larger costs to peak values than to normal values in their neighborhoods.

The basic data-driven model used in this paper for integrating the cost-sensitive strategy is Feature Vector Regression (FVR), a kernel method proposed in Liu and Zio [20]. Training a FVR model requires feature vector selection and regression model construction. Feature vector selection proceeds by selecting part of the training data points as feature vectors in the Reproducing Kernel Hilbert Space (RKHS), where the mapping of all training data points can be expressed as a linear combination of the selected feature vectors. Differently from the traditional kernel methods that define the estimation function as a kernel expansion on all the training dataset, the estimate function of FVR is a kernel expansion only on the feature vectors. The objective is to minimize the regression error on the whole training dataset. The parameters in FVR can be calculated analytically, without using a sophisticated method for tuning parameters in the model. The FVR model is combined with cost-sensitive and weighted-feature strategies in this work. These strategies are suitable not only for FVR, but also for other regression models for degradation assessment.

Another original contribution of this work is the adopted weighted-feature strategy. Conventionally, after the raw data are pre-processed, some features are selected as inputs and they directly form the input vectors which are used for training a supervised data-driven model. However, the selected features may still have different levels of influence on the output. In order to characterize this difference in kernel methods, larger weights are assigned in the kernel function to the features with a higher influence on the output. The weighted-feature strategy has been used in Amutha and Kavitha [3], Peng et al. [29], Phan et al. [31] and Liu et al. [18]. The weighted-feature strategy is integrated for the first time with the basic model (FVR) in this work, and it is also the first time that the weighted-feature strategy is used for degradation assessment. An efficient optimization process is adopted in this work for finding the weights for the different features and the parameters in FVR.

To demonstrate the proposed approach, we make use of a case study concerning the estimation of leakage of coolant water from the seal of a Reactor Coolant Pump (RCP) in a Nuclear Power Plant (NPP). The control of the leakage is very important for safety reasons [21]. Sensors are installed to monitor the temperature, pressure and flow of coolant water. These variables are informative for assessing the leakage amount. When the amount of leakage is low, the reactor can make up the leakage by using auxiliary and protection systems. But when it is large, the NPP must be shut down. Thus, operators are more concerned with correctly identifying the leakages of large magnitude than the small ones. Correspondingly, the accuracy is more important in case of a large amount of leakage than of a small amount. However, as the NPP components are highly reliable, most of the historical recorded data relate to low amounts of leakage, the data on large amounts of leakage being far less. In this work, the proposed weighted-feature and cost-sensitive FVR is adopted for estimating continuously the leakage from the RCP.

The remainder of the paper is structured as follows. Section 2 recalls briefly the FVR method and introduces the approach proposed for assessing continuous degradation with imbalance. The case study on the leakage from the RCP is presented in Section 3 with experimental results. Some conclusions and perspectives are drawn in Section 4.

2. Weighted-feature and cost-sensitive FVR for degradation assessment

FVR, proposed in Liu and Zio [20], is a kernel method which allows easy tuning of hyperparameters and offers a geometrical interpretation. FVR reduces the size of the estimate function by selecting a number of feature vectors and keeps its generalization power by minimizing the error on the whole training dataset. In this Section, FVR is briefly reviewed first and, then, the weighted-feature and cost-sensitive FVR is introduced, with details on the calculation of the costs and the tuning of the weight of each feature.

In the paper, $(\boldsymbol{x}, y)$ represents the input and corresponding output of a nonlinear relation. $k(\boldsymbol{x}_1, \boldsymbol{x}_2)$ is the kernel function, taken as the inner product of $\varphi(\boldsymbol{x}_1)$ and $\varphi(\boldsymbol{x}_2)$, i.e. $\langle\varphi(\boldsymbol{x}_1), \varphi(\boldsymbol{x}_2)\rangle$, where $\varphi(\boldsymbol{x})$ is the function that maps the original data into a high-dimensional space (i.e. the Reproducing Kernel Hilbert Space (RKHS)), where the relation between input and output becomes linear. The training dataset is $\mathbf{T} = \{(\boldsymbol{x}_i, y_i)\}$, for $i = 1, 2, \dots, N$. Without restricting the generalizability of the model, we assume that high-level degradation states correspond to high values of $y$.

Fig. 1. Pseudo-code of the feature vector selection process.

2.1. Feature vector regression

Schölkopf et al. [35] propose nonparametric and semi-parametric representer theorems demonstrating that kernel algorithms with the minimal sum of an empirical risk term and a regularization term in RKHS as optimization objective achieve their optimal solutions for an estimate function that is a kernel expansion on the training data points. Specifically, in mathematical terms, such estimate function $f(\boldsymbol{x})$ can be expressed as

$$f(\boldsymbol{x}) = \sum_{i=1}^{N} \alpha_i k(\boldsymbol{x}_i, \boldsymbol{x}) + b = \sum_{i=1}^{N} \alpha_i \langle \varphi(\boldsymbol{x}_i), \varphi(\boldsymbol{x}) \rangle + b, \qquad (1)$$

where $\alpha_i$, $i = 1, 2, \dots, N$, are the Lagrange multipliers and $b$ is a constant.

If there exists a subset $\boldsymbol{S} = \{(\boldsymbol{x}_{s_i}, y_{s_i})\}$, $i = 1, 2, \dots, M$, with $M < N$, such that the mapping of each training data point can be expressed as a linear combination of the mappings of $\boldsymbol{S}$ in the RKHS, i.e. $\varphi(\boldsymbol{x}) = \sum_{i=1}^{M} \beta_i(\boldsymbol{x}) \varphi(\boldsymbol{x}_{s_i})$, then, after replacing $\varphi(\boldsymbol{x})$ in Eq. (1) with $\sum_{i=1}^{M} \beta_i(\boldsymbol{x}) \varphi(\boldsymbol{x}_{s_i})$, the estimate function $f(\boldsymbol{x})$ can be rewritten as

$$f(\boldsymbol{x}) = \sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_i \beta_j(\boldsymbol{x}) \langle \varphi(\boldsymbol{x}_i), \varphi(\boldsymbol{x}_{s_j}) \rangle + b = \sum_{j=1}^{M} \beta_j(\boldsymbol{x}) \left( \sum_{i=1}^{N} \alpha_i \langle \varphi(\boldsymbol{x}_i), \varphi(\boldsymbol{x}_{s_j}) \rangle \right) + b;$$

thus,

$$f(\boldsymbol{x}) = \sum_{i=1}^{M} \beta_i(\boldsymbol{x}) \left( \hat{y}_{s_i} - b \right) + b, \qquad (2)$$

with $\hat{y}_{s_i} = f(\boldsymbol{x}_{s_i}) = \sum_{j=1}^{N} \alpha_j k(\boldsymbol{x}_j, \boldsymbol{x}_{s_i}) + b$.
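Once the $\beta$ values for a query point and the estimated outputs $\hat{y}_s$ at the feature vectors are known, the estimate function of Eq. (2) is a simple affine combination. A minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def fvr_predict(Bx, ys_hat, b):
    """Eq. (2): f(x) = sum_i beta_i(x) * (y_s_i - b) + b.

    Bx     : (n, M) matrix, row i holds beta(x_i) for query point x_i
    ys_hat : (M,) estimated outputs at the M feature vectors
    b      : scalar constant
    """
    return Bx @ (ys_hat - b) + b
```

At a feature vector itself, $\beta(\boldsymbol{x}_{s_i})$ is the $i$-th unit vector (its mapping is trivially its own linear combination), so the prediction there reduces to $\hat{y}_{s_i}$.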

With Eq. (2), FVR formulates the optimization problem as

$$\min_{\hat{y}_{s_j},\, b} \; W = \frac{1}{N} \sum_{i=1}^{N} \left( f(\boldsymbol{x}_i) - y_i \right)^2 \quad \text{subject to} \quad f(\boldsymbol{x}_i) = \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \left( \hat{y}_{s_j} - b \right) + b, \qquad (3)$$

where the unknown parameters (decision variables) to optimize are $\hat{y}_{s_j}$, $j = 1, 2, \dots, M$, and $b$, and the objective of the optimization problem is the minimal estimation error on the whole training dataset.

By setting the partial derivatives of $W$ with respect to $\hat{y}_{s_j}$ and $b$ to zero, the previous parameters can be calculated by solving the following system of equations:

$$\begin{bmatrix} \boldsymbol{\Omega} & \mathbf{H} \\ \boldsymbol{\Gamma}^{T} & c \end{bmatrix} \begin{bmatrix} \hat{\boldsymbol{y}}_s \\ b \end{bmatrix} = \begin{bmatrix} \mathbf{P} \\ l \end{bmatrix}, \qquad (4)$$

where $\boldsymbol{\Omega}$ is an $M \times M$ matrix with $\Omega_{mn} = \sum_{i=1}^{N} \beta_m(\boldsymbol{x}_i)\,\beta_n(\boldsymbol{x}_i)$; $\mathbf{H}$ is an $M \times 1$ vector with $H_m = \sum_{i=1}^{N} \beta_m(\boldsymbol{x}_i) \left( 1 - \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \right)$; $\boldsymbol{\Gamma}$ is an $M \times 1$ vector with $\Gamma_m = \sum_{i=1}^{N} \beta_m(\boldsymbol{x}_i) \left( 1 - \sum_{l=1}^{M} \beta_l(\boldsymbol{x}_i) \right)$; $c$ is a constant with $c = \sum_{i=1}^{N} \left( 1 - \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \right)^2$; $\hat{\boldsymbol{y}}_s = (\hat{y}_{s_j})$, $j = 1, 2, \dots, M$, and $b$ are the unknowns in Eq. (3); $\mathbf{P}$ is an $M \times 1$ vector with $P_m = \sum_{i=1}^{N} \beta_m(\boldsymbol{x}_i)\, y_i$, and $l = \sum_{i=1}^{N} \left( 1 - \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \right) y_i$.
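The system of Eq. (4) is simply the normal equations of a linear least-squares problem and can be assembled directly. A minimal NumPy sketch, assuming the matrix `B` of values $\beta_j(\boldsymbol{x}_i)$ has already been computed (e.g. via Eq. (5)); note that $\mathbf{H}$ and $\boldsymbol{\Gamma}$ coincide in Eq. (4):

```python
import numpy as np

def solve_fvr(B, y):
    """Assemble and solve the FVR linear system of Eq. (4).

    B : (N, M) matrix with B[i, j] = beta_j(x_i)
    y : (N,) vector of training outputs
    Returns (y_s_hat, b).
    """
    r = 1.0 - B.sum(axis=1)          # 1 - sum_j beta_j(x_i), per point
    Omega = B.T @ B                  # (M, M)
    H = B.T @ r                      # (M,), identical to Gamma here
    c = float(r @ r)
    P = B.T @ y
    l = float(r @ y)
    A = np.block([[Omega, H[:, None]],
                  [H[None, :], np.array([[c]])]])
    rhs = np.concatenate([P, [l]])
    sol = np.linalg.solve(A, rhs)
    return sol[:-1], sol[-1]
```

Because these are the normal equations of the objective in Eq. (3), the recovered parameters reproduce the training outputs exactly whenever the data are consistent with the model of Eq. (2).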

The Feature Vector Selection (FVS) method proposed in Baudat and Anouar [4] can find the previous subset $\boldsymbol{S}$. The main idea is to find an oblique coordinate system in the RKHS of $k(\boldsymbol{x}_1, \boldsymbol{x}_2)$ so that each vector in the RKHS can be expressed as a linear combination of the coordinate vectors. The FVS method can build such an oblique coordinate system with the mapping of a number of the training data points. Experimental results show that the number of the selected data points may be far less than that of the training dataset, i.e. $M \ll N$. A modified version of the FVS method is proposed in Liu and Zio [20] to speed up the selection process. The local fitness of one data point $\boldsymbol{x}$ with respect to the current $\boldsymbol{S}$ is calculated as $LF(\boldsymbol{x}) = \left| 1 - K_{S,x}^{T} K_{S,S}^{-1} K_{S,x} \,/\, k(\boldsymbol{x}, \boldsymbol{x}) \right|$. Details on the method can be found in Baudat and Anouar [4] and Liu and Zio [20]. For completeness, the pseudo-code of the feature vector selection process proposed in Liu and Zio [20] is shown in Fig. 1.

The parameters $\boldsymbol{\beta}(\boldsymbol{x}) = \{\beta_i(\boldsymbol{x})\}$, $i = 1, 2, \dots, M$, in Eqs. (2)-(4) can be calculated with the selected feature vectors in $\boldsymbol{S}$ as in Eq. (5):

$$\boldsymbol{\beta}(\boldsymbol{x}) = K_{S,x}^{T} K_{S,S}^{-1}, \qquad (5)$$

where $K_{S,S}$ is the kernel matrix of $\boldsymbol{S}$ and $K_{S,x} = \{k(\boldsymbol{x}_i, \boldsymbol{x})\}$, $i = 1, 2, \dots, M$.
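The selection loop and the computation of $\boldsymbol{\beta}(\boldsymbol{x})$ in Eq. (5) can be sketched as follows. This is a simplified greedy variant built on the local fitness criterion, assuming an RBF kernel (for which $k(\boldsymbol{x}, \boldsymbol{x}) = 1$); the stopping threshold `eps` is an illustrative parameter and the exact loop of Fig. 1 may differ:

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    """RBF kernel matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * gamma ** 2))

def select_feature_vectors(X, gamma=1.0, eps=1e-3):
    """Greedy selection driven by the local fitness
    LF(x) = |1 - K_{S,x}^T K_{S,S}^{-1} K_{S,x} / k(x, x)|."""
    S = [0]                                  # start from the first point
    while True:
        K_SS = rbf(X[S], X[S], gamma)
        K_SX = rbf(X[S], X, gamma)           # (|S|, N)
        inv = np.linalg.inv(K_SS + 1e-10 * np.eye(len(S)))  # small jitter
        # local fitness of every point w.r.t. the current subset S
        lf = np.abs(1.0 - np.einsum('in,ij,jn->n', K_SX, inv, K_SX))
        lf[S] = 0.0                          # already selected points
        cand = int(np.argmax(lf))
        if lf[cand] < eps:                   # all points well represented
            break
        S.append(cand)
    return S

def beta(X, S, Xtr, gamma=1.0):
    """Eq. (5): beta(x) = K_{S,x}^T K_{S,S}^{-1}, one row per row of X."""
    K_SS = rbf(Xtr[S], Xtr[S], gamma)
    K_SX = rbf(Xtr[S], X, gamma)
    return np.linalg.solve(K_SS + 1e-10 * np.eye(len(S)), K_SX).T
```

Duplicated or linearly dependent mappings add nothing new in the RKHS, so the selected subset can be much smaller than the training set, which is the point of FVS.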

2.2. Weighted-feature and cost-sensitive FVR

For the situation with data imbalance mentioned in the Introduction, the minimal average error as in Eq. (3) may not give satisfactory results.

To avoid this, different costs should be assigned to the errors on different data points during the training process, in order to satisfy the desired accuracy on some specific data points of interest.



Another issue regards the usefulness of different features for the output assessment: the features need to be treated differently and larger weights should be assigned to the more useful ones. In kernel methods, different weights can be assigned to different features by using the kernel function $k(\boldsymbol{\omega}.\boldsymbol{x}_1, \boldsymbol{\omega}.\boldsymbol{x}_2)$ instead of $k(\boldsymbol{x}_1, \boldsymbol{x}_2)$, where $\boldsymbol{\omega}$ is the weight vector (of the same size as $\boldsymbol{x}$) of the usefulness of each feature and the operator "." is the element-wise multiplication.
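In code, the weighted-feature kernel amounts to rescaling each input dimension before applying the ordinary kernel. A minimal sketch with an RBF kernel (function names are illustrative):

```python
import numpy as np

def weighted_rbf(x1, x2, w, gamma=1.0):
    """k(w.x1, w.x2): RBF kernel on element-wise weighted inputs.
    Features with larger weights w_i contribute more to the distance
    and, hence, influence the output estimate more strongly."""
    diff = w * x1 - w * x2            # element-wise feature weighting
    return np.exp(-np.dot(diff, diff) / (2.0 * gamma ** 2))
```

A feature with weight zero is ignored entirely, while a large weight makes the kernel very sensitive to differences in that feature.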

Initiating the costs of the different data points as $\boldsymbol{C} = [c_1, c_2, \dots, c_N]$, the optimization problem in Eq. (3) can be reformulated as

$$\min_{\hat{y}_{s_j},\, b} \; W = \frac{1}{N} \sum_{i=1}^{N} c_i \left( f(\boldsymbol{x}_i) - y_i \right)^2 \quad \text{subject to} \quad f(\boldsymbol{x}_i) = \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \left( \hat{y}_{s_j} - b \right) + b. \qquad (6)$$

The unknown parameters $\hat{y}_{s_j}$, $j = 1, 2, \dots, M$, and $b$ in Eq. (6) can be calculated by solving the following equation:

$$\begin{bmatrix} \boldsymbol{\Omega} & \mathbf{H} \\ \boldsymbol{\Gamma}^{T} & c \end{bmatrix} \begin{bmatrix} \hat{\boldsymbol{y}}_s \\ b \end{bmatrix} = \begin{bmatrix} \mathbf{P} \\ l \end{bmatrix}, \qquad (7)$$

where $\boldsymbol{\Omega}$ is an $M \times M$ matrix with $\Omega_{mn} = \sum_{i=1}^{N} c_i\, \beta_m(\boldsymbol{x}_i)\, \beta_n(\boldsymbol{x}_i)$; $\mathbf{H}$ is an $M \times 1$ vector with $H_m = \sum_{i=1}^{N} c_i\, \beta_m(\boldsymbol{x}_i) \left( 1 - \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \right)$; $\boldsymbol{\Gamma}$ is an $M \times 1$ vector with $\Gamma_m = \sum_{i=1}^{N} c_i\, \beta_m(\boldsymbol{x}_i) \left( 1 - \sum_{l=1}^{M} \beta_l(\boldsymbol{x}_i) \right)$; $c$ is a constant with $c = \sum_{i=1}^{N} c_i \left( 1 - \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \right)^2$; $\mathbf{P}$ is an $M \times 1$ vector with $P_m = \sum_{i=1}^{N} c_i\, \beta_m(\boldsymbol{x}_i)\, y_i$, and $l = \sum_{i=1}^{N} c_i \left( 1 - \sum_{j=1}^{M} \beta_j(\boldsymbol{x}_i) \right) y_i$.
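Eq. (7) has the same normal-equation structure as Eq. (4), with a diagonal matrix of costs inserted in every sum, i.e. a weighted least-squares problem. A minimal NumPy sketch (the matrix `B` of $\beta_j(\boldsymbol{x}_i)$ values and the cost vector `c` are assumed given):

```python
import numpy as np

def solve_cs_fvr(B, y, c):
    """Assemble and solve the cost-sensitive system of Eq. (7).

    B : (N, M) matrix with B[i, j] = beta_j(x_i)
    y : (N,) training outputs
    c : (N,) per-point costs c_i
    Returns (y_s_hat, b).
    """
    r = 1.0 - B.sum(axis=1)              # 1 - sum_j beta_j(x_i)
    Omega = B.T @ (c[:, None] * B)       # sum_i c_i beta_m beta_n
    H = B.T @ (c * r)                    # identical to Gamma in Eq. (7)
    const = float(c @ r ** 2)
    P = B.T @ (c * y)
    l = float((c * r) @ y)
    A = np.block([[Omega, H[:, None]],
                  [H[None, :], np.array([[const]])]])
    rhs = np.concatenate([P, [l]])
    sol = np.linalg.solve(A, rhs)
    return sol[:-1], sol[-1]
```

With all costs equal to one this reduces to the system of Eq. (4).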

The parameters $\boldsymbol{\beta}(\boldsymbol{x})$ are still calculated with Eq. (5), but with the kernel function $k(\boldsymbol{\omega}.\boldsymbol{x}_1, \boldsymbol{\omega}.\boldsymbol{x}_2)$.

In this model, the unknown parameters include the cost vector $\boldsymbol{C} = [c_1, c_2, \dots, c_N]$, the weight vector $\boldsymbol{\omega} = [\omega_1, \omega_2, \dots, \omega_{N_f}]$ (with $N_f$ the number of features) and the parameters related to the kernel function.

The cost vector $\boldsymbol{C}$ can be calculated analytically. In this work, we focus on the peak values and the high-level degradation states. The peak values may occur abruptly and cause failures of the system. The high-level degradation states are reached close to the failure state of the system and, thus, it is even more important to get good accuracy there. Thus, in this work, the cost depends on the degradation state and on the peak value. A large cost is, thus, assigned to data points which are peak values or belong to high-level degradation states. Specifically, for a data point $(\boldsymbol{x}_t, y_t)$ with $t = 1, 2, \dots, N$, its cost is calculated as

$$c_t = e^{(s_t + d_t)/\sigma}, \qquad (8)$$

where $s_t$ is the score of the point with respect to the peak function in Eq. (9) [27] and $d_t$ is the normalized degradation value of $y_t$ as in Eq. (10):

$$a_t = \begin{cases} y_t - \dfrac{1}{2k} \displaystyle\sum_{i=1;\, i \neq t}^{2k+1} y_i, & \text{if } t \le k \\[8pt] y_t - \dfrac{1}{2k} \displaystyle\sum_{i=t-k;\, i \neq t}^{t+k} y_i, & \text{if } k < t < N-k \\[8pt] y_t - \dfrac{1}{2k} \displaystyle\sum_{i=N-2k;\, i \neq t}^{N} y_i, & \text{if } t \ge N-k \end{cases} \qquad \text{for } t = 1, 2, \dots, N,$$

$$s_t = \frac{a_t - \min(\boldsymbol{a})}{\max(\boldsymbol{a}) - \min(\boldsymbol{a})} \cdot 0.8 + 0.1, \quad \text{with } \boldsymbol{a} = [a_1, a_2, \dots, a_N], \qquad (9)$$

$$d_t = \frac{y_t - \min(\boldsymbol{y})}{\max(\boldsymbol{y}) - \min(\boldsymbol{y})} \cdot 0.8 + 0.1, \quad \text{with } \boldsymbol{y} = [y_1, y_2, \dots, y_N]. \qquad (10)$$

The cost is composed of two parts: $s_t$ and $d_t$. Peak degradation values have high scores $s_t$ and high-level degradation states have relatively high values of $d_t$. The two values, $s_t$ and $d_t$, are normalized into the interval [0.1, 0.9] to have equal weights on the cost of one data point. The parameter $\sigma$ in Eq. (8) is a case-specific value that must be set by the analyst: the smaller the value of $\sigma$, the larger the difference between the different costs.
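The cost assignment of Eqs. (8)-(10) can be sketched directly. The window half-width `k` and `sigma` are the analyst-set parameters (values below are illustrative); indexing is 0-based, so the branch conditions are shifted by one with respect to the 1-based formulas:

```python
import numpy as np

def costs(y, k=2, sigma=0.3):
    """Continuous costs c_t = exp((s_t + d_t) / sigma), Eqs. (8)-(10)."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    a = np.empty(N)
    for t in range(N):
        if t < k:                               # 1-based: t <= k
            idx = [i for i in range(2 * k + 1) if i != t]
        elif t < N - k - 1:                     # 1-based: k < t < N - k
            idx = [i for i in range(t - k, t + k + 1) if i != t]
        else:                                   # 1-based: t >= N - k
            idx = [i for i in range(N - 2 * k - 1, N) if i != t]
        a[t] = y[t] - y[idx].mean()             # peak score vs 2k neighbours
    norm = lambda v: (v - v.min()) / (v.max() - v.min()) * 0.8 + 0.1
    s, d = norm(a), norm(y)                     # both scaled into [0.1, 0.9]
    return np.exp((s + d) / sigma)
```

An isolated peak therefore receives the largest cost, since both its score $s_t$ and its degradation level $d_t$ sit near the top of their ranges.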

For the parameters in the kernel function and for the weight vector of the different features, an iterative procedure is proposed (Fig. 2) for minimizing the total cost in Eq. (6). The initial weight of each feature is set to 1. With the fixed weight vector, the parameters in the kernel function are tuned and, then, with the tuned parameter values in the kernel function, the weight vector is tuned. The process is repeated until a fixed number of iterations maxIter is reached. For tuning the parameters in the kernel function or the weight vector, conventional optimization methods, e.g. genetic algorithms [43], grid search [19], particle swarm optimization [44] or ant colony optimization [22], can be used, with the objective of minimizing the total cost on the training dataset as in Eq. (6). Note that during the optimization process, the sum of the weights for the features always equals that of the initial weights. The convergence of the iterative optimization process is shown in Section 3.

3. Case study

The implementation of the proposed approach is sketched in Fig. 3. In the experiment, the Radial Basis Function (RBF) kernel is used, i.e. $k(\boldsymbol{x}_1, \boldsymbol{x}_2) = e^{-\|\boldsymbol{x}_1 - \boldsymbol{x}_2\|^2 / (2\gamma^2)}$. The value of $\gamma$ can be calculated analytically with Eq. (11) below, as proposed by Cherkassky and Ma [7], whereas the value for $\mu$ is chosen between 0 and 1: by trial and error, the value of $\mu$ is set to 0.2. Thus, no explicit optimization method, e.g. grid search, is used for tuning the parameters in the RBF kernel. Given the weight vector $\boldsymbol{\omega}$, the value of $\gamma$ is calculated directly with Eq. (11) after replacing $\boldsymbol{x}$ by $\boldsymbol{\omega}.\boldsymbol{x}$.

$$\gamma^2 = \mu \max_{i,j} \|\boldsymbol{x}_i - \boldsymbol{x}_j\|^2, \quad i, j = 1, \dots, N. \qquad (11)$$

With the calculated value of $\gamma$, a grid search method is used to tune the weights of the different features. Suppose there are $K$ features: the possible relative weight $r_i$ of one feature can be drawn from $R = [1\; 2\; 3\; 4\; 5\; 6\; 7\; 8\; 9]$ with equal probability; for each combination of the relative weights $[r_1, r_2, \dots, r_K]$, the weight vector $\boldsymbol{\omega}$ is calculated as

$$\omega_i = N_f\, r_i \Big/ \sum_{j=1}^{K} r_j, \quad i = 1, 2, \dots, K, \qquad (12)$$

to satisfy the initial criterion that the sum of the weights equals the sum of the initial weights for all the features, i.e. $N_f$, which is the number of features. $R$ is an author-specified vector containing the relative weight values used to differentiate the contribution of the different input variables to the output.
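Under these settings, $\gamma$ follows analytically from Eq. (11) and the candidate weight vectors of Eq. (12) can be enumerated for the grid search. A sketch with the values of $\mu$ and $R$ from the text:

```python
import numpy as np
from itertools import product

def gamma_from_data(X, mu=0.2):
    """Eq. (11): gamma^2 = mu * max_{i,j} ||x_i - x_j||^2."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sqrt(mu * d2.max())

def weight_grid(n_features, R=range(1, 10)):
    """Eq. (12): candidate weight vectors; each sums to n_features."""
    for r in product(R, repeat=n_features):
        yield n_features * np.asarray(r, dtype=float) / sum(r)
```

Note that the exhaustive grid grows as $9^K$ with the number of features, so for many features only a restricted or sampled grid is practical.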

The maximal number of iterations is set to 15.

3.1. Verification of the proposed approach

In order to verify the effectiveness of the proposed approach, it is tested on several public imbalanced datasets. These imbalanced datasets are taken from the KEEL dataset repository for binary classification and regression [2].

The datasets for binary classification present different Imbalance Ratios (IRs). The characteristics of the binary classification datasets are shown in Table 1.

A comparison is carried out between the proposed approach, i.e. weighted-feature and cost-sensitive FVR (WF-CS-FVR), and a standard FVR. Another method is added to the comparison, i.e. cost-sensitive FVR (CS-FVR) without the weighted-feature strategy, i.e. solving Eq. (6) with the kernel function $k(\boldsymbol{x}_1, \boldsymbol{x}_2)$ instead of the weighted-feature one $k(\boldsymbol{\omega}.\boldsymbol{x}_1, \boldsymbol{\omega}.\boldsymbol{x}_2)$. The accuracy of the binary classification is characterized by the True Positive Rate (TPR) and the True Negative Rate (TNR), which are the percentages of the correctly classified data points in the positive and negative classes, respectively.
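The two rates are simple per-class fractions of correctly classified points; a minimal sketch:

```python
def tpr_tnr(y_true, y_pred):
    """True Positive Rate and True Negative Rate.
    y_true, y_pred: sequences of 0/1 labels (1 = positive/minority class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return tp / pos, tn / neg
```

Reporting the two rates separately, instead of the overall accuracy, is what makes the trade-off on imbalanced data visible: a classifier can reach high accuracy while misclassifying the entire minority class.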

The results are shown in Table 2. It is seen that by assigning a larger weight to the error on the minority class, the TPR value increases and the TNR value decreases. In the comparison of the results of CS-FVR


Fig. 2. Iterative optimization of the parameters in the kernel function and the weight vector.

Fig. 3. Proposed approach.

Table 1
Characteristics of the imbalanced binary classification datasets [2].

Dataset             Number of instances   Number of attributes   Imbalance Ratio (IR)
glass1              214                   9                      1.82
haberman            306                   3                      2.78
new-thyroid1        215                   5                      5.14
ecoli3              336                   7                      8.6
ecoli-0–6-7_vs_5    220                   6                      10
yeast-1_vs_7        459                   7                      14.3
ecoli4              336                   7                      15.8
abalone-9_vs_18     731                   8                      16.4
shuttle-6_vs_2-3    230                   9                      22

Table 2
Results of the proposed approach WF-CS-FVR on the public imbalanced datasets for binary classification and comparisons with FVR and CS-FVR.

Datasets            FVR               CS-FVR            WF-CS-FVR
                    TNR      TPR      TNR      TPR      TNR      TPR
glass1              0.6296   0.5625   0.4815   0.6875   0.5926   0.6875
haberman            0.8444   0.1176   0.7556   0.3529   0.7556   0.7059
new-thyroid1        0.9167   0.7143   0.9167   0.7143   0.9722   0.8571
ecoli3              0.9508   0.7143   0.8689   0.8571   0.8689   1.0000
ecoli-0–6-7_vs_5    1.0000   0.5000   1.0000   0.7500   1.0000   0.7500
yeast-1_vs_7        1.0000   0.3333   0.7412   0.5000   0.7882   0.5000
ecoli4              1.0000   0.7500   1.0000   0.7500   1.0000   0.7500
abalone-9_vs_18     0.9928   0.7778   0.8841   1.0000   0.9203   1.0000
shuttle-6_vs_2-3    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000

and WF-CS-FVR, we can observe that by integrating the weighted-feature strategy, at least one of the values of TPR and TNR is improved for most of the datasets. The results of FVR and WF-CS-FVR show that for most of the datasets, by sacrificing the accuracy on the majority class, a relatively large gain can be obtained on the accuracy of the minority class. For example, for dataset 'glass1', the difference of the TNR values between FVR and WF-CS-FVR is 0.0370 and the difference of the TPR values is 0.1250, which is much bigger than the difference of the TNR values.

Five datasets are used to test the effectiveness of the proposed framework for regression. The characteristics of these datasets are shown in Table 3. The outputs and inputs of these datasets are normalized to [0.0, 0.9]. The type of the inputs can be numeric or integer. The data points with output values bigger than 0.6 are higher data points and the others are lower data points. The Mean Squared Errors (MSEs) of the proposed framework and the benchmark methods (i.e. FVR and CS-FVR) on higher and lower data points are shown in Table 4.

From Table 4, the following conclusions can be drawn: 1) the models (CS-FVR and WF-CS-FVR) with the cost-sensitive strategy give better results on higher data points; 2) the weighted-feature strategy can further improve the accuracy of the CS-FVR model on the higher data points; 3) the weighted-feature strategy can give better results even on the lower data points than the benchmark methods (e.g. for datasets friedman and concrete) when the input attributes are not well scaled during the

Table 3
Characteristics of the datasets for regression.

Name         # of instances   # of features   Data type
laser        993              4               numeric
autoMPG8     392              7               numeric & integer
forestFires  517              12              numeric & integer
friedman     1200             5               numeric
concrete     1030             8               numeric & integer


Table 4
MSEs of the proposed framework and the benchmark methods on the datasets for regression.

             FVR                   CS-FVR                WF-CS-FVR
Datasets     higher     lower      higher     lower      higher     lower
laser        3.69E-4    8.45E-4    3.30E-4    1.30E-3    1.70E-4    7.12E-4
autoMPG8     2.17E-2    2.20E-3    2.06E-2    2.70E-3    1.90E-3    3.20E-3
forestFires  3.50E-3    2.94E-4    2.30E-3    1.70E-3    2.20E-3    1.80E-3
friedman     1.90E-3    1.50E-2    2.30E-3    1.90E-2    9.10E-4    7.70E-4
concrete     5.02E-4    2.96E-4    2.22E-4    8.40E-4    2.20E-14   9.23E-14

("higher"/"lower" denote the MSE on the higher and lower data points, respectively.)

preprocessing part; 4) the accuracy on the lower data points is normally somewhat given up.

3.2. Degradation state estimation in an NPP

The application of the proposed method is shown on a real case concerning the amount of leakage from the seals of an RCP of an NPP.

Twenty variables related to the leakage process are available, e.g. flow of seal injection supply, temperature of the water seal, temperature of the seal injection line, temperature of the by-pass hot leg loop, pressure in the pressure injection line, etc. The recorded data for eight RCPs from different NPPs are available, with the true leakage and all related variables. The data from one RCP are used as the test dataset and the data from the other seven RCPs are combined in the training dataset. Two case studies from two different NPPs are generated from the data, named Case 1 and Case 2, respectively. For confidentiality considerations, the leakage values are normalized into the interval [0.1, 0.9]. As shown in Fig. 4 for one RCP, the leakage can take continuous values in the interval [0.1, 0.9] and the sizes of the recorded data on the different degradation states are very different. Considering a partition of the entire range in different degradation level states, [0.1 0.3], [0.3 0.5], [0.5 0.7], [0.7 0.9], the recorded data with leakage values in [0.1 0.3] are much more numerous than those for leakage values in the intervals [0.3 0.5], [0.5 0.7] and [0.7 0.9].

3.2.1. The proposed approach

In the numerical experiment, the costs related to the estimation errors on the different data points are calculated by Eq. (8), with the normalized leakage values. The unknown $\sigma$ in the equation is set to 0.3 to penalize the importance of the estimation errors on the data points with a leakage lower than 0.3, as shown in Fig. 5 for Case 1. For example, we can observe that the 1504th data point and the data points numbered between 2400 and 2600 have similar degradation values, but as the 1504th data point is a peak value, its weight (i.e. 10.04) is higher than those of the data points numbered between 2400 and 2600 (i.e. 5.51-8.01).

Fig. 4. Normalized leakage values in one RCP.

Fig. 5. Costs of different training data points for Case 1.

3.2.2. Experiment results

Fig. 6 shows the minimal cost of each iteration of the tuning process during the training for Case 1. It is shown that the minimal cost in Eq. (6) converges to a minimal value. The minimal cost is obtained at the 11th iteration and it is stable after the 11th iteration, with small perturbations. The perturbations are caused by the parameter optimization methods, i.e. the analytical method for $\gamma$ and the grid search for $\boldsymbol{\omega}$, which derive only sub-optimal results.

The comparison results are given in Table 5, with regard to the MSEs. The MSEs are calculated separately on the whole test dataset, on the test data points with a small amount of leakage in [0.1 0.3] (low-level degradation states) and on the test data points with a large amount of leakage in [0.3 0.9] (high-level degradation states).

From Table 5, one can observe that the MSE of FVR on the low-level degradation states is much smaller than that on the high-level degradation states. The model, i.e. FVR, is not capable of giving satisfactory results on both low-level and high-level degradation states. The MSE values for Case 2 are greater than those for Case 1. This is due to the fact that the

Fig. 6. Convergence of the minimal cost during the training process of Case 1.



Table 5
MSE given by the methods (FVR, CS-FVR, WF-CS-FVR) for the test dataset.

             Whole test dataset   Leakage in [0.1 0.3]   Leakage in [0.3 0.9]
MSE          Case 1    Case 2     Case 1    Case 2       Case 1    Case 2
FVR          0.0129    0.0216     0.0026    0.0193       0.0185    0.0305
CS-FVR       0.0077    0.0254     0.0046    0.0257       0.0095    0.0240
WF-CS-FVR    0.0064    0.0245     0.0019    0.0246       0.0088    0.0232

test dataset in Case 1 is closer to the training dataset than in Case 2; thus, supervised-learning methods give better results.

CS-FVR and WF-CS-FVR give better results than FVR on the data points in high-level degradation states, and worse results for data points in low-level degradation states: this shows that by sacrificing accuracy on the data points in low-level degradation states, the accuracy on the data points in high-level degradation states can be improved.

Compared to the results given by CS-FVR, by integrating the weighted-feature strategy in CS-FVR, the results are further improved on the data points in both low-level and high-level degradation states. Thus, in the case study, the weighted-feature strategy works well.

4. Conclusion

In practice, considering the imbalance of data and the different needs of the practitioners on the accuracy of the estimates of the degradation states, the traditional data-driven models for degradation assessment aiming at minimizing the average estimation error on the whole training dataset may give unsatisfactory results. In this paper, cost-sensitive and weighted-feature strategies are integrated in classical data-driven models (specifically, FVR in this work) to improve the results on imbalanced datasets. Differently from the approach of setting discrete cost values in the literature, continuous costs are assigned to the different data points and an efficient method is proposed for calculating the continuous costs with more focus on the peak values and high-level degradation states.

In order to further improve the assessment accuracy, a weighted-feature strategy is also integrated to differentiate the contributions of the different features. The application of the proposed approach on real data of leakage from the seals of RCPs in NPPs has been considered. Details of the parameter tuning are provided. The results show that the cost-sensitive strategy can improve the accuracy on the high-level degradation states, and the weighted-feature strategy can further improve the accuracy on both high-level and low-level degradation states.

The proposed weighted-feature and cost-sensitive frameworks are not only suitable for FVR, but are also applicable with other regression models, e.g. support vector machines, artificial neural networks, Bayesian methods, etc. The regression models can also be used for degradation assessment, degradation prediction and failure prediction, depending on the input-output relation of the data.

