Incremental dynamic mode decomposition: A reduced-model learner operating at the low-data limit


Academic year: 2021


Science Arts & Métiers (SAM) is an open access repository that collects the work of Arts et Métiers Institute of Technology researchers and makes it freely available over the web where possible.

This is an author-deposited version published in: https://sam.ensam.eu
Handle ID: http://hdl.handle.net/10985/18539

To cite this version:
Agathe REILLE, Nicolas HASCOËT, Chady GHNATIOS, Amine AMMAR, Elías G. CUETO, Jean-Louis DUVAL, Francisco CHINESTA, Roland KEUNINGS - Incremental dynamic mode decomposition: A reduced-model learner operating at the low-data limit - Comptes Rendus Mécanique - Vol. 347, n°11, p. 780-792 - 2019

Any correspondence concerning this service should be sent to the repository administrator: archiveouverte@ensam.eu


Incremental dynamic mode decomposition: A reduced-model learner operating at the low-data limit

Agathe Reille a, Nicolas Hascoet a, Chady Ghnatios b, Amine Ammar c, Elias Cueto d, Jean Louis Duval e, Francisco Chinesta a,∗, Roland Keunings f

a ESI Group Chair @ PIMM, Arts et Métiers Institute of Technology, CNRS, CNAM, HESAM University, 151, boulevard de l'Hôpital, 75013 Paris, France
b Notre Dame University Louaize, P.O. Box 72, Zouk Mikael, Zouk Mosbeh, Lebanon
c ESI Group Chair @ LAMPA, Arts et Métiers ParisTech, 2, boulevard du Ronceray, BP 93525, 49035 Angers cedex 01, France
d ESI Group Chair @ I3A, University of Zaragoza, Maria de Luna, s.n., 50018 Zaragoza, Spain
e ESI Group, Bâtiment Seville, 3 bis, rue Saarinen, 50468 Rungis, France
f ICTEAM, Université catholique de Louvain, av. Georges Lemaître, 4, B-1348 Louvain-la-Neuve, Belgium

Abstract

Keywords: Machine learning; Advanced regression; Tensor formats; PGD; Mode decomposition; Nonlinear reduced modeling

The present work aims at proposing a new methodology for learning reduced models from a small amount of data. It is based on the fact that discrete models, or their transfer function counterparts, have a low rank, and can then be expressed very efficiently using few terms of a tensor decomposition. An efficient procedure is proposed, as well as a way of extending it to nonlinear settings while keeping limited the impact of data noise. The proposed methodology is then validated by considering a nonlinear elastic problem and constructing the model relating tractions and displacements at the observation points.

1. Introduction

In physics in general and in engineering in particular, when addressing a generic problem, the first step consists in selecting the best suited model for the description of the evolution of the system under consideration. Nowadays, a variety of well-established and validated models exists, covering most of the domains of physics, e.g., solid and fluid mechanics, electromagnetism, heat transfer...

These models are based on the combination of two types of equations of very different nature. On the one hand, the so-called balance equations, whose validity in the framework of classical physics is beyond discussion. The second type of equations are the so-called constitutive models. These relate primal and dual variables, e.g., stress and strain, or temperature and heat flux. Their expression and validity depend on the nature of the considered system and the applied loading, in the most general sense.

∗ Corresponding author.
E-mail addresses: Agathe.REILLE@ensam.eu (A. Reille), Nicolas.Hascoet@ensam.eu (N. Hascoet), cghnatios@ndu.edu.lb (C. Ghnatios), Amine.AMMAR@ensam.eu (A. Ammar), ecueto@unizar.es (E. Cueto), Jean-Louis.Duval@esi-group.com (J.L. Duval), Francisco.CHINESTA@ensam.eu (F. Chinesta), roland.keunings@uclouvain.be (R. Keunings).


Constitutive equations are often phenomenological, and in general involve some parameters assumed to be intrinsic to the system under consideration.

For this reason, and even in the context of classical engineering, before solving a given problem, one must decide on the most appropriate model. There obviously exists a risk on the pertinence of the chosen model for addressing the phenomena under study. By performing some engineered experiments in order to calibrate the model, that is, to extract the values of the parameters that it involves, these risks can be alleviated.

The calibrated model is generally expressed as a system of coupled nonlinear partial differential equations, and must be solved for given loading and boundary conditions.

Very often, the solution of these problems is not tractable by an analytical procedure. Numerical methods were introduced in the twentieth century for that purpose, and are nowadays widely employed, in industry as well as in academia. Thus, approximate procedures were proposed, most of them computationally manipulable. The first option consists in assuming the solution expressible as a combination of a reduced number of functions with a physical or mathematical foundation. The Ritz method follows this rationale. Thus, if these functions are noted by G_i(x), i = 1, …, G, the solution approximation reads

u(x, t) ≈ Σ_{i=1}^{G} α_i(t) G_i(x)    (1)

where the time-dependent coefficients α_i(t) are calculated by invoking a projection method, like the Galerkin method, for instance.

However, the knowledge of these functions becomes a tricky issue in many cases. When the choice of an appropriate, physically sound basis is unavailable, the best option is considering a general-purpose approximation basis, for example consisting of polynomials (Lagrange, Chebyshev, Legendre, Fourier...): P_i(x), i = 1, …, P, from which the approximation now reads

u(x, t) ≈ Σ_{i=1}^{P} β_i(t) P_i(x)    (2)

and where, for ensuring accuracy, a sufficient number of terms must be considered, implying a quite large number of terms in the finite sum (2).

In order to better adapt to complex domains, the global functions P_i(x) are often replaced by compactly supported functions N_i(x), leading to the so-called Finite Element Method, FEM. If we consider N nodes, the approximation reads

u(x, t) ≈ Σ_{i=1}^{N} U_i(t) N_i(x)    (3)

where now U_i(t) represents the searched solution at node x_i and at time t.

This formulation becomes general enough at the price of computing many (in some cases too many) unknowns, the nodal values U_i(t), ∀i.

In the sequel, for the sake of notational simplicity, the dependence on time is not made explicit, and it is assumed that the discrete system that results from introducing approximation (3) into the weak form of the problem (for the sake of simplicity assumed to be linear) leads to the linear system

K U = F    (4)

Here, matrix K represents the discrete form of the model, whereas vectors U and F contain the nodal unknowns and loads, respectively. Their respective sizes are N × N, N × 1 and N × 1.

1.1. Reduced modelling

In many applications of practical interest, despite the richness of Eq. (4), which is able to accommodate any choice of the loading in a vector space of dimension N, usual loadings contain in practice many correlations. This results in the solution U being defined in a subspace of dimension n, with n ≪ N.

Proper Orthogonal Decomposition proceeds by embedding the solution onto that subspace of dimension n [1] [2] [3] [4]. Nonlinear manifold learning constitutes its nonlinear counterpart. For that purpose, from a set of snapshots of the solution U_1, …, U_S, a reduced basis R_i(x), i = 1, …, n, with n ≪ N, is extracted. The solution is finally expressed by

u(x, t) ≈ Σ_{i=1}^{n} γ_i(t) R_i(x)    (5)

whose matrix form reads

U = B γ    (6)

The columns of matrix B contain the reduced functions at the node locations. Thus, the component B_ij = R_j(x_i), with i = 1, …, N and j = 1, …, n.

Now, by injecting (6) into (4) and premultiplying by the transpose of B (something equivalent to a Galerkin projection in the reduced basis), it results

(B^T K B) γ = B^T F    (7)

whose respective sizes are n × n, n × 1 and n × 1.
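The projection (4)-(7) can be sketched in a few lines. The following is a minimal illustration, not the paper's example: the diagonal model K, the load subspace and all sizes are synthetic stand-ins, while K, B, F and gamma follow the notation of the text.

```python
import numpy as np

# Minimal sketch of the POD/Galerkin projection of Eqs. (4)-(7).
rng = np.random.default_rng(0)
N, n = 200, 5

K = np.diag(np.arange(1.0, N + 1))      # full discrete model, N x N

# Snapshots: solutions for loads confined to an n-dimensional subspace
L = rng.standard_normal((N, n))         # synthetic load subspace
snapshots = np.linalg.solve(K, L)       # columns U_1, ..., U_S
B, _ = np.linalg.qr(snapshots)          # reduced basis, B_ij = R_j(x_i)

# A new load drawn from the same subspace
F = L @ rng.standard_normal(n)

# Reduced system (B^T K B) gamma = B^T F, of sizes n x n, n x 1, n x 1
gamma = np.linalg.solve(B.T @ K @ B, B.T @ F)
U = B @ gamma                           # Eq. (6): back to nodal values

# Exact here, because the true solution stays in the snapshot subspace
print(np.allclose(U, np.linalg.solve(K, F)))
```

Only the small n × n system is inverted; the N × N model is touched once when forming B^T K B.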

In the nonlinear case, different procedures have been proposed for alleviating the construction of K. Among them, one can find the Discrete Empirical Interpolation Method [5] or the Hyper-Reduction [6], to cite but a few.

In order to avoid the necessity of performing an "a priori" learning, a simultaneous search of space and time functions was proposed in the context of the so-called Proper Generalized Decomposition, PGD [7] [8] [9] [10] [11]. It was then extended for addressing space separation [12] or parametric solutions [13] [14] [15] [16] in some of our numerous previous works. Thus, the space-time separated representation now reads

u(x, t) ≈ Σ_{i=1}^{D} T_i(t) X_i(x)    (8)

where both groups of functions, T_i and X_i, are obtained by injecting the approximation (8) into the problem weak form and then using a rank-one enrichment for incrementally constructing the separated representation. At each enrichment step of that greedy algorithm, an alternated direction strategy is considered for addressing the nonlinearity arising from the product of both unknown functions T_i and X_i.

Despite its generality, the computed solution (8) depends on the loading term. In the case of POD-based techniques, the reduced algebraic system can easily be inverted, leading to the reduced transfer function (B^T K B)^{-1}. Note that the reduced model matrix (B^T K B) does not depend explicitly on the loading. Thus, as soon as a new loading F comes into play, it is to be projected onto the reduced basis, f = B^T F. Then, the reduced solution is obtained from γ = (B^T K B)^{-1} f. To recover the solution in the original finite element basis, one simply has to perform the projection U = B γ.

PGD, on the contrary, does not compute such a reduced model (or its transfer function counterpart), but instead it proceeds by either

(i) calculating the solution for each loading, or
(ii) expressing the loading in a reduced basis (e.g., POD) according to

f(x, t) = Σ_{i=1}^{F} μ_i F_i(x, t)    (9)

which allows transforming the initial problem into a parametric one. Within the PGD rationale, the solution is searched in the parametric form u(x, t, μ), with μ = (μ_1, …, μ_F).

A criticism that is usually attributed to the PGD technique, if compared to POD-based procedures, is precisely the necessity of the load to be representable in terms of the reduced basis F_1, …, F_F. However, it is important to note that even if such a constraint is less explicit when using the POD rationale, it also applies implicitly [17]. Indeed, the POD rationale is based on the existence of a reduced basis B, and this basis was constructed from a particular choice of snapshots U_1, …, U_S, associated with a particular loading F_1, …, F_S. Thus, if a quite different loading comes into play, one whose solution cannot be accurately approximated by the reduced basis R_i involved in B, the transfer function can still be applied, but nothing guarantees the accuracy of the resulting solution. Finally, the applicability of Model Order Reduction techniques remains subordinate to the fulfillment of the conditions assumed during their construction.

All the just discussed techniques belong to the vast family of Model Order Reduction techniques. In fact, the models were not really reduced, only the solution procedure.

1.2. Learning models

We argued in some of our former works that, in some circumstances, models are too uncertain to represent the physical system [18]. In that case, a data-driven procedure becomes an appealing choice [19] [20] [21] [22]. For the sake of simplicity, we assume the problem to be linear and that N couples (U_i, F_i), assumed independent, that is, spanning the whole vector space of dimension N, are available. We also assume that nothing is known on the model whose discrete expression consists of the matrix K.


Thus, with all the couples fulfilling the algebraic relationship

K U_i = F_i,  ∀i    (10)

one could construct the vector K̃ (vector expression of the matrix K), the extended loading vector F̃ that includes all the loads, i.e. F̃^T = (F_1^T, …, F_N^T), and finally the matrix Ũ from the different U_i, organized in an adequate manner so as to enable Eqs. (10) to be expressed as

Ũ K̃ = F̃    (11)
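The identification problem (10)-(11) can be sketched directly in matrix form: with N independent couples, the relations K U_i = F_i determine K completely. The hidden model and the data below are synthetic placeholders.

```python
import numpy as np

# Sketch of the identification problem (10)-(11): recover the hidden model
# K from N independent couples (U_i, F_i).
rng = np.random.default_rng(1)
N = 20

K_true = rng.standard_normal((N, N))
U_mat = rng.standard_normal((N, N))   # columns U_1, ..., U_N, independent
F_mat = K_true @ U_mat                # columns F_i = K U_i

# N independent couples fix all N^2 components of K
K_id = F_mat @ np.linalg.inv(U_mat)
print(np.allclose(K_id, K_true))
```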

These can be solved to obtain the model K̃, or its matrix counterpart K.

There are many ways of performing such an identification, in the linear or the nonlinear case: for example, deep learning (based on neural networks, NN), dynamic mode decomposition, DMD..., to cite but a few.

It is important to note that the complexity of standard solutions involving known models K scales with the number of unknowns N; however, when trying to identify the model, the complexity scales with N² (the number of components of K), justifying that a lot of data is mandatory for extracting the hidden model, a situation that becomes even more stringent in the nonlinear case, which requires identifying a model K|_U around any possible value of U.

This apparent complexity justifies the association of the term big-data with machine learning. However, in engineering, the available data is in some circumstances very scarce: collecting data could be technologically challenging, sometimes simply impossible, and in all cases, generally expensive.

Thus, big-data is being, or should be, transformed into a smarter counterpart, with the smart-data paradigm rationalizing the amount of needed data, while driving its acquisition (collection): which data, where and when?

The present work tries to exploit the fact previously discussed, that despite the richness of loadings and responses (the model being the application that relates both), in general both are living in low-dimensional manifolds, and therefore the model relating both is expected to reflect this fact. This consideration was exploited when proposing the so-called hyper-reduction techniques; however, they were applied on pre-assumed models, whereas in what follows we apply it while assuming the model unknown.

After this introduction, the next section proposes a procedure for extracting the reduced model from a small amount of data; then the procedure is illustrated on a quite simple nonlinear problem, before finishing by addressing some general conclusions.

2. Methods

For the sake of simplicity, we start by assuming the discrete linear problem

K U = F    (12)

where F and U represent the input and output vectors, respectively. In a general case, they could have different natures and represent the values of their respective fields at the observation points. Their respective sizes are N × 1 and N × 1.

As in the case of reduced-order modeling, we assume that inputs and outputs are (to a certain degree of approximation) living in a subspace of dimension n, much smaller than N. Thus, the rank of K is expected to be also n, even if, a priori, it was ready to operate in a larger space of dimension N.

The question is therefore: how to extract the reduced model? Among the numerous possibilities we explored, we comment here on two of them.

2.1. Rank-n constructor

Here, we consider a set of S input-output couples (F_i, U_i), and assume the model to be expressible in its low-rank form K_LR:

K ≈ K_LR = Σ_{j=1}^{n} C_j ⊗ R_j = Σ_{j=1}^{n} C_j R_j^T    (13)

where ⊗ denotes the tensor product, and C_j and R_j are the so-called column and row vectors. This expression is somehow similar to the separated representation used in the PGD, the SVD (Singular Value Decomposition) or the CUR decomposition [23] [24].

Remark 1. If the model is expected to be symmetric, one could enforce that symmetry in the solution representation, i.e.

K ≈ K_LR = Σ_{j=1}^{n} C_j ⊗ C_j    (14)

Let us define the functional E(K_LR) according to

E(K_LR) = Σ_{i=1}^{S} ‖F_i − K_LR U_i‖_p    (15)

The choice of many different norms could be envisaged here. For exploiting sparsity, for instance, the choice should be the L1-norm. In what follows, we consider the standard L2-norm, i.e. we take p = 2.

By considering matrices F and U containing in their columns the vectors F_i and U_i, respectively, the previous expression can be rewritten using the Frobenius norm (related to p = 2) as

E(K_LR) = ‖F − K_LR U‖_F    (16)

whose minimization results in

K_LR (U U^T) = F U^T    (17)

or, equivalently,

K_LR = (F U^T)(U U^T)^{-1}    (18)

This proves that K_LR and, more particularly, its column and row vectors, correspond to the SVD (or rank-n truncated SVD) decomposition of (F U^T)(U U^T)^{-1}.
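The rank-n constructor can be sketched as follows: form the least-squares model of Eq. (18) and truncate its SVD at rank n, whose factors play the role of the column and row vectors C_j, R_j of Eq. (13). All data below are synthetic.

```python
import numpy as np

# Sketch of the rank-n constructor, Eqs. (13) and (18).
rng = np.random.default_rng(2)
N, n, S = 60, 3, 200

# Hidden rank-n model and S input/output couples
K_true = rng.standard_normal((N, n)) @ rng.standard_normal((n, N))
U = rng.standard_normal((N, S))          # columns U_i
F = K_true @ U                           # columns F_i = K U_i

K_ls = (F @ U.T) @ np.linalg.inv(U @ U.T)   # Eq. (18)

# Rank-n truncation: K_LR = sum_j C_j R_j^T
W, s, Vt = np.linalg.svd(K_ls)
C = W[:, :n] * s[:n]                     # column vectors C_j (scaled)
R = Vt[:n, :].T                          # row vectors R_j
K_LR = C @ R.T

print(np.allclose(K_LR, K_true, atol=1e-6))
```

With noiseless data and S > N independent couples, the least-squares model is exact and the rank-n truncation loses nothing, since K itself has rank n.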

Remark 2. Within the standard PGD rationale, one could compute the separated representation of K_LR progressively, by applying the standard greedy algorithm on Eq. (17):

K∗ : K_LR (U U^T) = K∗ : (F U^T)    (19)

with ":" the twice-contracted tensor product, K_LR approximated by (13), and the test matrix K∗ obtained from

K∗ = Σ_{j=1}^{n} C∗_j ⊗ R_j    (20)

or its symmetrized counterpart.

This procedure is specially suitable when N becomes large, thus making difficult the calculation of the SVD decomposition of (F U^T)(U U^T)^{-1}.

Remark 3. This procedure ensures that, when addressing a linear problem, N linearly independent loadings F_j, j = 1, …, N, lead to the construction of a rank-N model K. If the model is computed by incorporating progressively the available data, from one loading leading to model K_1 (a single-datum model), to N loadings leading to K_N, we can define the model enrichment Δ_j K = K_j − K_{j−1}, j = 2, …, N.

Note that adding more data, F_j, j > N, to the model does not make it evolve anymore, as expected. In other words: K_j = K_N, j > N. In the nonlinear case, where a model is a priori expected to exist around each U, the model continues to evolve when j > N.

With the low-rank model thus calculated, as soon as a new datum F arrives, the solution is evaluated from

U = (K_LR)^{-1} F    (21)

Remark 4. In Section 2.3 we propose an alternative methodology, based on the calculation of transfer functions, that avoids model inversion.

2.2. Progressive greedy construction

In this case we proceed progressively. We consider the first available datum, the pair (F_1, U_1). Thus, the first, rank-one, reduced model reads

K_1 = C_1 ⊗ R_1    (22)

ensuring

(C_1 ⊗ R_1) U_1 = F_1    (23)

and one possible solution, ensuring symmetry, consists of R_1 = F_1 and C_1 = F_1/φ_1, with φ_1 = F_1^T U_1. Many other solutions exist, since there are N available data for computing 2N unknowns (the components of the row and column vectors). Of course, the problem can be written as a minimization problem using an adequate norm.

Suppose now that a second datum arrives, (F_2, U_2), from which we can also compute its associated rank-one approximation, and so on, for any new datum (F_i, U_i). This leads to

K_i = C_i ⊗ R_i    (24)

ensuring in its turn

(C_i ⊗ R_i) U_i = F_i    (25)

where C_i = F_i/φ_i (with φ_i = F_i^T U_i) and R_i = F_i.
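The rank-one construction can be checked in a few lines: taking R_i = F_i and C_i = F_i/φ_i with φ_i = F_i^T U_i makes (C_i ⊗ R_i) U_i = F_i hold by construction. The vectors below are synthetic.

```python
import numpy as np

# Sketch of the rank-one models of Section 2.2.
rng = np.random.default_rng(3)
N = 30

F_i = rng.standard_normal(N)
U_i = rng.standard_normal(N)

phi_i = F_i @ U_i                # scalar F_i^T U_i
C_i = F_i / phi_i
K_i = np.outer(C_i, F_i)         # rank-one model C_i (x) R_i, with R_i = F_i

print(np.allclose(K_i @ U_i, F_i))   # Eq. (25) is satisfied
```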

For any other U, the model could be interpolated from the just-defined rank-one models, K_i, i = 1, …, S, according to

K|_U ≈ Σ_{i=1}^{S} K_i I_i(U)    (26)

with I_i(U) the interpolation functions operating in the space of the data U.

This constructor is not appropriate when addressing linear behaviors. If we assume known two local rank-one behaviors K_1 and K_2 ensuring the fulfillment of the relations K_1 U_1 = F_1 and K_2 U_2 = F_2, it follows that if, for example, F = 0.5 (F_1 + F_2), by considering K = 0.5 (K_1 + K_2), the resulting output does not satisfy linearity, i.e. U ≠ 0.5 (U_1 + U_2).

However, in the nonlinear case, by defining the secant behavior at the middle point associated with F = 0.5 (F_1 + F_2) as K̃ = 0.5 (K_2 − K_1), we will have K = K_1 + K̃ = 0.5 (K_1 + K_2), a fact that allows viewing the progressive greedy construction and its associated interpolation as a linearization procedure.

2.3. General remarks

– All the previous analysis was based on the calculation of the reduced model K_LR. However, this procedure needs its inversion, K_LR^{-1}, for evaluating the solution U = K_LR^{-1} F. This issue can be easily circumvented as follows.
The discrete model K U = F can be rewritten as U = K^{-1} F and, by introducing the discrete transfer function T (that represents the inverse of K), one could learn directly, by using all the previous rationale, the reduced expression of the transfer function, i.e. T_LR.
The advantage of learning the reduced discrete transfer function is that, as soon as a datum F is available, and under the assumption that it is living in the same subspace that served for constructing the reduced model, the evaluation becomes straightforward: a simple matrix-vector product. It can even be further simplified, since matrix T_LR is expressed in a separated format, U = T_LR F, with

T ≈ T_LR = Σ_{j=1}^{n} Ĉ_j ⊗ R̂_j    (27)

– Nothing really changes when considering nonlinear models, except the fact that the models, K or T, and, in turn, the vectors they involve, must be assigned to the neighborhood of the data U or F. In practice, data can be classified, creating a number of clusters where the models are constructed [25].
Thus, each time a datum F arrives, the cluster to which it belongs, say κ, is first identified. Then, the low-rank discrete transfer function of that cluster, T_LR^κ, is chosen and the solution evaluated according to

U = T_LR^κ F    (28)
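The clustered evaluation of Eq. (28) can be sketched as below. The nearest-centroid assignment rule and the random matrices are illustrative assumptions (the text only says that data are classified into clusters), not the paper's trained models.

```python
import numpy as np

# Sketch of the clustered evaluation of Eq. (28): each cluster stores a
# low-rank transfer function; an incoming load F is assigned to a cluster
# and the response is a single matrix-vector product.
rng = np.random.default_rng(4)
N, n_clusters = 40, 4

centroids = rng.standard_normal((n_clusters, N))      # cluster centres in load space
T_LR = [rng.standard_normal((N, N)) for _ in range(n_clusters)]

def evaluate(F):
    """Identify the cluster kappa of F, then return U = T_LR^kappa F."""
    kappa = int(np.argmin(np.linalg.norm(centroids - F, axis=1)))
    return T_LR[kappa] @ F, kappa

F = centroids[2] + 0.01 * rng.standard_normal(N)      # a load near cluster 2
U, kappa = evaluate(F)
print(kappa)
```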

– Again within the PGD rationale, all the discussion above can be extended to parametric settings. The description and results constitute a work in progress that will be reported in forthcoming publications. Another work in progress concerns the obtention of error bounds, needed for certifying the constructed models and their predictions. There is a vast corpus of literature on verification and validation, but it was associated with model-based simulations. Here, for the best identified model, the existing error estimation procedures can be applied. However, the remaining question concerns the estimation of the error associated with the model itself, whose quality depends on the collected data, i.e. validation more than verification. This issue also applies in model-based simulations: how to be sure that the considered models are the appropriate ones?


Fig. 1. Domain equipped with a finite-element mesh.

Despite the apparent difficulty, since the model is defined with the only support of a discrete basis, some estimators could be easily defined, and will be reported in future works.
– All the previous developments could be accomplished by first extracting the reduced bases associated with the input and output vectors (action and reaction) and then learning the associated reduced models within the framework of a progressive POD. However, the procedure here considered seems more physically sound. A fully reduced counterpart of the procedures here proposed and described constitutes a work in progress, whose results will be reported in forthcoming publications.
– The present procedure becomes quite close to the so-called dynamic mode decomposition as soon as the model is assumed to be expressed in a tensor form and extended to nonlinear settings through an appropriate clustering.
– The main issue when considering data-driven models is the noise impact on the predictions. There are many possibilities, most of them based on the use of filters. Here, we considered, as proved later, a simple procedure. The data (F_i, U_i) is obtained with a resolution N, on which noise applies. A quite efficient filter consists in simply considering the model, i.e. its row and column vectors, described with a coarser resolution M < N. In fact, inputs and outputs could exhibit fast fluctuations, but in many applications in which model order reduction applies, these fast fluctuations are mostly due to noise, and for that reason, the model can be expected to be described with a coarser resolution.

3. Results

3.1. A nonlinear example

In order to prove the potential interest of the methodology here described, we consider a nonlinear elastic model with Poisson coefficient and Young's modulus given respectively by ν = 0.3 and E = 10^5 (1 + 1000 ε_II), with ε_II the second invariant of the strain tensor ε. Units are MPa for stresses, N/mm for applied tractions and mm for lengths.

The mechanical problem is defined in the two-dimensional domain depicted in Fig. 1, which also shows the finite element mesh employed for generating the synthetic pseudo-experimental data. Displacements are prevented on its bottom boundary. Left and right boundaries are traction-free, and a distributed tension applies on its upper boundary, given by t = (0, a_0 + a_1 x).

Fig. 2 depicts, for a_0 = 100 and a_1 = 0, the displacement field and the Young modulus distribution, while Fig. 3 depicts the displacement along the upper boundary.

Fig. 2. Displacement field (left) and Young modulus distribution (right).

Fig. 3. Displacement on the upper boundary.

From the reference elastic model we explore the parameter space (a_0, a_1) by computing realizations generated from (a_0 = 0, a_1 = 0), (a_0 = 100, a_1 = 0), (a_0 = 100, a_1 = 10) and (a_0 = 0, a_1 = 10), defining four loading clusters around those points.

Thus, the nonlinear elastic model was employed for generating the synthetic data. In the present case, it consists of the traction and displacement on the upper boundary of the domain. For example, for the loading cluster around (a_0 = 100, a_1 = 0), the input (loading) and output (vertical displacement) data are shown in Fig. 4.

From all the available data, the discrete, low-rank transfer function T_LR is calculated at each loading cluster. Fig. 5 depicts the column and row vectors involved in the rank-two model associated with the cluster (a_0 = 100, a_1 = 0). The obtained rank, two, corresponds to the expected one, since the loading is defined in a manifold of dimension 2, consisting of the parameters a_0 and a_1.

When evaluating the model at any point in the parametric domain (a_0, a_1), the model T̃_LR is interpolated from the four computed models:

T̃_LR = Σ_{κ=1}^{4} T_LR^κ I_κ(a_0, a_1)    (29)
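The interpolation (29) can be sketched as follows. The weights I_κ are here taken bilinear over the corners of the explored parameter domain [0, 100] × [0, 10] (an assumption consistent with the bilinear interpolation mentioned in Section 3.3), and the four matrices are placeholders for the trained low-rank transfer functions.

```python
import numpy as np

# Sketch of Eq. (29): interpolate the transfer function at a query point
# (a0, a1) from the four cluster models.
rng = np.random.default_rng(5)
N = 10

corners = [(0, 0), (100, 0), (100, 10), (0, 10)]
T_LR = {c: rng.standard_normal((N, N)) for c in corners}

def interpolate(a0, a1):
    """Bilinear combination of the four corner models."""
    s, t = a0 / 100.0, a1 / 10.0
    w = {(0, 0): (1 - s) * (1 - t), (100, 0): s * (1 - t),
         (100, 10): s * t, (0, 10): (1 - s) * t}
    return sum(w[c] * T_LR[c] for c in corners)

# At the midpoint (a0 = 50, a1 = 5) all four weights equal 1/4
T_mid = interpolate(50, 5)
print(np.allclose(T_mid, 0.25 * sum(T_LR.values())))
```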


Fig. 4. Top: input (applied tension). Bottom: output (vertical displacement).

Fig. 6 compares the reference solution and the one obtained by interpolating the models at the point (a_0 = 50, a_1 = 5), T̃_LR, and then computing the response from U = T̃_LR F.

3.2. Influence of noisy data

In the present case the loading was randomly perturbed, as well as the computed displacement field (using the unperturbed loading). As mentioned in Section 2, a simple filtering procedure consists in truncating the approximation to a number of M < N terms. Our experience indicates that this simple procedure calculates first those modes with a lower frequency content, thus filtering noise in practice.

Fig. 7 compares the column and row vectors involved in the discrete low-rank transfer function (whose dimensionality increased due to the noise, but which was truncated to rank 3) in the absence of any noise filtering (i.e. M = N) and when using M ≪ N. A good filtering capability is observed, with a beneficial effect on the accuracy of the predictions.
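The filtering effect of truncation can be reproduced in a small experiment: identifying the model from noisy outputs and keeping only its first singular modes discards most of the high-frequency content that the noise adds. Rank, sizes and noise level below are illustrative.

```python
import numpy as np

# Sketch of the filtering idea of Section 3.2: rank truncation of the
# identified model as a noise filter.
rng = np.random.default_rng(6)
N, n, S = 50, 2, 400

K_true = rng.standard_normal((N, n)) @ rng.standard_normal((n, N))
U = rng.standard_normal((N, S))
F = K_true @ U + 0.5 * rng.standard_normal((N, S))      # noisy outputs

K_noisy = (F @ U.T) @ np.linalg.inv(U @ U.T)            # unfiltered identification

# Keep only the first n singular triplets (coarser description, M << N)
W, s, Vt = np.linalg.svd(K_noisy)
K_filtered = (W[:, :n] * s[:n]) @ Vt[:n, :]

def rel_err(A):
    return np.linalg.norm(A - K_true) / np.linalg.norm(K_true)

print(rel_err(K_filtered) < rel_err(K_noisy))           # truncation helps
```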


Fig. 5. Column and row vectors defining the discrete low-rank transfer function T_LR.


Fig. 7. Comparing the column and row vectors involved in the discrete low-rank transfer function with M = N (top) and M ≪ N enabling filtering (bottom).

3.3. Equivalent neural network

It is easy to transform the proposed procedure into an equivalent artificial neural network (ANN). Fig. 8 compares a standard ANN with one emulating the proposed reduced-model learning, combined with the model interpolation for inferring responses from applied inputs. The proposed NN tries to emulate a linear combination of locally linear learned behaviors. As discussed in Section 2.2, the linear combination of behaviors can be viewed as a linearization of a nonlinear behavior.

In the proposed NN, four clusters were considered (even if, for the sake of clarity, the figure only depicts two), representing the four locally linear behaviors (two parameters, a_0 and a_1, with two classes each), while the output layer ensures a bilinear interpolation of them.

The standard ANN consists of three neuron layers, the input and output layers involving 66 neurons and the intermediate one 100. The intermediate layer of the physics-informed neural network contained 100 neurons per cluster.

Both NN were trained with 300 input/output vectors generated from t = (0, a_0 + a_1 x), with a_0 and a_1 two uniformly distributed random numbers in the intervals [0, 100] and [−10, 10], respectively.

Fig. 9 compares the displacement predictions obtained by the trained neural networks, the standard one (ANN) and the physics-informed one, for a_0 = 100 and a_1 = 0, with the reference solution, from which the superiority of the physics-informed NN can be stressed.


Fig. 8. Standard one-layer NN (top) and the physics-informed one related to the low-rank reduced model (bottom).

4. Conclusions

The present work succeeded at proposing a new methodology for learning reduced models from a small amount of data by assuming a tensor decomposition of the discrete reduced model or its transfer function counterpart. It was extended to nonlinear settings, and it was proved that the associated physics-informed NN exhibits a higher accuracy than a standard one.

Our present works in progress concern the validation and verification, as well as the extension to parametric settings.

Acknowledgements

The work of E.C. has been partially supported by the Spanish Ministry of Economy and Competitiveness through Grant number DPI2017-85139-C2-1-R and by the Regional Government of Aragon and the European Social Fund, research group T88. The other authors acknowledge the ANR (Agence nationale de la recherche, France) through its grant AAPG2018 DataBEST.


Fig. 9. Predictions obtained from the standard one-layer ANN and the physics-informed one (Low-Rank Reduced Model) for a_0 = 100 and a_1 = 0. Two clusters are depicted in the so-called hidden layer, even if the NN contains four.

References

[1] K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Sci. Fennicae, Ser. A1, Math. Phys. 37 (1946).
[2] M.M. Loève, Probability Theory, 3rd ed., The University Series in Higher Mathematics, Van Nostrand, Princeton, NJ, 1963.
[3] M. Meyer, H.G. Matthies, Efficient model reduction in non-linear dynamics using the Karhunen-Loève expansion and dual-weighted-residual methods, Comput. Mech. 31 (1-2) (2003) 179-191.
[4] D. Ryckelynck, F. Chinesta, E. Cueto, A. Ammar, On the a priori model reduction: overview and recent developments, Arch. Comput. Methods Eng. 12 (1) (2006) 91-128.
[5] S. Chaturantabut, D.C. Sorensen, Nonlinear model reduction via discrete empirical interpolation, SIAM J. Sci. Comput. 32 (2010) 2737-2764.
[6] D. Ryckelynck, Hyper-reduction of mechanical models involving internal variables, Int. J. Numer. Methods Eng. 77 (1) (2008) 75-89.
[7] P. Ladeveze, Nonlinear Computational Structural Mechanics, Springer, N.Y., 1999.
[8] A. Ammar, B. Mokdad, F. Chinesta, R. Keunings, A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids, J. Non-Newton. Fluid Mech. 139 (2006) 153-176.
[9] A. Ammar, B. Mokdad, F. Chinesta, R. Keunings, A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids. Part II: transient simulation using space-time separated representations, J. Non-Newton. Fluid Mech. 144 (2007) 98-121.
[10] F. Chinesta, R. Keunings, A. Leygue, The Proper Generalized Decomposition for Advanced Numerical Simulations, Springer International Publishing, Switzerland, 2014.
[11] E. Cueto, D. González, I. Alfaro, Proper Generalized Decompositions: An Introduction to Computer Implementation with Matlab, Springer Briefs in Applied Sciences and Technology, Springer International Publishing, 2016.
[12] B. Bognet, A. Leygue, F. Chinesta, Separated representations of 3D elastic solutions in shell geometries, Adv. Model. Simul. Eng. Sci. 1 (4) (2014).
[13] F. Chinesta, A. Ammar, E. Cueto, Recent advances in the use of the proper generalized decomposition for solving multidimensional models, Arch. Comput. Methods Eng. 17 (4) (2010) 327-350.
[14] F. Chinesta, A. Leygue, F. Bordeu, J.V. Aguado, E. Cueto, D. Gonzalez, I. Alfaro, A. Ammar, A. Huerta, PGD-based computational vademecum for efficient design, optimization and control, Arch. Comput. Methods Eng. 20 (1) (2013) 31-59.
[15] P. Díez, S. Zlotnik, A. Huerta, Generalized parametric solutions in Stokes flow, Comput. Methods Appl. Mech. Eng. 326 (2017) 223-240.
[16] F. Chinesta, A. Ammar, E. Cueto, On the use of proper generalized decompositions for solving the multidimensional chemical master equation, Eur. J. Comput. Mech. (Rev. Eur. Méc. Numér.) 19 (1-3) (2010) 53-64.
[17] D. Gonzalez, E. Cueto, F. Chinesta, Real-time direct integration of reduced solid dynamics equations, Int. J. Numer. Methods Eng. 99 (9) (2014) 633-653.
[18] R. Ibañez, D. Borzacchiello, J. Vicente Aguado, E. Abisset-Chavanne, E. Cueto, P. Ladeveze, F. Chinesta, Data-driven non-linear elasticity: constitutive manifold construction and problem discretization, Comput. Mech. 60 (5) (2017) 813-826.
[19] T. Kirchdoerfer, M. Ortiz, Data-driven computational mechanics, Comput. Methods Appl. Mech. Eng. 304 (2016) 81-101.
[20] R. Ibanez, E. Abisset-Chavanne, J. Vicente Aguado, D. Gonzalez, E. Cueto, F. Chinesta, A manifold learning approach to data-driven computational elasticity and inelasticity, Arch. Comput. Methods Eng. 25 (1) (2018) 47-57.
[21] E. Lopez, D. Gonzalez, J.V. Aguado, E. Abisset-Chavanne, E. Cueto, C. Binetruy, F. Chinesta, A manifold learning approach for integrated computational materials engineering, Arch. Comput. Methods Eng. 25 (1) (2018) 59-68, https://doi.org/10.1007/s11831-016-9172-5.
[22] Y. Kevrekidis, G. Samaey, Equation-free modeling, Scholarpedia 5 (9) (2010) 4847.
[23] Ch. Heyberger, P.-A. Boucard, D. Neron, A rational strategy for the resolution of parametrized problems in the PGD framework, Comput. Methods Appl. Mech. Eng. 259 (2013) 40-49.
[24] G. Rozza, Fundamentals of reduced basis method for problems governed by parametrized PDEs and applications, in: P. Ladeveze, F. Chinesta (Eds.), CISM Lecture Notes "Separated Representation and PGD Based Model Reduction: Fundamentals and Applications", Springer Verlag, 2014.
