
HAL Id: hal-02468552

https://hal.inria.fr/hal-02468552

Submitted on 5 Feb 2020


Visual navigation of indoor/outdoor non-holonomic robots with wide field of view cameras

Jonathan Courbon, Youcef Mezouar, Laurent Eck, Philippe Martinet

To cite this version:

Jonathan Courbon, Youcef Mezouar, Laurent Eck, Philippe Martinet. Visual navigation of indoor/outdoor non-holonomic robots with wide field of view cameras. OmniRoboVis08 - Workshop on Omnidirectional vision robotics, Nov 2008, Venice, Italy. ⟨hal-02468552⟩


Visual navigation of indoor/outdoor nonholonomic robots with wide field-of-view cameras

J. Courbon^{1,2}, Y. Mezouar^1, L. Eck^2, and P. Martinet^1

^1 LASMEA, Aubiere F-63177, France
^2 CEA-List, Fontenay-Aux-Roses F-92265, France

Abstract. In this paper, we present results of a complete framework for non-holonomic robot navigation in indoor and outdoor environments using a single wide field-of-view camera. The proposed navigation framework for wheeled robots is based on a visual memory containing key images to reach. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these obtained visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a visual route. When running autonomously, the control guides the robot along the reference visual route without explicitly planning any trajectory. The control consists in a control law adapted to the nonholonomic constraint and directly computed from visual points. The proposed framework has been designed for a generic class of cameras. In this paper, experiments have been carried out with catadioptric and fisheye cameras, in an indoor environment with a Pioneer AT3 robot and in an outdoor environment with an autonomous urban vehicle.

1 Introduction

Often used alongside more "traditional" embedded sensors (proprioceptive sensors like odometers as well as exteroceptive ones like sonars), vision sensors provide accurate localization methods. The authors of [1] give an account of twenty years of work at the meeting point of the mobile robotics and computer vision communities. In many works, a map of the environment and the robot localization in this absolute frame are simultaneously updated. Both motion planning and robot control can then be designed in this space. The results obtained by the authors of [2] suggest that such a framework is reachable using a single camera. However, although an accurate global localization is unquestionably useful, our aim is to build a complete vision-based framework without recovering the position of the mobile robot with respect to a reference frame. In [1] this type of framework is ranked among the qualitative approaches.


Fig. 1. Three applications: navigation of a Pioneer AT3 equipped with a catadioptric camera (a) and with a fisheye camera (b), and navigation of an urban vehicle (c).

[…] bounded quantity of images gathered in a set called the visual memory. In the context of mobile robotics, [3] proposes to use a sequence of images, recorded during a human teleoperated motion, called a View-Sequenced Route Reference. This concept underlines the close link between a human-guided learning step and the paths performed during an autonomous run. However, the automatic control of the robot in [3] is not formulated as a visual servoing task. In [4], a homing strategy is used to control a wheelchair from a memory of omnidirectional images, but the control of this holonomic robot is not part of the presented framework.

In the proposed framework, the control design directly takes into account the non-holonomic model of the robot and is computed from the feature matching. Panoramic views acquired by large field-of-view cameras are well adapted to this approach since they provide a large amount of visual features, which can be exploited for localization as well as for visual servoing.

The aim of this paper is to present different experimental validations. The concept of visual memory is briefly explained in Section 2; more details can be found in [5]. Section 3 deals with the vision-based control scheme designed to control the robot motions along a visual route using large field-of-view images. Finally, in Section 4, experiments on a Pioneer AT3 mobile robot using catadioptric and fisheye cameras (refer to Fig. 1 (a), (b)) and on an urban vehicle equipped with a fisheye camera (refer to Fig. 1 (c)) illustrate the proposed framework.

2 Vision-based memory navigation (VBMN) strategy

Our approach can be divided in three steps: 1) visual memory building, 2) localization into the visual memory, 3) navigation into the visual memory (refer to Fig. 2).

2.1 Visual Memory Structure


[…] and form the visual memory. The key image selection is done as detailed in the sequel. The considered visual features are points. As proposed in [2] and successfully applied for the metric localization of autonomous robots in outdoor environments, interest points are detected in each image with the Harris corner detector and matched by computing a Zero Normalized Cross Correlation score. In our experiments, 500 points are detected in each image. The first image of the sequence acquired during the learning step is selected as the first key frame $I_1$. A key frame $I_{i+1}$ is then chosen so that there are as many frames as possible between $I_i$ and $I_{i+1}$ while there are at least $M$ common interest points tracked between $I_i$ and $I_{i+1}$. From this selection results a visual path $\Psi$, which is a directed graph composed of $n$ successive key images (vertices): $\Psi = \{I_i \mid i = 1, 2, \ldots, n\}$.

For control purposes, two hypotheses are supposed to be verified: 1) the authorized motions during the learning stage are assumed to be limited to those of a car-like robot which only goes forward, and 2) two successive key images $I_i$ and $I_{i+1}$ contain a set $P_i$ of matched visual features which can be observed along a path performed between $F_i$ and $F_{i+1}$ and which allows the computation of the control law.
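To make the selection rule concrete, here is a minimal sketch in Python using OpenCV's Harris detector; `count_zncc_matches` and the value of `M` are placeholders (the paper states the ZNCC matching but does not give `M`), so this is an illustration of the greedy criterion, not the authors' implementation:

```python
import cv2
import numpy as np

M = 100  # minimal number of shared interest points (assumed value, not from the paper)

def detect_harris_points(image, max_points=500):
    """Detect up to 500 Harris corners per grayscale image, as in the learning step."""
    pts = cv2.goodFeaturesToTrack(image, maxCorners=max_points, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))

def select_key_images(frames, count_zncc_matches):
    """Greedy key-frame selection: the next key image I_{i+1} is the farthest frame
    that still shares at least M ZNCC-matched points with I_i.
    `count_zncc_matches(img_a, img_b, pts_a)` is a hypothetical matcher returning
    how many of pts_a are matched between the two images."""
    keys = [0]                          # index of I_1: first acquired image
    pts = detect_harris_points(frames[0])
    candidate = 0
    for j in range(1, len(frames)):
        if count_zncc_matches(frames[keys[-1]], frames[j], pts) >= M:
            candidate = j               # still enough common points: push farther
        else:
            keys.append(candidate)      # last valid frame becomes I_{i+1}
            pts = detect_harris_points(frames[candidate])
    return [frames[k] for k in keys]
```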


A visual route describes the robot's mission in the sensor space. Given the images $I_s$ and $I_g$, a visual route is a set of key images which describes a path from $I_s$ to $I_g$. $I_s$ is the closest key image to the current image of the robot, determined during a localization step. This step consists in finding the image of the memory which best fits the current image acquired by the embedded camera. The last step of the framework is to follow this visual route.
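For the localization step, a minimal sketch reusing `detect_harris_points` and the hypothetical `count_zncc_matches` from the sketch above; scoring the fit by the number of matched points is our assumption, as the paper only states that the best-fitting memory image is selected:

```python
import numpy as np

def localize(current_image, memory_images, count_zncc_matches, detect=detect_harris_points):
    """Return the index of the memory key image that best fits the current image,
    i.e. the one sharing the most matched interest points with it."""
    pts = detect(current_image)
    scores = [count_zncc_matches(current_image, key, pts) for key in memory_images]
    return int(np.argmax(scores))
```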

3 Route following using an omnidirectional camera

$I_c$ is the current image and $I_{i+1}$ is the next key image of the path to reach. The hand-eye parameters (i.e. the rigid transformation between $F_c$ and the frame attached to the camera) are supposed to be known. The vehicle is supposed to locally navigate on a planar surface. Let us note $F_{i+1} = (O_{i+1}, X_{i+1}, Y_{i+1}, Z_{i+1})$ the frame attached to the robot when $I_{i+1}$ was stored and $F_c = (O_c, X_c, Y_c, Z_c)$ a frame attached to the robot in its current location (refer to Fig. 3). The origin $O_c$ of $F_c$ is the origin of the control frame of the robot.

Fig. 3. The frame $F_{i+1}$ along the trajectory $(\Gamma)$ is the frame where the desired image $I_{i+1}$ was acquired. The current image $I_c$ is situated at the frame $F_c$.

3.1 Principle and control law design

The control strategy consists in guiding $I_c$ to $I_{i+1}$ by regulating asymptotically the axle $Y_c$ on the straight line $(\Gamma) = (O_{i+1}, Y_{i+1})$ (refer to Fig. 3). The control objective is achieved if $Y_c$ is regulated before the origin of $F_c$ reaches the origin of $F_{i+1}$. The longitudinal velocity (respectively the angular velocity) of the robot is $V$ (respectively $\omega$). Let $y$ be the distance between $O_c$ and $(\Gamma)$ and $\theta$ the angular deviation of $F_c$ with respect to $(\Gamma)$. A control law based on a chained system approach can be designed to achieve this goal:

$$\omega(y, \theta) = -V \cos^3\theta \, K_p \, y - |V \cos^3\theta| \, K_d \tan\theta \qquad (1)$$

As long as the robot longitudinal velocity $V$ is non-zero, the performances of path following can be determined in terms of settling distance [7], that is to say in terms of the distance to travel before reaching the desired position. $K_p$ and $K_d$ are two positive gains which set the performances of the control law depending on the settling distance. The lateral and angular deviations of $F_c$ with respect to $(\Gamma)$ can be obtained through partial Euclidean reconstructions, as described in Section 3.2.

For robots relying on the Ackermann model (bicycle model), such as the RobuCab vehicle, the control law can be obtained using the same approach.
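A minimal sketch of Eq. (1) in Python; the double pole and the 10 deg/s saturation come from the experiments in Section 4.2, while the mapping from the pole to $(K_p, K_d)$ is our assumed convention:

```python
import math

def control_law(y, theta, V, pole=0.3):
    """Angular velocity from Eq. (1): y is the lateral deviation (m),
    theta the angular deviation (rad), V the longitudinal velocity (m/s)."""
    Kp = pole * pole            # double pole at `pole`: s^2 + Kd*s + Kp = (s + pole)^2
    Kd = 2.0 * pole             # hence Kp = pole^2, Kd = 2*pole (assumed convention)
    c3 = math.cos(theta) ** 3
    omega = -V * c3 * Kp * y - abs(V * c3) * Kd * math.tan(theta)
    # safety saturation of the control input at 10 deg/s, as in the experiments
    limit = math.radians(10.0)
    return max(-limit, min(limit, omega))
```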

3.2 State estimation from the unified model of camera on the sphere

A classical model for central catadioptric cameras is the unified model on the sphere [8]. It has been shown in [9, 10] that this model is also suitable for fisheye cameras in robotic applications.

The point in the image plane corresponding to the 3D point $X$ of coordinates $\mathbf{X} = [X \; Y \; Z]^T$ is obtained after a projection onto a virtual unit sphere, followed by a perspective projection onto the normalized image plane $Z = 1 - \xi$ and a plane-to-plane collineation [8] (refer to Fig. 4), where the parameter $\xi$ describes the type of sensor. The homogeneous coordinates $\mathbf{x}_i$ of this image point are:

$$\mathbf{x}_i = \mathbf{K}_c\,\mathbf{M} \begin{bmatrix} \dfrac{X}{Z + \xi\|\mathbf{X}\|} \\[4pt] \dfrac{Y}{Z + \xi\|\mathbf{X}\|} \\[4pt] 1 \end{bmatrix} \qquad (2)$$

Fig. 4. Geometry of two views using the unified model on the sphere.


$\mathbf{K}_c$ contains the usual intrinsic parameters of the perspective camera and $\mathbf{M}$ contains the parameters of the frame changes.

We notice that the coordinates $\mathbf{X}_m$ of the projection on the sphere can be computed as a function of the coordinates in the image $\mathbf{x}$ and the sensor parameter $\xi$:

$$\mathbf{X}_m = (\eta^{-1} + \xi)\,\bar{\mathbf{x}} \qquad (3)$$

$$\bar{\mathbf{x}} = \begin{bmatrix} \mathbf{x}^T & \dfrac{1}{1 + \xi\eta} \end{bmatrix}^T$$

with:

$$\eta = \frac{-\gamma - \xi(x^2 + y^2)}{\xi^2(x^2 + y^2) - 1}, \qquad \gamma = \sqrt{1 + (1 - \xi^2)(x^2 + y^2)}$$

Depending on the context, a scaled Euclidean reconstruction is computed using the homography matrix when the 3D points are coplanar, or using the essential matrix otherwise, as detailed in the sequel.
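A minimal numerical sketch of the projection (2) and the inverse lifting (3), under the assumption that the image point has already been brought back to the normalized plane (i.e. $\mathbf{K}_c\mathbf{M}$ undone); the round trip onto the unit sphere can be checked directly:

```python
import numpy as np

def project_unified(X, xi):
    """Project a 3D point X = (X, Y, Z) onto the normalized image plane, Eq. (2)."""
    X = np.asarray(X, dtype=float)
    rho = np.linalg.norm(X)
    return X[:2] / (X[2] + xi * rho)

def lift_to_sphere(x, xi):
    """Lift a normalized image point x = (x, y) back onto the unit sphere, Eq. (3)."""
    x = np.asarray(x, dtype=float)
    r2 = x[0] ** 2 + x[1] ** 2
    gamma = np.sqrt(1.0 + (1.0 - xi ** 2) * r2)
    eta = (-gamma - xi * r2) / (xi ** 2 * r2 - 1.0)
    x_bar = np.array([x[0], x[1], 1.0 / (1.0 + xi * eta)])
    return (1.0 / eta + xi) * x_bar

# sanity check: a lifted projection lies on the unit sphere
Xm = lift_to_sphere(project_unified([0.3, -0.2, 1.0], xi=0.8), xi=0.8)
assert abs(np.linalg.norm(Xm) - 1.0) < 1e-9
```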

Let $X$ be a 3D point with coordinates $\mathbf{X}_c = [X_c \; Y_c \; Z_c]^T$ in the current frame $F_c$ and $\mathbf{X}_{i+1} = [X_{i+1} \; Y_{i+1} \; Z_{i+1}]^T$ in the frame $F_{i+1}$. This point is projected onto the unit spheres into the points $\mathbf{X}_m$ and $\mathbf{X}_m^*$ (refer to Fig. 4). We suppose that the camera is calibrated.

Scaled Euclidean reconstruction from planar 3D points. Let us consider that the point $X$ belongs to a plane $(\pi)$. After some algebraic manipulation, we obtain:

$$\mathbf{x} \propto \mathbf{H}\,\mathbf{x}^* \qquad (4)$$

where $\mathbf{H}$ is the Euclidean homography matrix relative to the plane $(\pi)$, a function of the camera displacement and of the plane coordinates in $F_{i+1}$. As usual, the homography related to $(\pi)$ can be estimated up to a scale factor with at least four couples of points. From the $\mathbf{H}$-matrix, the camera motion parameters (the rotation matrix $\mathbf{R}$ and the scaled translation $\mathbf{t}_{d^*} = \mathbf{t}/d^*$) and the structure of the observed scene can be estimated (for more details refer to [11]).

Scaled Euclidean reconstruction from non-planar points. When considering non-planar 3D points, the epipolar geometry is used. The epipolar plane contains the projection centers $O_c$ and $O_{i+1}$ and the 3D point $X$. The points of coordinates $\mathbf{X}_m$ and $\mathbf{X}_m^*$ clearly belong to this plane, which is expressed by the relation:

$$\mathbf{X}_m^T\, \mathbf{R}\,(\mathbf{t} \times \mathbf{X}_m^*) = \mathbf{X}_m^T\, \mathbf{R}\,[\mathbf{t}]_\times\, \mathbf{X}_m^* = 0 \qquad (5)$$

where $\mathbf{R}$ and $\mathbf{t}$ represent the rotation matrix and the translation vector between the current and the desired frames. Similarly to the case of the pinhole model, relation (5) can be written:

$$\mathbf{X}_m^T\, \mathbf{E}\, \mathbf{X}_m^* = 0 \qquad (6)$$

where $\mathbf{E} = \mathbf{R}\,[\mathbf{t}]_\times$ is the essential matrix [12]. Once $\mathbf{E}$ is estimated (for instance from five couples of points [13]), the camera motion parameters (the rotation $\mathbf{R}$ and the translation $\mathbf{t}$ up to a scale) can be determined.

Finally, the estimates of the inputs of the control law (1), i.e. the angular deviation $\theta$ and the lateral deviation $y$, are computed straightforwardly from $\mathbf{R}$ and $\mathbf{t}$.
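A minimal sketch of recovering $\mathbf{E}$ from Eq. (6) with a linear least-squares (8-point-style) solve on sphere points; a robust five-point solver as in [13] would replace this in practice, so this is only the plain linear variant:

```python
import numpy as np

def estimate_essential(Xm_cur, Xm_key):
    """Linear least-squares estimate of E from Eq. (6), given N >= 8 couples
    of unit-sphere points (Nx3 arrays). Returns E up to scale, with the
    rank-2 constraint enforced by zeroing the smallest singular value."""
    # each couple contributes one row of the linear system A e = 0
    A = np.array([np.outer(a, b).ravel() for a, b in zip(Xm_cur, Xm_key)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # project onto the set of rank-2 essential-like matrices
    U, S, Vt = np.linalg.svd(E)
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt
```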

4 Experiments

For our experiments, the cameras have been calibrated using the Matlab toolbox presented in [14]. The parameters of the rigid transformation between the camera and the robot control frames are roughly estimated. Grey-level images are acquired at a rate of 15 fps. A learning stage has been conducted off-line and images have been memorized as proposed in Section 2.

4.1 Experiment with a catadioptric camera and planar 3D points in an indoor environment

The proposed framework is implemented on an external standard PC which controls a Pioneer AT3 robot over a wireless link. A catadioptric camera is embedded on the robot and its principal axis approximately coincides with the rotation axis of the robot (refer to Fig. 1 (a)). Three key views (refer to Fig. 5) have been selected to drive the robot from its initial configuration to the desired one. For this experiment, the positions of four planar points are memorized and then tracked. The control is realized using the homography matrix obtained from the projection of the patterns onto the equivalence sphere. Note that the distances from the optical center at the desired position to the reference plane have been overestimated, and that the directions of the normals of the plane are only roughly estimated. The results of the experiment (refer to Fig. 6) show that the lateral and the angular errors are regulated to zero before reaching a key image.

Fig. 5. Initial image $I_c^0$ and desired images the robot has to reach $I_j^*$, $j = 1:3$ (1st experiment).

Fig. 6. Evolution of the lateral (in m) and the angular (in rad) errors and of the control input (angular speed in deg/s) during the experiment (1st experiment).

4.2 Experiment with a fisheye camera in an indoor environment

The Pioneer AT3 robot is now equipped with a Fujinon fisheye lens mounted onto a Marlin F131B camera. The camera, providing a field of view of 185 deg and looking forward, is situated at approximately 30 cm from the ground. Several paths have been memorized (some of the images are shown in Fig. 7). The robot starts indoor and ends outdoor, and the camera grabs images with natural landmarks. Given a goal image, a visual path has been extracted. At each frame, points are extracted from the current image and matched with the desired key image. A robust partial reconstruction is then applied using the current, the desired and the former desired images of the memory. Angular and lateral errors are extracted and allow the computation of the control law (1). A key image is supposed to be reached when one of the "image errors" is smaller than a fixed threshold. In our experiment, we have considered two "image errors": the largest distance between an image point and its position in the desired key image (errImageMax) and the mean distance between those points (errPoints), both expressed in pixels. The longitudinal velocity $V$ has been fixed to $200\ \mathrm{mm\,s^{-1}}$. The gains $K_p$ and $K_d$ have been set so that the error dynamics present a double pole located at 0.3. For safety, the absolute value of the control input is bounded to 10 degrees per second. Lateral and angular errors as well as the control input are represented in Fig. 8. Red crosses are plotted when key images change. As in the first experiment, those errors are well regulated to zero for each key view. The image errors (expressed in pixels) also decrease before reaching the key views (refer to Fig. 9). The errors still remain different from zero because the current image does not reach exactly the desired image. As can be noticed in Fig. 10, our method is robust to changes in the environment: a man was walking towards the robot (at the left) during the manually driven step, whereas a man is walking at the right of the Pioneer AT3 robot during the autonomous step.
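A minimal sketch of the two image errors and the key-image switching test; the threshold value is ours, as the paper only says it is fixed:

```python
import numpy as np

def image_errors(pts_cur, pts_key):
    """errImageMax: largest point-to-point distance (pixels);
    errPoints: mean distance over all matched points."""
    d = np.linalg.norm(np.asarray(pts_cur) - np.asarray(pts_key), axis=1)
    return d.max(), d.mean()

def key_image_reached(pts_cur, pts_key, threshold=15.0):
    """The key image is considered reached when one of the two errors
    drops below a fixed threshold (15 px is an assumed value)."""
    err_max, err_mean = image_errors(pts_cur, pts_key)
    return err_max < threshold or err_mean < threshold
```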

Fig. 7. Parts of the visual path to follow (2nd experiment).

Fig. 8. Lateral ($y$) and angular ($\theta$) errors and control input ($\delta$) vs time (2nd experiment).

Fig. 9. Image errors (errImageMax) and (errPoints) vs time (2nd experiment).

4.3 Experiment with a fisheye camera in an outdoor environment

Our framework is now applied to the navigation of an urban electric vehicle, named RobuCab. The same fisheye camera as previously, looking forward, is situated at approximately 80 cm from the ground. This vehicle is manually driven along the 800-meter-long path shown in Fig. 11. This path contains important turns as well as downhill and uphill sections and a return leg.

After the selection step, 800 key images are kept and form the visual memory of the vehicle. The longitudinal velocity $V$ is fixed between $0.4\ \mathrm{m\,s^{-1}}$ and $1\ \mathrm{m\,s^{-1}}$, depending on the position on the path to follow (straight lines or turns). The experiment lasts 13 minutes for a path of 754 meters, which results in a mean velocity of $0.8\ \mathrm{m\,s^{-1}}$. An average of 123 robust matches per frame has been found. The mean computational time during the online navigation was 82 ms per image. The errors in the images decrease to zero until a key image is reached (refer to Fig. 12).

Fig. 10. The image (a) corresponds to the reached key image (b) of the visual memory (2nd experiment).

Lateral and angular errors as well as the control input are represented in Fig. 13. As can be noticed, those errors are well regulated to zero for each key view, except when sharp turns occur. Our control law (line reaching) is not able to converge quickly in those cases. Significant errors are thus obtained during the large turns, but the errors then decrease. In future work, we plan to improve our control law to manage the navigation in large turns more efficiently.

Fig. 11. Paths in the university campus executed during the memorization step (in red) and the autonomous step (in blue) (3rd experiment).


Fig. 13. Lateral ($y$) and angular ($\theta$) errors and control input ($\delta$) vs time (3rd experiment).

5 Conclusion

In this paper, an image-based navigation framework dedicated to nonholonomic mobile robots has been presented. The approach is illustrated in the context of indoor/outdoor environments using a single wide field-of-view camera and natural landmarks. We propose to learn the environment as a graph of visual paths, called the visual memory. A visual route is made of a sequence of key images of the environment which describes, in the sensor space, an admissible path for the robot. This visual route can be performed thanks to a visual-servoing control law which is adapted to the robot nonholonomy.

Future work will be devoted to relaxing the staticity constraint on the environment. We will try to analyse and take into account environment modifications which may occur between learning steps and autonomous runs, in both visual route building and following.

References


2. Royer, E., Lhuillier, M., Dhome, M., Lavest, J.M.: Monocular vision for mobile robot localization and autonomous navigation. International Journal of Computer Vision, special joint issue on vision and robotics 74 (2007) 237-260

3. Matsumoto, Y., Inaba, M., Inoue, H.: Visual navigation using view-sequenced route representation. In: IEEE International Conference on Robotics and Automation, ICRA'96. Volume 1, Minneapolis, Minnesota, USA (April 1996) 83-88

4. Goedemé, T., Tuytelaars, T., Vanacker, G., Nuttin, M., Van Gool, L.: Feature based omnidirectional sparse visual path following. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada (August 2005) 1806-1811

5. Courbon, J., Lequievre, L., Mezouar, Y., Eck, L.: Navigation of urban vehicle: An efficient visual memory management for large scale environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'08, Nice, France (September 2008)

6. Blanc, G., Mezouar, Y., Martinet, P.: Indoor navigation of a wheeled mobile robot along visual routes. In: IEEE International Conference on Robotics and Automation, ICRA'05, Barcelona, Spain (2005) 3365-3370

7. Thuilot, B., Bom, J., Marmoiton, F., Martinet, P.: Accurate automatic guidance of an urban electric vehicle relying on a kinematic GPS sensor. In: 5th IFAC Symposium on Intelligent Autonomous Vehicles, IAV'04, Instituto Superior Técnico, Lisbon, Portugal (July 5-7th 2004)

8. Geyer, C., Daniilidis, K.: Catadioptric projective geometry. International Journal of Computer Vision 43 (2001) 223-243

9. Barreto, J.: A unifying geometric representation for central projection systems. Computer Vision and Image Understanding 103 (2006) 208-217. Special issue on omnidirectional vision and camera networks.

10. Courbon, J., Mezouar, Y., Eck, L., Martinet, P.: A generic fisheye camera model for robotic applications. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'07, San Diego, CA, USA (29 October - 2 November 2007) 1683-1688

11. Faugeras, O., Lustman, F.: Motion and structure from motion in a piecewise planar environment. International Journal of Pattern Recognition and Artificial Intelligence 2(3) (1988) 485-508

12. Svoboda, T., Pajdla, T.: Epipolar geometry for central catadioptric cameras. International Journal of Computer Vision 49(1) (2002) 23-37

13. Nistér, D.: An efficient solution to the five-point relative pose problem. Transactions on Pattern Analysis and Machine Intelligence 26(6) (2004) 756-770

14. Mei, C., Benhimane, S., Malis, E., Rives, P.: Constrained multiple planar template tracking for central catadioptric cameras. In: British Machine Vision Conference, BMVC'06, Edinburgh, UK (2006)

