
Automatic guidance of robotized 2D ultrasound probes with visual servoing based on image moments.



HAL Id: tel-00476718
https://tel.archives-ouvertes.fr/tel-00476718v2
Submitted on 27 Apr 2010

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Rafik Mebarki

To cite this version:
Rafik Mebarki. Automatic guidance of robotized 2D ultrasound probes with visual servoing based on image moments. Automatic. Université Européenne de Bretagne, 2010. English. ⟨tel-00476718v2⟩


Under the seal of the Université Européenne de Bretagne

for the degree of

DOCTEUR DE L'UNIVERSITÉ DE RENNES 1
Specialty: Signal Processing

École doctorale Matisse

presented by

Rafik Mebarki

prepared at IRISA. Host team: LAGADIC. University component: IFSIC

Automatic guidance of robotized 2D ultrasound probes with visual servoing based on image moments

Thesis defended in Rennes on 25 March 2010 before the jury composed of:

Christian Barillot, Directeur de Recherche, CNRS (president)
Guillaume Morel, Professeur, ISIR, Paris (reviewer)
Philippe Poignet, Professeur, LIRMM, Montpellier (reviewer)
Pierre Dupont, Professor, Harvard Medical School, Boston University, USA (examiner)
Alexandre Krupa, Chargé de Recherche, INRIA (thesis co-supervisor)
François Chaumette


Thesis submitted in partial satisfaction of the requirements for the degree of

Doctor of Philosophy (Ph.D.)

in

Signal Processing

from

Université de Rennes 1

This work has been prepared at IRISA/INRIA Rennes.

Committee in charge: Guillaume Morel, Philippe Poignet, Pierre Dupont, Christian Barillot, Alexandre Krupa, François Chaumette

March 2010


latter for the rich discussions we have had and for different things.

My warmest thanks to my colleague and friend Hamza Drid.

My thanks to Boris D.

To my brave friends of Toulouse, especially Yassine C.

My warmest thanks and gratitude to Mr. Boudour, my former teacher of Mathematics.

My warmest thanks to Riad K. for his valuable help.

To my best friends Samir and Kamal.

My thanks to Riad B. for all his advice and encouragement.


2.3 X-ray-based guidance
2.4 MRI-guided robotics
2.5 Ultrasound-based guidance
2.5.1 Ultrasound-based simulations
2.5.2 3D ultrasound-guided robotics
2.5.3 2D ultrasound-guided position-based visual servoing
2.5.4 2D ultrasound-guided image-based visual servoing
2.6 Conclusion

3 Modeling
3.1 Image moments: a brief state of the art
3.2 Discussion with regard to image moments
3.3 Image moments-based visual servoing with optical systems: state of the art
3.4 Modeling objectives
3.5 Image point velocity modeling
3.5.1 First constraint
3.5.2 Second constraint
3.5.3 Virtual point velocity
3.6 Image moments time variation modeling
3.7 Interpretation for simple shapes
3.7.2 Cylindrical objects
3.7.3 Interaction with a 3D straight line
3.8 Conclusion

4 Normal vector on-line estimation
4.1 On-line estimation methods based on lines
4.1.1 Straight line-based estimation method
4.1.2 Curved line-based estimation method
4.2 Quadric surface-based estimation method
4.3 Sliding least squares estimation algorithm
4.4 Simulation results
4.4.1 Interaction with straight lines
4.4.2 Interaction with curved lines
4.4.3 Interaction with quadric surfaces
4.4.4 Ellipsoid objects: perfect and noisy cases
4.5 Discussion
4.6 Conclusion

5 Visual Servoing
5.1 Visual features selection
5.2 Simulation results with an ellipsoidal object
5.2.1 Model-based visual servoing
5.2.2 Model-free visual servoing using the curved line-based normal vector estimation
5.3 Simulation results with realistic ultrasound images
5.4 Simulation results with a binary object
5.5 Experimental results
5.5.1 Experimental results with a spherical object
5.5.2 Experimental results with an ultrasound phantom
5.5.3 Ex-vivo experimental results with a lamb kidney
5.5.4 Experimental results with a motionless soft tissue
5.5.5 Tracking two targets
5.6 Conclusion

6 Conclusions

B.1 Integral of trigonometric functions
B.2 Calculus of n_ij, spherical case

C Supplementary simulation results of model-free visual servoing
C.1 Model-free servoing on the ellipsoid
C.1.1 Using the straight line-based method
C.1.2 Using the quadric surface-based method
C.2 Simulations with realistic ultrasound images
C.2.1 Straight line-based estimation
C.2.2 Quadric surface-based estimation
C.3 Simulations with the binary volume
C.3.1 Straight line-based estimation


can perform an action only if the latter is ordered and well formulated according to the robot's own language, provided of course that the required action fits and lies within the robot's capabilities. This language is the one the robot's actuators understand; they accordingly generate an action that is transmitted to the robot's structure. The actions separately generated by each of the actuators result in an action at the structure's end-element. The robot is servoed to perform a task in its environment, and therefore needs information about the latter in order to be able to interact with it. Such information is generally afforded by sensors attached to the structure of the robot. They can be either proprioceptive or exteroceptive, respectively sensing the state of the robot or that of the environment. The task to be performed by the robot is conceived in a language different from the one understandable by the robot's actuators. Task orders can be formulated, for example, as: move to position A then to position B; perform a motion with a certain velocity and then smoothly stop upon arriving at a certain position; grab the door and then correctly fix it on the car body; push the surface with a certain force and perform back-and-forth motions for polishing; perform welding by following a certain path; etc. The task orders cannot be directly communicated to the robot, since the latter's actuators do not understand the language in which the ordered task is formulated. The actuators can act only according to orders formulated in the actuators' language. A buffer between the two languages is consequently crucial to translate the orders, so that they are understood and then accomplished by the robot. The technical field related to such buffers is well known, when dealing with robots, as Robot Control.

Figure 1.1: Sketch about robotics control (high-level-language orders enter a buffer that sends low-level commands to the robot, while state information is fed back from the robot to the buffer).

The sensors provide the robot's or the environment's state information, which is fed back to the buffer; the buffer then computes the commands that are finally sent to the robot. A sketch is given in Fig. 1.1.

Depending on the kind of task to be performed by the robot, different types of sensors are considered. In the case where only proprioceptive sensors, such as the robot's encoders, are used to convey the information relative to the pose of the robot, the servoing technique is known as Position-based Servoing. Such techniques require prior knowledge about the considered environment, for instance a CAD model representing its geometry. They are prone to errors in the task accomplishment if a change has occurred in a considered part of the environment. An alternative consists in using exteroceptive sensors, such as vision sensors, which enable the robot to perceive the environment with which it is interacting. This approach is well known as the Visual Servoing (VS) technique; we draw its global scheme in Fig. 1.2, roughly representing the different steps involved with the corresponding data flow.

Visual sensors provide an image of the environment, thus reflecting its state. The information contained in the image is extracted and then fed back for robot servoing. In the case where this information is directly used to compute the command to the robot, the technique is referred to as Image-based visual servoing (IBVS). If, however, the information is processed to be transformed into 3D pose information, which is then used to compute the command, the technique is referred to as Position-based visual servoing (PBVS). Otherwise, part of the information is transformed into pose inputs, which are then compounded with other image information to compute the command; in this case we speak of a hybrid visual servoing scheme.

Figure 1.2: A typical visual servoing scheme.

In visual servoing, the feedback information used for computing the command is referred to as the visual feature.
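As an aside (not part of the thesis text), the IBVS principle just described is classically implemented by regulating the feature error e = s - s* to zero, computing the sensor velocity as v = -λ L⁺ e, where L is the interaction matrix and L⁺ its pseudo-inverse. A minimal NumPy sketch, with a toy feature vector and an identity interaction matrix chosen purely for illustration:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS control law: v = -lambda * pinv(L) @ (s - s*).

    s, s_star : current and desired visual feature vectors
    L         : interaction matrix relating feature rates to sensor velocity
    lam       : control gain (exponential decay rate of the error)
    """
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error

# Toy example: 2 features controlled by 2 DOFs with L = identity,
# so each feature error decays independently.
s = np.array([1.0, 2.0])
s_star = np.array([0.0, 0.0])
L = np.eye(2)
v = ibvs_velocity(s, s_star, L)
# v = [-0.5, -1.0]: the command opposes the feature error
```

With L = I each feature error simply decays at rate λ; in practice L has to be modeled or estimated for the sensor at hand, which is precisely the difficulty this thesis tackles for 2D ultrasound.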

Robotics came into being with the main objective of enhancing the capabilities of humans and affording what the latter could not. It was in fact a follow-up to the development of mechanical machines, which at that time already afforded humans valuable services. Such machines were, however, restrained to performing a unique task and were limited in autonomy. This fueled the desire to make them versatile, with a broad range of services and with as high as possible autonomy. Moreover, investigations have already been undertaken to make these machines smart, even with higher skills than humans. Much of the effort therefore has been, and still is at an increasing rate, devoted to enhancing robots' autonomy and capabilities, an effort in which this thesis takes part.

Robotics finds applications in numerous areas including, but not limited to, the automotive industry, aerospace, underwater, nuclear, and military fields, and recently the medical intervention field. The latter represents the field this thesis is mainly targeting; we introduce this area in Chapter 2. Visual sensors afford robotic systems perception of their environment, and consequently more abilities for autonomous actions with enhanced


autonomy, notably in the case of the medical robotics field, where the environment with which the robot is interacting is typically difficult to model. Possible continual changes of the environment's state make such difficulties stronger. Many medical robotic systems indeed use visual sensors, and therefore are endowed with capabilities of interacting with their environment. Those sensors are generally based on modalities such as optical imaging, magnetic resonance (MR), X-ray fluoroscopy or CT scan, ultrasound, etc. We provide in the next chapter a review of robotic systems guided with these imaging modalities, presented in more detail for the case of ultrasound, since our work concerns that field.

A gap, however, still remains to be addressed before medical robotics becomes commonplace for a large range of applications, due mainly to the fact that the information provided by most of such sensors is not yet well exploited in servoing. Efforts are therefore needed to deal with this issue and to investigate how those sensors could be used, and their information exploited and translated into a language understood by the robot (i.e., new modeling along with visual servoing techniques needs to be developed), so that the latter behaves accordingly and achieves the required medical task. This thesis concerns such objectives; more particularly, it investigates how 2D ultrasound sensors, through their valuable information, can be exploited in medical robotic systems in order to afford the latter enhanced autonomy and capabilities.

Contributions

Our work concerns the exploitation of 2D ultrasound images in the closed loop of a visual servoing scheme for the automatic guidance of a robot arm that carries a 2D ultrasound probe at its end-effector; we consider in this work 6 degrees of freedom (DOF) anthropomorphic medical robot arms. We develop a new visual servoing method that allows for automatic positioning of a robotized 2D ultrasound probe with respect to an observed soft tissue [54], [57], [55], and [56]. It allows controlling both the in-plane and out-of-plane motions of the 2D ultrasound probe. This method makes direct use of the observed 2D ultrasound images, continuously provided by the probe transducer, in the servoing loop (see Fig. 1.3). It exploits the shape of the cross-section lying in the 2D image, translating it into feedback signals for the control loop. This is achieved by making use of image moments, which after being extracted are compounded to build up the feedback visual features (an introduction to image moments is given in Chapter 3). The choice of the components of the visual feature vector is also determinant. These features are transformed into a command signal to the probe-carrier robot. To do so, we first develop the interaction matrix that relates the image moments time variation to the probe velocity. This interaction matrix is central in the design of the visual servoing scheme, since it is involved in the control law.

Figure 1.3: An overall scheme of the ultrasound (US) visual servoing method using image moments, with the corresponding data flow.

We propose six relevant visual features to control the 6 DOF of the robot. The method we develop allows for automatically reaching a target image starting from a totally different one, and does not require a prior calibration step with regard to parameters representing the environment with which the probe transducer interacts. It is furthermore based on visual features that can be readily computed after having segmented the cross-section of interest in the image. These features do not warp, but truly reflect, the information conveyed by the image. They are unlikely to misrepresent the actual information of the image from which they are extracted. These features are moreover relatively robust to image noise, which is of great interest when dealing with the ultrasound modality, whose images are inherently very noisy. Such an image moments-based servoing system is presented in the present dissertation.
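For context, here is what extracting such moment-based measurements can look like (a generic sketch of mine, not the thesis's implementation): the discrete moments m_ij of a segmented cross-section S are sums of x^i y^j over the region's pixels, and the low-order ones already yield the section's area and centroid.

```python
import numpy as np

def moment(mask, i, j):
    """Discrete image moment m_ij = sum over the segmented region of x^i * y^j.

    mask : 2D boolean array, True inside the imaged cross-section
    """
    ys, xs = np.nonzero(mask)
    return np.sum((xs.astype(float) ** i) * (ys.astype(float) ** j))

# A 4x4 square region: area m00 = 16, centroid at the region's center.
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:7] = True

m00 = moment(mask, 0, 0)          # area of the cross-section
xg = moment(mask, 1, 0) / m00     # centroid x-coordinate
yg = moment(mask, 0, 1) / m00     # centroid y-coordinate
# m00 = 16.0, xg = 4.5, yg = 3.5
```

Higher-order and central moments, combined in the right way, give the orientation- and scale-related features discussed in Chapter 3.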


The method we propose has numerous potential medical applications. First, it can be used for diagnosis, by providing an appropriate view of the organ of interest. For instance, in [1] only the probe in-plane motions are automatically compensated to keep tubes centered in the image. However, if the tubes are, for example, curved, they may vanish from the image while the robotized probe is manipulated by the operator; indeed, compensating only in-plane motions is not enough to follow such tubes. With the method we propose, however, the probe could automatically follow the tubes' curvature thanks to the compensation of the out-of-plane motions. Another potential application is needle insertion. Since the method we propose allows keeping the actuated probe on a desired cross-section of an organ, it would afford stabilizing an actuated needle with respect to the targeted organ. This would prevent the needle from eventually bending or breaking when the organ moves. The assumption and constraint made, for example, in [38], where the needle is mechanically constrained to lie in the probe observation plane, would thus be overcome, since the system would automatically stabilize the needle in the desired plane (the organ's slice). Another application is 3-D image registration; a colleague in the Lagadic group is currently working on exploiting this method for that topic.

This thesis brings and states a new modeling of the ultrasound visual information with respect to the environment with which the robot is interacting. It is important to notice the difference from the modeling of the visual information of optical systems, which can be found in different works of the literature. In the case of optical systems, such as a camera, the transmitted image conveys information about 3D world scenes that are projected onto the image plane. In contrast, a 2D ultrasound transducer transmits a 2D image of the section resulting from the intersection of the probe observation beam with the considered object. In practice, the ultrasound beam is approximated by a perfect plane. A 2D ultrasound probe thus provides information only in its observation plane, and none outside of it. Consequently, the modeling in the case of optical systems quite differs from that of 2D ultrasound systems (this contrast is sketched in Fig. 1.4). Most visual interaction modeling, and thus most visual servoing methods, are however devoted to optical systems; they therefore cannot be applied in the case of 2D ultrasound, due to the highlighted difference. New modeling therefore needs to be developed in order to design visual servoing systems using 2D ultrasound. We first derive the image velocity of points of the cross-section ultrasound image. This velocity is analytically modeled and expressed as a function of the probe velocity. It is then used for deriving the analytical form of the image moments time variation as a function of the probe velocity. This latter formula is nothing but the crucial interaction matrix required in the control law of the visual servoing scheme. The modeling is developed and presented in Chapter 3.
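In generic visual-servoing notation (my summary, not necessarily the thesis's exact symbols), the derivation chain described above can be written as:

```latex
% Time variation of the moment-based feature vector s, linearly related
% to the probe velocity screw v_p by the interaction matrix L_s:
\dot{\mathbf{s}} = \mathbf{L}_{s}\,\mathbf{v}_p
% Imposing an exponential decay of the error e = s - s^* then yields the
% classical control law sent to the probe-carrier robot, with gain \lambda
% and \widehat{\mathbf{L}}_{s}^{+} the pseudo-inverse of an estimate of L_s:
\mathbf{v}_p = -\lambda\,\widehat{\mathbf{L}}_{s}^{+}\,(\mathbf{s}-\mathbf{s}^{*})
```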


Figure 1.4: Difference between an optical system and a 2D ultrasound one in the manner they interact with their respective environments: (a) a 2D ultrasound probe observes an object through the cross-section resulting from the intersection of its planar beam with that object; (b) a perspective camera observes two 3D objects, which reflect rays that are projected on the camera's lens. (The camera picture, at the top, is from http://www.irisa.fr/lagadic/).

of the soft tissue with which the robotic system is interacting, when probe out-of-plane motions are involved. A first resolution that could be proposed is the use of a pre-operative 3D model of the considered soft tissue, which would be used to derive the interaction. However, doing so would raise difficulties along with more challenges. Firstly, the pre-operative model would have to be available, which suggests an off-line procedure to obtain it. Furthermore, it would also require registering the pre-operative model with the currently observed image. The above issue is addressed in the present dissertation. Indeed, we develop an efficient model-free visual servoing method that allows the system to perform automatic positioning without any prior knowledge of the shape of the observed object, its 3D parameters, or its location in 3D space. This model-free method efficiently estimates the 3D parameters involved in the control law. The estimation is performed on-line while the servoing is applied. This is presented in Chapter 4.
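To give a flavor of on-line parameter estimation of this kind, here is a generic sliding-window least-squares sketch (my illustration, not the thesis's algorithm): at each servoing iteration the latest measurement is appended, the oldest is discarded, and the unknown parameters are re-solved in closed form.

```python
import numpy as np
from collections import deque

class SlidingLeastSquares:
    """Sliding-window least squares: keep the last `window` (A_row, b_val)
    measurement pairs and re-solve min ||A x - b|| at each step."""

    def __init__(self, window=50):
        self.rows = deque(maxlen=window)
        self.vals = deque(maxlen=window)

    def add(self, a_row, b_val):
        self.rows.append(np.asarray(a_row, dtype=float))
        self.vals.append(float(b_val))

    def estimate(self):
        A = np.vstack(self.rows)
        b = np.array(self.vals)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

# Toy usage: recover parameters p = (2, -1) from noiseless measurements
# b = 2*u - 1*v collected one at a time, as a servo loop would.
est = SlidingLeastSquares(window=20)
rng = np.random.default_rng(0)
for _ in range(30):
    u, v = rng.uniform(-1, 1, size=2)
    est.add([u, v], 2.0 * u - 1.0 * v)
# est.estimate() is close to [2.0, -1.0]
```

The sliding window lets the estimate track slowly varying parameters, which is the point of performing the estimation on-line during servoing.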

The developed methods have been validated in simulations and experiments. The simulations consist of scenarios where a 2D virtual probe interacts with either a 3D mathematical model, a realistic object reconstructed from a set of real B-scan ultrasound images previously captured, or a binary object reconstructed from a set of binary images. The experiments have been conducted using a 6-DOF medical robot arm carrying a 2D ultrasound probe transducer. The robot arm was interacting with an ultrasound phantom which contained a soft-tissue object inside, and also with soft-tissue objects immersed in a water-filled tank. We finally conclude this document by providing some orientations for prospective works.


fective and broad exploitation of an imaging modality, namely ultrasound imaging, for medical robotics control. Consequently, it seems fundamental to first provide an overview of medical robotics from the point of view of robotics control, and to introduce medical robot guidance performed with the main imaging modalities. After doing so, we can finally start dealing in more detail with works that investigate the use of ultrasound images for robot control.

The remainder of the chapter is organized as follows. We present in the next section a short introduction to medical robotics, along with human-machine interfaces. The latter are commonly used for the intercommunication between the clinician and the medical robotic system for procedure monitoring. We also provide a classification, each class of which reflects a specific manner in which the clinician interacts with, and gives orders to, the robotic system for task achievement. Subsequently, we introduce the most used imaging modalities: optical, X-ray and/or CT, MRI, and ultrasound. The ultrasound modality represents the imaging whose employment in guiding automatic robotic procedures is investigated in the present thesis; the remaining imaging modalities are therefore only briefly presented. The examples of literature investigations related to those modalities are provided only to illustrate their corresponding fields; we thus generally cite only one work for each of them, since they are beyond the focus of this thesis. As for works dealing with ultrasound-based automatic guidance, we finally present and organize them according to a certain classification, as an introduction to the field of 2D-ultrasound-based robotic automatic guidance.

Figure 2.1: Da Vinci robot (Photo: www.intuitivesurgical.com)

2.1 Medical robotics

Some parts of this section are inspired from [78].

Medical robotics came into being to enhance and extend the clinician's capabilities in order to perform medical applications with better precision, dexterity, and speed, leading to medical procedures with shortened operative time, reduced error rate, and reduced morbidity (see [78]); its goal is not to replace the clinician. As examples of such objectives, robotic systems could compensate for the surgeon's hand tremors to remove them during an intervention, or could be used to carry heavy tools with care. These systems could assist and provide the clinician with valuable information, organized and displayed on screens for visualization. The clinician could interact with the system to obtain the desired information, on which correct decisions can be made. The conveyed information therefore has to be pertinent without, at the same time, overwhelming the clinician.

Medical robots can be classified in different ways [78]: by manipulator design (e.g., kinematics, actuation); level of autonomy (e.g., programmed, teleoperated, constrained cooperative control); targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or intended operating environment (e.g., in-scanner, conventional operating room).


The robot should not be cumbersome, in order to allow the clinical staff unimpeded access to the patient, especially for the surgeon during the procedure. It can be ground-, ceiling-, or patient-mounted; such a choice is subject to the tradeoff between the robot's size, heaviness, and access to the patient. Sterilization must also be addressed, especially for surgical procedures. The patient can be in contact with parts of the robot, and consequently all precautions must be taken to prevent any possible contamination of the surgical field. The common practice for sterilization is the use of bags to cover the robot, and either gas, soak, or autoclave steam to sterilize the end-effector holding the surgical instrument.

As introduced above, medical robotic systems mainly use visual sensors, whose modality is chosen depending on the kind of application to perform. Each modality presents specific advantages but also suffers from drawbacks. Soft tissues, for example, are well imaged and their structures well discriminated with Magnetic Resonance Imaging (MRI). This modality is extensively used to detect and then localize tumors for their treatment, and is the subject of different investigations to exploit it for robotized tumor treatment, where the robot could assist needle insertion for better tumor targeting (e.g., [30]). Such imaging is afforded by scanners with a high-intensity magnetic field. Ferromagnetic materials exposed to such a field therefore undergo intense forces and could become dangerous projectiles. Consequently, common robotic components do not apply, since they are generally made from such materials, and are therefore precluded for this imaging modality. Moreover, the streaming rate at which images are provided by current MRI systems is relatively low for envisaging real-time robotic applications. As for bones, they are well imaged with the X-ray modality (or CT). Such imaging has therefore been the subject of investigations and has found its use, for example, in robotically assisted orthopedic surgery such as spine surgery, joint replacement, etc. This modality can, however, be harmful to the patient's body due to its radiation. Optical imaging sensors have also been considered. One of the main medical applications using such sensors concerns endoscopic surgery, where generally a small camera is carried and passed inside the patient's body through a small incision port, while two or more surgical instruments are passed through separate small incisions (see Fig. 2.2).


Figure 2.2: Example of endoscopic surgery robot (Da Vinci robot) in action. (Photo: http://biomed.brown.edu/.../Roboticsurgery.html)

The surgeon can thus handle those surgical instruments and observe their interaction with soft tissues thanks to the images conveyed by the camera. Such procedures have already been robotized, where each instrument is separately carried by a robot arm. The instruments are remotely operated by the surgeon through haptic devices. This kind of robotic system is already commercialized, such as the one shown in Fig. 2.2, and these robotized procedures have become commonplace in some medical centers. Research works are, however, still being conducted in order to automatically assist the surgeon by visually servoing the instrument-holder arms (e.g., [47], [60]).

Another application of optical systems which new works have started to investigate is microsurgery robotics (e.g., [31]); it is introduced in Section 2.2. Other applications could be considered but are extremely invasive (e.g., [36], [7]). Therefore, the range of potential applications based on optical imaging sensors seems to be restrained to a few applications such as endoscopic surgery, wherein at least two incisions are required, leading to possible hemorrhage and trauma for the patient. Bleeding can also hinder and perhaps preclude visualization if blood encounters the camera lens, thus compromising the procedure. Optical sensors require free space up to the region to visualize, which represents a strong constraint that generally cannot be satisfied when dealing with medical procedures, where the camera is inside the body and encounters soft tissue walls on either side. The camera also needs to be passed inside the body up to the region to operate on, which is however not always possible for some regions. We can indeed note that, for instance, most endoscopic procedures are performed laparoscopically (i.e., through the abdomen), and thus the camera


Figure 2.3: An example of a typical robotic system teleoperated through a human-machine interface: three medical slave robot arms (left) are teleoperated by a user thanks to a master handle device, and the procedure is monitored by the user through display screens (right). (Photo: http://www.dlr.de/).

complicated in terms of access since, for example, fewer bones are present. In contrast, the MR, X-ray, and ultrasound imaging modalities provide internal body images without any incision, and thus circumvent the constraints imposed when using optical systems, and their effects. But, as introduced above, MRI and X-ray present drawbacks: the former modality currently does not provide images in real time and precludes ferromagnetic materials, while the latter is harmful. The ultrasound modality, however, provides internal body images non-invasively and is considered safe for the patient. More particularly, 2D ultrasound provides images at a high streaming rate. This latter trait is of great interest when dealing with robot servoing for real-time applications. This thesis concerns this modality: it aims at addressing the issue of exploiting 2D ultrasound images for automatically performing robotized medical applications.

During a medical procedure, it is crucial that the clinician be present to supervise and monitor the application. The clinician should therefore be able to give orders to and interact with the robot. This is performed through an interface well known by the term Human-machine interface.

2.1.1 Human-machine interfaces

Human-machine interfaces (HMI) play an important role in medical robotics; more particularly, they allow the clinician to supervise the procedure. An HMI is grossly composed of a display screen on which different pieces of information are displayed, and a handle device with which


orders are given; this can be, for instance, simply a mouse with which clicks are performed on the display screen. The clinician can thus interactively send orders to the robot through the HMI and, inversely, receive information about the clinical field's state (see Fig. 2.3). However, the clinician should receive important and precise information while, at the same time, not being overwhelmed by such data, in order to make decisions based only on pertinent information. One issue is the ability of the system to estimate the imprecision of the conveyed information, such as registration errors, in order to prevent the clinician from making decisions based on wrong information [78]. An example of a human-machine interface developed for robotically assisted laparoscopic surgery is presented in [61].

2.1.2 Operator-robot interaction paradigms

Depending on the configuration reflecting the manner in which the operator commands the robotic system, different paradigms can be considered, such as those presented in the following.

Self-guided robotic system paradigm

In such a configuration, the robot autonomously performs a series of actions after a clinician has previously indicated the required objectives. That operator is in fact out of the loop with regard to the interaction of the robot with its environment, except for restrained actions such as monitoring the development of the procedure, defining new objectives for the robot, or stopping the procedure. Endowed with such a paradigm, a robotic system can afford valuable services that otherwise could not be performed. Such a system therefore requires intelligent closed-loop servoing techniques to enable the robot to undertake autonomous actions, especially when interacting with complex environments. The servoing techniques developed through this thesis fall mainly within this paradigm class.

In contrast to this configuration, the paradigms presented below consist of cases where the operator is involved within the interaction loop. With regard to the task to perform, such configurations can therefore be considered as belonging to the open-loop servoing classes.

Haptic interfaces: master-slave paradigm

Hapti interfa esystemshave broughtpertinentassistan efor medi alinterventions.

Typi- alsystems onsistofrobotarmsthat an arrydierentvarietyofmedi alinstruments(see

Fig.2.3top). Byhandlingmasterdevi es,the lini ian manipulatestheinstrument arried

bythe robot end-ee tor (seeFig.2.3 bottom). The lini ian an remotelymanipulate the

robot,and anfeelwhatisbeingdonethankstoree ted for esfromthe instrument(e. g.,

(30)

Figure 2.4:

Cooperative manipulation:

a microsurgical instrument held by

both an operator and a robot.

Device, developed by JHU robotics group,

aimed at injecting vision-saving drugs into tiny blood vessels in the eye (Photo:

http://www.sciencedaily.com).

the forces applied on the manipulated patient's tissue). The forces encountered by the instrument are sensed, scaled, and then sent to the master handle. The latter moves according to these forces, thus reflecting the sensed forces to the clinician operating it. The clinician can therefore feel the sensed forces and consequently be aware of the effects of the interaction between the instrument and the patient's tissue. Inversely, the forces applied by the clinician on the master handle are scaled, transmitted, and then transformed into motions of the slave instrument. Intercommunicating forces in this way makes it possible to effectively slow down abrupt motions that could result from backlash movements of the operator, and to attenuate hand tremor, which can be of great interest for surgical procedures. It does not, however, allow the operator direct access to the instrument, which thus cannot be freely manipulated (see [78]).
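To fix ideas, the bilateral master-slave coupling described above can be sketched as follows. This is a minimal illustrative sketch, not the controller of any cited system: the class name, the scale factors, and the first-order low-pass tremor filter are all assumptions chosen for clarity.

```python
# Minimal sketch of bilateral master-slave coupling: master motion is scaled
# down and smoothed before being sent to the slave, while slave-side forces
# are scaled up and reflected to the master handle. All values illustrative.

def lowpass(prev, new, alpha=0.2):
    """First-order low-pass filter: attenuates fast (tremor-like) components."""
    return prev + alpha * (new - prev)

class BilateralCoupling:
    def __init__(self, motion_scale=0.1, force_scale=5.0):
        self.motion_scale = motion_scale  # master motion -> reduced slave motion
        self.force_scale = force_scale    # slave force -> amplified master force
        self._filtered = 0.0              # low-pass filter state

    def slave_velocity(self, master_velocity):
        # Smooth then scale down the surgeon's hand motion.
        self._filtered = lowpass(self._filtered, master_velocity)
        return self.motion_scale * self._filtered

    def master_force(self, slave_force):
        # Reflect the (scaled-up) instrument/tissue interaction force.
        return self.force_scale * slave_force

coupling = BilateralCoupling()
v_slave = coupling.slave_velocity(10.0)  # abrupt 10 mm/s hand motion, attenuated
f_master = coupling.master_force(0.2)    # 0.2 N sensed at the instrument
```

The filtering step is what slows down abrupt backlash-like motions, while the asymmetric scale factors give the surgeon fine motion control and amplified force perception.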

One known application of the master-slave paradigm concerns endoscopic surgery. Such procedures (introduced above), whether robotically or freehand performed, suffer from low dexterity because of the effect of the entry-port placement through which the surgical instrument or the camera holder is passed. Another application concerns microsurgery robotics (introduced in Section 2.2).

Figure 2.5: Hand-held instrument for microsurgery. (Photo: http://www3.ntu.edu.sg/).

Cooperative manipulation

In this case, both the clinician and the robot hold the same instrument, e. g. [31] (see Fig. 2.4). This paradigm keeps some advantages of the master-slave one, since it allows effectively slowing down abrupt surgeon's hand motions and attenuating surgeon's hand tremor. In contrast to master-slave, this paradigm allows the surgeon to directly manipulate the instrument and to be closer to the patient, which is really appreciated by surgeons [78].

Hand-held configuration

Another configuration consists in hand-held instruments (see Fig. 2.5), which find success in hand tremor cancellation (e. g. [85]). Embedded inside the instrument are inertial sensors that detect tremor motions and speed, which are then inertially canceled by low-amplitude actuators. The advantage of such a configuration is that, beyond leaving the surgeon completely unimpeded, it keeps the operating room uncluttered, with fewer setup changes. However, heavier tools are not supported, and the instrument cannot be left stationary in position [78].

After this introduction to the medical robotics field, we now survey the exploitation of the main imaging modalities in guiding such systems. We first introduce medical robotic systems guided with optical images. Then, we present robotic guidance with the X-ray (or CT-scan) and MRI imaging modalities, respectively. They are discussed briefly, with only a few examples for illustration, since they are beyond the scope of this thesis. Finally, we consider guidance using the ultrasound modality.

Figure 2.6: Microsurgery robotics: micro-surgical assistant workstation with retinal-surgery model. (Photo: http://www.cs.jhu.edu/CIRL/).

We discuss it in more detail, with a survey of works that investigate the exploitation of 2D ultrasound imaging for the automatic guidance of medical robotic systems, as is the work presented in this dissertation.

2.2 Optical imaging-based guidance: microsurgery robotics

Since endoscopic robotics, introduced above in Section 2.1, has become commonplace in the medical field, only microsurgery robotics is considered in this section. Microsurgical robotics is simply surgical robotics for tasks performed at a small scale, e. g. [31] (see Fig. 2.6). The typical sensor used to provide visual information about the soft tissue environment is the microscope. In contrast to freehand-performed microsurgery, robots enhance the surgeon's capabilities for performing tasks with fine control and precise positioning. In many cases, microsurgical robots are based on the force-reflecting master-slave paradigm. The clinician remotely moves the slave by manipulating the master and applying forces on it. Inversely, the forces encountered by the slave are scaled, amplified, and sent back to the master manipulator, which moves accordingly. The operator can thus feel the encountered forces, and is therefore aware of the forces applied on the manipulated soft tissue. Furthermore, this configuration allows producing reduced motions on the slave. Accordingly, this paradigm considerably protects the manipulated soft tissue from possible damage that can result from abrupt operator hand motion and/or high applied forces. This configuration, however, suffers from two main disadvantages. One consists in the complexity and the cost of such systems, since they are composed of two main mechanical systems: the master and the slave. Also, such a configuration does not allow the operator direct access to the instrument.

Figure 2.7: ACROBAT robot in orthopaedic surgery aimed at hip reparation. (Photo: http://medgadget.com).

Master-slave microsurgical systems have been applied, for instance, in the domain of ophthalmic surgery (e.g., [31]).

2.3 X-ray-based guidance

A well-known application of X-ray imaging is orthopaedic surgery. In orthopaedic surgery robotics (see Fig. 2.7), the surgeon is assisted by the robot in order to enhance the procedure's performance. In knee or hip replacement, for example, rather than cutting the bone manually, the cutting is automatically performed by the robot under the supervision of the surgeon. This allows cutting the bone in such a way as to appropriately machine the desired hole for the implant. Preoperative X-ray images provide key 3D points used for planning a path that the robot then follows during the cutting procedure.

Since bones are easily and well imaged with computed X-ray tomography (CT) or X-ray fluoroscopy, the employed visual sensors are based on these modalities. During the surgical procedure, the patient's bones are attached rigidly to the robot's base with specially designed fixation tools. The image frame pose is estimated either by touching different points on the surface of the patient's bones or by touching preimplanted fiducial markers. The surgeon manually brings and positions the robot's surgical instrument at the bone surface to operate on. Then, the robot automatically moves the instrument to cut the desired shape, while at the same time the robot computer controls the trajectory.

Such systems have been deployed in hospital centers, and thousands of surgical operations have been performed with them. However, before a medical robot system is clinically used, a battery of tests has to be performed to validate the system and thus ensure the total safety of the patient and the clinical staff during the surgical operation. Of course, the system must demonstrate enhancements in surgical procedure performance, such as precision and dexterity, to justify its use rather than performing the surgical operation manually.

X-ray images have also been considered for image-based visual servoing. A robotic system for tracking stereotactic rod fiducials within CT images is presented in [24]. The image consists in a cross-section plane wherein the rods appear as spots. The rods are radiopaque in order to ease their visualization in the X-ray (CT) images. The objective is to automatically position the robot in such a way that the spots are kept at desired positions in the image. To do so, an image-based visual servoing scheme was used, where the spots' image coordinates constitute the feedback visual features. From each newly acquired image, the spots are extracted to update the current visual features, which are then compared to those of the desired configuration. The resulting error is used to compute the control law which, in turn, is sent to the robot in the form of a control velocity. Since the Jacobian matrix relating the changes of the visual features to the probe velocity is required, the one related to the spots' image coordinates is derived in [24]. To do so, the rods are represented by 3D straight lines whose intersection with the image plane is analytically formulated. The system was tested for small displacements from the configuration where the desired image, related to the desired spot coordinates, is captured. The issue investigated in [24], the modeling aspect more precisely, in fact falls within the scope of this thesis. Indeed, in [24], the image used in the servoing loop provides a cross-section view of the environment with which the robot is interacting. Similarly, this thesis deals with cross-section images in the servoing loop, except that these images are provided by a 2D ultrasound transducer. A big difference is that only simple geometric primitives, namely straight lines, are considered in [24], while this thesis deals with volume objects of arbitrary shape.
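The image-based servoing loop just described can be sketched with the classical law v = -λ L⁻¹ (s - s*), which maps the feature error to a control velocity through the (inverse of the) interaction matrix. The sketch below is purely illustrative: it assumes a 2-DOF case with a constant, invertible 2×2 matrix, whereas in [24] the matrix is derived from the 3D line model of the rods.

```python
# Sketch of an image-based visual servoing iteration: the error between the
# measured and desired spot coordinates is mapped to a robot velocity through
# the inverse of an interaction matrix L. A constant invertible 2x2 matrix is
# assumed here for illustration only.

def ibvs_step(s, s_star, L, lam=0.5):
    """One control step: v = -lam * L^{-1} (s - s_star), 2x2 case."""
    e = [s[0] - s_star[0], s[1] - s_star[1]]
    det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
    Linv = [[ L[1][1] / det, -L[0][1] / det],
            [-L[1][0] / det,  L[0][0] / det]]
    return [-lam * (Linv[0][0] * e[0] + Linv[0][1] * e[1]),
            -lam * (Linv[1][0] * e[0] + Linv[1][1] * e[1])]

# Simulated servo loop: the features evolve as s_dot = L v (first-order model),
# so the error decays by a factor (1 - lam) at each step.
L = [[1.0, 0.2], [0.0, 1.0]]
s, s_star = [5.0, -3.0], [0.0, 0.0]
for _ in range(30):
    v = ibvs_step(s, s_star, L)
    s = [s[0] + L[0][0] * v[0] + L[0][1] * v[1],
         s[1] + L[1][0] * v[0] + L[1][1] * v[1]]
```

With more features than degrees of freedom, the inverse is replaced by a least-squares pseudo-inverse, but the structure of the loop is unchanged.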

Figure 2.8: MRI-based needle insertion robot. (a) High-field MRI scanner (Photo: http://www.bvhealthsystem.org) - (b) MRI needle placement robot [30] (Photo: www2.me.wpi.edu/AIM-lab/index.php/Research).

2.4 MRI-guided robotics

MR imaging systems, like X-ray ones, provide in-depth images of the observed elements. However, MRI systems provide images non-invasively and are thus considered not harmful to the patient's body. Moreover, they provide well-contrasted images of soft tissues. These advantages have stimulated different investigations aiming to exploit this modality for automatically guiding robotized procedures. In [30], for example, a pneumatically-actuated robotic system guided by MRI for needle insertion in prostate interventions is presented. A 2-DOF robot arm is used to automatically position a passive stage on which a manually-inserted needle is held [see Fig. 2.8(b)]. Inside the room of an MRI scanner [e.g., see Fig. 2.8(a)], the patient lies in a semi-lithotomy position on a bed. The robot arm holder, a needle insertion stage, and the robot controller are also inside the scanner room, while the surgeon is in a separate room to monitor the procedure through a human-machine interface. The main issue when dealing with an MRI scanner consists in the difficulty of choosing compatible devices. Due to the high magnetic field in MRI scanners, ferromagnetic or conductive materials are precluded. Such materials can, for instance, be dangerously projected, cause artifacts and distortion in the MRI image, or create heating near the patient's body. Most standard available devices are, however, made from such materials.

Accordingly, the robot automatically brings the needle tip up to the entry point with a corresponding orientation. Subsequently, through the slicers of the human-machine software interface, the surgeon monitors the manual insertion of the needle, which slides along its holder axis to reach the target. The use of the MR images is limited to detecting the target and needle tip locations. The automatic positioning of the robot up to the entry point is afforded by a position-based visual servoing. Such an approach, however, is well known for its relatively low positioning accuracy, compared for example to image-based visual servoing. The main contribution presented in [30] in fact seems to consist in the design of an MRI-compatible robotic system.

The propulsion effect that a magnetic field can apply on ferromagnetic materials has been exploited to perform automatic positioning and tracking of an untethered ferromagnetic object, using its MRI images in a visual servoing loop [28]. The MR field is used both to measure the position of the object and to propel the latter to the desired location. Before the procedure takes place, a path through which the object has to move is planned off-line. It is represented by successive waypoints to be followed by the object. During the procedure, which is performed under an MR field, the current position is measured and compared to the desired one on the planned path, and the difference is sent to a controller that uses it to compute the magnetic propulsion field to be applied to the object. That propulsion is expected to move the object from the current position to the desired one. Experimental results are reported: the system was tested using both a phantom and a live swine under general anesthesia. The feedback was updated at a rate of 24 Hz in the phantom case. The in-vivo objective was to continuously track and position the object in such a way that it travels within and along the swine's carotid artery by following the pre-planned path. The object consisted in a 1.5 mm diameter sphere made of chrome and steel. The proposed visual servoing method, however, is a position-based one, with the accuracy limitations mentioned just above.

A further drawback of the MRI modality is the low streaming rate at which the images are provided. This considerably hinders the exploitation of such images for real-time robotic guidance applications. Image-based visual servoing, for example, requires that the update, along with the processing of the image, be performed within the rate at which the robot operates. The 2D ultrasound modality, nevertheless, beyond being non-invasive, provides images at a relatively high streaming rate. This makes such a modality a relevant candidate for real-time robotic automatic-guidance applications where in-depth images are required.

2.5 Ultrasound-based guidance

Ultrasound imaging represents an important modality of medical practice, and is the subject of different investigations for enhanced use. Ten years ago, one out of four imaging-based medical procedures was performed with this modality, and the proportion is increasing for different applications in the foreseeable future [84].

We report in this section investigations that deal with automatic guidance from the ultrasound imaging modality. In particular, we survey in more detail works dealing with the use of 2D ultrasound images for automatically guiding robotic applications, as this is the scope of the work presented in this document. The remainder of this section is organized as follows. First, in Section 2.5.1, we present an example of an investigation into the use of the ultrasound modality to simulate, and then plan, the insertion of a needle in soft tissue. Then, we present in Section 2.5.2 works that exploit 3D ultrasound images to guide surgical instruments, where the objective was either positioning or tracking. Afterwards, works that deal with guidance using 2D ultrasound are surveyed. We classify them into two main categories depending on whether the 2D ultrasound image is only used to extract, and thus to estimate, 3D poses of features used in position-based visual servoing, or whether the 2D ultrasound image is directly used in the control law. The former, namely 2D ultrasound-guided position-based visual servoing, is presented in Section 2.5.3, while the latter, namely 2D ultrasound-guided image-based visual servoing, is presented in Section 2.5.4.

2.5.1 Ultrasound-based simulations

In [23], a simulator of stiff needle insertion for 2D ultrasound-guided prostate brachytherapy is presented.

The insertion forces are fitted with a piece-wise linear model of three parameters, which are identified using the Nelder-Mead search algorithm [3]. When the needle interacts with the tissue, the displacements of the latter are measured from the images provided by the ultrasound probe, using a time delay estimator with prior estimates (TDPE) [87, 88], without any prior markers inside the tissue. These measurements, together with the probe positions and the measured forces, are used to estimate the Young's moduli and the force model parameters. The soft tissue displacements are then simulated by making up a mesh of 4453 linear tetrahedral elements and 991 nodes, using the linear finite element method [89] with linear strain.
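The exact parameterization of the three-parameter piece-wise linear force model is not reproduced here; as an illustration only, one plausible form (pre-puncture stiffness, puncture depth, post-puncture slope — all hypothetical names) and the sum-of-squares cost that a Nelder-Mead search would minimize can be sketched as:

```python
# Illustrative three-parameter piece-wise linear needle insertion-force model
# and the least-squares cost a Nelder-Mead search would minimize. The actual
# parameterization used in [23] may differ; this is only a sketch.

def force_model(depth, k1, d0, k2):
    """Piece-wise linear axial force as a function of insertion depth."""
    if depth <= d0:
        return k1 * depth                 # tissue deformation before puncture
    return k1 * d0 + k2 * (depth - d0)   # cutting/sliding regime after puncture

def cost(params, depths, measured_forces):
    """Residual minimized when identifying (k1, d0, k2) from measurements."""
    k1, d0, k2 = params
    return sum((force_model(d, k1, d0, k2) - f) ** 2
               for d, f in zip(depths, measured_forces))

# Synthetic check: data generated by the model itself yields (near-)zero cost.
depths = [0.0, 1.0, 2.0, 3.0, 4.0]
forces = [force_model(d, 0.5, 2.0, 0.1) for d in depths]
```

A derivative-free method such as Nelder-Mead suits this cost because the breakpoint parameter d0 makes the residual non-smooth.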

2.5.2 3D ultrasound-guided robotics

Within the ultrasound modality, we distinguish in fact two main modalities: 3D ultrasound and 2D ultrasound. Works related to the former are presented in this section, while those related to the latter are considered subsequently. In the following, we present works where 3D ultrasound images have been exploited for automatic positioning of surgical instruments or for tracking a moving target.

3D ultrasound-based positioning of surgical instrument

In [75] and subsequently [62], a 3D ultrasound-guided, robot-arm-actuated system for automatic positioning of a surgical instrument is presented (see Fig. 2.9). The second work follows up on and improves the streaming speed of the first one: a 25 Hz rate is obtained instead of the 1 Hz streaming rate at which the first prototype operated. The presented system consists of a surgical instrument sleeve actuated by a robot arm, a motionless 3D ultrasound transducer, and a host computer for 3D ultrasound monitoring with the corresponding image processing and for robot control. The objective was to automatically position the instrument tip at a specified target.

Figure 2.9: 3D ultrasound-guided robot. (a) Experimental setup for robot tests - (b) Marker attached to the instrument tip. (Photos: (a) taken from [62], and (b) from http://biorobotics.bu.edu/CurrentProjects.html).

A marker is attached to the tip of the instrument in order to detect its 3D pose with respect to a cartesian frame attached to the 3D ultrasound image volume. This marker consists of three ridges of the same size surrounding a sheath that fits over the instrument sleeve [see Fig. 2.9(b)]. An echogenic material is used to coat the marker in order to improve its visibility, and thus to facilitate its detection. The ridges are coiled on the sleeve in such a way that they form successive sinusoids lagged by 2π/3 rad. From the 3D ultrasound volume, a lengthwise cross-section 2D image of the instrument shaft along with the marker is sought and then extracted. In such a 2D image, the ridges appear as successive crests whose respective distances from a reference point lying on the shaft are used to determine the instrument sleeve's 3D pose. For image detection of the crests, the extracted image is rotated in such a way that the instrument appears horizontal, and then a sub-image centered on the instrument is extracted and super-sampled by a factor of 2 using linear interpolation. The error between the estimated instrument position and the target one is fed back, through the host computer, to a position-based servo scheme based on a proportional-derivative (PD) law, with which the robot arm is servoed to position the instrument tip at the specified target. Experiments have been carried out using a stick immersed in a water-filled tank. The stick passes through a spherical bearing to mimic the physical constraints of minimally invasive surgical procedures, where the instrument passes through an incision port and consequently its movements are constrained accordingly [see Fig. 2.9(a)]. With a motion range of about 20 mm of the instrument, it is reported that the system performed with less than 2 mm of error.

Figure 2.10: An estimator model [86] for synchronization with beating heart motions using 3D ultrasound is tested with the above photographed experimental setup. (Photo: taken from [86]).

Synchronization with beating heart motions

In [86], an estimator model for synchronization with beating heart motions using 3D ultrasound imaging is presented. The objective is to predict mitral valve motions, and then use that estimation to feed-forward the controller of a robot actuating an instrument whose motions are to be synchronized with the heart beats. This could allow the surgeon to operate on the beating heart as on a motionless organ. Moreover, such a system could overcome, for example, the requirement of using a cardiopulmonary bypass, and thus would spare patients its adverse effects. It was assumed that the mitral valve periodically translates along one axis, while its rotational motions have been neglected. The translational motions are then represented with a time-varying Fourier series model that allows for the rate and signal morphology evolving over time [63]. For the identification of the model parameters, three estimators have been tested: an extended Kalman filter (EKF), an autoregressive model with least squares (AR), and an autoregressive model with a fading memory estimator. Their performances are assessed with regard to the prediction accuracy of time-changing motions. From the conducted simulations, it was noted that the EKF outperformed the two other estimators, better mitigating the estimation error, especially for motions with changing rate. Experiments have been conducted on an artificial target immersed in a water-filled tank (see Fig. 2.10). The target was continuously actuated in such a way as to mimic the heart mitral valve beating motions, at a 60 beats-per-minute average rate for constant motions. A position-based proportional-derivative (PD) controller is employed for robot servoing. The system was submitted to both constant and changing-rate motions.

As concluded from the experiments, the EKF better predicted the beating heart motions compared to the other estimation approaches, with an obtained prediction error of less than 2 mm. This error is about 30% less than that obtained with the two other estimators. In other, separate works, [36] and [7], low tracking errors have been obtained but, however, this was achieved using extremely invasive systems. In the former work, fiducial markers attached to the heart are tracked by employing a high-speed eye-to-hand camera with a 500 Hz streaming rate; the chest is opened in such a way that the fiducial points can be viewed by that external camera. The information conveyed by the camera is used to visually servo a robot arm that accordingly has to compensate for the heart motions. As for the latter work, sonomicrometry sensors operating at a 257 Hz streaming rate have been sutured to a porcine heart. Currently, the 3D ultrasound modality suffers from low imaging quality along with time-delayed streaming on the order of 60 ms, which could account for the relatively lower performances obtained compared to those two works (i. e., [36] and [7]).
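The truncated Fourier series representation underlying such motion predictors can be sketched as follows. In [86] the coefficients and the heart rate evolve over time and are estimated online (e.g., with the EKF); in this illustrative sketch they are frozen synthetic values, and the prediction horizon is chosen to match an assumed 60 ms imaging latency.

```python
import math

# Sketch of heart-motion prediction with a truncated Fourier series:
#   x(t) = c0 + sum_k [ a_k cos(k*w*t) + b_k sin(k*w*t) ].
# Coefficients and rate are frozen synthetic values here; in practice they
# are identified online so the model can track rate and morphology changes.

def fourier_motion(t, w, c0, a, b):
    """Evaluate the truncated Fourier series at time t."""
    x = c0
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        x += ak * math.cos(k * w * t) + bk * math.sin(k * w * t)
    return x

# 60 beats per minute -> fundamental frequency of 1 Hz.
w = 2.0 * math.pi * 1.0
c0, a, b = 0.0, [3.0, 0.5], [1.0, 0.2]   # millimetres, synthetic values

t_now, horizon = 0.40, 0.06              # predict 60 ms ahead of the images
x_pred = fourier_motion(t_now + horizon, w, c0, a, b)
```

Feeding such a prediction forward to the instrument controller is what compensates for the acquisition and processing latency of the 3D ultrasound stream.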

2.5.3 2D ultrasound-guided position-based visual servoing

As has already been highlighted in this document, 2D ultrasound imaging systems provide images at a sufficient rate to envisage real-time automatic robotic guidance. In the following, we present a survey of works that investigated the use of this imaging modality in guiding automatic medical procedures. In particular, this section is dedicated to works where the image is used only in position-based visual servoing schemes. We classify these works according to the targeted medical procedure. We distinguish: kidney stone treatment; brachytherapy treatment; and tumor biopsy and ablation procedures.

Kidney stones treatment

An ultrasound-based, image-guided system for kidney stone lithotripsy therapy is presented in [48]. Lithotripsy therapy aims to erode the kidney stones while preventing collateral damage to organs and soft tissue in the vicinity. The stones are fragmented thanks to high intensity focused ultrasound (HIFU). The HIFU transducer extracorporeally emits high-intensity ultrasound waves that strike the stones. The crushed stones are then naturally evacuated by the patient through urination. For the success and effectiveness of the procedure, which can lead to shortened patient treatment time and spare the organs of the vicinity from being harmed, it is important to keep the stone under the pulse of the HIFU throughout the procedure. However, the kidney does not remain stationary during the procedure.

B-scan images of the stone in the kidney are acquired. By image processing on both images, the stone is identified and its position in 3D space is determined. The inferred location represents the target 3D position on which the HIFU focus has to be. The error between the desired position and the current position of the HIFU transducer is fed back to the host computer, which derives the control law. The command is sent to the cartesian robot, which moves accordingly along its three axes in order to keep the kidney stone under its focus (i. e., the focus of the HIFU).

Ultrasound-guided brachytherapy treatment

A robot manipulator guided by 2D ultrasound for percutaneous needle insertion is presented in [6]. The objective is to automatically position the needle tip at a desired prostate location in order to inject the radioactive therapy seeds. The target is manually selected from a preoperative image volume. It is chosen in such a way (which is the goal of brachytherapy) that the seeds have as large an effect as possible on the lesion while at the same time not harming the surrounding tissues. The robotic system is mainly composed of two robotic parts, corresponding respectively to a macro and a micro robotic system, and of a 2D ultrasound probe for the imaging. The macro robot brings and positions the needle tip at the skin entry point, while subsequently the micro robot performs fine motions to insert and then position the needle tip at the desired location. By visualizing the volume image of the prostate, displayed on a human-machine interface, the surgeon indicates to the robot the target location where the seeds have to be dropped (see Fig. 2.11). Before that, the volume is first made up from successive cross-section images of the prostate. While the robot's end-effector rotates the 2D ultrasound probe, the latter scans the region containing the prostate by acquiring successive 2D ultrasound images at 0.7-degree intervals. The needle target position is expressed with respect to the robot frame, thanks to a previous registration of the volume image. A position-based proportional-integral-derivative (PID) controller is used.

Figure 2.11: Ultrasound volume visualization through a graphical interface. Three sights (bottom) of an ultrasound volume are respectively provided by three slicer planes (top). (Photo: taken from [6]).

The robot moves accordingly to position the needle tip at the target location. The proposed technique, however, is position-based, where the image is only used to determine the target location. Compared to image-based servoing techniques, this method can therefore be considered an open-loop servoing method. As such, it has the drawback of not compensating for displacements of the target that can occur during the servoing. Such displacements can be caused, for instance, by patient body motion resulting from breathing, or by the prostate tissue shifting due to the forces it undergoes from the needle during insertion. This lack of observed images in the servoing scheme could account for the errors obtained in the conducted experiments. Needle deflection is also not addressed; the deflection is mainly due to the forces endured by the needle during insertion.

Ultrasound-guided procedures for tumor biopsy and ablation

A 2D ultrasound-guided, computer-assisted robotic system for needle positioning in biopsy procedures is presented in [58]. The objective is to assist the surgeon in orienting the needle for the insertion. The system is mainly composed of a robot arm, a needle holder mounted on the robot's end-effector, a 2D ultrasound probe, and a host computer. The needle can linearly slide on its holder. Firstly, the eye-to-hand 2D ultrasound probe is manually positioned.

The automatic part of the procedure ends at this stage: the surgeon then manually inserts the needle by sliding it down to reach the target, while at the same time observing the corresponding image displayed on the interface screen. Experiments have been conducted in ideal conditions, where the target consists of a wooden stick immersed in a water-filled tank. The ultrasound image is only used to determine the two target points, but is not involved in the servoing scheme. Errors of millimeter order have been reported. Since the experiments were conducted in water, the needle does not undergo forces, which is however not the case in clinical conditions, due for instance to the interaction with soft tissue. Such forces can cause deflection of the needle, which was also highlighted in that work.

Combining 2D ultrasound images with other imaging modalities could enhance the quality of the obtained images. In [29], an X-ray-assisted, ultrasound-based imaging system for breast biopsy is presented. The principle consists in combining stereotactic X-ray mammography (SM) with ultrasound imaging in order to detect as well as possible the lesions' location, and thus be able to harvest relevant samples for the biopsy. The X-ray modality provides images with high sensitivity for most lesions, but is not as safe and fast as 2D ultrasound. The presented procedure begins by first keeping the patient tissue motionless for diagnosis, using a special apparatus. A 2D ultrasound probe scans the region of interest with constant velocity by acquiring successive 2D ultrasound images at equal distance intervals. A corresponding 3D volume is made up from those acquired images, and interactively displayed through a human-machine interface. A clinician can then inspect the volume by continuously visualizing its cross-section 2D ultrasound images. This is performed by sliding a cross-sectional plane. Any detected lesion can be indicated to the host computer by a mouse click (a prior registration of the 3D volume and the tissue is assumed to have already been performed). Then, both the 2D ultrasound probe and the needle guide are positioned in such a way that they are aligned on the indicated lesion to biopsy. Subsequently, the needle is automatically inserted through the tissue to target the lesion.

It is then verified that the needle has well and truly targeted the lesion, by means of a similar acquisition-construction-visualization process as detailed above. By combining the SM modality with the ultrasound one, the system's precision is claimed to be increased.

An ultrasound-guided, robotically-assisted system for ablative treatment is presented in [11]. The objective is to assist the surgeon in such a medical procedure, firstly by affording a relevant view of the lesion within the soft tissue to facilitate its detection with enhanced precision, and secondly by robotizing the needle insertion for accurate targeting, rather than doing it manually. The setup is composed of a freehand-actuated conventional 2D ultrasound probe, a needle for the insertion actuated by a 5-DOF robot arm, and a host computer for monitoring the application. The 2D ultrasound probe is handled by a clinician and swept to take a 3D scan of the region of interest, by continually acquiring successive 2D ultrasound images. Thanks to a marker attached to the probe, the path it follows, along with the recorded images, is intra-operatively registered to reconstruct a corresponding 3D ultrasound volume. This volume is then interactively explored and visualized by the clinician for inspection of the region of interest, and thus detection of any possible tumors. The image point position of a detected lesion, accompanied by a patient skin entry point, is manually indicated by the clinician and then transmitted to the host computer. An algorithm was developed for aligning the direction of the needle, in such a way that it follows a 3D straight line to reach the target tumor location from the skin entry point. The robot then automatically brings the tip of the needle up to the entry point while performing the alignment, and finally the needle is inserted to reach the target location. Experiments have been carried out both on a calf liver embedded with an olive for tumor mimicking and on a set of 8 mm diameter pins immersed in a water-filled tank. According to the pin experiments, it is reported that the system performed with an accuracy of about 2.45 mm with a 100% success rate.

Similarly, but with improvements with respect to the manner in which the successive 2D ultrasound images are acquired and then registered, another work is presented in [10]. It is proposed to hold the 2D ultrasound probe with a second robot arm, rather than doing it free-hand. A scan performed robotically is expected to result in better 3D volume image quality, in the alignment of the successive slices and in the consistency of distances between successive slices, than if it were done free-hand. To compare the scan performance whether robotically or free-hand performed, experiments have been conducted using a mechanical phantom composed of four pins. An electromagnetic tracker has been attached to each of the ultrasound probe and the needle guide robot tip, for extraction of their respective 3D poses.

Using the 2D ultrasound imaging modality to position an instrument tip at a desired target location has been considered in [74], where a 2D ultrasound-guided, robotically-actuated system is presented. The system consists of two personal computers, a 2D ultrasound probe, an electromagnetic tracking device, and a robot arm. One computer monitors ultrasound image acquisition and processing, whereas the other computer ensures robot control. This control computer conveys the different data, consisting of the target and current control features with the corresponding variables of the control servoing scheme, through a serial link running at 155.200 bps. Image acquisition is performed at a rate of 30 frames per second.

The ele tromagneti tra king devi e onsists of a xed base transmitter and two remote

tra king re eivers. Ea h re eiver providesits orresponding 3D spa e pose with respe tto

the transmitter base, by transmitting its six degrees of freedom to the omputer through

a serial line onne tion. One re eiver is mounted on the ultrasound s an head, while the

se ond was initiallyused for alibration and then is atta hed to the robot for registration

and tra king. The target to be rea hed bythe robot tip onsistsin the enterof an obje t

of interest. It is dete ted using the 2Dultrasoundprobe. Firstly, a s anof the region

on-taining the targetobje tisperformedbya quiringsu essive2Dultrasoundimages. Then,

ea h a quired image is segmented to extra t the orresponding obje t ross-se tion. From

the set of all those segmented ross-se tions, the enter of the target obje t is estimated.

The enter3D oordinatesrepresentthe target3Dlo ationat whi hthe robottiphastobe

positioned. Forimage segmentation,ea h2Dultrasoundimage isrstsegmented a ording

to an empiri ally hosenthreshold, then subsampled by

1/4

fa tor to redu e the omputa-tional time ofthe next step, whereinthe image is onvolved bya2DGaussian kernel of10

radius andof5pixelsdeviation,andnallyanautomati identi atio noftheimage se tion

of interest is applied by sear hing pixels of high intensity. The target is assumed roughly

spheri al. The robot is servoed in position by a proportional derivative (PD) ontrol law,

with an error limit-based rule is added in order to prevent possible velo ityex ess relative

to important displa ements orders.
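The per-slice detection pipeline described above (thresholding, 1/4 subsampling, Gaussian smoothing, bright-region search) can be sketched in a few lines. This is a hypothetical illustration, not the implementation of [74]: the threshold value, the half-maximum rule, and the centroid-based localization are assumptions, while the Gaussian kernel uses the stated 10-pixel radius and 5-pixel deviation:

```python
import numpy as np

def gaussian_blur(a, sigma=5.0, radius=10):
    """Separable 2D Gaussian smoothing with a kernel of the given
    radius (in pixels) and standard deviation sigma."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    a = np.apply_along_axis(np.convolve, 1, a, k, mode="same")
    return a

def detect_cross_section_center(image, threshold=128, subsample=4):
    """Rough per-slice target detection: threshold, subsample by 1/4,
    Gaussian smoothing, then locate the bright section of interest.
    Returns the centroid (row, col) in original-image coordinates."""
    # 1) Binary segmentation with an empirically chosen threshold.
    binary = (np.asarray(image, dtype=float) >= threshold).astype(float)
    # 2) Subsample by a factor of 1/4 to reduce computation time.
    small = binary[::subsample, ::subsample]
    # 3) Smooth with the 2D Gaussian kernel.
    smooth = gaussian_blur(small)
    # 4) Keep the high-intensity pixels (here: above half the maximum)
    #    and take their centroid, mapped back to original coordinates.
    rows, cols = np.nonzero(smooth >= 0.5 * smooth.max())
    return rows.mean() * subsample, cols.mean() * subsample

# Synthetic slice: a bright disk centered at (64, 80) in a 128x160 image.
yy, xx = np.mgrid[0:128, 0:160]
img = np.where((yy - 64) ** 2 + (xx - 80) ** 2 <= 20 ** 2, 200, 10)
r, c = detect_cross_section_center(img)
# The recovered centroid lies close to the true center (64, 80).
```

Repeating this over the scan and stacking the per-slice centroids with the tracked probe poses then yields an estimate of the target's 3D center.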

Figure 2.12: A biopsy robot. (Photo: taken from [64]).

For validation, a grape was used as a roughly spherical target. It was put between two layers of oil and water. Thanks to gravity and buoyancy forces and to the immiscibility of the two liquids, the grape floated within the plane delineating the water surface from the oil, and could freely slide along this plane. To detect the target location, a scan centered on the grape was performed by taking successive cross-section ultrasound images as described above. In conditions where the grape was kept fixed, the robot tip touched the target in 53 out of 60 trials.
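The PD positioning law with its error-limit rule, used by the system evaluated above, can be sketched as a saturated velocity command. The gains, the limit value, and the norm-clamping strategy below are illustrative assumptions, not the actual controller of [74]:

```python
import numpy as np

def pd_step(error, prev_error, dt, kp=1.0, kd=0.1, v_max=0.02):
    """One step of a PD position controller with a velocity limit.
    `error` is the 3D position error (target minus tip position); the
    commanded velocity is clamped to `v_max` (m/s) so that a large
    displacement order cannot produce an excessive velocity."""
    error = np.asarray(error, dtype=float)
    prev_error = np.asarray(prev_error, dtype=float)
    v = kp * error + kd * (error - prev_error) / dt
    norm = np.linalg.norm(v)
    if norm > v_max:
        v *= v_max / norm  # saturate while keeping the direction
    return v

# A 10 cm error yields a raw PD output of 0.10 m/s, which exceeds
# v_max and is therefore clamped to 0.02 m/s.
v = pd_step([0.10, 0.0, 0.0], [0.10, 0.0, 0.0], dt=0.033)
```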

For needle placement in prostate biopsy procedures, a 2D ultrasound-guided robotic system is presented in [64] (see Fig. 2.12). The objective is to perform needle positioning with enhanced accuracy. The system consists of a biopsy needle gun, a robot holder platform, a host computer, and a 2D ultrasound probe. The functions of the computer consist mainly in monitoring the procedure. This ranges from ultrasound image acquisition, processing and registration, to screen display for visualization, needle motion planning, and robot motion control. Prior to an intervention, the robot can be moved, and thus positioned appropriately near the patient's perineal wall, thanks to 4 wheels on which it can translate. It can subsequently be kept motionless with enhanced stability, after the operator has depressed a foot pedal, which causes the robot to be slightly raised and supported by 4 rubber-padded legs in place of the wheels. The robot can be further adjusted by tuning the height and tilt of its operating table. This will allow positioning the ultrasound probe
