Towards unified dataset for Modeling and Monitoring of Computer Assisted Medical Interventions



HAL Id: lirmm-02105827

https://hal-lirmm.ccsd.cnrs.fr/lirmm-02105827

Submitted on 21 Apr 2019

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Towards unified dataset for Modeling and Monitoring of Computer Assisted Medical Interventions

Fabien Despinoy, Sandrine Voros, Nabil Zemiti, Nicolas Padoy, Pierre Jannin

To cite this version:

Fabien Despinoy, Sandrine Voros, Nabil Zemiti, Nicolas Padoy, Pierre Jannin. Towards unified dataset for Modeling and Monitoring of Computer Assisted Medical Interventions. SURGETICA, Nov 2017, Strasbourg, France. ⟨lirmm-02105827⟩


F. DESPINOY et al. Surgetica 2017 Strasbourg, France – 20 - 22 Nov.

Towards unified dataset for Modeling and Monitoring of Computer Assisted Medical Interventions

Fabien DESPINOY¹, Sandrine VOROS², Nabil ZEMITI³, Nicolas PADOY⁴ and Pierre JANNIN¹

1 : LTSI, UMR 1099, University of Rennes 1, Inserm, F-35043, France

2 : TIMC-IMAG, UMR 5525, University of Grenoble-Alpes, CNRS, F-38706, France

3 : LIRMM, UMR 5506, University of Montpellier, Inserm, F-34095, France

4 : ICube, UMR 7357, University of Strasbourg, CNRS, F-67000, France

Contact: fabien.despinoy@univ-rennes1.fr

M2CAMI is a working group dedicated to Modeling and Monitoring of Computer Assisted Medical Interventions within the CAMI Labex. It aims at unifying data acquired from different surgical trainers and procedures for collaborative research. In this paper, we propose a generic structure for multimodal datasets that allows faster development and easier processing. With such formalization, our objective is to go beyond the state of the art by sharing various types of data between international institutions and merging methodological approaches for better detection and understanding of surgical workflows.

1 Introduction

For the past decade, the medical community has shown a growing interest in international biomedical competitions, named challenges. Since 2007, more than 120 challenges have been proposed on the grand-challenge platform¹, addressing various research fields such as image segmentation and registration, anatomical structure localization and tracking, or surgical workflow detection and understanding. Through this platform, multiple types of data are shared, ranging from point cloud depth image acquisitions to segmented CT or MRI images. These challenges share the same goal: use pre-clinical and clinical data in order to validate and evaluate methodologies that are developed for biomedical analysis purposes.

¹ https://grand-challenge.org/

Figure 1: Pipeline for data conversion into a standardized output for M2CAMI applications.

Focusing on surgical workflow detection and understanding, various datasets have been released over the last years, in which multiple types of data are accessible. For each dataset, data representation and storage are unique because of the acquisition setup and personal requirements. However, this sharing model is not suitable for collaborative research due to the heterogeneous representation of the same data type. Moreover, such an approach does not promote the use of multimodal data for multiscale processing because of the complex integration of various data types and formats.

In this paper, we address the need for a standard formalism for multimodal data representation and storage. By taking into account the datasets available in the M2CAMI community, we propose a generic structure and a first implementation for data conversion to a generic format that is readable and usable in the different applications developed within the M2CAMI group (fig. 1).


(a) Video data. (b) Kinematic data. (c) Transcription data. (d) Evaluation data.

Figure 2: Multimodal data of surgical workflow available in the M2CAMI collaborative research group.

2 Methods

In the M2CAMI group, data acquisition and analysis are oriented towards surgical workflow detection and understanding. In order to build a unified framework to convert data into a generic representation, we proposed a two-step approach. The first step was an inventory of the different data available within the working group. In the second step, we described a common formalism for data representation and storage. From the M2CAMI community, two datasets are publicly available: NeuroSurgicalTools [1] and Cholec80 [2], the latter having already been used for a previous scientific challenge². To generalize our work, we also used three other available datasets: JIGSAWS [3] as well as acquisitions performed during previous CAMI works [4][5].

2.1 Data inventory

These datasets contain various data types that can be sorted into four distinct categories. Videos (fig. 2a) are automatically captured from the surgical field by recording the endoscopic point of view, or from the entire operating room with additional cameras. They can also be expressed as a set of images. Kinematic data (fig. 2b) is automatically captured from robotic systems or additional tracking devices to describe the surgical instrument motions over time. Transcription data (fig. 2c), or annotations, is most of the time manually generated by human observers and describes the different actions, instruments or events taking place during the procedure. Evaluation data (fig. 2d) quantifies the performance of the surgeon in achieving a clinical task or procedure. It can be automatically calculated by surgical trainers or manually generated by human experts, and is necessary for reference-based evaluations of surgical workflows. As an extension to these four categories, unusual clinical data types are emerging that combine annotations with kinematics, such as human body pose annotations for medical staff analysis [6]. To cover the full spectrum from training to interventional clinical data available within the M2CAMI group, we developed an inventory form based on a previous formalism [7] to list the different data and structures available.

² http://camma.u-strasbg.fr/m2cai2016/

2.2 Generic data representation

As previously mentioned, the same data type can be represented in different ways. For instance, a video is a set of images embedded into a streaming container; however, depending on the format, the encoder and the output container, it can be difficult to use for processing purposes. Thus, our objective is to define a standard representation that relies on open source protocols (e.g. x265 as video codec, Boost.PropertyTree for configuration files) to ease the distribution of our developments. This standard representation is developed at two different scales. At the global level, we provide guidelines and tools to uniformly structure datasets in order to ease sharing, processing and data storage on disk. Additionally, standardized header files are integrated into datasets to regroup data properties for conversion management. At a second level, we propose a C++ container that stores data synchronously in memory for processing (e.g. as an extended map for data series). The main advantage of the presented structure is that it can be easily extended to add new types of data. The only assumption we rely on is data synchronization: for each timestamp in the timeline there is at least one sample from each data type available in the dataset.

3 Conclusion and Perspectives

At the current stage, data conversion seems feasible to provide larger datasets for each partner of the M2CAMI group. Relying on open source protocols, the current implementation is shareable between members and improvements can be easily undertaken. The added value of the current work will be emphasized by connecting various developments coming from different teams, including surgical context detection [2], which will provide the entry frame for instrument tracking [8]. The output information from the tool tracking will be used for gesture recognition [9] in order to, in the end, obtain a complete comprehension of the surgical context. These are the first steps towards the generation of new decision-making systems where multimodal data makes it possible to capture and understand surgical events for training, teaching and surgical assistance purposes.


References

[1] D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele and P. Jannin (2015). Detecting Surgical Tools by Modelling Local Appearance and Global Shape. IEEE Transactions on Medical Imaging, 34(12):2603–2617.

[2] A.P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. de Mathelin and N. Padoy (2016). EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Video. IEEE Transactions on Medical Imaging, 36(1):86–97.

[3] Y. Gao, S. S. Vedula, C. E. Reiley, N. Ahmidi, B. Varadarajan, H. C. Lin, L. Tao, L. Zappella, B. Béjar, D. D. Yuh, C. C. G. Chen, R. Vidal, S. Khudanpur and G. D. Hager (2014). The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS): A Surgical Activity Dataset for Human Motion Modeling. Modeling and Monitoring of Computer Assisted Interventions (M2CAI) - MICCAI Workshop.

[4] R. Wolf (2014). Quantification de la qualité d'un geste chirurgical à partir de connaissances a priori. Université Grenoble Alpes.

[5] F. Despinoy (2016). Analyse, reconnaissance et réalisation des gestes pour l'entraînement en chirurgie laparoscopique robotisée. Université de Montpellier.

[6] A. Kadkhodamohammadi, A. Gangi, M. de Mathelin and N. Padoy (2017). Articulated clinician detection using 3D pictorial structures on RGB-D data. Medical Image Analysis, 35:215–224.

[7] P. Jannin, C. Grova and C. R. Maurer Jr. (2006). Model for defining and reporting reference-based validation protocols in medical image processing. International Journal of Computer Assisted Radiology and Surgery, 1(2):63–73.

[8] A. Agustinos and S. Voros (2015). 2D/3D Real-Time Tracking of Surgical Instruments Based on Endoscopic Image Processing. Computer-Assisted and Robotic Endoscopy, 9515:90–100.

[9] F. Despinoy, D. Bouget, G. Forestier, C. Penet, N. Zemiti, P. Poignet and P. Jannin (2016). Unsupervised Trajectory Segmentation for Surgical Gesture Recognition in Robotic Training. IEEE Transactions on Biomedical Engineering, 63(6):1280–1291.

