
2.2.2 Development of a classifier capable of detecting hallucinatory periods: use of the linear Support Vector Machine

The principle of machine learning is not restricted to anatomical imaging data: lSVM strategies can equally be applied to functional data. This makes it possible to develop classifiers that label each temporal volume of an fMRI session according to two pre-established classes (on which the classifier has been trained), i.e., ON periods (hallucinatory) and OFF periods (non-hallucinatory).
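As a rough illustration of the idea, the sketch below trains a linear SVM to label individual fMRI volumes as ON or OFF. The data are purely synthetic placeholders (random matrices standing in for preprocessed, voxel-flattened BOLD volumes); `scikit-learn`'s `LinearSVC` is assumed here, not necessarily the exact implementation used in the article.

```python
# Minimal sketch (synthetic data): labelling fMRI volumes as ON/OFF with a
# linear SVM. Rows are temporal volumes, columns are voxels.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_volumes, n_voxels = 200, 500
X = rng.standard_normal((n_volumes, n_voxels))  # fake BOLD volumes
y = rng.integers(0, 2, n_volumes)               # 1 = ON (hallucinatory), 0 = OFF
X[y == 1, :20] += 1.0                           # inject a weak multivariate signal

clf = LinearSVC(C=1.0, dual=False).fit(X[:150], y[:150])  # training phase
accuracy = clf.score(X[150:], y[150:])                    # test phase
print(round(accuracy, 2))
```

The linear kernel keeps one weight per voxel, which is what later allows the contributing voxels to be mapped back onto the brain.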

Such a classifier can be developed (training phase and test phase) with functional data from a single subject, in which case the classification is said to be "within-subject". However, this requires building a specific classifier for each subject, which proves extremely time-consuming, since each subject needs a dedicated recording session.

In this context, "between-subject" approaches, in which algorithms are trained on functional data pooled from several subjects, have been proposed. The goal is to develop classifiers whose use can be generalized to a population, i.e., to independent subjects (so-called "subject-independent classifiers"). Whether such a classifier can be developed for the detection of auditory-verbal hallucinations is the central question of this thesis.
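The between-subject setting described above is typically evaluated with leave-one-subject-out cross-validation: the classifier is trained on all subjects but one and tested on the held-out subject. A hypothetical sketch with `scikit-learn`'s `LeaveOneGroupOut` splitter (synthetic data, illustrative parameters only):

```python
# Leave-one-subject-out validation of a subject-independent lSVM classifier.
# Each "group" is one subject; the held-out subject plays the role of an
# independent participant the classifier was never trained on.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_subjects, vols_per_subject, n_voxels = 5, 40, 300
X = rng.standard_normal((n_subjects * vols_per_subject, n_voxels))
y = rng.integers(0, 2, len(X))                       # ON/OFF labels
groups = np.repeat(np.arange(n_subjects), vols_per_subject)
X[y == 1, :15] += 1.0                                # signal shared across subjects

scores = cross_val_score(LinearSVC(dual=False), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(len(scores))   # one accuracy per held-out subject
```

Averaging the held-out scores gives an estimate of how well the classifier generalizes to subjects it has never seen.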

Accordingly, Article 2 presents the development of a "between-subject" classifier capable of detecting hallucinatory periods with lSVM, the preliminary labelling step having been carried out with the technique described in Article 1.


ARTICLE 2

Multivariate auditory-verbal hallucinations detection in schizophrenia

Thomas FOVET1,2*, Pierre YGER3, Renaud LOPES4, Pierre THOMAS1,2, Philippe DOMENECH5,6, Renaud JARDRI1,2

1. Univ. Lille, CNRS UMR 9193, Laboratoire de Sciences Cognitives et Sciences Affectives (SCALab-PsyCHIC), F-59000 Lille, France

2. CHU Lille, Pôle de Psychiatrie, Unité CURE, F-59000 Lille, France

3. Institut de la Vision, INSERM UMRS 968, UPMC UM 80, CNRS UMR 7210, Paris

4. Univ. Lille, INSERM, CHU Lille, U1171 - Degenerative and Vascular Cognitive Disorders, F-59000 Lille, France

5. AP-HP, Hôpital Henri-Mondor, DHU Pepsy, Neurochirurgie / Université Paris Est-Créteil, France

6. Behavior, Emotion & Basal Ganglia (BEBG) team, INSERM U1127 - CNRS U 7225, Institut du Cerveau et de la Moelle épinière, Paris, France


ABSTRACT

Auditory-verbal hallucinations (AVH) can be defined as auditory percepts in the absence of corresponding external stimuli. AVH are frequent experiences in schizophrenia (60 to 80% of patients) which may cause long-term disability. Recent functional Magnetic Resonance Imaging (fMRI) developments have allowed for the objective "capture" of AVH occurrences while scanning a participant. Our team developed a semi-automated procedure combining data-driven analysis of resting-state fMRI data with a post-fMRI interview (i.e., the patient is asked after acquisition to report the occurrences of hallucinations and the main clinical features of these experiences). This "two-step" method allows fMRI periods with AVH (ON) to be identified and distinguished from periods without (OFF). However, this detection scheme, notably the a posteriori labelling procedure, remains very time-consuming. Multicentric and multi-subject generalization would clearly benefit from an automation of this fMRI capture method using machine learning. Multi-Voxel Pattern Analysis applied to fMRI data, notably the linear Support Vector Machine (lSVM), a supervised classification algorithm, is gaining recognition for accurately discriminating between complex cognitive states. Here, we present a validated, fully automated and reliable procedure to detect AVH occurrences by applying lSVM to a per-AVH fMRI dataset. We demonstrated good between-subject cross-validity, notably because contributing voxels are localized in a restricted set of brain regions. Adapting this method for real-time decoding will pave the way for innovative brain-based treatments of AVH, such as fMRI-neurofeedback.

KEY-WORDS

hallucination; automated; schizophrenia; generalization; between-subject; linear support vector machine; capture fMRI


INTRODUCTION

Auditory-verbal hallucinations (AVH), defined as the experience of hearing voices or sounds in the absence of appropriate external stimuli, are core symptoms of schizophrenia. Approximately 60 to 80% of patients with schizophrenia exhibit AVH (1), an experience also commonly associated with depressive disorders (2,3), poor quality of life (4) and increased risk of suicide (5). Moreover, the need for therapeutic innovation is high since, for 25 to 30% of patients with AVH, only a partial remission can be reached with antipsychotic drugs (6).

In this context, a better understanding of AVH pathophysiology may offer a valuable path toward new treatments able to relieve patients from their voices. In particular, over the last decade, neuroimaging techniques have provided significant advances in the understanding of the neural underpinnings of AVH (7,8). On the one hand, functional brain imaging has provided information about the neural bases of the susceptibility to hallucinate, in studies measuring brain activity during specific tasks in patients who hallucinate and those who do not (i.e., "trait" studies) (9,10). They revealed altered functional activity in the temporal lobes of patients with AVH (probably emerging from a competition between AVH and normal external speech for processing sites within the temporal cortex), but also a decrease in the functional activity of the rostral dorsal anterior cingulate cortex, a structure known to be involved in attributing an internal or external origin to a given stimulus (11). On the other hand, functional brain imaging has allowed brain activation associated with the occurrence of AVH to be measured directly, in "state" (or "capture") studies. Recent meta-analyses of such studies showed increased activity during AVH within a complex and distributed network including temporal, parietal, frontal and subcortical regions (12,13). In particular, speech production and comprehension areas (i.e., Broca's and Wernicke's areas) have been shown to be involved, but in addition to this network, brain areas involved in contextual memory (i.e., the hippocampal complex) seem to play a role (13–15).

Interestingly, these findings have strengthened transdiagnostic neurocognitive models that characterize AVH, but more specifically they have laid the foundations for new therapeutic strategies. For example, rTMS protocols for AVH focus on reducing the excitability of the left temporoparietal junction, judged to be a key region in AVH pathophysiology (13,16). Beyond optimizing neuronavigation protocols (e.g., for rTMS target location), the development of "capture" neuroimaging of AVH could enable therapeutic strategies intermediate between neuromodulation and psychotherapy, such as fMRI-guided neurofeedback (fMRI-NF), by providing reliable tools to detect the occurrence of AVH in real time.

The current paper is an effort toward validating a reliable method for fMRI detection of AVH. The ideal detection method should be: (i) fully automated; (ii) accurate (i.e., with an area under the Receiver Operating Characteristic (ROC) curve close to 1); (iii) characterized by high inter-subject generalization properties; and (iv) of low computational cost, so that it can be easily implemented in a closed-loop brain-computer interface.
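Criterion (ii) can be made concrete with a small sketch: the detector's continuous decision scores are ranked against the true ON/OFF labels to obtain the area under the ROC curve. Synthetic placeholder data and `scikit-learn`'s `roc_auc_score` are assumed here.

```python
# Evaluating an ON/OFF detector by the area under the ROC curve.
# decision_function() returns a continuous score per volume; the AUC
# measures how well those scores rank ON volumes above OFF volumes.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 200))     # fake volumes x voxels
y = rng.integers(0, 2, 120)             # true ON/OFF labels
X[y == 1, :10] += 1.0                   # weak injected signal

clf = LinearSVC(dual=False).fit(X[:80], y[:80])
auc = roc_auc_score(y[80:], clf.decision_function(X[80:]))
print(round(auc, 2))   # 1.0 = perfect detector, 0.5 = chance
```

An AUC close to 1, as required above, means the score threshold can be tuned for a favourable sensitivity/specificity trade-off without retraining.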
