Thesis

Reference

The neural correlates of the perception, production, and regulation of facial expressions

KORB, Sebastian

Abstract

This work is based on four empirical studies of the neural mechanisms underlying the production, regulation, and perception of facial expressions. The scientific literature suggests that the perception and production of facial expressions are closely linked. Nevertheless, most researchers have concentrated on the neural correlates of face perception, neglecting the brain activity that allows the production of facial movements. Using electroencephalography (to record brain waves) and electromyography (to record facial muscle movements), we studied 1) the motor preparation potentials preceding the production of voluntary smiles; 2) the suppression of facial expressions in response to amusing pictures; 3) the suppression of smiling in a facial mimicry context; 4) the effects of facial identity and expression on brain activity and on facial responses. The stimuli were photographs of human faces or avatars [...]

KORB, Sebastian. The neural correlates of the perception, production, and regulation of facial expressions. Thèse de doctorat : Univ. Genève, 2010, no. FPSE 465

URN : urn:nbn:ch:unige-145417

DOI : 10.13097/archive-ouverte/unige:14541

Available at:

http://archive-ouverte.unige.ch/unige:14541

Section de Psychologie

Under the supervision of Didier Grandjean and Klaus R. Scherer

THE NEURAL CORRELATES OF THE PERCEPTION, PRODUCTION, AND REGULATION OF FACIAL EXPRESSIONS

Thesis presented to the Faculté de psychologie et des sciences de l'éducation of the Université de Genève to obtain the degree of Docteur en Psychologie

by

Sebastian KORB

of Treviso (I)

Thesis No. 465

GENEVA

October 2010

Student number: 06-318-18


“The more we learn about the world, and the deeper our learning, the more conscious, clear, and well-defined will be our knowledge of what we do not know, our knowledge of our ignorance. The main source of our ignorance lies in the fact that our knowledge can only be finite, while our ignorance must necessarily be infinite.”

Karl Popper (1963).

“A thesis is never finished, it is just abandoned at the least damaging point.”

Philip Race (1999).

“There are random things, which perfectly make sense.”

Didier Grandjean (2010).

 

TABLE OF CONTENTS

1. ACKNOWLEDGEMENTS
2. FRENCH SUMMARY
2.1. Study n. 1
2.2. Study n. 2
2.3. Study n. 3
2.4. Study n. 4
3. INTRODUCTION
3.1. Definition of emotion
4. THE PERCEPTION OF FACES
4.1. Face perception model by Bruce and Young
4.2. Face perception model by Haxby and colleagues
4.3. Criticisms of Haxby et al.'s model and alternative accounts
4.4. Face perception and embodiment
4.5. Mirror neurons
4.6. Model of emotion recognition by Adolphs
4.7. The temporal dynamic of face perception
4.8. Summary face perception
5. THE PRODUCTION OF FACIAL MOVEMENTS
5.1. Spontaneous vs. voluntary expressions
5.2. Neural bases of facial movements
5.3. Neural correlates of the production of facial expressions – brain imaging
5.4. Neural correlates of the production of facial expressions – EEG
5.5. Facial mimicry as a form of spontaneous facial expressions
5.6. Summary face production
6. EMOTION REGULATION
6.1. Definition of emotion regulation
6.2. Why Regulate Emotions?
6.3. Types of Emotion Regulation
6.4. Model by James Gross
6.5. Neural correlates of emotion regulation – brain imaging
6.5.1. Summary brain imaging
6.6. Neural correlates of emotion regulation – EEG
6.6.1. Summary EEG
6.7. Summary emotion regulation
7. SUMMARY INTRODUCTION
8. EXPERIMENTAL PART
8.1. Study n. 1 – motor preparation of voluntary smiles
8.2. Study n. 2 – suppression of smiling to humorous stimuli
8.3. Study n. 3 – facial mimicry during production and inhibition of voluntary smiling in the context of an emotional Go/NoGo task
8.4. Study n. 4 – investigating effects of facial expression and identity upon brain activity and facial mimicry
9. DISCUSSION
9.1. Summary of the experimental findings
9.1.1. Study n. 1
9.1.2. Study n. 2
9.1.3. Study n. 3
9.1.4. Study n. 4
9.2. Major contributions of the thesis
9.3. General discussion of the experimental results
9.4. Final words
10. REFERENCES


1. Acknowledgements  

It is almost impossible to remember and thank all the fantastic people who contributed, in one way or another, to this thesis. What follows is certainly not an exhaustive list, and I hope that those who are not mentioned will forgive me. In any case, they can be certain to have left a beautiful trace somewhere in my subconscious.

My greatest gratitude goes to my supervisors Didier Grandjean and Klaus Scherer, who made it possible for me to join the Swiss Center for Affective Sciences (CISA) as a trainee in October 2005, and then to carry on as a Ph.D. student in 2006. Both gave me an amazing amount of freedom in the choice of the topics I wanted to explore and of the methods I was going to use in my research. At the same time, they were supportive and helpful whenever I needed them. I am thankful for that.

I would like to thank Patrick Vuilleumier, Claude-Alain Hauert, and Paula Niedenthal for agreeing to be part of my thesis committee and/or jury. They provided me with helpful feedback on my work, and contributed to it through inspiring input.

Many thanks also to the many colleagues at the CISA, the Centre Medical Universitaire (CMU), and the faculty of psychology in Geneva. Many of them were willing to sacrifice some of their precious time so that I could learn from them, or so that I could use their brains and bodies to pilot some of my experiments. Many of them have become friends; with many of them I have shared not only work-related moments but also fun traveling, snowboarding, eating, partying, and doing many other nice things. Some of them were office neighbors; others even shared rooms with me during conferences. They are, in a totally random order: Tobias Brosch (my first office neighbor at the CISA!), Cristina Soriano (who is also my official Spanish teacher), Nele Dael, Corrado Corradi dell'Acqua, Benoit Bediou, Sylvain Delplanque, Marc Mehu, Marcello Mortillaro (a great friend, colleague, and longtime former roommate), Jerome Glauser, Leonie Koban; Amal Achaibou, Karim N'Diaye, Yann Cojan, Sophie Jarlier, and Lucas Tamarit (these last five people taught me how to program in Matlab); David Sander, Katja Schlegel, Kornelia Gentsch, Pascal Vrticka, Ralph Schmidt, Jean-Marc Gomez, Karsten Rauss, Etienne Roesch, Andres Posada, Valerie Milesi, Vera Sacharin, Sascha Frühholz, Arnaud Saj, Thomas Ethofer, Martin Desseilles, Markus Gschwind, Wiebke Trost, Ruthger Righart, Ullrich Wagner, Marius Peelen, Christian Mumenthaler, Alison Montagrin, Leonardo Ceravolo, Geraldine Coppin, Sylvia Kreibig, Daniela Sauge (the supermanaging manager of the CISA), Antoinette Schneider, and Carole Varone.

Special thanks to Jacobien van Peer, for reading and correcting parts of this thesis; to Jerome Glauser, for correcting my French summary; to Annekathrin Schacht, for her help on my last paper and for many fun moments; to Gilles Pourtois, for answering my many questions when he was still at the CISA, and for giving me moral support in the last phase of the thesis; to my new flatmates, for putting up with my tenseness in the last weeks before I handed in the thesis; to Honza Catchpole, for providing an awesome USB-powered fan that acquired vital importance during some very hot weeks in the office in July; and to Karine Carlen, for giving me a cactus and a magic stone to get me through the last weeks…

Last but not least, I would like to express the greatest possible gratitude to my international family, and most of all to my father Wolfgang and my mother Maria. This thesis would not have been possible without their unconditional love and support.

2. French summary

This work is based on four empirical studies of the neural mechanisms underlying the production, regulation, and perception of facial expressions. The scientific literature suggests that the perception and production of facial expressions are closely linked. For example, it has been demonstrated repeatedly that people tend to imitate, automatically and involuntarily, the facial expressions they perceive in others (Dimberg, 1982; Dimberg, Thunberg, & Elmehed, 2000; Hess & Blairy, 2001). Nevertheless, most researchers have concentrated on the neural correlates of face perception, neglecting the brain activity that underlies the production of facial movements. Much therefore remains unknown about the brain activity that allows us to move the muscles of the face, be it voluntarily or spontaneously.

All the studies of this doctoral work concerned the production and/or perception of smiles. We chose to study the smile because it is easily detected by placing EMG electrodes over the cheeks, appears relatively often in a spontaneous manner, and can easily be produced voluntarily (at least in the lower part of the face). In studies 1, 2, and 4 we recorded participants' brain activity with electroencephalography (EEG) and the activity of certain facial muscles with electromyography (EMG). In study 3 only the EMG was recorded. The main results of these studies, as well as the theoretical context in which they are embedded, are briefly presented in the following paragraphs.

2.1. Study n. 1

In the first study (Korb, Grandjean, & Scherer, 2008), we sought to establish whether the production of voluntary smiles is preceded by a "readiness potential" (RP, also called "Bereitschaftspotential"). The RP, recorded with EEG, is a potential of negative polarity distributed over the central and superior part of the scalp, which precedes – often by one to two seconds – the onset of a movement (Kornhuber & Deecke, 1964). According to most authors, only voluntary movements are preceded by an RP. However, at least one study suggests that spontaneous, reflex-like movements of the hand or wrist can also be preceded by an RP (Keller & Heckhausen, 1990). Previous studies demonstrated an RP preceding voluntary, self-paced movements of the limbs, the eyes, or the jaw (Colebatch, 2007; Huckabee, Deecke, Cannito, Gould, & Mayr, 2003; Kornhuber & Deecke, 1964; Nakajima et al., 1991; Shibasaki & Hallett, 2006; Yamamoto et al., 2004). These movements are controlled by cranial nerves other than those serving facial expressions, which are generated by the seventh cranial nerve (also called the facial nerve). No investigation had yet been conducted to detect an RP before facial expressions.

For this reason, we asked 21 participants to move, approximately every five seconds and in five separate experimental blocks, their left index finger, their right index finger, their left cheek, their right cheek, or both cheeks at the same time. The results of this first study show that voluntary smiles, just like movements of the fingers and other body parts, are preceded by an RP. However, differences between the RP of smiles and that of index-finger movements were also observed. Voluntary smiles elicited an RP of smaller amplitude and shorter duration, with a more central topography. Future studies will be needed to investigate the RP before other types of voluntary facial expressions – for example anger, which involves more muscles of the upper face, largely controlled by medial cortical motor areas. Moreover, it will be important to investigate whether spontaneous facial expressions, i.e. those generated involuntarily following an emotional event, are also preceded by an RP.

2.2. Study n. 2

An important aspect of the production of facial expressions concerns their regulation. For example, we are all more or less able to hide our emotional feelings by suppressing our spontaneous facial expressions. Accordingly, the second study investigated the neural bases of the suppression of spontaneous smiles triggered by humorous pictures. The dominant theoretical model describes two categories of emotion regulation strategies (Gross, 1998, 2007; for a summary in French see Korb, 2009). The first category, which targets the antecedent of the emotional response ("antecedent-focused"), includes for example the strategy of cognitive reappraisal of the situation. This strategy has been shown to be effective in reducing both the subjective emotional feeling and the expression of the emotion, for example in the face and in the voice (Gross, 1998). The second category essentially consists of the strategy of suppressing the emotional expression ("suppression" or "expressive suppression"). The suppression strategy consists in reducing one's emotional expression by concentrating, for example, on one's facial muscles. Suppression does not seem to affect the subjective feeling of the individual (Gross, 1998).

Several brain imaging studies have investigated the neural correlates of emotion regulation, in particular those of cognitive reappraisal. These studies show that the modulation of the amygdala and of other brain areas involved in the emotional response is achieved through increased activity in prefrontal cortical areas (Ochsner & Gross, 2005, 2007, 2008). Similar, but temporally shifted, activations were demonstrated in what remains to this day the only study of the neural correlates of the suppression strategy as defined in Gross's theoretical model (Goldin, McRae, Ramel, & Gross, 2008; Gross, 1998, 2007). More recently, cognitive reappraisal has also been studied with EEG, which offers lower spatial resolution but higher temporal resolution than functional magnetic resonance imaging (fMRI). It has thus been shown that cognitive reappraisal leads to a decrease or an increase (depending on the subject's intentions) of the "late positive potential" (LPP). The LPP is a positive EEG wave, appearing over centro-posterior areas of the scalp around 300 ms after picture onset, which may reflect attentional and memory processes (Hajcak, MacNamara, & Olvet, 2010). The neural bases of the suppression strategy, by contrast, had never been studied with EEG.

In study n. 2, twenty-four female participants viewed and rated the funniness of 300 pairs of humorous and neutral pictures, divided over two conditions. In the "Spontaneous" condition they could freely express their emotions in the face, whereas in the "Suppress" condition they had to suppress their facial reactions to the amusing stimuli.

As the results indicate, the amusing picture pairs triggered frequent laughter/smiling and were associated with an LPP of greater amplitude than that elicited by neutral pictures, but only in the Spontaneous condition. In the Suppress condition, by contrast, participants mostly succeeded in suppressing their smiles, as indicated by the absence of activity in the zygomaticus EMG. Moreover, LPP amplitude did not differ between amusing and neutral pictures in the Suppress condition. Finally, as expected on the basis of Gross's model, the number of pictures judged to be amusing did not differ between the two conditions, suggesting that suppression had hardly changed participants' subjective feelings. These results are of interest because they stem from the first EEG study of the neural bases of suppression. Further studies are nevertheless needed to compare directly the effects on the EEG of suppression and cognitive reappraisal, or of other regulation strategies. Also, techniques of EEG signal analysis other than event-related potentials should be used to investigate the cerebral mechanisms of emotion regulation.

2.3. Study n. 3

Another type of spontaneous facial expression is facial mimicry (FM), i.e. the tendency to reproduce the facial expressions perceived in others. FM appears even when emotional faces are perceived subliminally, masked by a neutral face (Dimberg et al., 2000). Some studies have shown that the perception and processing of emotional stimuli is modified when the facial muscles are engaged in movements that interfere with FM (such as holding a pen between one's teeth; Oberman, Winkielman, & Ramachandran, 2007; Strack, Martin, & Stepper, 1988). However, only one study had investigated whether FM can be inhibited by suppressing the movements of the facial muscles (Dimberg, Thunberg, & Grunedal, 2002). That study showed that FM remains present even when the person tries not to move his or her facial muscles.

In the third study (Korb, Grandjean, & Scherer, 2010) we modified the experimental design of Dimberg, Thunberg, and Grunedal (2002) to create a Go/NoGo task. In one condition participants had to smile as quickly as possible at smiling faces and to express nothing in response to neutral faces. In the other condition, they had to smile at neutral faces and keep a neutral expression in response to smiling faces. A high proportion (75%) of trials requiring a motor response, together with a very brief picture presentation time (500 ms), allowed us to induce in participants a strong tendency to smile. Recording zygomaticus activity with EMG allowed us to establish the presence, latency, and amplitude of the responses.

The results showed that smiles produced in response to smiling faces were faster and stronger than smiles in response to neutral faces. Moreover, the number of false alarms (i.e. responses in trials that required no response) was higher for trials with smiling faces than with neutral faces. Finally, we were able to demonstrate the presence of FM even in trials in which subjects successfully inhibited their tendency to smile. Interestingly, FM started later (251-375 ms after picture onset) in trials with inhibition of the motor response than in trials without inhibition (126-250 ms). These results support the hypotheses that FM cannot be completely suppressed and that FM is a spontaneous, involuntary response to emotional faces.

2.4. Study n. 4

The last study of this thesis investigated the effects of the processing of facial expression and facial identity on brain activity (measured with EEG) and on facial muscle activity (measured with EMG). Theoretical models postulate different cognitive stages and neural substrates for the processing of facial expression and facial identity (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000). For this reason, many researchers have tried to establish the minimal latency at which the processing of facial expression and/or identity is reflected in measures of brain activity. The empirical results on this question are, however, inconclusive, even if they suggest that, depending on the task and the stimuli used, both facial expression and facial identity can modify the EEG within the 100 ms that follow the onset of a face (Vuilleumier & Pourtois, 2007). Unfortunately, some studies did not verify that their stimuli did not differ with respect to low-level visual features, such as power in the high and low spatial frequencies, luminance, and contrast. For this reason, we set out to test the latency of the processing of the expression and identity of avatar faces, which we had controlled for these visual features.

Concerning facial EMG, FM has been described as an automatic response to the sight of emotional faces (Dimberg, 1982) that cannot be voluntarily suppressed (Dimberg et al., 2002; Korb et al., 2010). At the same time, the results of several studies suggest that FM is reduced for people we like less, compared to people we like more (Bourgeois & Hess, 2008; Lanzetta & Englis, 1989; Likowski, Muhlberger, Seibt, Pauli, & Weyers, 2008; Weyers, Mühlberger, Kund, Hess, & Pauli, 2009). In study n. 4 we investigated how soon after the presentation of a face a modulation of FM by the face's identity can be observed. Unlike previous studies, which used time windows of one second or more, we averaged the EMG over time windows of 125 ms.

The experiment, inspired by the study of Grandjean and Scherer (2008), took place over two days. On the first day, participants completed a task in which they learned that two facial identities brought them the gain of one point, two others made them lose one point, and two more caused neither gain nor loss of points. These points were converted into money, and participants were paid accordingly at the end of the experiment. On the second day, participants returned to the laboratory and performed several tasks while their EEG and the EMG of their zygomaticus and corrugator muscles were recorded. The main task of the second day resembled that of the first day and used the same facial identities, associated, as on the previous day, with a gain, a loss, or nothing at all. Independently of their identity, all faces were shown with three different expressions: happy, angry, and neutral.

The results of this study show that happy and angry facial expressions elicited an N170 of greater amplitude than neutral expressions, as well as a reduction of power in the Gamma band from 375 to 625 ms after face onset. The N170 is a negative potential recorded approximately 170 ms after stimulus onset over temporo-occipital areas, and it is usually held to be specific to the perception of faces (Bentin, Allison, Puce, Perez, & McCarthy, 1996; but see Thierry, Martin, Downing, & Pegna, 2007). Moreover, the N170 has been proposed to reflect the structural encoding of faces, i.e. a cognitive stage that precedes the recognition of both identity and emotion (Bruce & Young, 1986; Haxby et al., 2000). Contradicting this hypothesis, several studies have found an effect of emotional expression on the amplitude of the N170, as well as effects at earlier latencies (Vuilleumier & Pourtois, 2007). Our results add to those of several other studies that call the classical models of face perception into question by showing effects of facial expression at relatively early latencies. The effects in the Gamma band were unexpected, because an earlier study had found an increase of Gamma power for emotional compared to neutral faces (Balconi & Pozzoli, 2007). Surprisingly, we did not find a significant modulation of the EEG by facial identity.

The EMG recordings revealed the presence of FM, as the corrugator (the muscle involved in frowning) was more strongly activated for angry than for happy faces, independently of facial identity. Moreover, we found a modulation of FM by facial identity. More precisely, corrugator activation to angry faces was greater when the facial identity was associated with a gain than with a loss – and this from the very onset of FM, i.e. 125 ms after stimulus onset. By contrast, neither effects of facial expression (FM) nor of facial identity were found in the zygomaticus (the main smiling muscle). Our results showing stronger FM for faces with a positive association (gain) are in line with those of previous studies (Bourgeois & Hess, 2008; Lanzetta & Englis, 1989; Likowski et al., 2008; Weyers et al., 2009). However, they are the first to establish that this modulation is in place from the onset of FM.


3. Introduction  

Human faces are very complex and multi-dimensional stimuli, as they can convey at the same time information about a person's identity, gender, age, and race, but also about their attention (e.g. via gaze direction), emotion, and mood (Caldara, Rossion, Bovet, & Hauert, 2004; Darwin, 1872; Ekman & Rosenberg, 1997; Haxby et al., 2000; Phelps et al., 2000; Scherer & Korb, 2009). Moreover, even when the perceived face is unfamiliar to us, we have some ability to judge the person's honesty, intelligence, dominance, trustworthiness, health, attractiveness, likeability, competence, aggressiveness, and cognitive appraisals (Hess, Blairy, & Kleck, 2000; Scherer, 1992; Willis & Todorov, 2006). It is undeniable that human faces occupy a central position in social interaction, and that impairments in recognizing facial identities or facial expressions can lead to serious communication difficulties. Of course, it is not only important to be able to recognize faces and facial expressions; one must also be capable of producing appropriate facial expressions in order to get along in human society. In spite of their tremendous complexity, faces are easily and rapidly detected and successfully processed by human beings, who can therefore be considered "specialists" in face perception. Indeed, "face perception may be the most developed visual perceptual skill in humans" (Haxby et al., 2000, p. 223). Hence, it does not come as a surprise that great effort has been devoted to understanding the cognitive and neural processes involved in face perception. However, little effort has so far been made to study the brain activity preceding and accompanying the production of facial movements, and more specifically the genesis of emotional facial expressions.

The four studies that will be presented in this thesis mainly addressed scientific questions allowing a better understanding of the neural and/or motor processes occurring right before and during the production of facial expressions. They should be of interest to the scientific community, as the production of facial movements remains understudied. Importantly, the production and perception of facial expressions are heavily related to each other, and may be conceived of as two sides of the same coin. Indeed, facial mimicry (FM), i.e. the production of spontaneous facial expressions corresponding to the facial expressions perceived in others, may – at least under certain circumstances – play a role in the identification and recognition of others' emotional expressions, and in the attribution of emotional states to others (Niedenthal, 2007; Niedenthal & Maringer, 2009). Thus, production influences perception. Things may partly go the other way around too, in the sense that accurate production requires repeated perception. Indeed, although certain facial expressions may be biologically rooted and present in earliest infancy (Darwin, 1872), other types of expressions (those with a less biological and a greater social character, like expressions of contempt, scorn, pride, or guilt) may be acquired through a learning process that begins with perceiving their characteristic facial configurations in the faces of one's caregivers. In this sense, perception forges production. For these reasons, two of the experiments (described in studies n. 3 and 4), by presenting face stimuli and recording participants' facial and neural reactions to them, also inform us about aspects of face perception.


In all but one study (n. 3) we recorded both participants' brain activity, with electroencephalography (EEG), and the movements of facial muscles involved in smiling (and partly also in frowning), with electromyography (EMG). This allowed us to assess changes in two of the organism's subcomponents, which are modified during an emotional episode (see the section on the definition of emotion below). Moreover, subjective ratings were collected in studies n. 2 and 4, and a self-report questionnaire was completed in study n. 3. Across all four studies, we focused on the smiling expression, although study n. 4 also included angry expressions. The reason for this is that smiles are strong facial expressions that can easily be picked up by placing EMG electrodes over the cheeks to record the activity of the zygomaticus muscles. Moreover, smiles occur frequently in a spontaneous manner, and can quite easily be produced voluntarily.

What we know today about the neural correlates of the production of facial movements stems to a large degree from neurological case studies, in which a double dissociation between the production of voluntary and spontaneous/emotional facial expressions has been reported. The neural anatomy allowing us to smile merrily at a joke, and to produce the fine movements of the lips required for speech, will be reviewed in the chapter on the production of facial movements (chapter n. 5). Reflecting the widespread distinction that is made in the literature between the production of voluntary and spontaneous (emotional) facial expressions, we performed studies investigating both types of behavior.


A first study on the neural correlates of the production of voluntary facial expressions (study n. 1, Korb et al., 2008) investigated, using EEG and EMG, whether voluntary, self-paced smiling movements are preceded by a readiness potential (RP). The RP, also called "Bereitschaftspotential", is a premotor potential with negative polarity and central topography occurring up to two seconds before the onset of a movement (Kornhuber & Deecke, 1964). There have been numerous reports of RPs preceding voluntary, self-paced movements of the limbs and other body parts, including eye movements, jaw movements, and swallowing movements (Huckabee et al., 2003; Nakajima et al., 1991; Shibasaki & Hallett, 2006; Yamamoto et al., 2004; Yoshida et al., 2000). However, before our study, reports of the neural correlates of voluntary facial expressions were lacking. Importantly, almost all of the facial muscles used for producing facial expressions are innervated by the seventh cranial nerve (also called the facial nerve), while eye movements and jaw movements are controlled by the third to sixth cranial nerves. Moreover, an intricate and complex neural circuitry including several cortical and subcortical structures underlies the production of facial movements (see chapter n. 5 on the production of facial movements). Therefore, it was important to assess whether voluntary facial expressions (unilateral and bilateral smiles, in our case) generate neural antecedents comparable to those preceding voluntary movements of other body parts.
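The logic of estimating an RP can be illustrated with a short analysis sketch: detect movement onsets from the rectified EMG, extract EEG epochs time-locked to those onsets, and back-average them. The code below is a minimal illustration of this idea, not the pipeline actually used in study n. 1; the sampling rate, the threshold rule, and the window lengths are assumptions made for the example.

```python
import numpy as np

FS = 512  # sampling rate in Hz (assumed for this sketch)

def detect_emg_onsets(emg, fs=FS, k=5.0, refractory=2.0):
    """Return sample indices where rectified EMG first exceeds
    baseline mean + k standard deviations (toy onset detector).
    Assumes the first second of the recording is movement-free."""
    rect = np.abs(emg)
    baseline = rect[:fs]
    thresh = baseline.mean() + k * baseline.std()
    onsets, last = [], -np.inf
    for i, v in enumerate(rect):
        if v > thresh and (i - last) / fs > refractory:
            onsets.append(i)
            last = i
    return onsets

def back_average_rp(eeg, onsets, fs=FS, pre=2.0, post=0.5):
    """Average EEG epochs time-locked to EMG onsets.
    eeg: (n_channels, n_samples); returns (n_channels, n_epoch_samples).
    The pre-movement negativity of the average is the RP estimate."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[:, i - n_pre : i + n_post]
              for i in onsets if i - n_pre >= 0 and i + n_post <= eeg.shape[1]]
    epochs = np.stack(epochs)  # (n_epochs, n_channels, time)
    # Baseline-correct using the earliest 200 ms of each epoch (-2.0 to -1.8 s).
    epochs -= epochs[:, :, : int(0.2 * fs)].mean(axis=2, keepdims=True)
    return epochs.mean(axis=0)
```

In a real analysis, detected onsets would typically be verified visually, and epochs contaminated by artifacts rejected before averaging.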

Although voluntary and spontaneous facial expressions appear to differ both at the peripheral level, in terms of the muscular contractions of the face (Ekman & Rosenberg, 1997; Hess & Kleck, 1990), and at the central level, in terms of their neural correlates (Hopf, Muller-Forell, & Hopf, 1992; Korb & Sander, 2009; Morecraft, Stilwell-Morecraft, & Rossing, 2004; Rinn, 1984), this distinction may not be so clear-cut in everyday life and in healthy people. This becomes evident, for example, in the fact that adult human beings possess the extraordinary capacity to voluntarily influence and modify (e.g. increase or decrease, prolong or shorten) their spontaneous emotional reactions. Not only inner feelings, but also spontaneous facial expressions in response to emotional stimuli can be voluntarily suppressed. Imagine, for example, being extremely angered, annoyed, and frustrated by some misbehavior of your boss. Imagine, moreover, that these strong emotions emerge during an important work meeting with your boss and other coworkers. If you wish to keep your job, you will probably want to hide, or at least to reduce, the outward signs of your anger. This form of emotion regulation has been dubbed expression suppression in a recent influential model (Gross, 1998, 2007). Another emotion regulation strategy, called cognitive reappraisal, implies the cognitive reinterpretation of an emotional event in order to modify its emotional impact. The main models of the neural bases of emotion regulation, and some of the most relevant literature in the field, will be reviewed in the chapter on emotion regulation (chapter n. 6).

As the strategy of expression suppression involves the voluntary control of spontaneously triggered facial expressions, we became interested in studying its neural correlates. We carried out an experiment (study n. 2) in which healthy participants watched sequences of two pictures. The first picture was always neutral. The second picture contained either an amusing incongruity (e.g. a squirrel opening its fur and showing a Superman shirt) or a non-amusing additional element (e.g. a picture that had contained a sleeping dog and one chair now contained two chairs). The experiment included two conditions: in one condition participants were asked to suppress their facial reactions to the humorous pictures by focusing on the control of their facial muscles. The other condition consisted of free viewing without suppression.

We expected amusing trials to elicit spontaneous smiling and laughter, which we captured via facial EMG, mainly in the free viewing condition. At the same time, we recorded for the first time with EEG the neural correlates of expression suppression, which had so far been investigated in just one fMRI study (Goldin et al., 2008). It had previously been shown that reappraisal may reduce the amplitude of the late positive potential (LPP, an electrophysiological wave with positive polarity over centro-posterior electrodes, starting around 300 ms after stimulus onset, and reflecting increased allocation of attention and processing resources) in response to arousing emotional scenes (Moser, Hajcak, Bukay, & Simons, 2006; Moser, Krompinger, Dietz, & Simons, 2009; Hajcak & Nieuwenhuis, 2006; for a review see Hajcak et al., 2010). However, before study n. 2 was carried out, it was unknown whether the amplitude of the LPP would also be modified by expression suppression.
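As an illustration of how such an LPP effect can be quantified, the sketch below averages a stimulus-locked ERP over centro-posterior electrodes in a late time window and contrasts two conditions. This is a generic example rather than the analysis of study n. 2; the time window, electrode indices, sampling rate, and array layout are all assumptions.

```python
import numpy as np

def lpp_amplitude(erp, fs, electrodes, t0, window=(0.4, 0.9)):
    """Mean amplitude of a stimulus-locked ERP in a late positive window.
    erp: (n_channels, n_samples), baseline-corrected condition average;
    t0: sample index of stimulus onset; electrodes: channel indices."""
    a = t0 + int(window[0] * fs)
    b = t0 + int(window[1] * fs)
    return erp[electrodes, a:b].mean()

# Hypothetical usage: LPP difference between amusing and neutral pictures,
# computed separately per regulation condition.
# lpp_diff = (lpp_amplitude(erp_amusing, 512, [28, 29, 30], 512)
#             - lpp_amplitude(erp_neutral, 512, [28, 29, 30], 512))
```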

We also became interested in studying the regulation of another form of spontaneous facial expression: facial mimicry (FM). FM describes people's tendency to imitate others' emotional facial expressions, for example when you unwillingly smile back at a smiling person. FM has been described as a fast (Dimberg & Thunberg, 1998) and automatic, reflex-like mechanism (Chartrand & Bargh, 1999; Dimberg & Thunberg, 1998). Moreover, some scholars have proposed that FM may be necessary, at least under certain conditions, for recognizing others' facial expressions and underlying emotional states (Niedenthal, 2007; Niedenthal & Maringer, 2009). However, few studies have investigated whether FM can be voluntarily suppressed. Therefore, we designed an emotional Go/NoGo task that required fast smiling during Go trials and forced participants to heavily suppress their facial movements during NoGo trials (study n. 3). In a congruent condition participants saw happy faces during Go trials and faces with a neutral expression during NoGo trials. The incongruent condition consisted of neutral faces during Go trials and happy faces during NoGo trials. In both the congruent and the incongruent condition, Go trials were more frequent than NoGo trials, in order to instill a prepotent smiling tendency that would require great effort to suppress. We were particularly interested in testing whether FM remained present even during NoGo trials in which participants successfully refrained from smiling overtly.
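To make the design concrete, here is a toy generator for such a trial list, using the 75% Go proportion and the 500 ms presentation time reported for study n. 3 in the French summary above. The trial count, the dictionary fields, and the shuffling scheme are assumptions of the sketch, not the actual experimental code.

```python
import random

def make_gonogo_trials(n_trials=200, p_go=0.75, congruent=True, seed=0):
    """Build a shuffled emotional Go/NoGo trial list.
    Congruent: smile (Go) to happy faces, withhold (NoGo) to neutral faces;
    incongruent: the face-response mapping is reversed."""
    rng = random.Random(seed)
    n_go = int(n_trials * p_go)
    go_face, nogo_face = ("happy", "neutral") if congruent else ("neutral", "happy")
    trials = [{"type": "Go", "face": go_face, "duration_ms": 500}
              for _ in range(n_go)]
    trials += [{"type": "NoGo", "face": nogo_face, "duration_ms": 500}
               for _ in range(n_trials - n_go)]
    rng.shuffle(trials)
    return trials
```

The high Go proportion is what makes the NoGo trials demanding: the prepotent response must be withheld on a minority of unpredictable trials.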

Although FM seems to be a fast, automatic, reflex-like, and suppression-resistant form of spontaneous facial expression, several studies have nevertheless suggested that it can be modulated by a variety of factors, such as social context (Bourgeois & Hess, 2008), task relevance (Cannon, Hayes, & Tipper, 2009), the perceiver's emotional state (Moody, McIntosh, Mann, & Weisser, 2007), attitudes (Likowski et al., 2008), subliminal competition priming (Weyers et al., 2009), empathy (Sonnby-Borgstrom, Jonsson, & Svensson, 2003), or hormonal levels (Hermans, Putman, & van Honk, 2006). For example, merely inducing positive or negative attitudes towards the perceived faces, by asking participants to remember a one-word description for each face, is enough to produce modulations of FM (Likowski et al., 2008). Differences in the latency of the effects of various factors on FM may explain the apparent contradiction between 1) findings showing automatic triggering of FM by emotional facial expressions, and resistance of FM to suppression, and 2) findings showing modulation of FM by several contextual factors. Unfortunately, earlier studies in the field did not analyze the EMG with the temporal resolution required for testing this hypothesis. The smallest time-unit of analysis in studies investigating the effects of the social relationship established between the sender and the receiver of the emotional expression (either through life experience, or imposed by the experimenter) was one second (Weyers et al., 2009). Many different patterns of muscular activity can occur within one second, let alone within the periods of six or more seconds that were analyzed in most studies (Bourgeois & Hess, 2008; Lanzetta & Englis, 1989; Likowski et al., 2008).

In an effort to investigate the timing of the modulation of FM by the contextual meaning of a face's identity, we carried out study n. 4, in which the faces of specific avatar identities were paired, through extensive training, with winning money, losing money, or neither. Independently of the monetary outcomes they were associated with, each face was shown with a happy, an angry, and a neutral facial expression. The EMG was averaged over time-bins of 125 ms, from stimulus onset until one second later.
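The following sketch shows one straightforward way to average rectified EMG into 125 ms bins over the first second after stimulus onset, as described above. It is a minimal illustration rather than the thesis's analysis code; the sampling rate and the baseline correction are assumptions.

```python
import numpy as np

def bin_emg(emg_trial, onset, fs=512, bin_ms=125, n_bins=8):
    """Average rectified EMG into consecutive 125 ms bins after stimulus
    onset (8 bins = 1 s). emg_trial: 1-D array; onset: sample index.
    Assumes at least one bin of pre-stimulus signal for the baseline."""
    w = int(fs * bin_ms / 1000)
    rect = np.abs(emg_trial)
    baseline = rect[max(0, onset - w) : onset].mean()  # 125 ms pre-stimulus
    bins = [rect[onset + i * w : onset + (i + 1) * w].mean() - baseline
            for i in range(n_bins)]
    return np.array(bins)
```

Binned values from many trials can then be compared bin by bin across expression and identity conditions, which is what makes an onset latency of an FM modulation (e.g. within the first 125 ms bin) identifiable.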


As mentioned above, face perception has been, and continues to be, the object of intensive and prolific investigation in several disciplines, such as anthropology, biology, physiology, psychology, and, more recently, neuroscience. Despite the complexity of human face perception, scholars have come a long way in their effort to uncover its cognitive and neural processes. In the chapter on the perception of faces (chapter n. 4), I will present the main models of face perception, with an emphasis on the underlying neural processes, and summarize some of the main neurological and experimental findings that have led to these models. Importantly, despite their differences, most models of face processing propose some degree of separation (at the cognitive and/or neural level) between the processing of face identity and facial expression – a distinction we have addressed in study n. 4.

In study n. 4, participants' EEG was recorded along with their facial EMG. This allowed us to investigate, at the level of the brain, the occurrence of effects pertaining to the processing of facial identity and facial expression. In fact, as mentioned before, most models of face perception conceive of these two types of information as distinct in terms of their underlying cognitive and neural processes. Many studies have tried to measure the time of the earliest effects in the brain pertaining to the processing of facial identity or expression. However, the question of whether one or the other effect occurs first, and if so at what latency from the beginning of the presentation of the face stimulus, is far from settled (Haxby et al., 2000). This may partly be due to the scarcity of experiments that investigated the perception of expression and identity together – and at the same time independently of each other – in the same task (Calder & Young, 2005).

In the following, the concept of emotion will first be briefly defined, as the work presented here mainly investigated the production and perception of emotional expressions. Then, I will summarize the main findings and models on the neural correlates of face perception. Special emphasis will be put on the distinction between the processing of facial identity and expression, which has been postulated by most theoretical accounts. Moreover, we will review studies testing the temporal sequence of these processes at the level of the brain (this temporal sequence may equally pertain to the production of facial expressions, where, however, it has not yet been addressed – a shortcoming we attempted to correct in study n. 4).

Thereafter, we will summarize some of the literature on the fascinating topic of the production of facial movements. We will focus on the distinction between voluntary and spontaneous (i.e. emotional) facial movements, mainly at the central level, where these two types of behavior are claimed to rely on different neural circuitries (see study n. 1), but also at the peripheral level, where emotional and voluntary facial expressions may correspond to characteristic patterns (in space and time) of facial muscle contractions (see study n. 3). Finally, as emotional facial expressions that arise spontaneously in response to emotional stimuli can be voluntarily suppressed and modified, we will provide an overview of the burgeoning field of the study of the neural correlates of voluntary emotion regulation (see study n. 2, but also n. 3).


3.1. Definition of emotion

Long regarded as not objectively measurable and, at best, as a disturbing factor in humans' rational behavior, emotions have recently reemerged as an important topic for psychological and neuroscientific research, as well as for many other fields of science and for society in general. Nevertheless, scholars are still far from united, and have provided numerous biologically, evolutionarily, socially, philosophically, or cognitively inspired definitions of what exactly constitutes an emotion (more than 100 were reviewed by Kleinginna & Kleinginna, 1981). These difficulties in establishing a precise definition may originate from the fact that 'emotion' is a common-language term that refers to a huge variety of processes – these can be, for example, mild as well as intense, brief or long-lasting, simple or complex, private or public (Gross & Thompson, 2007).

In the following, we will adopt the definition of emotions proposed by appraisal theories and related accounts (Gross, 1998b; Scherer, 2001a), which take into account the full complexity of human emotions by postulating the existence of several levels at which emotional processes can take place (e.g., from the sensory-motor level, which comprises innate basic responses, up to the conceptual level, which subserves more complex motivational reasoning and decision making). Without denying the existence of basic emotional systems that have been shaped through evolution to endow us with sets of powerful and rapid responses to situations particularly relevant for survival and procreation (see, for example, Ohman & Mineka, 2001; Panksepp, 1998), appraisal models of emotion also do justice to more cognitive, typically human forms of emotional reactions. In contrast, many other, more biologically based models (Panksepp, 1998; Rolls, 2005) have come a long way in exploring and understanding basic emotional mechanisms, which are often shared (at the behavioral and at the neural level) by a variety of animal species. However, these models may somewhat fail to provide satisfying explanations of the more cognitive, and thus typically human, emotional phenomena. Last but not least, appraisal models take into account all components of the organism, and do not focus solely on, for example, changes in the autonomic nervous system. By doing so, they permit the study of the effects of emotion and emotion regulation at all levels of analysis.

 

Table 1: Relationship between functions and components of emotional responses and their subserving organismic subsystems. From Scherer (2001a).

Appraisal models define emotion as the result of (coordinated and synchronized) changes in all or most of the several subcomponents of the organism, due to the evaluation of an event or situation as relevant to the major concerns and/or goals of the organism (Gross & Thompson, 2007; Scherer, 2001). The organism's subcomponents comprise behavior, peripheral physiology, and subjective experience (Gross & Thompson, 2007). Scherer (2001) adds to this list a motivational and a cognitive component (see Table 1).

In short, an emotion occurs when a subject evaluates a situation or an event (either external or internal) as relevant to his or her goals and needs, and when important changes occur, in response to this evaluation, in the physiological, behavioral, and feeling domains of the individual. Emotions constitute discrete episodes in time and differ from other affective states, such as general stress responses, moods, or motivational impulses such as hunger, sexual desire, etc. For example, moods are less bound to a particular event, are often weaker in intensity, and last longer than emotions (Scherer, 2005).

It is important to point out that what appraisal theories call a "cognitive" evaluation does not relate solely to the higher and more complex forms of (human) cerebral functions (such as language, memory, and consciousness) that have usually been labeled with this term. Instead, the differentiation that has typically been made in psychology between cognitive and emotional processes, and which has been responsible for a series of hot debates in the past (e.g. the Lazarus vs. Zajonc debate), loses its appropriateness in a theoretical framework that conceives of emotion as the result of a series of checks taking place in a multicomponent processing system that includes a basic sensory-motor, a schematic, as well as a conceptual processing level (Leventhal & Scherer, 1987).

Some appraisal models have proposed a fixed chronological sequence for the various appraisal checks. For example, the Component Process Model (CPM) by Scherer (Scherer, 2001, 2009a; Scherer & Korb, 2009) postulates that the first appraisal checks an organism computes are, in the following order: novelty, intrinsic pleasantness, and goal relevance. Only later is the goal conduciveness of a stimulus or event (i.e. whether it helps the organism attain its most important goals) evaluated. The hypothesis that intrinsic pleasantness (IP) is evaluated before goal conduciveness (GC) was tested in study n. 4.
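Purely as a reading aid, the postulated fixed order of checks can be written down as a small pipeline. This is a schematic illustration of the sequence described above, not an implementation of the CPM; the check functions are hypothetical placeholders.

```python
from typing import Any, Callable, Dict

# Fixed order of appraisal checks postulated by the CPM (see text):
# novelty, intrinsic pleasantness, and goal relevance come first,
# and goal conduciveness is evaluated only after these.
APPRAISAL_SEQUENCE = ("novelty", "intrinsic_pleasantness",
                      "goal_relevance", "goal_conduciveness")

def appraise(event: Any, checks: Dict[str, Callable[[Any], float]]) -> Dict[str, float]:
    """Run the appraisal checks strictly in the postulated order."""
    return {name: checks[name](event) for name in APPRAISAL_SEQUENCE}

# Hypothetical usage with placeholder check functions:
# result = appraise(stimulus, {"novelty": lambda e: 0.9,
#                              "intrinsic_pleasantness": lambda e: 0.2,
#                              "goal_relevance": lambda e: 0.7,
#                              "goal_conduciveness": lambda e: 0.1})
```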

If there is today a fair amount of agreement on what an emotion is, there is unfortunately still no consensus about how many different emotions exist. A set of five to 14 (depending on the scholar) "basic" emotions has been proposed by several authors (e.g. Ekman and Izard) and is often used in empirical studies, mainly because of its ease of use. These basic emotions would then combine and build up into more complex emotions.

Having provided an overview of the thesis structure and defined the concept of emotion, I will now turn to the theoretical models of face perception in the next chapter.


4. The perception of faces

The study of face processing has a long tradition in psychology. Probably the most famous cognitive model in this field was put forward by Bruce and Young in the mid-1980s. Although this model focused on face recognition, it also discussed the role of facial expressions, and made the important assumption – based on neurological and experimental data – that facial identity and expression are processed separately. More recent models have essentially kept this distinction, but moved it to the neural level, based mainly on neuroimaging data. Thus, Haxby and colleagues (Gobbini & Haxby, 2007; Haxby et al., 2000) propose that separate areas of the non-primary visual cortex are involved in the processing of changeable (e.g. facial expression, gaze direction) and invariant (e.g. facial identity) aspects of the face. Others, however, have partly criticized Haxby's model and/or the data it is based upon (Calder & Young, 2005), and have put forward the importance of subcortical structures, like the amygdala, in the processing of emotional facial expressions (Johnson, 2005). The philosophical and psychological tradition of embodiment emphasizes the importance of reproducing the movements and facial expressions perceived in others – the feedback of these peripheral changes to the brain then allows the perceiver to recognize the emotions of others (Barsalou, 2008; Niedenthal, 2007; Niedenthal, Barsalou, Winkielman, Krauth-Gruber, & Ric, 2005). In modern embodiment theories and related theoretical accounts, overt mimicry does not occur in every instance, but can occur internally via 'as if' loops that simulate peripheral feedback. Embodiment, which thus establishes a link between the perception and production of facial expressions, and other related theoretical accounts have (re)gained attention with the recent discovery of mirror neurons – a special class of motor neurons that may constitute the biological substrate allowing for action and intention understanding, and (even more speculatively) for emotion recognition, empathy, and language acquisition (Gallese, Fadiga, Fogassi, & Rizzolatti, 1996; Keysers & Fadiga, 2008; Oberman & Ramachandran, 2007; di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992; Rizzolatti & Arbib, 1998).

Besides visual cortical areas and subcortical routes passing through the amygdala, many other areas of the brain may be involved in face processing. For example, somatosensory cortices may allow the readout of (true or simulated) physiological feedback from the periphery (Adolphs, Damasio, Tranel, Cooper, & Damasio, 2000), and the orbital frontal cortices may play a role in the explicit recognition of facial expressions (Adolphs, 2002a). Given the complexity of facial stimuli, as well as of the cognitive and neural processes involved in their processing, it is no wonder that studies addressing the temporal dynamics of the processes involved in recognizing a face's identity or expression (using mainly noninvasive brain imaging techniques with high temporal resolution, such as EEG and MEG) have so far led to inconsistent results. In fact, differences in both facial identity and expression have been found to lead to measurable changes in correlates of brain function at various latencies after stimulus onset. Study n. 4 therefore investigated at what latency effects of experimental manipulations of facial expression and identity first become observable in the ERP, and in specific frequency bands of the EEG. Moreover, in study n. 4 the activity of the main muscles involved in smiling (zygomaticus) and frowning (corrugator) was recorded. This allowed us to investigate the independent effects of expression and identity on facial contractions. In addition, we intended to establish at what latency the modulation of FM by facial identity – an effect that has been reported several times – first occurs.

4.1. Face perception model by Bruce and Young

An influential paper by Bruce and Young (1986) sketched out – drawing together data from laboratory experiments, everyday experience, and studies with neurological patients – a cognitive, and thus more functional than neural, model of face perception and face recognition. This model (see Figure 1) assumes that faces can convey seven types of information, i.e. pictorial, structural, visually derived semantic, and identity-specific semantic information, as well as information about the name of the person, the facial expression, and, finally, facial speech codes. Emphasis in this model is put on the types of information judged to be relevant for the recognition of familiar faces, and thus on face identity. In contrast, the processing of expression and of speech-related movements is only briefly discussed.

Pictorial information is a general type of code, which can be formed for any picture or visual pattern, and which comprises a record of a particular static visual event. It is therefore likely that pictorial information plays a minor role in everyday life, where faces have to be recognized across, for example, several viewing angles and lighting conditions. Structural codes, on the other hand, are already more abstract, and therefore more likely to mediate the everyday recognition of familiar faces. Moreover, as a familiar face is represented by an interconnected set of descriptions (e.g. of the details as well as the configuration of the face), a familiar face is probably represented not by a single structural code, but by a set of structural codes.

Figure 1: Face processing model by Bruce and Young. From Calder & Young (2005).

After having presented the types of information a face can convey (i.e. the products of facial processing), Bruce and Young (1986) go on to propose a box diagram showing the processing steps (or modules) implicated in the access and generation of the previously presented information codes. These steps are called structural encoding, expression analysis, facial speech analysis, face recognition units, person identity nodes, and name generation. They all feed information extracted from faces to the general cognitive system, and can partly be influenced by the cognitive system in return. The structural encoding module extracts view-centered information from the face and sends it to the expression and facial speech analysis modules (these are linked to the cognitive system). Furthermore, it extracts expression-independent descriptions of the face, which are more abstract and will be used by the face recognition units. There are, in fact, as many face recognition units as there are known persons. The strength of the output of a face recognition unit to the cognitive system will then be proportional to the degree of resemblance between its stored information about a face and the input of the same face provided by the structural encoding step. Face recognition units can convey the feeling of familiarity with a perceived face, even if it is not possible to recall the person's identity or name. Face recognition units are bidirectionally linked to person identity nodes, which are part of associative memory and are, as was the case for the face recognition units, as numerous as the number of persons known. Person identity nodes can provide identity-specific semantic codes to the face recognition units, and convey the feeling of successful person identification. Moreover, person identity nodes are not specific to the face or even to the visual modality, as they respond also to a person's voice, name, or other non-face-specific features. This is evident, for example, in the dissociation between emotion and person recognition seen in prosopagnosic patients (Tranel, Damasio, & Damasio, 1995), as well as in the dissociation between visual and auditory person recognition reported in some brain-damaged patients. A name generation module outputs the person's name, but requires the previous structural encoding, face recognition, and person identity steps. This unilateral informational dependency is postulated on the basis of frequent reports of people being able to recognize or even identify faces without remembering their names, while the opposite – i.e. recalling a name but no other information about the person – has never been reported. Finally, the cognitive system can, via the directed visual processing module, increase attention towards one or the other component of the model.
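As a reading aid only (this is a schematic of the dependencies just described, not a formalization from Bruce and Young, 1986), the model's routes can be sketched in code: expression analysis branches off structural encoding independently of identity, while name generation requires the face recognition and person identity steps to have succeeded first. All stores and placeholder functions are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class FaceProcessingResult:
    expression: Optional[str] = None   # output of the expression analysis route
    familiar: bool = False             # a face recognition unit responded
    identity: Optional[str] = None     # a person identity node was accessed
    name: Optional[str] = None         # name generation succeeded

def analyze_expression(structural_code: int) -> str:
    """Placeholder for the expression analysis module (hypothetical)."""
    return "neutral"

def process_face(face: str,
                 fru_store: Dict[int, bool],
                 pin_store: Dict[int, str],
                 names: Dict[str, str]) -> FaceProcessingResult:
    """Toy sketch of the Bruce & Young (1986) routes; stores are stand-ins."""
    code = hash(face)                                 # stand-in for structural encoding
    result = FaceProcessingResult()
    result.expression = analyze_expression(code)      # independent of identity
    if fru_store.get(code):                           # face recognition unit fires
        result.familiar = True
        result.identity = pin_store.get(code)         # person identity node
        if result.identity is not None:
            result.name = names.get(result.identity)  # name requires identity first
    return result
```

The asymmetry in the last two lines mirrors the observation cited above: a name can fail to come to mind for a recognized and even identified face, but the reverse has never been reported.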


To summarize, Bruce and Young (1986) had, at the time, quite successfully integrated data from different sources into a cognitive model of face perception, putting the emphasis on face recognition. One of the important points in this model, which has been retained in most subsequent models, is that the "analyses of facial identity and facial expression proceed independently" (p. 315). However, although they do review data from studies of patients with specific impairments in face processing, Bruce and Young do not supply much information on where and how in the brain these face-processing steps take place. This type of information became more easily available with the advent of modern non-invasive brain imaging techniques, such as functional magnetic resonance imaging (fMRI), and formed the basis for the face processing model by Haxby and colleagues. In section 4.7 we will discuss findings from EEG studies that, for example, tested whether the structural encoding step has to precede the recognition of both facial identity and expression.

4.2. Face perception model by Haxby and colleagues

The currently most influential model of the cognitive and neural processes underlying face processing has been proposed by Haxby and colleagues (Haxby et al., 2000). The model is heavily inspired by the older and more cognitive model by Bruce and Young (Bruce & Young, 1986), and is based on a thorough review of the results of neurophysiological recordings in humans and non-human primates, as well as on brain-imaging data (e.g. fMRI) acquired in humans.


Haxby and colleagues' model (see Figure 2 and Figure 3) defines a core system for face processing involving three areas of the occipitotemporal visual extrastriate cortex, i.e. the inferior occipital gyrus (also called the occipital face area, or OFA), the superior temporal sulcus (STS), and the lateral fusiform gyrus. Owing to its preferential response to faces (as compared to pictures of objects, for example), the lateral part of the mid-fusiform gyrus has been dubbed the fusiform face area (FFA) by Kanwisher et al. (1997). In a hierarchical fashion, visual information is first processed in the inferior occipital gyrus and then sent, in parallel, to the STS for the extraction of the changeable aspects of the face, such as eye gaze, lip movements, and emotional expression, and to the lateral fusiform gyrus for the extraction of the invariant aspects of the face, e.g. face identity.

In addition, the model comprises an extended system of brain areas that are typically devoted to other functions, but also play a role in face perception. These are the intraparietal sulcus, for shifts of spatially directed attention (for example in response to gaze direction); the auditory cortex, for processing lip movements; areas devoted to emotion, such as the amygdala, the insula, and the limbic system; and the anterior temporal cortex, which may contribute to the retrieval of identity and biographical information.
