
Thesis

Reference

Improving the measurement of emotion recognition ability

SCHLEGEL, Katja

Abstract

The ability to recognize other people's emotions from their face, voice, and body (emotion recognition ability, ERA) is crucial to successful functioning in private and professional life.

However, currently available tests to measure this ability are of limited ecological validity and their psychometric quality is unclear. In this thesis, I contributed to the field of ERA by developing and validating a new test based on short videos with sound representing a large number of emotions (Geneva Emotion Recognition Test, GERT). Results supported the satisfactory psychometric quality, construct validity, and predictive validity of the GERT. Specifically, the GERT was a more consistent predictor of participants' performance in dyadic negotiation than standard measures of emotional and cognitive intelligence. Moreover, I showed that ERA is an essentially unidimensional ability across emotions and modalities, and that it is meaningfully related to a wide range of other constructs relevant to social and emotional effectiveness.

SCHLEGEL, Katja. Improving the measurement of emotion recognition ability. Thèse de doctorat : Univ. Genève, 2013, no. FPSE 537

URN : urn:nbn:ch:unige-285469

DOI : 10.13097/archive-ouverte/unige:28546

Available at:

http://archive-ouverte.unige.ch/unige:28546

Disclaimer: layout of this document may differ from the published version.

Section of Psychology
Under the supervision of Prof. Klaus R. Scherer and Prof. Didier Grandjean

IMPROVING THE MEASUREMENT OF EMOTION RECOGNITION ABILITY

THESIS

presented to the

Faculté de Psychologie et des Sciences de l'Education of the University of Geneva

to obtain the degree of Doctor of Psychology

by

Katja SCHLEGEL

from Leipzig, Germany
Student number: 09-302-845

Thesis No. 537
GENEVA, May 2013

The ability to recognize other people's emotions from their face, voice, and body is crucial to successful functioning in private and professional life. Research on individual differences in emotion recognition ability has had a long tradition in psychology and sparked renewed interest as a basic component of the popular emotional intelligence construct. However, currently available tests to measure this ability are of limited ecological validity, as they mostly use stimuli from a single modality (usually the face) and include only a small number of emotions. In addition, little research has focused on the psychometric properties of such tests, such as internal consistency, dimensional structure, and construct validity. In this thesis, I contributed to the field of emotion recognition ability by a) clarifying the dimensional structure of this ability, b) investigating the role of this ability in the nomological network of constructs related to social and emotional effectiveness, and c) developing and validating a new test based on short videos with sound representing a large number of emotions (Geneva Emotion Recognition Test, GERT1). Results showed that a) accurate recognition of all emotions across different modalities is largely determined by a single underlying ability, b) emotion recognition ability is related to various measures of interpersonal sensitivity, and c) the GERT is a promising new test to measure this ability. Two studies supported the satisfactory psychometric quality, construct validity, and predictive validity of the GERT. Specifically, the GERT was a more consistent predictor of participants' performance in dyadic negotiation than standard measures of emotional and cognitive intelligence.

1 A short demo version of the GERT is available here: https://cms2.unige.ch/cisa/webExperiments/bin-release/WebExperiments.html?sid=4&lang=eng&token=random#

This thesis would not have been possible without the support of many people. First, I want to thank the two directors of my thesis, Prof. Klaus Scherer and Prof. Didier Grandjean, for having given me the opportunity to work on this fascinating topic in the Swiss Center for Affective Sciences. Thank you for the stimulating discussions, for believing in my progress, and for encouraging me to trust my own skills and just go for it. I would also like to thank the two external members of my jury, Prof. Marianne Schmid Mast and Prof. Michael Eid, for their genuine interest in my work, their support during earlier stages of the thesis, and their valuable feedback, which I truly appreciate. A big thank you also goes to Judy Hall, Andreas Frey, Georg Hosoya, Dave Kenny, and Johnny Fontaine for their comments and tips, each of which took me another step towards improving this work. Finally, I am still indebted to Gabriel Nagy, the supervisor of my master thesis in Berlin, for patiently teaching me the basics of Item Response Theory, without which I would not have been able to write this thesis.

I am particularly grateful to all my colleagues and friends in Geneva who accompanied me during my thesis and made my stay at the CISA a great experience. Thank you Konni, Jacobien, Vera, Sona, Ines, Marc, Marcello, Irene, Chiara, Cristina, Leonie, Corrado, Sascha, Heidrun, Wiebke, Julie, Valérie, Eduardo, Matthieu, Sophie, Seba, Géraldine, Alison, Aline, Ben, Tommaso, Nele, Eva, Tobias, Daniela, Olivier, and the whole CISA crowd for all the memorable and fun moments since 2009. I hope that many more of these moments are still to come! Special thanks go to my colleagues from the NEMO (Negotiation and EMOtion) study, Marc, Jacobien, Vera, Leonie, Ben, and Géraldine, without whom the last chapter of this thesis would not have existed. When this mammoth project is finished one day, I will still keep the good and fun memories of you guys!

I also want to thank Marcello for giving me the opportunity to stay at the CISA for one more year, and I hope this will not be the end of our collaboration.

Finally, I want to thank my family in Berlin, Switzerland, Russia, and Ukraine for encouraging me to take on this challenge in Geneva, for believing in me, and for loving and supporting me here and over the distance.


Introduction

The perception of other people's verbal and nonverbal signals and the attributions we make from them are fundamental mechanisms in human interaction. Hearing a person's voice and seeing their appearance, facial expression, and body movements allow us to infer a great deal of information about them: we make judgments about their character, interests, truthfulness, intelligence, and emotional state.

In general, human beings are fairly accurate in their judgments of other people's personality, intelligence, or emotions (for an overview, see Schlegel & Wallbott, 2013). As McArthur and Baron (1983) emphasized, accurate judgments guide socially functional behaviors and promote the attainment of individual goals, thus serving an adaptive function (see Gibson, 1986). A good example is the judgment of emotions, which can benefit both successful social functioning and better adaptation to environmental changes. For example, evolutionary psychology has suggested that the accurate recognition of certain emotions, such as fear, from others' facial or vocal expressions can be important for survival because they signal danger (Cosmides & Tooby, 2000). With respect to social interaction, recognizing emotions makes it possible to anticipate the other person's actions and to adapt one's own, and therefore makes interactions more successful and relaxed (Hampson, Van Anders, & Mullin, 2006).

Although this view generally applies to all healthy human beings, it has also been noted that some people distinguish others' emotions more accurately, which makes them more effective in their social interactions (Bernieri, 2001; O'Sullivan & Ekman, 2008). In this thesis, I study how these individual differences in the ability to recognize emotions from others' nonverbal cues (emotion recognition ability, ERA) can be measured with a standardized test.

In the past, several researchers have developed tests to measure ERA. In these tests, emotional expressions (such as pictures of faces or voice recordings) are typically presented to participants, who must choose from a list of emotions which emotion was expressed in each portrayal. ERA is usually computed as the proportion of correctly identified portrayals. Although these tests have been used successfully in different fields, including social psychology (e.g., predicting social effectiveness), organizational psychology (e.g., recruitment and the prediction of job performance), and clinical psychology (e.g., characterizing functional impairments in various clinical groups), they also have certain limitations. In particular, most of them are limited to a single presentation modality (usually the face) and include only a small number of emotions (mostly negative ones). Because nonverbal cues in real life are often conveyed in a multimodal and dynamic way, several researchers have questioned the ecological validity of the static and unimodal stimuli used in ERA tests (McArthur & Baron, 1983; Isaacowitz & Stanley, 2011). In addition, ERA tests tend to have low reliability, and their dimensional structure has not been studied in sufficient depth (Hall, 2001). Given that the measurement of ERA does not follow a strict psychometric tradition such as that of cognitive intelligence, modern psychometric approaches such as Item Response Theory have not yet been applied to assess and improve the quality of existing tests.

L'objectif principal de la présente thèse est donc de développer et de valider un nouveau test pour mesurer la CRE qui, premièrement, comprend des stimuli dynamiques et multimodales

représentant un grand nombre d'émotions positives et négatives pour augmenter la validité écologique, et, secondement, prend en compte des standards psychométriques modernes, en

(7)

une meilleure compréhension de la CRE en abordant quelques lacunes relevées par la littérature existante. Plus précisément, mon travail consiste à déterminer si la CRE est une capacité

unidimensionnelle ou si la reconnaissance des émotions et les différentes modalités les représentant nécessitent des compétences distinctes. Par ailleurs, je lie empiriquement la CRE à une variété d'autres construits qui mesurent l'efficacité sociale et affective en analysant leur réseau

nomologique basée sur un large échantillon de questionnaires et de tests. Enfin, la validité

prédictive de la CRE par rapport à la performance objective dans une interaction interpersonnelle sera également évaluée. Cette étude est aussi une des premières qui examine la relation entre la CRE et le comportement réel d’une personne. Ces objectifs sont abordés en cinq chapitres empiriques, dont chacun est résumé ci-dessous.

Chapter 3: Constructs of social and emotional effectiveness: Different labels, same content?

For many decades, researchers have tried to identify individuals who are particularly skilled at interacting with other people in private and professional life. A variety of constructs or concepts targeting interpersonal functioning have thus been developed, such as empathy, social skills, interpersonal sensitivity and, more recently, political skill and emotional intelligence. Although a comparison of the respective instruments used to measure these constructs reveals substantial similarities in item content, they have rarely been studied together. It is therefore unclear whether the constructs behind the different labels are really distinct or overlap. In this thesis, I was particularly interested in the nomological network around ERA that has been discussed by several researchers (Hall, 2001; Bernieri, 2001). ERA is implicated in broader constructs such as interpersonal sensitivity, emotional intelligence, social skills, and empathy. Here, my goal was to clarify whether common underlying dimensions exist and how these dimensions relate to the Big Five personality traits. To this end, I administered six tests and questionnaires comprising a total of 32 subscales to a sample of 152 students. The results revealed that the 32 scales were distributed over four dimensions that I labeled Expressivity, Sensitivity, Emotional abilities, and Self-control. Sensitivity was significantly and positively related to ERA, supporting its construct validity. The four dimensions were also significantly related to the Big Five.

This study contributed to connecting separate lines of research on similar concepts, which can guide a researcher's choice of an instrument or construct by clarifying the similarities between them. The study also contributed to the current debate around the different approaches to emotional intelligence (trait versus ability models). Some researchers argue that the trait approach (e.g., Petrides & Furnham, 2000) confounds emotional intelligence too strongly with existing personality constructs (Mayer, Salovey, & Caruso, 2008; Daus & Ashkanasy, 2005; Mayer, Salovey, & Caruso, 2004). Indeed, my results showed that the various scales of a widely used trait emotional intelligence questionnaire loaded strongly on the same dimensions as older questionnaires carrying different labels such as social competence, empathy, etc. It is therefore possible that trait emotional intelligence is not a distinct concept.

Chapter 4: Emotion recognition: Unidimensional ability or a set of modality- and emotion-specific skills?

To date, researchers have largely neglected the theoretical and empirical question of whether ERA can be considered a single unitary ability or a set of distinct skills. In this thesis, I was particularly interested in whether ERA comprises abilities that are tied to a specific emotion or modality, or whether ERA is a unidimensional ability involved in the recognition of all emotions and modalities. In a first study, participants were asked to identify the emotion expressed in each of 120 stimuli in which different actors express ten emotions, presented either as a still picture or as an audio, video, or audio-video recording (Study 1). Given that no explicit theory was available to guide the analysis and comparison of different dimensional models, I chose a combination of confirmatory and exploratory factor analyses to identify the dimensional structure of individuals' responses. The results showed that ERA can be conceptualized as a single ability involved in the accurate recognition of expressions of all emotions and modalities. In addition, within this broad general dimension, I found more specific subfacets involved in the recognition of very similar emotions (e.g., irritation and anger). I largely confirmed this structure in a second sample (Study 2) with participants of different ages and occupations, using a separate set of audio-video stimuli containing even more emotions. As this set of stimuli was also used for the development of the new ERA test (see next chapter), my results imply that the Rasch model from Item Response Theory would be appropriate for the later calibration procedures. This research contributed to the field of emotion recognition by providing the first systematic analysis of the dimensional structure of individual differences across emotions and modalities. Given that the two independent studies with different stimuli and participants conducted within this thesis led to the same result, ERA can be assumed to be essentially unidimensional.
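Since this summary rests on a dimensional-structure argument, a minimal numerical illustration may be useful: one quick (and much cruder) way to screen accuracy subscores for essential unidimensionality is to check whether the first eigenvalue of their correlation matrix clearly dominates the second. The sketch below uses simulated data and hypothetical variable names; it is not the confirmatory and exploratory factor-analysis pipeline actually used in the thesis.

    import numpy as np

    # Hypothetical accuracy subscores: rows = participants, columns = emotion/modality
    # combinations (e.g., anger-face, anger-voice, fear-audio-video, ...).
    rng = np.random.default_rng(0)
    general = rng.normal(size=(200, 1))                  # shared "general ability" component
    scores = 0.7 * general + 0.5 * rng.normal(size=(200, 12))

    # Eigenvalues of the correlation matrix of the subscores, largest first.
    eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]

    # A first eigenvalue that clearly dominates the second (and explains most of the
    # variance) is a quick indication of an essentially unidimensional structure.
    print("first/second eigenvalue ratio:", eigvals[0] / eigvals[1])
    print("variance share of first component:", eigvals[0] / eigvals.sum())

In the thesis itself, the structure was examined with factor-analytic models suited to the data; the eigenvalue ratio above is only a screening heuristic.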

Chapter 5: Introducing the Geneva Emotion Recognition Test: An example of Rasch-based test development.

Existing tests measuring the ability to recognize others' emotional expressions focus mainly on a single modality (usually the face) and include only a small number of emotions. Moreover, their reliability is often unsatisfactory and their factor structure is rarely documented. The goal of this chapter was to describe the development of a new ERA test (the Geneva Emotion Recognition Test, GERT) that a) contains dynamic and multimodal emotional expressions (short videos with sound), b) covers a large number of emotions, and c) is based on modern psychometric principles (Item Response Theory). In this study, we asked 295 German-speaking participants to watch 108 emotional expressions produced by actors and to choose, for each expression, which of 14 possible emotions was being expressed by the actor. We then applied the Rasch model to select 83 expressions as items for the GERT. The results show that the Rasch model fits the GERT and that the test has acceptable measurement precision. In line with previous studies, we found that ERA decreased with increasing participant age and that women outperformed men. These results provide first evidence for the construct validity of the GERT. In conclusion, the GERT is a promising instrument for measuring ERA in a more ecologically valid and comprehensive way than previous tests.
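For reference, the dichotomous Rasch model mentioned above specifies the probability that person p answers item i correctly as a function of the person's ability θ_p and the item's difficulty β_i; this is the textbook formulation rather than a formula reproduced from the thesis:

    P(X_{pi} = 1 \mid \theta_p, \beta_i) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)}

Because person and item parameters enter only through their difference, all items are assumed to discriminate equally, which is what makes item selection and the comparison of measurement precision across samples straightforward.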

Chapter 6: Psychometric properties of the Geneva Emotion Recognition Test: Replication and construct validation

In this study, we a) replicated the psychometric properties of the GERT under the Rasch model in a sample of 131 French-speaking students, and b) examined its construct validity by investigating correlations with other ERA tests, emotional intelligence, cognitive intelligence, and the Big Five personality dimensions. The Rasch model fit our data well, and measurement precision and test difficulty were comparable to the previous results. The study also supported the construct validity of the GERT. In line with our predictions, we found positive correlations with the other ERA tests, emotional intelligence and, to a lesser extent, cognitive intelligence, as well as a positive association with neuroticism, suggesting that more emotionally stable individuals are less sensitive to others' emotions. However, comparing the mean recognition accuracies for the 14 emotions between the sample of this study and the study described in chapter 5 revealed significant differences for some of the emotions. For example, fear and anger were better recognized in the French-speaking sample, whereas sadness and despair were better recognized in the German-speaking sample. Disparities in the meaning of the respective emotion terms in the two languages are identified as a potential source of these differences in recognition accuracy.

Chapter 7: Sense or sensibility: Which is more important for successful negotiation?

In previous studies, emotional competencies such as ERA and emotional intelligence have shown positive correlations with indicators of social effectiveness, such as supervisor ratings. However, few studies have used objective measures of effectiveness obtained in a standardized setting. We therefore investigated the role of emotional competencies in an employee-recruiter negotiation task in 65 same-sex student dyads. Participants completed measures of ERA, emotional intelligence, and cognitive ability and were asked to rate the atmosphere during the negotiation as well as their partner's cooperativeness. Overall, participants with a high GERT score rated their own behavior as more cooperative and were perceived as more cooperative and likeable by their partner. In particular in male dyads, high recruiter scores on the GERT and on emotional understanding (a component of emotional intelligence) were associated with higher joint gains, whereas cognitive ability was unrelated to gains. For emotional understanding, this relationship was mediated by the use of more integrative negotiation strategies, which we coded from the audio recordings of the negotiations.

For employees, in contrast, these scores did not directly affect the gains. In sum, this study supported the importance of affective skills, measured with both objective and subjective instruments, for effective social interaction.
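The mediation result summarized above (emotional understanding leading to more integrative strategies, which in turn relate to higher gains) can be illustrated with a minimal product-of-coefficients sketch; the variable names and simulated data below are hypothetical and do not reproduce the analyses of chapter 7.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical dyad-level data: predictor (emotional understanding), mediator
    # (integrative strategy use), outcome (joint gains).
    rng = np.random.default_rng(1)
    understanding = rng.normal(size=100)
    strategies = 0.5 * understanding + rng.normal(size=100)
    gains = 0.4 * strategies + 0.1 * understanding + rng.normal(size=100)

    # Path a: predictor -> mediator.
    a = sm.OLS(strategies, sm.add_constant(understanding)).fit().params[1]

    # Path b: mediator -> outcome, controlling for the predictor.
    X = sm.add_constant(np.column_stack([strategies, understanding]))
    b = sm.OLS(gains, X).fit().params[1]

    # Product-of-coefficients estimate of the indirect effect; in practice one
    # would bootstrap a confidence interval around a * b.
    print("indirect effect (a * b):", a * b)

In practice a bootstrapped confidence interval (or a dyadic model accounting for the paired structure of the data) would be used instead of this bare point estimate.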

One limitation of this research is that I did not compare the predictive validity of the GERT with that of other emotion recognition tests. As described above, a main advantage of the GERT is expected to be its higher ecological validity compared to other tests, because it contains multimodal stimuli and a wider range of emotions. For these reasons, I expect the GERT to better predict performance in a face-to-face interaction than other tests that focus only on separate modalities such as the face or the voice. This hypothesis, however, was not explicitly tested in my thesis. Future research is needed to evaluate whether the GERT indeed predicts social and professional functioning better than previous tests.

A second limitation of this research is that only verbal negotiation strategies were examined as potential mediating variables between ERA and performance. Future research should therefore examine other verbal and nonverbal behaviors, as well as the attitudes, goals, or values that characterize individuals with high ERA in different situations, and how these variables relate to individuals' social and professional situation. The long-term goal should be the development of a theoretical model that can explain when and through which mechanisms ERA brings advantages (or disadvantages).

Discussion and conclusion

The GERT contributes to the field of emotion recognition in several ways. It is the first ERA test that exclusively uses multimodal stimuli, reflecting the way emotional expressions occur in real life. In addition, the GERT contains a wide range of emotional expressions thanks to the large number of emotions and actors, which contributes to the ecological validity of the instrument. Specifically, the GERT distinguishes six positive emotions and thus covers a previously neglected facet of ERA.

Because the GERT assesses emotion recognition ability in a more general way than previous tests, it could be particularly well suited as a measure of the emotion perception component of the emotional intelligence construct. Thus, when the goal of a study is to predict real-life social and professional outcomes, the GERT could be a good choice. Likewise, the test could be useful in personnel recruitment or assessment centers.

Beyond measuring interindividual differences, the GERT can also serve as a stimulus collection for assessing emotion recognition at the level of population groups. In particular, future research on emotion recognition in aging and clinical disorders can be extended to multimodal stimuli and more emotions by using the GERT. Moreover, the finer-grained measurement provided by the GERT may also be useful for studying whether and how ERA deficits impair people's social functioning in real life (Phillips & Slessor, 2011). Similarly, the GERT can serve as a more ecologically valid measure of the success of rehabilitation programs for clinical groups than previous tests (e.g., for patients with schizophrenia, see Wexler & Bell, 2005).

Finally, the GERT can be used to further study the cognitive processes and brain mechanisms underlying emotion recognition. There are, however, several limitations to the GERT and the studies presented here. First, I used the standard emotion recognition paradigm in which participants are asked to choose an emotion from a list. Given the large number of emotions in the GERT, some of which differ only subtly, this response format requires a high level of knowledge of the emotion terms. While this may be a desirable characteristic in the context of emotional intelligence (touching on the emotional understanding component), it can be problematic in the face of cultural and linguistic differences in the meaning of the proposed emotion terms. My results showed that the meaning structure of some emotion terms was not equivalent between German and French. It remains to be studied to what extent these differences affect the validity of the GERT.

A second limitation of the GERT is that the test difficulty was rather low in the two samples studied. While the GERT is more difficult than some existing tests, only about 20% of the items have a difficulty that is optimal for discriminating among individuals with above-average ability.

To conclude, ERA is a popular topic in various areas of psychology and social neuroscience. Many researchers in organizational, clinical, and social psychology are interested in using ERA tests to assess emotional or interpersonal functioning based on objective performance rather than self-report. By contrast, only a few researchers have focused on studying and improving the psychometric quality of these tests. In this thesis, I adopted a modern psychometric approach to the development of the GERT. This approach also encompasses the previously neglected evaluation of the dimensional structure of ERA and of measures of socio-emotional effectiveness. The results showed that the GERT is a promising test that can be applied in all the fields mentioned above whenever a researcher wants to measure overall ERA in an ecologically valid way. It can also be used to study the determinants and consequences of ERA. I hope that the successful application of Item Response Theory in this thesis will encourage other researchers to adopt this psychometric approach when developing future tests and improving existing tests in the domain of emotional competencies.

1 INTRODUCTION AND OVERVIEW
2 THEORETICAL BACKGROUND AND REVIEW OF PAST RESEARCH
2.1 What is recognized from emotional expressions?
2.2 Processes and mechanisms underlying emotion recognition
2.3 Group differences in emotion recognition
2.3.1 Emotion recognition and culture
2.3.2 Emotion recognition and gender
2.3.3 Emotion recognition and age
2.3.4 Emotion recognition in clinical and antisocial populations
2.4 Individual differences in emotion recognition ability
2.4.1 Emotion recognition and interpersonal sensitivity
2.4.2 Emotion recognition ability and emotional intelligence
2.4.3 Emotion recognition ability and self-reported socio-emotional effectiveness
2.5 Emotion recognition ability and psychosocial functioning
2.5.1 The role of interpersonal sensitivity in social relationships
2.5.2 The role of interpersonal sensitivity in workplace performance
2.6 Measurement of emotion recognition ability
2.6.1 Reliability and validity of standardized emotion recognition tests
2.6.2 Problems of existing emotion recognition tests and requirements for a new test
2.6.3 Item Response Theory and its advantages for test development
2.7 Summary and objectives of the thesis
3 CONSTRUCTS OF SOCIAL AND EMOTIONAL EFFECTIVENESS: DIFFERENT LABELS, SAME CONTENT?
3.1 Abstract
3.2 Article reprint
3.3 Supplementary Material
3.3.1 Description of instruments used in the study
3.3.2 References
4 EMOTION RECOGNITION: UNIDIMENSIONAL ABILITY OR A SET OF MODALITY- AND EMOTION-SPECIFIC SKILLS?
4.1 Abstract
4.3.1 Using the unbiased hit rate when calculating correlations between emotion categories
4.3.2 Development of the Geneva Emotion Recognition Test
4.3.3 References
5 INTRODUCING THE GENEVA EMOTION RECOGNITION TEST: AN EXAMPLE OF RASCH-BASED TEST DEVELOPMENT
5.1 Abstract
5.2 Introduction
5.3 Method
5.3.1 Participants, stimuli, and procedure
5.3.2 Data analysis
5.4 Results
5.5 Discussion
5.6 References
5.7 Supplementary Material
6 PSYCHOMETRIC PROPERTIES OF THE GENEVA EMOTION RECOGNITION TEST: REPLICATION AND CONSTRUCT VALIDATION
6.1 Abstract
6.2 Introduction
6.3 Method
6.3.1 Participants
6.3.2 Procedure and measures
6.3.3 Data analysis
6.4 Results
6.4.1 Evaluation of measurement properties
6.4.2 Construct validity
6.5 Discussion
6.6 References
7 SENSE OR SENSIBILITY: WHICH IS MORE IMPORTANT FOR SUCCESSFUL NEGOTIATION?
7.1 Abstract
7.2 General introduction
7.3 … negotiation
7.3.1 Hypotheses
7.3.2 Method
7.3.3 Results
7.3.4 Discussion
7.4 The role of negotiation strategies in negotiation outcomes and their relationship with emotion recognition ability and emotional understanding
7.4.1 Hypotheses
7.4.2 Method
7.4.3 Results
7.4.4 Discussion
7.5 General discussion
7.6 References
8 GENERAL DISCUSSION AND CONCLUSION
8.1 The dimensional structure of social and emotional effectiveness constructs
8.2 The dimensional structure of emotion recognition ability
8.3 Development of the Geneva Emotion Recognition Test (GERT)
8.4 The role of emotion recognition ability in dyadic interaction
8.5 Outlook: What determines individual differences in emotion recognition ability?
8.6 Conclusion
9 REFERENCES


1 Introduction and overview

The perception of other people's verbal and nonverbal signals or cues and the attributions that we make based on these perceptions are fundamental mechanisms in human interaction. From hearing a person's voice and seeing his or her appearance, facial expression, and body movements, we infer a lot of information about this person: We make judgments about his or her character, interests, truthfulness, intelligence, status, and emotional state. This process can be visualized with the help of the Brunswikian Lens Model of interpersonal perception (Brunswik, 1956; Scherer, 1978). As can be seen from Figure 1, according to the Lens Model, characteristics or states of a "sender" are encoded in and expressed via certain cues which are perceived and decoded by a "receiver" who then attributes a characteristic or state to the sender.

Figure 1. Brunswikian Lens Model (adapted from Schlegel & Wallbott, 2013).

Generally, people are quite accurate at making attributions or judgments about other individuals' personality, intelligence, or emotions (for an overview, see Schlegel & Wallbott, 2013). As McArthur and Baron (1983) have emphasized, judgmental accuracy guides socially functional behaviors and promotes individual goal attainment, thus serving an adaptive function (see also Gibson, 1986). A good example is the judgment of emotions, because it can benefit both successful social functioning and the adaptation to environmental changes. For example, evolutionary psychology has suggested that the accurate recognition of certain emotions such as fear from others' facial or vocal expressions can be important to survival as they signal danger (Cosmides & Tooby, 2000). With respect to social interaction, the recognition of emotions allows one to anticipate the other person's actions, to adapt one's own actions accordingly and, consequently, to smooth interactions (Hampson, Van Anders, & Mullin, 2006).

Although this view generally applies to all healthy humans, it has also been commonly assumed that some individuals are more accurate and skilled in judging others' emotions, making them more successful in their social interactions (Bernieri, 2001; O'Sullivan & Ekman, 2008). In this thesis, I investigate how such individual differences in the ability to judge or recognize emotions from others' nonverbal cues expressed by the face, voice, or body can be measured with a standardized test. In the following, I will refer to this ability as emotion recognition ability, although other terms like nonverbal receiving ability (Buck, 1976) or nonverbal accuracy (Nowicki & Duke, 1994) have also been used to label this construct.

In the past, several researchers have developed tests to measure emotion recognition ability. In such tests, participants are typically presented with emotional expressions (such as pictures of faces or voice recordings) and are asked to choose, from a list of emotions, which emotion has been expressed in each portrayal. Emotion recognition ability is usually calculated as the proportion of correctly identified portrayals. Although these tests have been successfully employed in different fields including social psychology (e.g., prediction of social effectiveness), organizational psychology (e.g., recruiting and prediction of workplace performance), or clinical psychology (e.g., characterizing functional impairments in various clinical groups), they also have certain limitations. In particular, they usually focus on a single modality (often the face) and include only a small number of mostly negative emotions. As nonverbal cues in real life are generally conveyed in a dynamic, changing, and multimodal way, several researchers have questioned the ecological validity of the static and unimodal stimuli typically used in research on emotion recognition ability (McArthur & Baron, 1983; Isaacowitz & Stanley, 2011). Furthermore, emotion recognition tests tend to have low reliability and their dimensional structure has generally not been investigated (Hall, 2001). Given that the measurement of emotion recognition ability does not follow a strict psychometric tradition like, for example, cognitive intelligence, modern psychometric approaches like Item Response Theory have not yet been applied to assess and improve the quality of existing tests.
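To make the scoring scheme concrete, the short sketch below computes the conventional proportion-correct score from a matrix of forced-choice responses; the emotion labels and toy data are hypothetical and not taken from any specific test discussed here.

    import numpy as np

    # Hypothetical toy data: rows = participants, columns = test items (portrayals).
    # responses[p, i] is the emotion label chosen by participant p for item i;
    # key[i] is the emotion actually expressed in item i.
    responses = np.array([
        ["anger",   "fear", "joy", "sadness"],
        ["anger",   "joy",  "joy", "fear"],
        ["disgust", "fear", "joy", "sadness"],
    ])
    key = np.array(["anger", "fear", "joy", "sadness"])

    # Proportion-correct score per participant: share of items whose chosen label
    # matches the key (the conventional emotion recognition ability score).
    scores = (responses == key).mean(axis=1)
    print(scores)  # [1.0, 0.5, 0.75]

Per-emotion accuracies, or bias-corrected indices such as the unbiased hit rate mentioned in section 4.3.1, can be derived from the same response matrix.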

The main goal of this thesis is therefore to develop and validate a new test to measure emotion recognition ability that a) includes dynamic and multimodal stimuli representing a larger number of positive and negative emotions than previous tests to increase ecological validity, and b) meets modern psychometrical standards within the framework of Item Response Theory. The second, related goal of this thesis is to contribute to our understanding of the emotion recognition ability construct by addressing some of the gaps in the literature. More specifically, I aim to investigate whether emotion recognition ability is a one-dimensional ability or whether the recognition of different emotions and different modalities requires distinct skills. Furthermore, I empirically link emotion recognition ability to a variety of other constructs that measure social and emotional effectiveness by analyzing their underlying nomological network based on a wide range of questionnaires and tests. Finally, I evaluate the predictive validity of emotion recognition ability with respect to objective performance in an interpersonal interaction. In this context, I provide one of the first studies which examine the relationship between this ability and actual behavior as a potential mechanism that can explain the emotion recognition ability – performance link.

This thesis is structured as follows: In chapter 2, I provide an overview of past research on emotion recognition and the measurement of emotion recognition ability. I start by discussing how different theoretical approaches to emotion have shaped research on emotion recognition. I then review the cognitive and neural mechanisms underlying emotion recognition. Furthermore, I provide a summary of the most important findings on group differences in emotion recognition regarding differences between cultures, men and women, young and old individuals, and healthy and clinical participants. Subsequently, I introduce the individual differences perspective on emotion recognition ability, which is based on the idea that some individuals recognize emotions better than others. I explain how emotion recognition ability is linked to other constructs of emotional and social effectiveness such as empathy, interpersonal sensitivity, emotional competence, social skills, and emotional intelligence. In addition, I review research on the relationship between individual differences in emotion recognition ability and successful social and emotional functioning in professional and private life. Finally, I describe existing tests to measure emotion recognition ability, identify their limitations, and derive requirements for the development of a new test.

The empirical part comprises five chapters. In chapter 3, I determine the role of emotion recognition ability within the nomological net of other emotional and social effectiveness constructs. Chapter 4 investigates the dimensional structure of emotion recognition ability itself and aims to evaluate whether it is a one-dimensional ability or whether distinct skills are involved in the recognition of different emotions and different modalities. This chapter also sets the ground for the development of the new Geneva Emotion Recognition Test (GERT). In chapter 5, I describe the development and psychometric quality of the GERT using Item Response Theory based on a sample of German-speaking participants. Chapter 6 aims to replicate the psychometric properties of the GERT in a sample of French-speaking participants. In addition, in this chapter I examine the construct validity of the GERT with respect to other emotion recognition tests, personality, and emotional and cognitive intelligence. Finally, the goal of chapter 7 is to investigate the predictive validity of the GERT with respect to successful social functioning in a dyadic social interaction.

More specifically, I assess to what extent the GERT predicts economic gains and relational outcomes in a negotiation task in comparison to emotional and cognitive intelligence. I also analyze the relationship of the GERT with the strategic behavior of the negotiators to help understand the mechanism underlying the emotion recognition ability – performance link.

The thesis concludes with a discussion in chapter 8 that covers the contributions and limitations of the conducted studies with respect to the field of emotion recognition ability.

Alternative possibilities and future directions to enhance the measurement of this ability are discussed. Furthermore, potential applications of the GERT to study the determinants and mechanisms of individual differences in emotion recognition ability are outlined.

2 Theoretical background and review of past research

2.1 What is recognized from emotional expressions?

Emotional expressions conveyed by the face, voice, or body provide information that can facilitate social interaction and make it more predictable and manageable. Thus, accurately recognizing the type and intensity of others' emotional states from their nonverbal expressions is a precondition for understanding and adequately responding to their reactions, thoughts, and intentions. This importance for human communication has made emotion expression and emotion recognition a widely researched field in psychology over the past decades.

In the typical paradigm to investigate emotion recognition, participants are presented with pictures or recordings of emotional expressions and are asked to choose, from a list of emotion terms such as anger, sadness, happiness, surprise, fear, and disgust, the one that best describes each expression. This approach has its roots in basic emotion theory, which emerged from the pioneering research conducted by Tomkins (1962), Ekman and colleagues (e.g., Ekman, Friesen, & Ellsworth, 1972), and Izard (1971). Influenced by Darwin's notion of the universality and innateness of emotion displays, these scholars proposed a small, fixed set of discrete "basic" emotions, which consist of a pre-programmed neuromotor network (an "affect program") and have a biological origin. Emotions other than the basic ones (i.e., happiness, anger, disgust, fear, sadness, and surprise) are considered blends of the basic emotions. According to this view, emotional expressions signal specific emotions which perceivers decode in a categorical and immediate fashion (Ekman, 1992). The high agreement between perceivers in judging basic emotion expressions that has been found in many studies is considered support for this view (e.g., Ekman, 1992). Following basic emotion theory, emotion recognition thus involves linking features of an emotion expression to knowledge about the meaning of the different emotion categories and labels.

However, this perspective on emotion recognition has not been left unchallenged. During the past decades, other emotion theories gained importance which highlight different “objects” of recognition from emotional expressions. For example, dimensional theories postulate that emotions are located in a space defined by several dimensions such as valence and arousal (e.g., Russell, 2003). According to this perspective, Carroll and Russell (1996) claimed that what is recognized from expressions is valence and arousal, on the basis of which a discrete emotion label is inferred.

One reason for this claim is Russell and Bullock’s (1986) finding that the boundaries between discrete categories are fuzzy at the level of recognition and that agreement between perceivers drops considerably when they are asked to freely label emotion expressions instead of picking a label from a list of basic emotions (Russell, 1994). Further supporting the dimensional perspective on emotion recognition, Russell, Bachorowski, and Fernandez-Dols (2003) noted that perceivers agreed substantially in their judgments of valence and arousal and that the same dimensions emerged when analyzing the confusions perceivers made when judging discrete emotions.

The third dominant approach to emotion encompasses componential theories, which postulate that emotions are characterized by changes in different psychological subsystems such as cognitive appraisals, feelings, bodily sensations, and action tendencies (for an overview, see Scherer, Schorr, & Johnstone, 2001). For example, the component process model (Scherer, 1984) posits that the differentiation of emotions is determined by the results of event evaluation processes based on a set of appraisal criteria. Support for the appraisal perspective in emotion recognition was found in different sensory modalities. For example, Scherer (1999) found that emotions are recognized faster from verbal descriptions of emotional events when the description follows a theoretically predicted sequence of appraisal segments. Furthermore, Banse and Scherer (1996) found that vocal patterns of emotion expressions corresponded to a large extent to theoretically derived appraisal constellations and substantially predicted participants' recognition accuracy.

Other accounts of what is being recognized from emotional expressions can be derived from Fridlund's (1994) theory of social messages and Frijda's work on action tendencies (e.g., Frijda, Kuipers, & ter Schure, 1989). According to Fridlund (1994), emotional expressions have the function of signaling behavioral dispositions or intentions to other people, and what is recognized by perceivers is this social message. In contrast, Frijda (see Frijda et al., 1989) proposed that emotional expressions primarily signal action tendencies. Scherer and Grandjean (2008) compared participants' accuracy of emotion judgments for facial expressions made on the basis of a) discrete emotion categories, b) social messages, c) appraisal results, and d) action tendencies. They found that emotion categories and appraisals were judged significantly more accurately from faces than social messages and action tendencies. However, recognition accuracy for messages and action tendencies was still substantially above chance level. The authors therefore concluded that facial expressions allow for the recognition of different emotion components individually as well as of the discrete emotion label that integrates these components.
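"Above chance level" can be made concrete with a simple one-sided binomial test: with k response alternatives, chance accuracy is 1/k, and the observed number of correct judgments is tested against that proportion. The numbers below are made up for illustration and are not the data of Scherer and Grandjean (2008).

    from scipy.stats import binomtest

    n_items = 30      # hypothetical number of expressions judged
    n_correct = 14    # hypothetical number of correct judgments
    n_options = 6     # response alternatives, so chance level is 1/6

    result = binomtest(n_correct, n_items, p=1 / n_options, alternative="greater")
    print(f"observed accuracy: {n_correct / n_items:.2f}, "
          f"chance: {1 / n_options:.2f}, p = {result.pvalue:.4f}")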

Taken together, these studies suggest that what is actually recognized by people from emotional expressions is not restricted to a discrete emotion label as suggested by basic emotion theory. Rather, emotional expressions encompass a wide range of social signals and convey appraisals of the emotion-related event, action tendencies, social messages, or broad dimensions of valence or arousal. Which aspect or object of recognition is investigated in a particular study depends on the underlying emotion theory. To date, most studies on emotion recognition rely on basic emotion theory. This approach has also largely influenced research on the mechanisms and processes underlying emotion recognition, which are discussed in the next section.

2.2 Processes and mechanisms underlying emotion recognition

Emotion recognition involves several sets of processes that are likely to be subserved by different brain structures (Adolphs, 2002). The best studied modality to date is the face. With respect to facial emotion recognition, a first set of processes starting early after stimulus onset and relying on early sensory cortices achieves the perception of visual features and their configuration ("structural encoding"; Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000). While configural or holistic processing seems to be beneficial for facial identity recognition, the role of configural versus feature-based or local processing in emotion recognition is less clear. Calder, Young, Keane, and Dean (2000) found evidence that humans tend to process emotional expressions in a configural fashion. However, for specific emotions certain feature-based cues might be more diagnostic (e.g., a furrowed brow for anger). In a study by Sullivan, Ruffman, and Hutton (2007), local processing (fixating) of the mouth region was associated with higher accuracy for happy and disgusted facial expressions. Martin, Gillian, Allen, Phillips, and Darling (2012) even found a modest advantage in speed and accuracy for emotion recognition when participants were motivated to process locally instead of globally. In contrast, a study by Schmid, Schmid Mast, Bombari, Mast, and Lobmaier (2011) showed that global processing was positively associated with emotion recognition accuracy, but only when participants were in a sad mood. Taken together, it is likely that both types of processing might be beneficial for accurate emotion recognition, depending on the type of facial expression. However, the detailed mechanisms linking different types of processing to recognition accuracy remain to be clarified.

Following the first early and subcortical processing step, a second set of processes has been proposed, which links the perceptual properties of the expression to all pertinent knowledge components. This knowledge is accessed by reactivating the representations that were originally associated with the emotional expression when they were acquired (Adolphs, 2002). These representations are likely to be spatially separated in the brain. This step in the recognition process thus presumably involves a dynamic interplay of feedforward, feedback, and horizontal information flow between these different brain regions (Adolphs, 2002).

A third set of processes involved in emotion recognition is based on the assumption that the conceptual knowledge necessary to recognize an emotion is triggered by simulating the state of the target person who shows an emotional expression (Adolphs, 2002). This process is supposedly faster than the route involving the matching of sensory input with knowledge described above (Stel & van Knippenberg, 2008). The idea that perceiving and thinking about emotion involves perceptual, somatovisceral, and motoric reexperiencing, simulation, or "embodiment" has received considerable attention in psychological and neuroscientific research (Niedenthal, 2007; Goldman & Sripada, 2005; Barsalou, 2008; Heberlein & Atkinson, 2009). The basic idea of embodied cognition, simulation, or perception-action matching theories is that the brain captures modality-specific states during perception, action, and interoception and re-instantiates parts of these states to access conceptual knowledge when needed (Niedenthal, 2007; Zaki & Ochsner, 2011). One model specifically focusing on emotion recognition, the reverse simulation model by Goldman and Sripada (2005), posits that emotions are recognized by subtly mimicking others' facial expressions, which allows feeling the corresponding emotion through facial feedback. These authors proposed a three-step model of emotion recognition with, first, subtle mimicry of a facial expression, second, the generation of this emotional state in the observer, and third, classification of this experience as someone else's expression.

To date, there is substantial evidence for the embodiment hypothesis in emotion recognition. For example, Wicker et al. (2003) showed that overlapping neural circuits were engaged when participants experienced an emotion themselves and when they recognized it from another person's facial expression. This idea is further supported by research suggesting that humans possess a mirror neuron system encompassing viscero-motor regions such as the anterior insula, which allows direct understanding of others' emotions without any conceptual reasoning (Gallese, Keysers, & Rizzolatti, 2004). More recent evidence for overlapping engagement during action observation and imitation has also been found for regions such as the anterior cingulate cortex, hippocampus, and posterior medial frontal cortex, which Zaki and Ochsner (2011) labeled experience sharing systems.

In addition, behavioral studies demonstrated that people spontaneously mimic others' facial expressions and that accurate recognition is impaired or slowed down when facial mimicry is blocked, e.g., by asking participants to hold a pen with their mouth (Oberman, Winkielman, & Ramachandran, 2007; Stel & van Knippenberg, 2008). However, it remains somewhat unclear whether facial mimicry is necessary for emotion recognition or whether it is rather a by-product. For example, Blairy, Herrera, and Hess (1999) and Hess and Blairy (2001) found that participants spontaneously mimicked facial expressions but that mimicry or the instruction to perform an expression incompatible with the presented cues did not affect recognition accuracy. Furthermore, Bogart and Matsumoto (2010) found that patients suffering from Moebius syndrome, which is characterized by a congenital bilateral facial paralysis and therefore disables mimicry of others' facial expressions, showed no impairment in their accuracy of emotion recognition. These results suggest that mimicry is not a necessary condition for emotion recognition.

This does not rule out the possibility that the experience of the other's emotion in oneself is generated without peripheral facial feedback, as suggested by the emotional contagion hypothesis (Hatfield, Cacioppo, & Rapson, 1993). Alternatively, participants in whom mimicry is inhibited might rely on slower, rule-based computational processing involving conceptual knowledge as described above (McIntosh, Reichmann-Decker, Winkielman, & Wilbarger, 2006). Zaki and Ochsner (2011) pointed out that although simulation through experience sharing systems might be the primary mechanism underlying emotion recognition, higher-level inferences like intentions or beliefs of the target person cannot be unambiguously translated into motor states and therefore require additional knowledge and conceptual reasoning. This type of processing involves a brain network including the medial prefrontal cortex, temporoparietal junction, posterior cingulate cortex, and temporal poles, referred to as mental state attribution systems (Zaki & Ochsner, 2011). It remains to be studied whether the engagement of these systems depends on what participants are asked to recognize – whether it is discrete emotions, appraisals, messages, or action tendencies. Zaki, Hennigan, Weber, and Ochsner (2010) found that when participants relied more on contextual information (a written sentence) about a target than on the target's nonverbal expression, the engagement of the mental state attribution systems was increased, whereas the engagement of the experience sharing systems was increased when participants relied more on the nonverbal expression.

As the majority of studies on the processes underlying emotion recognition were conducted with still pictures of facial expressions, it is less well understood whether similar processes are involved in the recognition of emotions from other modalities. With respect to still pictures of faces, some researchers have suggested that the brain mechanisms engaged in processing them might not be "authentic" (Johnston, Mayes, Hughes, & Young, 2013). Processing of dynamic faces might be different from that of static faces because they provide additional information that allows building up a three-dimensional representation of the face and that enhances the perception of change (Chiller-Glaus, Schwaninger, Hofer, Kleiner, & Knappmeyer, 2011). In a recent study, Johnston et al. (2013) found that similar regions of the occipitotemporal, parietal, and frontal cortex were activated when discriminating between emotional expressions presented as static or dynamic stimuli. However, the frontal network associated with simulation, planning, and inhibition of motor functions was more active when judging static facial expressions. In particular, parts of the inferior frontal gyrus seemed to drive motor simulation processes in static images, which the authors interpreted as a "reconstruction" process that facilitates the disambiguation of face configurations through motor modeling.

The inferior frontal gyrus, especially in the right hemisphere, also seems to play a major role in attributing a specific emotion to prosody (emotional speech; Grandjean & Fruehholz, 2013). This process relies on voluntary attention to the auditory stimulus and is preceded by, first, an analysis of the acoustic features in primary auditory cortex not requiring explicit attention and, second, an increasingly complex integration of these features in the superior temporal sulcus and superior temporal gyrus into a coherent auditory percept. In a parallel process, the relevance of the auditory stimulus is processed by the amygdala and sent to the auditory cortex, superior temporal sulcus, and superior temporal gyrus, facilitating a more efficient processing of relevant stimuli.

With respect to the processing of emotions conveyed by body postures, De Gelder, Snyder, Greve, Gerard, and Hadjikhani (2004) proposed a model with two separate but connected circuits. The first circuit involves a largely automatic subcortical processing route to the amygdala and the striatum which prepares behavioral reactions to threatening stimuli. The second circuit is a primarily cortical system involving the amygdala, fusiform, and lateral occipitoparietal regions. This circuit enables the detailed perceptual processing of the target’s body expressions and connects these features to knowledge. However, the evidence for amygdala involvement in whole-body emotional expression recognition to date is mixed (Heberlein & Atkinson, 2009).

Taken together, neuroscientific studies suggest that similar processes and shared substrates are involved in the recognition of emotions in different modalities and confirm an important role of simulation processes in various regions (Heberlein & Atkinson, 2009). However, one limitation of past research in the cognitive and neuroscience domains is that, while focusing on the processes people use when attempting to understand each other, it has usually neglected how successful these attempts are (Zaki & Ochsner, 2011). In particular, very little is known about how individual differences, such as in the ability to recognize emotions (see chapter 2.4), are reflected in emotion recognition processes (for an exception, see Kreifelts, Ethofer, Huberle, Grodd, & Wildgruber, 2009). Furthermore, neuroscientists have only recently begun to investigate the recognition of emotions from multimodal stimuli that combine different sensory modalities such as the face, voice, and body (Klasen, Chen, & Mathiak, 2012). Other limitations of past research include a strong focus on facial expressions presented as still pictures and the preponderance of basic emotion theory in the research paradigms used.

2.3 Group differences in emotion recognition

Since the pioneering research on emotions by Ekman and colleagues, the dominant approach in emotion research has involved comparing emotion recognition accuracy between groups of people, such as members of different cultures, men and women, or young and old participants. This “group differences” approach continues to be widely employed in the field of emotion recognition. Key topics studied with this approach during the past decades include the relationship of emotion recognition with culture, gender, age, and psychological impairments. Each of these topics is briefly reviewed in the following sections.

2.3.1 Emotion recognition and culture

A central issue in emotion research that was inspired by the basic emotions approach is the investigation of the universality versus cultural specificity of emotion recognition. Classical studies by Ekman and colleagues (Ekman, Sorenson, & Friesen, 1969; Ekman & Friesen, 1971; Izard, 1971) provided evidence for the universality hypothesis by demonstrating that basic emotions in pictures of Americans were recognized at above-chance levels in literate and illiterate cultures.

More recently, this finding was also replicated for vocal expressions of basic emotions (Sauter, Eisner, Ekman, Scott, & Smith, 2010). However, when different literate cultures are compared, Americans and Europeans tend to achieve higher recognition accuracy when judging pictures of Caucasian targets than do Asians or Africans (Elfenbein & Ambady, 2002a). Thus, contrasting views on the universality hypothesis emerged. Matsumoto (1989) argued that, although emotions are biologically programmed, the process of learning to control their expression and perception is highly culture-dependent, so that emotions are recognized better when there is a match in the cultural background of the sender and the judge. Russell (1994) suggested that discrete emotion categories are culture-specific but that the broad dimensions of valence and arousal are universal.

Another explanation for cross-cultural variability is that cultures differ in their rules for displaying emotions (Wang, Toosi, & Ambady, 2009). Nonverbal dialects in emotion displays have been suggested for vocal expressions (Scherer, Banse, & Wallbott, 2001) and facial expressions (Elfenbein, Beaupré, Lévesque, & Hess, 2007). Furthermore, the words used to describe emotions differ in their meaning between languages. Due to such cultural differences, emotions may be more difficult to decode for judges who belong to a different cultural group than the sender. In their meta-analysis on the topic, Elfenbein and Ambady (2002a) concluded that although emotions were universally recognized better than chance, there was evidence for an in-group advantage in emotion recognition. However, they also noted that the majority of the research had been limited to the recognition of facial expressions of discrete basic emotions due to the long domination of basic emotions theory.
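
For readers unfamiliar with how such “better than chance” claims are typically supported, the following minimal sketch illustrates an exact one-sided binomial test against the chance level implied by the response format. The scenario, numbers, and function name are hypothetical and are not taken from any of the studies cited above.

    from math import comb

    def p_above_chance(hits, n_judges, p_chance):
        # Exact one-sided binomial test: probability of observing at least
        # 'hits' correct judgments among 'n_judges' if all of them guessed.
        return sum(comb(n_judges, k) * p_chance**k * (1 - p_chance)**(n_judges - k)
                   for k in range(hits, n_judges + 1))

    # Hypothetical forced-choice item: 30 judges choose among 6 emotion labels
    # (chance level = 1/6), and 14 of them select the intended label.
    print(p_above_chance(14, 30, 1 / 6))  # prints ~0.0001, i.e., p < .001

In practice, researchers in this literature often use more refined accuracy indices (e.g., corrections for response bias), but the underlying logic of comparing observed accuracy with the guessing rate is the same.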

2.3.2 Emotion recognition and gender

It is a robust finding in the literature that women recognize emotions better than men. In her meta-analysis, Hall (1978) found that women’s performance was almost half a standard deviation above men’s, with the effect being larger for multimodal (auditory plus visual) stimuli. This result was replicated in many studies, although the effect sizes found were generally rather small (e.g., Scherer & Scherer, 2011). The female advantage is already present in children and adolescents (McClure, 2000). Subsequent studies have shown that the female advantage seems to be more pronounced for negative emotions (e.g., Hampson et al., 2006), although some studies also reported higher recognition accuracy for anger in men (e.g., Rotter & Rotter, 1988). In addition, women have been shown to be faster at identifying emotional expressions than men (Sasson et al., 2010) and to be more accurate even when stimuli were presented for durations so brief as to be at the edge of conscious awareness (Hall & Matsumoto, 2004). On the other hand, when exposure times to facial stimuli are unlimited or long, gender differences have been shown to diminish (Kirouac & Dore, 1985). However, for emotion expressions of low intensity, women judge emotions more accurately, while men tend to be biased towards labeling such expressions as neutral (Sasson et al., 2010; Montagne, Kessels, Frigerio, De Haan, & Perrett, 2005).
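
As a point of reference (the following formula is not part of Hall’s original report but is the standard definition of the standardized mean difference), a gender difference of “almost half a standard deviation” corresponds to a Cohen’s d approaching 0.5, with

\[
d = \frac{\bar{X}_{\mathrm{women}} - \bar{X}_{\mathrm{men}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_{w} - 1)\,s_{w}^{2} + (n_{m} - 1)\,s_{m}^{2}}{n_{w} + n_{m} - 2}},
\]

where \(\bar{X}\), \(s\), and \(n\) denote the mean recognition accuracy, its standard deviation, and the sample size in each gender group.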

Several reasons have been discussed to explain the gender difference in emotion recognition. For example, Hall and Matsumoto (2004) suggested that women might be socialized from childhood onward to focus on emotion recognition, or that their brain is better adapted to this task. Hampson and colleagues (2006) tested whether women have a general advantage in perceptual speed but did not find any gender difference in this ability that could have explained females’ superior performance in emotion detection. These authors also examined various evolutionary explanations and proposed that women, as primary caretakers of children, have developed better emotion recognition ability to increase the probability of offspring survival and the likelihood of securely attached children. Previous childcare experience of the female participants was unrelated to emotion recognition, suggesting that the female advantage reflects an evolved mechanism rather than individual learning.

2.3.3 Emotion recognition and age

With respect to age, a meta-analysis has shown a decline in emotion recognition ability across different modalities (face, voice, and body) in older individuals (Ruffman, Henry, Livingstone, & Phillips, 2008). Older compared with young participants consistently show impairments, particularly in the recognition of anger, sadness, and fear, and to a smaller extent for happiness, disgust, and surprise. Recognition of emotions displayed in faces matched with voices seems to pose particular difficulties for older individuals, but only very few studies used multimodal stimuli (e.g., Sullivan & Ruffman, 2004). The female advantage in emotion recognition discussed above appears to remain stable in older individuals as well (Sasson et al., 2010). As Mill, Allik, Realo, and Valk (2009) showed, the decline of emotion recognition ability starts at about 30 years of age, with sadness and anger being affected first.

Some researchers have suggested that the decline is due to an “own-age bias” (Ruffman et al., 2008), according to which people recognize emotions better from faces that correspond to their own age. As previous studies have mostly used stimuli of young targets, older participants might have been disadvantaged. However, Ebner and Johnson (2009) found that older individuals were worse at recognizing and remembering both younger and older emotional faces, speaking against the own-age bias. An alternative explanation for the age-related decline in emotion recognition might be a decrease in cognitive capacities. However, a recent study by Lambrecht, Kreifelts, and Wildgruber (2012) found that a decrease in working memory and verbal intelligence could not sufficiently account for decreased emotion recognition. To date, the most likely explanation for the age-related decline in emotion recognition seems to be functional and structural changes in the brain (Charles & Campos, 2011). Ruffman and colleagues’ meta-analysis (2008) identified several brain areas including the anterior cingulate cortex, the orbitofrontal cortex, temporal areas, and the amygdala, for which a reduction in volume is likely to explain age-related decline in emotion recognition ability. Furthermore, they proposed that a functional change in neurotransmitters such as serotonin, noradrenaline, and dopamine might be related to impaired emotion recognition in elderly people.

Despite such findings, several researchers have recently criticized the fact that most previous studies used stimuli presented in only one modality and covering only a few basic emotions, which prevented alternative hypotheses regarding the age-related decline from being tested. For example, Ruffman (2011) proposed that when stimuli contain rich information from multiple modalities, age differences might diminish. Further, Phillips and Slessor (2011) noted that a positivity bias in old age could attenuate the decline in emotion recognition for positive emotions, but that so far usually only one positive emotion (happiness) has been studied.

2.3.4 Emotion recognition in clinical and antisocial populations

Emotion recognition is a major research topic in clinical psychology. During the past decades, a large number of studies have compared emotion recognition in clinical and antisocial populations with that of healthy individuals. For most of the populations studied, a deficit in recognizing several or all emotions was found. Somewhat more recently, many studies have additionally aimed at identifying abnormal brain functioning in order to explain impaired emotion recognition.

One key topic has been the investigation of emotion recognition ability in antisocial populations, which can be characterized by aggressive, criminal, or abusive behavior and/or personality traits such as a lack of empathy and remorse (Marsh & Blair, 2008). It has been suggested that such behaviors and traits might result from a failure to recognize distress-related cues in others, such as fearful expressions, which normally elicit empathy and inhibit aggression (Blair, 2001). In line with this assumption, in their meta-analysis Marsh and Blair (2008) found deficits in the recognition of fear, sadness, and surprise from faces in antisocial populations. More recently, Dawel, O’Kearney, McKone, and Palermo (2012) also found significant impairments in the recognition of fear, happiness, and surprise from vocal cues in a sample of psychopathic patients. Amygdala hyposensitivity has been suggested as one reason for these emotion recognition deficits in antisocial populations (Marsh & Blair, 2008).

A large deficit in facial emotion recognition has also been found for schizophrenic patients in the meta-analysis conducted by Kohler, Walker, Martin, Healey, and Moberg (2010).
