2.3 Group differences in emotion recognition

Since the pioneering research on emotions by Ekman and colleagues, the dominant approach in emotion research has been to compare emotion recognition accuracy between groups of people, such as members of different cultures, men and women, or young and old participants. This “group differences” approach continues to be widely employed in the field of emotion recognition.

Key topics addressed with this approach over the past decades include the relationship of emotion recognition to culture, gender, age, and psychological impairments. Each of these topics is briefly reviewed in the following sections.

2.3.1 Emotion recognition and culture

A central issue in emotion research inspired by the basic emotions approach is the question of whether emotion recognition is universal or culturally specific. Classical studies (Ekman, Sorenson, & Friesen, 1969; Ekman & Friesen, 1971; Izard, 1971) provided evidence for the universality hypothesis by demonstrating that basic emotions displayed in pictures of Americans were recognized at above-chance levels in both literate and illiterate cultures.

More recently, this finding was also replicated for vocal expressions of basic emotions (Sauter, Eisner, Ekman, Scott, & Smith, 2010). However, when different literate cultures are compared, Americans and Europeans tend to achieve higher recognition accuracy when judging pictures of Caucasian targets than do Asians or Africans (Elfenbein & Ambady, 2002a). Thus, views contrasting with the universality hypothesis emerged. Matsumoto (1989) argued that, although emotions are biologically programmed, the process of learning to control their expression and perception is highly culture-dependent, so that emotions are recognized better when the sender and the judge share a cultural background. Russell (1994) suggested that discrete emotion categories are culture-specific but that the broad dimensions of valence and arousal are universal.

Another explanation for cross-cultural variability is that cultures differ in their rules for displaying emotions (Wang, Toosi, & Ambady, 2009). Nonverbal dialects in emotion displays have been suggested for vocal expressions (Scherer, Banse, & Wallbott, 2001) and facial expressions (Elfenbein, Beaupré, Lévesque, & Hess, 2007). Furthermore, the words used to describe emotions differ in their meaning between languages. Due to such cultural differences, emotions may be more difficult to decode for people who belong to a different cultural group than the sender. In their meta-analysis on the topic, Elfenbein and Ambady (2002a) concluded that although emotions were universally recognized better than chance, there is evidence for an in-group advantage in emotion recognition. However, they also noted that the majority of the research had been limited to the recognition of facial expressions of discrete basic emotions due to the long domination of basic emotions theory.

2.3.2 Emotion recognition and gender

It is a robust finding in the literature that women recognize emotions better than men. In her meta-analysis, Hall (1978) found that women’s performance was almost half a standard deviation above men’s, with the effect being larger for multimodal (auditory plus visual) stimuli. This result has been replicated in many studies, although the effect sizes found were generally rather small (e.g., Scherer & Scherer, 2011). The female advantage is already present in children and adolescents (McClure, 2000). Subsequent studies have shown that the female advantage seems to be more pronounced for negative emotions (e.g., Hampson et al., 2006), although some studies also reported higher recognition accuracy for anger in men (e.g., Rotter & Rotter, 1988). In addition, women have been shown to be faster than men at identifying emotional expressions (Sasson et al., 2010) and to be more accurate even when stimuli were presented for durations so brief as to be at the edge of conscious awareness (Hall & Matsumoto, 2004). On the other hand, when exposure times to facial stimuli are long or unlimited, gender differences have been shown to diminish (Kirouac & Dore, 1985). However, for emotion expressions of low intensity, women judge emotions more accurately, while men tend to be biased towards labeling such expressions as neutral (Sasson et al., 2010; Montagne, Kessels, Frigerio, De Haan, & Perrett, 2005).
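
To give a sense of the magnitude reported by Hall (1978), the difference can be read as a standardized mean difference (in the spirit of Cohen’s d; the exact index used in the meta-analysis is not restated here):

d = (M_women − M_men) / SD_pooled,

so that “almost half a standard deviation” corresponds roughly to d ≈ 0.4 to 0.5 in favor of women, conventionally a small-to-medium effect.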

Several reasons have been discussed to explain the gender difference in emotion recognition. For example, Hall and Matsumoto (2004) suggested that women might be socialized from childhood to attend to emotion recognition, or that their brains are better adapted to it. Hampson and colleagues (2006) tested whether women have a general advantage in perceptual speed but did not find any gender difference in this ability that could have explained females’ superior performance in emotion detection. These authors also examined various evolutionary explanations and proposed that women, as primary caretakers of children, have developed better emotion recognition ability to increase the probability of offspring survival and the likelihood of securely attached children. The female participants’ previous childcare experience was unrelated to emotion recognition, suggesting that the female advantage reflects an evolved mechanism rather than individual learning.

2.3.3 Emotion recognition and age

With respect to age, a meta-analysis has shown a decline in emotion recognition ability across different modalities (face, voice, and body) in older individuals (Ruffman, Henry, Livingstone, & Phillips, 2008). Compared with young participants, older participants consistently show impairments, particularly in the recognition of anger, sadness, and fear, and to a smaller extent for happiness, disgust, and surprise. Recognition of emotions displayed in faces matched with voices seems to pose particular difficulties for older individuals, but only very few studies have used multimodal stimuli (e.g., Sullivan & Ruffman, 2004). The female advantage in emotion recognition discussed above appears to remain stable in older individuals as well (Sasson et al., 2010). As Mill, Allik, Realo, and Valk (2009) showed, the decline in emotion recognition ability starts at about 30 years of age, with sadness and anger being affected first.

Some researchers have suggested that the decline is due to an “own-age bias” (Ruffman et al., 2008), according to which people recognize emotions better from faces that correspond to their own age. As previous studies have mostly used stimuli of young targets, older participants might have been disadvantaged. However, Ebner and Johnson (2009) found that older individuals were worse at recognizing and remembering both younger and older emotional faces, speaking against the own-age bias. An alternative explanation for the age-related decline in emotion recognition might be a decrease in cognitive capacities. However, a recent study by Lambrecht, Kreifelts, and Wildgruber (2012) found that a decrease in working memory and verbal intelligence could not sufficiently account for decreased emotion recognition. To date, the most likely explanation for the age-related decline in emotion recognition seems to be functional and structural changes in the brain (Charles & Campos, 2011). Ruffman and colleagues’ meta-analysis (2008) identified several brain areas, including the anterior cingulate cortex, the orbitofrontal cortex, temporal areas, and the amygdala, in which a reduction in volume is likely to explain the age-related decline in emotion recognition ability. Furthermore, they proposed that functional changes in neurotransmitters such as serotonin, noradrenaline, and dopamine might be related to impaired emotion recognition in elderly people.

Despite such findings, several researchers have recently criticized the fact that most previous studies have used stimuli presented in only one modality and covering only a few basic emotions, which has prevented alternative hypotheses regarding the age-related decline from being tested. For example, Ruffman (2011) proposed that when stimuli contain rich information from multiple modalities, age differences might diminish. Further, Phillips and Slessor (2011) noted that a positivity bias in old age could attenuate the decline in emotion recognition for positive emotions, but that so far usually only one positive emotion (happiness) has been studied.

2.3.4 Emotion recognition in clinical and antisocial populations

Emotion recognition is a major research topic in clinical psychology. During the past decades, a large number of studies have compared emotion recognition in clinical and antisocial populations with that of healthy individuals. For most of the populations studied, a deficit in recognizing several or all emotions was found. Somewhat more recently, many studies have additionally aimed to identify abnormal brain functioning that could explain impaired emotion recognition.

One key topic has been the investigation of emotion recognition ability in antisocial populations, which can be characterized by aggressive, criminal, or abusive behavior and/or personality traits such as a lack of empathy and remorse (Marsh & Blair, 2008). It has been suggested that such behaviors and traits might result from a failure to recognize distress-related cues in others, such as fearful expressions, which normally elicit empathy and inhibit aggression (Blair, 2001). In line with this assumption, in their meta-analysis Marsh and Blair (2008) found deficits in the recognition of fear, sadness, and surprise from faces in antisocial populations. More recently, Dawel, O’Kearney, McKone, and Palermo (2012) also found significant impairments in the recognition of fear, happiness, and surprise from vocal cues in a sample of psychopathic patients. Amygdala hyposensitivity has been suggested as one reason for these emotion recognition deficits in antisocial populations (Marsh & Blair, 2008).

A large deficit in facial emotion recognition has also been found for schizophrenic patients in the meta-analysis conducted by Kohler, Walker, Martin, Healey, and Moberg (2010). This might be related to these patients’ tendency to visually scan features of the face that are not important for the expression of a particular emotion (Walker, McGuire, & Bettes, 1984). Lowered activation of the amygdala and the ventral temporal-basal ganglia-prefrontal cortex “social brain” system has also been linked to this impairment in schizophrenia (Li, Chan, McAlonan, & Gong, 2010). Further, Bediou et al. (2007) showed that healthy siblings of schizophrenic patients also performed worse in recognizing emotions, suggesting a familial vulnerability and a genetic component in schizophrenia.

Lower facial emotion recognition accuracy, mostly for negative emotions, has also been observed in Parkinson disease, although results have been somewhat inconsistent (Baggio et al., 2012). Ventura et al. (2012) also found an impairment in prosodic emotion recognition for patients whose left hemisphere was affected. In their review, Péron, Dondaine, Le Jeune, Grandjean, and Vérin (2010) concluded that Parkinson disease is generally characterized by a deficit in the recognition of facial and vocal expressions, which could be partially explained by amygdala dysfunction.

Especially in recent years, numerous studies have also investigated emotion recognition in many other disorders, including depression (Van Wingen et al., 2011), bipolar disorder (Derntl, Seidel, Kryspin-Exner, Hasmann, & Dobmeier, 2009), eating disorders (Wallis, Ridout, & Sharpe, 2012), body dysmorphic disorder (Buhlmann, Gleiß, Rupf, Zschenderlein, & Kathmann, 2011), borderline personality disorder (Domes, Grabe, Czieschnek, Heinrichs, & Herpertz, 2011), and autism in children and adults (Golan, Baron-Cohen, Hill, & Rutherford, 2007).

In addition to quantifying the deficits in emotion recognition in various medical conditions, these studies have contributed to identifying the brain processes involved in recognizing emotions. Baggio et al. (2012) concluded that the recognition of different emotions relies on distinct yet overlapping neural substrates. However, by far most research has looked exclusively at the recognition of basic emotions from pictures of faces. Given that impaired facial emotion recognition seems to be an established finding for many clinical disorders, future research should especially consider other modalities and emotions as well as more naturalistic stimuli. This would enhance our knowledge about the impact of such impairments on the actual social behavior and communication of patients.