
2.2.2 Expression Identification and Expression Matching Tasks

The Expression Identification and Expression Matching tasks were adapted from two tests of Bruce et al.'s battery (called Emotion-Id and Emotion-Match, respectively).

The number of distracters was increased (two instead of one in the original version), and a new expression (neutral) was added. For the Expression Identification task, which was found easier to perform than the Expression Matching task in Hippolyte et al.'s study, one additional item per facial expression was added to increase task demand. Both tasks assessed five facial expressions: joy, sadness, anger, surprise, and neutral.

In the Expression Identification task (20 items), participants were shown three faces placed side by side and had to point to the face displaying a particular emotion named orally by the experimenter (4 items per expression). In the Expression Matching task (15 items), a target face was shown at the top of the page, and participants were asked to point to the face at the bottom (out of three) displaying the same expression as the target (3 items per expression). One practice item was administered in this task.

2.2.3 Facial Discrimination Task

The Facial Discrimination task (Rojahn, Rabold, & Schneider, 1995) assessed facial expression recognition and emotion intensity attribution. It consisted of 41 photographs presenting three expressions: happy, sad, and neutral. Participants had to indicate whether a given item depicted a happy face, a sad face, or a face that was neither happy nor sad (neutral). If the response was happy (or sad), they were asked to choose between two intensity levels for that emotion: level 1 for a face that was 'a little' happy or sad, and level 2 for a face that was 'a lot' happy or sad. Participants completed a training session of 6 items before performing the test, which consisted of 12 happy faces (9 at the first level, 3 at the second), 12 neutral faces, and 11 sad faces (7 at the first level, 4 at the second), presented in random order. In addition, an emotional bias score reflecting error size and direction (overly positive versus overly negative responses) could be computed for this task. This measure was obtained by assigning a signed error value to each response (plus one point per degree in the positive direction and minus one point per degree in the negative direction). For example, a 1-point positive score was assigned when the participant said 'a lot happy' instead of 'a little happy'.
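To make this scoring rule concrete, the following minimal Python sketch (our own reconstruction, not the authors' scoring code) computes the bias score from intensity ratings coded on a signed hedonic scale; the coding scheme is an assumption.

# Hypothetical reconstruction of the emotional bias score (not the authors' code).
# Responses coded on a signed hedonic scale:
#   -2 = 'a lot' sad, -1 = 'a little' sad, 0 = neutral,
#   +1 = 'a little' happy, +2 = 'a lot' happy.

def emotional_bias(targets, responses):
    """Sum of signed rating errors: each degree of error in the
    positive direction adds one point, each degree in the negative
    direction subtracts one point."""
    return sum(r - t for t, r in zip(targets, responses))

# Example: answering 'a lot happy' (+2) for a face that is actually
# 'a little happy' (+1) contributes +1 to the bias score.
print(emotional_bias(targets=[1, 0, -1], responses=[2, 1, -1]))  # -> 2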

3 Results

The assumption of normality was tested for all experimental task variables using one-sample Kolmogorov-Smirnov tests (separately for each group). The Identity Matching Test and Facial Discrimination Task variables were normally distributed, allowing us to conduct parametric analyses. Some of the variables of the two facial expression tasks (Expression Identification and Expression Matching) did not follow a normal distribution, so non-parametric analyses were run for these tasks.
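A normality screen of this kind could be run as in the sketch below, assuming hypothetical per-group score arrays rather than the study data.

# Minimal sketch of a one-sample Kolmogorov-Smirnov normality screen.
import numpy as np
from scipy import stats

def is_normal(scores, alpha=0.05):
    """KS test of standardized scores against a standard normal."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    stat, p = stats.kstest(z, "norm")
    return p > alpha  # True -> parametric analyses acceptable

rng = np.random.default_rng(0)
ds_scores = rng.normal(70, 10, size=24)  # placeholder DS-group scores
print(is_normal(ds_scores))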

[Figure 1 near here. Bar chart; y-axis: correct responses (%), 0 to 100; groups: DS, Controls; subtests: New Face-Dis, New Face-Sim, Maskedface-Dis, Maskedface-Sim, Eyesmasked-Sim.]

Figure 1. Mean percentage of correct responses in each of the five Identity Matching subtests for the two groups.

3.1 Identity Matching Test

The Identity Matching test data were analysed by means of a 2 (group) X 5 (task) repeated-measures ANOVA. Figure 1 illustrates percentage scores (per group) for the five subtests. There was a significant main effect of task, F(4,176) = 91.75, p < .0001, η2 = .675, but no effect of group, F(1,44) = .69, p = .41, η2 = .015, and no interaction, F(4,176) = .07, p = .58, η2 = .015. Subsequent Bonferroni post-hoc comparisons showed that the first two subtests presenting complete faces (New Face-Dis and New Face-Sim) were performed equally well by the two groups, and significantly better than the three subtests Maskedface-Dis, Maskedface-Sim, and Eyesmasked-Sim.
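An analysis of this design (one between-subjects factor, one within-subjects factor) could be run as in the following sketch using the pingouin library; the data file and column names are assumptions, not the study's materials.

# Sketch of a 2 (group, between) x 5 (task, within) mixed-design ANOVA
# on a hypothetical long-format table with columns
# 'subject', 'group', 'task', 'score'.
import pandas as pd
import pingouin as pg

df = pd.read_csv("identity_matching_long.csv")  # hypothetical file

aov = pg.mixed_anova(data=df, dv="score", within="task",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared

# Bonferroni-corrected pairwise comparisons between subtests
# (pairwise_tests is named pairwise_ttests in older pingouin releases).
posthoc = pg.pairwise_tests(data=df, dv="score", within="task",
                            subject="subject", padjust="bonf")
print(posthoc)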

3.2 Expression Identification and Expression Matching Tasks

Table 2 presents the main results for the Expression Identification and Expression Matching tasks, together with the outcomes of the between-group statistical analyses (Mann-Whitney U tests). In the Expression Identification task, the DS group's performance was significantly lower than that of the control group for all expressions except sadness. Within-group analyses were pursued using Wilcoxon signed-rank tests. In the DS group, the expressions of joy and anger were recognized significantly better than those of sadness (ps < .01), surprise (ps < .02), and neutral (ps < .001). Whereas no significant difference appeared between the expressions of sadness and surprise, the neutral expression was the worst recognized (ps < .01). In the control group, the score for sadness was significantly poorer than the scores for joy (p = .005), anger (p = .001), and surprise (p = .037).

Table 2. Mean raw scores of the two groups on the Expression Identification and Expression Matching Tasks

Regarding the Expression Matching task, Mann-Whitney analyses showed that the DS adults obtained significantly lower scores than their controls for all expressions except surprise. The within-group analyses revealed that the expression of joy was recognized significantly better by the DS participants than neutral (p = .006) and surprise (p = .043). No significant differences appeared among the other four expressions. In the control group, the expressions of joy and sadness were processed significantly better than the expressions of surprise (ps < .001) and neutral (ps < .01).
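The two kinds of non-parametric comparison reported above could be computed as in this sketch, with made-up score vectors standing in for the real data.

# Illustrative non-parametric comparisons (placeholder data).
from scipy import stats

ds = [3, 2, 4, 1, 3, 2]        # hypothetical DS-group scores, one expression
controls = [4, 4, 3, 4, 4, 3]  # hypothetical control-group scores

# Between-group comparison (independent samples)
u, p = stats.mannwhitneyu(ds, controls, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")

# Within-group comparison of two expressions (paired samples)
joy = [4, 3, 4, 4, 3, 4]
sadness = [2, 3, 2, 3, 2, 3]
w, p = stats.wilcoxon(joy, sadness)
print(f"Wilcoxon W = {w}, p = {p:.3f}")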

3.3 Facial Discrimination Task

A 2 (group) X 3 (expression) repeated-measures ANOVA was first conducted, taking into account the scores (percentages) for the happy, sad, and neutral expressions (see Figure 2). The analysis revealed significant main effects of group, F(1,46) = 29.37, p < .0001, η2 = .394, and expression, F(2,90) = 9.64, p < .001, η2 = .197. A significant interaction between expression and group, F(2,90) = 12.87, p < .0001, η2 = .211, was also observed. Bonferroni post-hoc comparisons revealed that the DS adults recognized fewer neutral items than their controls, p < .0001. The DS group identified both sad and happy expressions more easily than the neutral ones (ps < .0001), while no significant differences appeared between the three expressions for the control group.

[Figure 2 near here. Bar chart; y-axis: correct responses (%), 0 to 100; groups: DS, Controls; expressions: Happy, Neutral, Sad.]

Figure 2. Mean percentage of correct responses (per expression) in the Facial Discrimination task for the two groups.

A 2 (group) X 2 (emotion) X 2 (intensity) repeated-measures ANOVA was then performed to determine whether the groups differed when rating the emotional intensity of the sad and happy emotions. The analyses revealed two significant main effects: group, F(1,45) = 17.99, p < .001, η2 = .285, and intensity, F(1,45) = 51.11, p < .0001, η2 = .531. There was no significant interaction between group and emotion (p = .13) or between group and intensity (p = .14), but there was a marginal three-way interaction of group X emotion X intensity, F(1,45) = 3.21, p = .079, η2 = .066. Post-hoc Bonferroni tests showed that the DS group identified the 'a lot happy' faces significantly better than all the other expressions (ps < .05). The 'a lot sad' faces were recognized better than the 'a little happy' ones (p < .001). No difference was found between the recognition of 'a little sad' and 'a lot sad' expressions (p = .254). The control group showed a similar pattern, but their results did not differ between the 'a lot happy' and 'a lot sad' faces (p = .98), which were recognized equally well.

Finally, we analysed the error pattern of the two groups by carrying out an error analysis analogous to the one proposed in Hippolyte et al.'s study. We observed that the large majority of participants rarely selected an emotion of the opposite hedonic tone (e.g., happy for sad) when they gave an incorrect answer. We also noticed that the DS group tended to propose the emotion of joy more often than the emotion of sadness and instead of the neutral expression. Participants with DS obtained a mean emotional bias score of 11.95 (SD = 14.5), which was significantly greater than the one obtained by their controls (mean = 2.29, SD = 4.34), t = 3.11, p = .003.
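As a follow-up to the bias-score sketch above, the between-group comparison of bias scores amounts to an independent-samples t test, illustrated here with placeholder arrays.

# Comparing the two groups' emotional bias scores (placeholder data).
from scipy import stats

ds_bias = [12, 30, 5, 0, 18, 7]    # hypothetical DS bias scores
control_bias = [2, 0, 4, 1, 3, 5]  # hypothetical control bias scores

t, p = stats.ttest_ind(ds_bias, control_bias)
print(f"t = {t:.2f}, p = {p:.3f}")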

3.4 Correlations between the three facial expression tasks, CA, and cognitive tasks

A series of correlations was run to investigate the relationships between the three facial expression tasks, CA, and the cognitive tasks. In the DS group, significant relations appeared with all the cognitive tasks (see Table 3), but the strongest involvement was found for the receptive vocabulary measure, which was related to several expressions across the three tasks. In particular, a strong relation was found in the Facial Discrimination task with the neutral expression (r = .66, p = .001). The nonverbal reasoning score and the selective attention measure were related to the Expression Identification and Expression Matching tasks to a lesser extent.

In the control group, we observed that CA was strongly related to the cognitive measures; full and partial correlations were therefore run controlling for CA. The main results showed that the expression of sadness was related to CA in the three facial expression tasks (Identification: τ = .38, p = .018; Matching: τ = .49, p = .004; Discrimination: r = .42, p = .034). For the Expression Matching task, CA was also related to surprise (τ = .32, p = .046) and neutral (τ = .42, p = .009). The significant relations observed between the facial expression tasks and the EVIP-R score were not preserved after controlling for CA. For the selective attention measure, a significant relation remained with the neutral expression score in the Expression Matching task (r = .53, p = .008).
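This mixed correlation strategy (Kendall tau-B for the ordinal expression-task scores, Pearson partial correlations controlling for CA) could be implemented as in the following sketch; the file and column names are hypothetical.

# Sketch of the correlation strategy (hypothetical file and columns).
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("control_group.csv")  # hypothetical file

# Kendall tau-B between chronological age and a sadness score
tau, p = stats.kendalltau(df["CA"], df["sadness_matching"])
print(f"tau = {tau:.2f}, p = {p:.3f}")

# Pearson correlation with CA partialled out
pcorr = pg.partial_corr(data=df, x="attention", y="neutral_matching",
                        covar="CA", method="pearson")
print(pcorr[["r", "p-val"]])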

Table 3. Correlations between the Expression Identification and Expression Matching Tasks (Tau-B), Facial Discrimination Task (Pearson), chronological age and cognitive measures in the DS group

Note: *p < .05. **p < .01.

4 Discussion

In Experiment 1, we examined the capacity to process facially expressed emotions through three modalities, namely identification, matching, and recognition. The ability to process faces without emotional content was controlled for, and no differences appeared between the two groups in this task. With regard to the facial expression tasks, we observed important differences in the DS adults' performance depending on the task modality. They were markedly impaired in the matching condition, whereas in the identification and recognition tasks their difficulties were specific to particular expressions.

Corroborating Hippolyte et al.'s previous findings, the DS group processed the neutral expression very poorly. In addition, the analysis of their response pattern in the recognition task showed a tendency to assess expressions as being more positive than they actually were.

The correlational analyses highlighted a particular involvement of receptive vocabulary skills in the DS group's processing of several expressions. Nonverbal reasoning and selective attention abilities were also related to certain expressions, often together with the vocabulary score.

Experiment 2

The main objective of Experiment 2 was to explore DS adults' ability to attribute a facial emotion to a context, since to our knowledge this issue had never been examined in this particular population. For this purpose, we used a new task specifically created for people with mild to moderate ID. Furthermore, we aimed to investigate the relationships between these emotion attribution skills and the emotion processing abilities assessed in Experiment 1. We were also interested in the relations between the attribution task and specific cognitive competences.

5 Method

5.1 Participants

All participants in Experiment 2 had been recruited for Experiment 1; however, three of the adults with DS did not take part in Experiment 2. The two groups in this experiment thus consisted of 21 adults with DS (15 men, 6 women) and 24 TD children (17 boys, 7 girls). The mean age of the DS group was 34.7 years (SD = 7.4), and they had a developmental age of 6.6 years (SD = 2.7) on the EVIP-R vocabulary matching measure.

For the Nepsy selective attention subtests, their precision score was 17.62 (SD = 3.18) for the Rabbits (mean response time = 116.05, SD = 42.29) and 4.33 (SD = 7.9) for the Faces (mean response time = 172.71, SD = 16.62). As in Experiment 1, the two groups did not differ significantly on the precision scores for the two Nepsy subtests, but the DS adults took significantly more time to complete them (Rabbits: p = .001, Faces: p = .017). The Faces precision score was again retained for the subsequent statistical analyses. Finally, the DS group's raw score on the Raven CPM task was 15.9 (SD = 5.44).