5.2 Experiment 1. Emotion × Type of Stimulus

5.2.1 Method

Participants

Thirty-seven students (29 females; mean age 24.0 years) from the University of Geneva participated in this experiment for course credit. All participants were right-handed, had normal or corrected-to-normal vision, and reported no history of psychiatric or neurological disorders.

The participants were selected from a pool of students so as to form groups along a controlled continuum spanning from anxious to extraverted (Table 5.1), for two reasons: first, research on the perception of emotion often relates findings to differences in personality traits (see also Derryberry & Reed, 1994; Mogg & Bradley, 1998; E. Fox et al., 2001), and we wanted to be able to control for them; second, results in neuroscience showed that the activation of certain brain areas upon perception of emotional stimuli correlated with personality traits (Canli et al., 2001, 2004; Hamann & Canli, 2004; Mobbs et al., 2005), especially the amygdala, whose activation was shown to correlate with extraversion scores (Canli et al., 2002; Canli, 2004; Most et al., 2006). Prior to the experiment, participants filled in the following questionnaires: the NEO Five-Factor Inventory (NEO-FFI; Costa & McCrae, 1991), the State-Trait Anxiety Inventory (STAI; Spielberger et al., 1983), and the dual-scale Fear of Negative Evaluation - Social Avoidance and Distress (FNE-SAD; D. Watson & Friend, 1969). One-way analyses of variance (ANOVA) yielded main effects of the factor Group (Anxious, Controls, Extraverts) for each of these scales (two-sided tests), indicating that the groups differed significantly on all measures.

Apparatus

Stimuli were presented on a wide CRT screen, with a refresh rate of 100 Hz and a resolution of 1024 × 768 pixels. Participants were comfortably seated in an armchair fitted with head and arm rests. The screen was placed 60 cm from the participant's head. The experiment was administered using Matlab and the Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997).

                                   Anxious (N = 13)   Controls (N = 13)   Extraverts (N = 11)
                                     M       SD         M       SD          M       SD
STAI-Trait **                      55.31    7.46      44.08   12.93       30.27    3.37
Fear of Negative Evaluation **     23.08    5.39      18.15    7.74       12.27    6.05
Social Avoidance and Distress **   15.69    6.00       9.46    6.76        3.64    3.17
Neuroticism *                      36.62    3.25      33.15    7.29       29.82    3.40
Extraversion **                    35.38    2.29      40.46    4.27       45.55    1.63

Table 5.1: Experiment 1. Participants' trait personality scores. *: p < .01; **: p < .001.

Stimuli

T1 and T2 stimuli were faces, subtending 18° × 15° of visual angle. Each T1 stimulus consisted of one of 10 faces (five female), showing no emotional expression. Each T2 stimulus consisted of one of 10 faces (five female), showing controlled combinations of facial Action Units depicting expressions recognised as happy, fearful, or neutral (Figure 5.1, panel a).

The faces were produced using our software FACSGen (Roesch et al., in revision; see Chapter 4, page 51). FACSGen is a novel tool that allows the creation of unique, realistic, static or dynamic, synthetic 3D facial stimuli based on the Facial Action Coding System (Ekman et al., 2002). It provides control over the intensity and temporal dynamics of the facial Action Units, and allows researchers to apply the same facial expression to any face produced using FaceGen Modeller 3.3 (Singular Inversions Inc., 2009; Shimojo et al., 2003; Moradi et al., 2005; Corneille et al., 2007). After they took part in the experiment, all 37 participants rated the extent to which the following emotions could be perceived in the facial expressions used during the experiment: anger, disgust, fear, happiness, sadness, and surprise. A rating scale was provided, anchored at 0 ("not at all") and 100 ("enormously"). To determine whether participants could discriminate the emotions portrayed in the faces, repeated-measures analyses of variance (ANOVA) were performed on the ratings for each target emotion. There was a significant effect of target emotion (ps < .001), indicating that participants reliably recognised the emotion portrayed by the faces.

To be able to contrast static with dynamic stimuli using the same facial expressions, all emotional facial expressions depicted faces with the mouth open. Dynamic T2 stimuli consisted of eight frames, morphing from a mouth-closed neutral expression (0% expression) to one of the three facial expressions (100% expression), mouth open. Static T2 stimuli showed 100% expression, mouth open.

Figure 5.1: Experiment 1. Emotion × Type of stimulus. Panel a. Examples of T2 targets. Panel b. Example of a trial. Static stimuli (i.e., T1, static T2, and distractors) were presented for 80 msec (eight refresh periods of the screen), whereas each frame of the dynamic T2 was presented for 10 msec (one refresh period of the screen).
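The morph schedule of a dynamic T2 can be sketched as follows. This is a minimal illustration only: the linear interpolation from 0% to 100% across the eight frames is an assumption, as the text does not specify the exact schedule used by FACSGen.

```python
# Illustrative sketch of the dynamic T2 morph schedule (linear interpolation
# is an ASSUMPTION; the exact schedule is not specified in the text).
N_FRAMES = 8    # frames per dynamic T2 stimulus
FRAME_MS = 10   # one refresh period of the 100 Hz screen, in msec

# Morph weight (% expression) for each frame, from 0% (neutral) to 100%.
weights = [round(i / (N_FRAMES - 1) * 100, 1) for i in range(N_FRAMES)]

total_duration = N_FRAMES * FRAME_MS  # 80 msec, matching a static stimulus

print(weights[0], weights[-1])  # 0.0 100.0
print(total_duration)           # 80
```

Note that the total duration of the eight 10-msec frames equals the 80-msec presentation time of a static stimulus, so static and dynamic T2 stimuli occupy the same slot in the stream.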

Distractors consisted of neutral faces in which the inside of the face was divided into 10 × 10 pixel squares that were randomly scrambled. All faces were presented in grayscale, except the T1 faces, which were tinted green to ease their detection (Milders et al., 2006). Statistical analyses confirmed that the three emotional conditions for T2 stimuli did not differ in luminance or contrast.
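The luminance and contrast measures underlying such a check can be sketched as follows. This is a minimal illustration, assuming grayscale images stored as flat lists of pixel intensities, with mean intensity as luminance and root-mean-square deviation as contrast; it is not the authors' actual analysis code.

```python
def mean_luminance(pixels):
    """Mean pixel intensity of a grayscale image (flat list of values)."""
    return sum(pixels) / len(pixels)

def rms_contrast(pixels):
    """RMS contrast: standard deviation of the pixel intensities."""
    mu = mean_luminance(pixels)
    return (sum((p - mu) ** 2 for p in pixels) / len(pixels)) ** 0.5

# Hypothetical 2x2 grayscale patch, intensities in the 0-255 range.
patch = [100, 120, 140, 160]
print(mean_luminance(patch))          # 130.0
print(round(rms_contrast(patch), 2))  # 22.36
```

In practice, these two measures would be computed per stimulus and compared across the three emotion conditions, for instance with the ANOVAs the text describes.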

Procedure

The experiment consisted of one block of 10 practice trials, followed by 10 blocks of 60 trials. Between blocks, participants were required to rest for at least three minutes. Participants were instructed to perform a gender-decision task on T1 (the green face), and to indicate whether they had perceived a non-scrambled face immediately following the first target. Responses were given at the end of each trial.

Each trial began with a fixation cross, displayed for 800 msec. Participants were instructed to focus on the fixation cross and to maintain this gaze throughout the trial, as this was the optimal way to perceive each face as a whole and to gather the maximum amount of information for the tasks.

A trial consisted of 15 stimuli, presented for 80 msec each, without any inter-stimulus interval (Figure 5.1, panel b). Each of the eight frames of the dynamic T2 stimuli was presented for exactly 10 msec (one refresh period of the screen), whereas static stimuli (i.e., T1, static T2, and distractors) were presented for 80 msec (eight refresh periods). T1 stimuli appeared equally often in serial positions 2–5. On T2-present trials, emotional faces appeared equally often in serial positions 3–7 and 9, yielding T1–T2 intervals of 80, 160, 240, 320, 400, and 560 msec. T2 stimuli were present in 75% of the trials, and absent in the remaining 25%. This allowed us to maximize the number of T2-present trials without making the experiment too long. It also lent some validity to the experiment, as the nature of the paradigm often gives participants the impression that T2-present trials are very infrequent (E. Fox et al., 2005).
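The trial structure above can be sketched as follows. This is a minimal illustration under one assumption: that the six stated T1–T2 intervals correspond to T2 appearing 1–5 or 7 item positions after T1 (80 msec per item). It is not the authors' presentation code, and the labels are hypothetical.

```python
ITEM_MS = 80                  # duration of one serial position, in msec
T1_POSITIONS = [2, 3, 4, 5]   # serial positions used for T1
T2_LAGS = [1, 2, 3, 4, 5, 7]  # ASSUMED lags matching the stated intervals

def build_trial(t1_pos, t2_lag=None, n_items=15):
    """Return a list of 15 stimulus labels for one RSVP trial.

    t2_lag is the number of positions between T1 and T2; None yields
    a T2-absent trial (25% of the trials in the experiment).
    """
    seq = ["distractor"] * n_items
    seq[t1_pos - 1] = "T1"
    if t2_lag is not None:
        seq[t1_pos - 1 + t2_lag] = "T2"
    return seq

trial = build_trial(t1_pos=2, t2_lag=7)
print(trial.index("T1") + 1)  # 2   (serial position of T1)
print(trial.index("T2") + 1)  # 9   (serial position of T2)

# The six T1-T2 intervals implied by the assumed lags:
print([lag * ITEM_MS for lag in T2_LAGS])  # [80, 160, 240, 320, 400, 560]
```

Under this reading, the longest lag (7 items, 560 msec) falls well outside the classical attentional-blink window, providing a recovery baseline.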

In the document: Attention meets emotion: temporal unfolding of attentional processes to emotionally relevant information (Pages 90-93)