
The importance of face recognition

Because of the emotional significance of facial expressions, emotional faces are a special and important kind of stimulus in neuroscience research, for several reasons. A number of studies have underlined the brain's specificity for emotion decoding. Indeed, numerous recent functional imaging, lesion, and single-cell recording studies have used emotional faces to identify the neural substrates of emotional processing. Rapid detection of emotional information is highly adaptive, since it provides critical elements about both the environment and the attitude of other people (Darwin, 1872; Eimer & Holmes, 2007). Studying emotional faces is important because decoding them is a central skill in social interactions between human beings and an important non-verbal form of biological communication. Newborn infants discriminate their mother's face as early as 45 hours after birth (Field, Cohen, Garcia, & Greenberg, 1984). This finding suggests that faces are a special and important kind of stimulus and that face recognition is learned automatically. Face studies are also important in research on psychiatric and behavioral disorders. For instance, ERP studies on autism emphasize atypical face processing in young autistic children, suggesting that autism is associated with a face recognition impairment that is manifest early in life (Dawson et al., 2002).

The complex analysis and detailed elaboration necessary for face detection and recognition, the importance of identifying people, their emotions and intentions, and the particular neural pathways for face processing are at the center of a multitude of studies in neuroscience. This particular field of research is trying to answer some typical questions: are the brain pathways for emotional faces independent from the pathways for other emotional stimuli? Is there one way, or are there several ways, to elaborate this kind of stimulus? Are these stimuli context-dependent? For these reasons, many face studies have been carried out. The general aim of this study is to help answer these questions, while at the same time improving our knowledge and awareness of emotional face perception.

Specificity of face processing

There are three main findings that support the case for specific face processing mechanisms. Firstly, ontogenetic evidence shows that newborn infants prefer schematic faces over stimuli that do not represent faces (Goren, Sarty, & Wu, 1975). Moreover, newborns track faces with their eyes over a longer distance than non-face stimuli (Johnson, Dziurawiec, Ellis, & Morton, 1991) and prefer faces that look at them (Farroni, Csibra, Simion, & Johnson, 2002).

Secondly, the inversion effect is more pronounced for faces than for other kinds of stimuli (Yin, 1969). Thirdly, evidence in favor of the specificity of face analysis is also provided by lesion studies on patients affected by prosopagnosia, a deficit in face recognition (Farah, Levinson, & Klein, 1995). The existence of a disorder affecting face recognition alone is itself evidence for the specificity of face processing in the brain.

Models and theories about the localization of face recognition

In 1986, Bruce and Young proposed a theoretical model for face recognition (Bruce & Young, 1986). This model consists of a hierarchical sequence of four processing stages: 1. structural encoding, which distinguishes a face from other faces; 2. face recognition units, which recognize familiar faces; 3. person-identity nodes, for the identification of the person and access to semantic information; 4. retrieval nodes for personal names. The recognition units accept faces across a wide variety of poses and expressions (Hay & Young, 1982).

Burton and colleagues proposed an implementation of this model with two modifications (Burton, Bruce, & Johnston, 1990): first, the person-identity nodes were redefined as functional nodes enabling access to information rather than as nodes holding the information; second, the model was enriched with the addition of a semantic information unit, which contains particular semantic information about known people.

Other elaborations of the face recognition model considered a cognitive and an affective route for emotional face processing (Ellis & Lewis, 2001; Ellis & Young, 1990; Ellis, Young, Quayle, & De Pauw, 1997). More recently, Haxby (2000) presented a face perception model based upon two different processing pathways: one related to the perception of invariant face features and the other related to the perception of changeable face features (see fig. 1). Hierarchically, the core system is represented by the extrastriate visual cortex (visual analysis of faces); it is mediated by the fusiform gyrus for the representation of invariant aspects, and by a region in the superior temporal sulcus for the changeable aspects. The extended system is composed of regions subserving other cognitive functions, which are recruited by the core system to extract meaning; for example, emotional faces recruit limbic regions associated with emotion processing.

Fig. 1. A model of the distributed human neural system for face perception (Haxby, 2000).

These models were conceived on the basis of studies that analyzed the spatial and temporal characteristics of the brain's response to human faces using different cognitive paradigms (different psychophysical parameters, different emotional faces, different attentional tasks, and so on).

The face selectivity of the fusiform gyrus was reported in 1997 (Kanwisher, McDermott, & Chun, 1997): comparing brain activity evoked by faces versus objects in a functional Magnetic Resonance Imaging (fMRI) study, the authors observed a specific activation for faces in an area that was named the fusiform face area (FFA).

The existence of neurons responding selectively to faces is not in itself special (there are also selective responses to other types of stimuli), but the proportion of such neurons and their spatial organization are quite impressive (Logothetis and Sheinberg, in Barbeau, Joubert & Felician, 2008).

The processing of emotions: the limbic lobe

The search for a cortical representation of emotions has led to the limbic system. In his 1937 paper, Papez claimed for the first time that the limbic lobe was involved in the perception of emotions. The limbic lobe includes a ring of deep-lying cortical structures around the brain stem, comprising the cingulate gyrus, the parahippocampal gyrus and the hippocampal formation, which itself includes the hippocampus proper, the dentate gyrus, and the subiculum (Purves et al., 2004).

In 1952, Paul MacLean extended the concept of the limbic system, including parts of the hypothalamus, the septal area, the nucleus accumbens, neocortical areas such as the orbitofrontal cortex, and, most importantly, the amygdala. More recent studies have shown that there are extensive connections between neocortical areas, the hippocampal formation and the amygdala (Aggleton, 2000). Because of its involvement in many functions (e.g., emotional processing, learning, memory), the amygdala has distinct connections with other brain regions.

Concerning cerebral activation in response to emotional face processing, Morris and colleagues provided strong evidence for amygdala activation in response to fear (Morris et al., 1996).

Moreover, Adolphs showed that the evaluation of the emotional and motivational significance of facial expressions appears to be mediated by the amygdala and orbitofrontal cortex, while structures such as the anterior cingulate, prefrontal cortex and somatosensory areas are linked to the conscious representation of emotional facial expressions and to the strategic control of thought and action (Adolphs, 2003).

ERP correlates of emotional face processing

Electroencephalography (EEG)

Electroencephalography (EEG) is a technique for recording neuronal electrical activity. This can be done either by placing electrodes on the scalp (surface recording) or by recording directly from the human cortex (intracranial EEG recording).

Not all neuronal cells contribute in the same way to the measured electrical signal. Indeed, deep structures such as the amygdala, hippocampus and thalamus do not contribute directly to the electroencephalogram, which mainly reflects the activity of cortical neurons.

Since the electroencephalogram is quite similar from one person to another, different electroencephalograms can be compared in order to detect anomalies (Purves et al., 2004).

This technique also allows us to measure electrical responses to a stimulus. An evoked potential is an electrical potential recorded from the human nervous system following the presentation of a physical stimulus (for instance, a light spot), whereas event-related potentials also reflect higher-order processes (for instance, attention); the latter correspond to the kind of responses studied in this experiment.

As stated above, event-related potentials (ERPs) are the direct result of cognitive and physiological processes measured with the EEG technique.

If we measure this response to a particular stimulus, we obtain an event-related potential (ERP) by averaging many repetitions of the same semantic category (for instance, different faces representing the same emotion). In this way it is possible to obtain a signature of the response to a specific stimulus category, represented by a set of components (see fig. 2). If we compare different categories of the same stimulus type (for example, emotional faces representing different emotions), we can observe differences in the components that reflect different cerebral processing (Luck, 2005).
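
As a rough illustration of this averaging procedure, the following Python sketch cuts one epoch per stimulus, baseline-corrects it, and averages across trials so that activity not time-locked to the stimulus cancels out; all variable names, array shapes and the sampling rate are illustrative assumptions, not this study's actual pipeline.

import numpy as np

# Hedged sketch of ERP extraction by averaging (illustrative assumptions only).
# eeg:    continuous recording, array of shape (n_channels, n_samples)
# onsets: sample indices at which stimuli of one category appeared (e.g. happy faces)
# fs:     sampling rate in Hz
def average_erp(eeg, onsets, fs, pre=0.2, post=0.8):
    """Cut one epoch per stimulus, baseline-correct it, and average across trials."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in onsets:
        epoch = eeg[:, onset - n_pre:onset + n_post]               # channels x time
        baseline = epoch[:, :n_pre].mean(axis=1, keepdims=True)    # mean pre-stimulus voltage
        epochs.append(epoch - baseline)                            # remove the baseline offset
    # Averaging cancels activity that is not time-locked to the stimulus
    return np.mean(epochs, axis=0)

# Example usage: one averaged waveform per emotion, then a condition contrast
# erp_happy = average_erp(eeg, happy_onsets, fs=512)
# erp_angry = average_erp(eeg, angry_onsets, fs=512)
# difference = erp_happy - erp_angry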

ERP components

Researchers have developed a variety of methods for the statistical decomposition of ERPs. The simplest one is principal component analysis (PCA), which consists in extracting the major sources of covariance (the ERP components) across multiple time points or spatial locations (Donchin & Coles, 1988).
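
As a minimal sketch of such a decomposition, a generic temporal PCA (not necessarily the exact procedure of Donchin & Coles, 1988, and with an assumed input layout) treats each averaged ERP waveform as an observation and decomposes the covariance across time points into component waveforms:

import numpy as np

# Hedged sketch of a temporal PCA on averaged ERPs (input layout is an assumption).
# erps: array of shape (n_observations, n_timepoints), e.g. one row per
#       subject x condition x electrode combination.
def temporal_pca(erps, n_components=3):
    centered = erps - erps.mean(axis=0)             # remove the grand-mean waveform
    cov = np.cov(centered, rowvar=False)            # covariance across time points
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigendecomposition (ascending order)
    order = np.argsort(eigvals)[::-1][:n_components]
    loadings = eigvecs[:, order]                    # component time courses
    scores = centered @ loadings                    # expression of each component per ERP
    explained = eigvals[order] / eigvals.sum()      # proportion of variance explained
    return loadings, scores, explained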

Fig. 2. Grand-average ERPs recorded at the right lateral occipital electrode. The two superimposed lines indicate different attentional conditions of the observer; the solid line represents the cerebral activity evoked when the stimulus was relevant, whereas the dotted line represents the cerebral response to the same stimulus when it was irrelevant (Zani & Proverbio, 2003, p. 28).

ERPs are a useful tool for studying the time course and the functional properties of emotional face processing stages, such as their automaticity, specificity and sensitivity to attentional states (Eimer & Holmes, 2007).

Main components analyzed in this study

Though some ERP components are referred to by acronyms, most components are referred to by a letter indicating polarity (negative, N, or positive, P), followed by a number indicating either the latency in milliseconds or the component's ordinal position in the waveform.

N1 and N170

The peak of the N1 component is normally observed in a range of 150-200 msec. after the onset of the stimulus. The N1 is elicited by visual stimuli. The N1 deflection may be detected at most recording sites, including occipital, parietal, central, and frontal electrode sites (Mangun & Hillyard, 1991). When a stimulus is presented centrally, the N1 is bilateral (Wascher, Hoffmann, Sänger, & Grosjean, 2009). Attention is especially relevant to the processing of emotional stimuli, because emotional stimuli are believed to receive preferential attention and perceptual processing. The N1 therefore provides a useful means of examining how emotion captures attentional resources (Zani & Proverbio, 2003). Its amplitude is influenced by selective attention, and it has thus been used to study a variety of attentional processes, suggesting that attention acts as a sensory gain mechanism that enhances perception of attended versus unattended stimuli (Luck, 2005; Rugg, Milner, Lines, & Phalp, 1987). Similarly, the valence of emotional stimuli has been found to influence the amplitude of the N1 (Delplanque, Lavoie, Hot, Silvert, & Sequeira, 2004).

The N170 component reflects the neural processing of faces. When the evoked potentials elicited by faces are compared to those elicited by other visual stimuli, the former show an increased negativity at 130-200 msec. after stimulus onset. This response is maximal over occipito-parietal electrode sites, which is consistent with a source localized in the fusiform and inferior-temporal gyri, and it normally displays a right-hemisphere lateralization (Rossion & Jacques, 2008). Many studies have shown that the N170 is modulated by emotional facial expressions (Blau, Maurer, Tottenham, & McCandliss, 2007).
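
On an averaged waveform, the N170 is often quantified as the most negative value within this latency window at an occipito-parietal electrode; the short sketch below illustrates one simple way to do so (the sampling rate, baseline duration and variable names are assumptions, not this study's actual analysis).

import numpy as np

# Hedged sketch: measuring the N170 as the most negative point between 130 and 200 ms.
# erp: face-evoked average at one occipito-parietal electrode, shape (n_samples,)
# fs:  sampling rate in Hz; the epoch starts `pre` seconds before stimulus onset
def n170_peak(erp, fs, pre=0.2, window=(0.130, 0.200)):
    start = int((pre + window[0]) * fs)
    stop = int((pre + window[1]) * fs)
    idx = start + np.argmin(erp[start:stop])        # most negative sample in the window
    latency_ms = (idx / fs - pre) * 1000.0          # latency relative to stimulus onset
    return erp[idx], latency_ms                     # peak amplitude and latency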

P2 or P200

The P2 component peaks between 150 and 275 msec. The scalp distribution of this component, as measured by electrodes placed across the scalp, is located over the centro-frontal and the parieto-occipital regions.

The P2 reflects some aspects of higher-order perceptual processing and is modulated by attention to visual stimuli. One study suggests that the P2 indexes some form of selective attention which identifies meaningful stimuli through feature suppression (Mehta, Ulbert, & Schroeder, 2000). The P2 may index mechanisms for selective attention, feature detection (including color, orientation, shape, etc.) and other early stages of item encoding.

P3 or P300

The P300 is a positive deflection in voltage with a latency of roughly 300 to 600 msec. The signal is typically measured most strongly at the electrodes covering the parietal lobe. In 1965, Sutton and colleagues found that when subjects were required to guess what the following stimulus would be, the amplitude of this "late positive complex" was larger than when they knew what the stimulus would be. The P300 can also be used to measure how demanding a task is in terms of cognitive workload (Donchin & Coles, 1988), and it is thought to play a role in creating conscious emotional experience.

Topographic map

The scalp distribution of the ERP recorded at the electrodes can be presented as a topographic map. A succession of topographic maps can show the temporal evolution of the activation fields with millisecond precision. The red color represents positivity and the blue color negativity (see fig. 3).

Fig. 3. Topographic map showing posterior positivity and centro-frontal negativity 100 msec. after the presentation of a visual stimulus. The red color represents positivity and the blue color negativity.
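
As a rough illustration, a map like fig. 3 can be obtained by interpolating the voltages measured at the individual electrodes over a head-shaped disc; the following sketch uses simple cubic interpolation, with hypothetical electrode positions and values rather than this study's montage.

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Hedged sketch of a topographic map for one ERP time point (placeholder inputs).
# xy:    2-D electrode positions projected onto a unit disc, shape (n_electrodes, 2)
# volts: ERP amplitude at each electrode for the chosen latency (microvolts)
def plot_topomap(xy, volts):
    grid_x, grid_y = np.mgrid[-1:1:200j, -1:1:200j]             # regular grid over the scalp
    field = griddata(xy, volts, (grid_x, grid_y), method="cubic")
    field[grid_x ** 2 + grid_y ** 2 > 1] = np.nan               # mask points outside the head
    plt.imshow(field.T, origin="lower", extent=(-1, 1, -1, 1),
               cmap="RdBu_r")                                    # red = positive, blue = negative
    plt.scatter(xy[:, 0], xy[:, 1], c="k", s=10)                 # electrode locations
    plt.colorbar(label="amplitude (µV)")
    plt.axis("off")
    plt.show()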

Source localisation

Interpretation of EEG and ERPs almost always involves speculation about the possible locations of the sources inside the brain that are responsible for the activity observed on the scalp. The basic principle is that an active current source inside a finite conductive medium produces volume currents throughout the medium and leads to potential differences on its surface. The process of predicting scalp potentials from current sources inside the brain is generally referred to as the forward problem (see fig. 4). If the configuration and distribution of the sources inside the brain are known at every instant in time, and the conductive properties of the tissues are known everywhere within the volume of the head, the potentials everywhere on the scalp can be calculated from basic physical principles. Conversely, the process of predicting the locations of the sources of ERPs from measurements of the scalp potentials is called the inverse problem (see fig. 4). It turns out that, given a finite number of scalp sites at which the potentials are measured at some instants in time, an infinite number of source configurations can account for those measurements. The principle is therefore to eliminate impossible solutions (e.g., sources cannot be located in the ventricles of the brain), assuming that, among the collection of possible source configurations inside the brain that can account for the measurements on the scalp, only one is correct (Koles, 1998).

Fig. 4. Above: image representing the forward problem. Below: image representing the inverse problem.
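
To make the distinction concrete, the following toy sketch simulates the forward step with an arbitrary random lead-field matrix (a real lead field would come from a head conductivity model) and then constrains the inverse step with a regularized minimum-norm estimate; this is only one of many possible constraints and is not necessarily the approach discussed by Koles (1998).

import numpy as np

# Toy sketch of the forward and inverse problems (illustrative values only).
rng = np.random.default_rng(0)
n_electrodes, n_sources = 32, 500
L = rng.normal(size=(n_electrodes, n_sources))   # lead field: sources -> electrodes
s_true = np.zeros(n_sources)
s_true[100] = 1.0                                # a single active source

# Forward problem (well posed): compute scalp potentials from known sources
v = L @ s_true + 0.01 * rng.normal(size=n_electrodes)

# Inverse problem (ill posed): many source configurations explain the same v,
# so a constraint is needed; here a regularized minimum-norm estimate.
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), v)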

The processing of facial expressions

In this manuscript we are particularly interested in the processing of emotions and, more specifically, in the processing of emotional facial expressions.

Ekman and Friesen reported that six emotions (anger, happiness, fear, surprise, disgust and sadness) are readily recognized across very different cultures; these are what we call the basic emotions (Ekman & Friesen, 1971). In this study we decided to analyze the processing of angry, fearful, happy and neutral faces.

By observing the relative shape or configuration of a person's facial features, we are able to guess which kind of emotion that person is feeling at that moment.

According to the results of Pizzagalli and colleagues (1999), the first perceptual stage, in which the subject computes the "structural code" of the face, is thought to be processed separately from complex facial information such as emotional meaning. Many studies show the special status of emotional face stimuli.

At a behavioral level, positive emotional faces appear to be easier to recognize than negative ones. Two studies show an advantage in the detection of happy faces compared to neutral and angry ones. In Leppänen and Hietanen's article (Leppänen & Hietanen, 2004), happy facial expressions were recognized faster than negative (disgusted, sad) ones. Further evidence suggesting that positive facial expressions are more easily processed is given by Hess, Blairy, and Kleck (1997): their results showed greater accuracy in decoding positive facial expressions (happy faces) than negative ones (angry, disgusted and sad faces).

Emotional stimuli also lead to specific brain processes and activations. An fMRI study (Gorno-Tempini et al., 2001) and a PET study (Damasio et al., 2000) showed the involvement of cortical regions (prefrontal cortex and the occipito-temporal junction) and subcortical structures (amygdala, basal ganglia and insula) in processing emotional face stimuli. Adolphs, Tranel, Damasio and Damasio (1994) studied patients with bilateral amygdala lesions as evidence of the amygdala's implication in emotional face processing. The lesion does not affect the conscious ability to discriminate complex visual stimuli such as faces, which suggests that specific brain regions are involved in the recognition of emotional facial expressions. Gur and colleagues (2002) conducted an important study comparing a relevant judgment on facial expressions (identification of emotional valence) with an irrelevant affective judgment (age determination). This study reports greater activation of the amygdala, thalamus, and inferior frontal cortex when the facial expressions were relevant.

This result goes in the direction of Haxby's model (2000), which assumes the existence of two separate neural systems: one for the visual analysis of faces and the other for their emotional processing.

The time course of emotional facial expression processing is also distinctive. Streit's electroencephalographic studies (Streit, Wölwer, Brinkmeyer, Ihl, & Gaebel, 2000) have supported the hypothesis that facial-expression recognition starts very early in the brain, at approximately 180 ms after stimulus onset, slightly later than the face-selective activity reported between 120 and 170 ms, the N170 (Bentin, Allison, Puce, Perez, & McCarthy, 1996). Miyoshi, Katayama and Morotomi (2004) reported a modulation of the N170 by changes in emotional expression. Pourtois and colleagues showed an early negative deflection that primarily reflected category-selective perceptual encoding of facial information, whereas higher-order effects of face individuation, emotional expression, and gaze direction produced selective modulations in the same face-specific region during a later time period (from 200 to 1000 ms after onset). These results shed new light on the time course of face recognition mechanisms in human visual cortex and revealed, for the first time, anatomically overlapping but temporally distinct influences of identity and emotional/social factors on face processing (Pourtois, Spinelli, Seeck, & Vuilleumier, 2010).

Another issue is the analysis of the specific emotional content of face stimuli. Smith and Lazarus (1990) suppose that subjects might be more emotionally involved by an angry expression (considered a high-arousal emotion) than by positive or neutral ones. In the same direction, an EEG study by Marinkovic and Halgren (1998) demonstrated that emotional face stimuli activate the extrastriate cortex more than neutral face stimuli. In general, viewing a negative emotion should produce a more intense emotional reaction than viewing a positive one. This suggests separate brain processing for different emotions.

People with Huntington's disease and people suffering from obsessive-compulsive disorder show severe deficits in recognizing facial expressions of disgust; people with restricted lesions of the amygdala are especially impaired in recognizing facial expressions of fear (Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998). Moreover, in Adolphs' study (Adolphs et al., 1994), most subjects with amygdala damage were impaired on several negative emotions, but no subject was impaired in the recognition of happy expressions. This dissociation implies that the recognition of certain basic emotions may be associated with distinct and non-overlapping neural substrates.

Nevertheless, other studies on impairments of facial-expression recognition do not show a category-specific deficit for the recognition of emotional expressions. For example, Sato and colleagues (Sato, Kochiyama, Yoshikawa, & Matsumura, 2001) demonstrated that emotional faces (both positive and negative) elicit a larger N2 than neutral faces over the posterior temporal areas, but they did not find a significant difference between negative and positive emotions. In addition, Herrmann et al. (2002) compared expressions with different emotional valence (sad, happy and neutral) and failed to find emotion-specific ERP correlates for the three emotions. On the basis of this background, it seems that structural information is processed separately from emotional information, but no specific ERP profile characterizes single emotional expressions.
