Emotional face processing: An ERP study on visual selective attention of emotional faces presented subliminally and supraliminally

FRANCHINI, Martina


FRANCHINI, Martina. Emotional face processing: An ERP study on visual selective attention of emotional faces presented subliminally and supraliminally. Master's thesis, University of Geneva, 2011.

Available at:

http://archive-ouverte.unige.ch/unige:17839

Disclaimer: layout of this document may differ from the published version.


Emotional face processing: An ERP study on visual selective attention of emotional faces presented subliminally and supraliminally

MASTER THESIS FOR THE ATTAINMENT OF AN INTERDISCIPLINARY MASTER'S DEGREE IN NEUROSCIENCE

BY

Martina Franchini

UNDER THE DIRECTION OF:

Prof. Alan Pegna

and Dr. Marzia Del Zotto

Laboratory of Experimental Neuropsychology, University Hospital of Geneva

JURY:

Prof. Alan Pegna
Dr. Marzia Del Zotto
Prof. Guido Gendolla

UNIVERSITY OF GENEVA
GENEVA, SEPTEMBER 2011


Table of contents

Abstract
Chapter 1: Introduction
    Weight of face recognition
    Specificity of face treatment
    Models and theories about localization of face recognition
    The treatment of emotions, the limbic lobe
    ERP correlates of emotional face processing
        Electroencephalography (EEG)
        ERP components
        Topographic map
        Source localization
    The treatment of facial expressions
        Emotional expression processing and attention
    Selective attention
        Attention and visual awareness
    Visual awareness
    Objectives and hypothesis of the experiment
Chapter 2: Experiments and methods
    Participants
    Material
    Procedure
        Pre-experimental study
        EEG experiment
        Behavioral experiment
        EEG recording (technique)
        Subliminal/supraliminal presentation, the backward masking procedure
        The ROIs
        Analysis
Chapter 3: Results
    Short forms
    Behavioral results
        Accuracy
        Reaction time (RT)
    EEG results
        N170 component (135-190 msec.)
        P2 component (200-270 msec.)
        P300 component (440-540 msec.)
    Source localization
Chapter 4: Discussion
    Behavioral data
    EEG results
    Source localization
    General discussion
Conclusions
Acknowledgements
References


"For a long time I had a useless face, but now I have a face to be loved, I have a face to be happy." (Paul Éluard)


Abstract

Faces are important stimuli in neuroscience research. Indeed, many studies have provided significant evidence that emotional faces receive special treatment in brain processing. Moreover, these findings have favored the development of several models highlighting the brain areas involved in face perception, one example being the model of Haxby (2000).

The aim of this study is to assess, with an ERP experiment, the effect of selective attention (target and non-target conditions) on emotional faces presented subliminally and supraliminally.

Three components (N170, P2 and P300) were analyzed for four emotions (happy, neutral, fearful and angry). The results showed strong evidence of presentation and selective attention effects on all three components, suggesting that cognitive resources strongly influence the brain processing of emotional faces. As suggested by studies of spatial attention to emotional faces (e.g. Carlson et al., 2009), emotional faces are expected to enhance selective attention effects. Moreover, an effect of different emotional expressions on early ERP components suggests rapid brain processing of emotional faces.

A source localization analysis confirmed the findings of previous studies, namely activation of the amygdala during the processing of emotional expressions (in line with the Haxby model, 2000), which was most consistent for fearful faces (as supported by LeDoux, 2000).

Moreover, a behavioral study provided evidence for a preferential treatment of positive faces compared to negative ones in terms of accuracy and reaction times; a predominance of positive emotions in everyday contexts may partly explain this effect (Leppänen and Hietanen, 2004).

In general, our study confirmed that emotional faces are a special kind of stimulus, influencing our cognitive resources under both conscious and unconscious presentation. Moreover, evidence of a large and specific neural substrate for the treatment of emotional expressions was found, with a particular role of amygdala activation for fearful faces (as supported by LeDoux, 2000).


CHAPTER 1: Introduction

Weight of face recognition

Because of the emotional significance of facial expressions, emotional faces are special and important stimuli in neuroscience research for many reasons. A number of studies have underlined the brain's specificity for emotion decoding. Indeed, numerous recent functional imaging, lesion, and single-cell recording studies have used emotional faces to identify the neural substrates of emotional processing. Rapid detection of emotional information is highly adaptive, since it provides critical information about both the environment and the attitudes of other people (Darwin, 1872; Eimer & Holmes, 2007). Studying emotional faces is important because reading them is a central skill in social interaction between human beings and an important non-verbal form of biological communication. Newborn infants discriminate their mother's face as early as 45 hours after birth (Field, Cohen, Garcia, & Greenberg, 1984). This finding suggests that faces are special and important stimuli and that their recognition is learned automatically. Face studies also matter for research on psychiatric and behavioral disorders. For instance, ERP studies on autism emphasize atypical face processing in young autistic children, suggesting that autism is associated with a face recognition impairment that is manifest early in life (Dawson et al., 2002).

The complex analysis and detailed elaboration necessary for face detection and recognition, the importance of identifying people, their emotions and their intentions, and the particular neural pathways for face treatment are at the center of a multitude of studies in neuroscience. This particular field of research is trying to answer some typical questions: Are the brain pathways for emotional faces independent from those for other emotional stimuli? Are there one or several ways of elaborating this kind of stimulus? Are these stimuli context-dependent? For these reasons, many face studies have been carried out. The general aim of this study is to help answer these questions, whilst at the same time improving our knowledge and awareness of emotional face perception.

Specificity of face treatment

There are three main findings that support the case for specific face processing mechanisms. Firstly, ontogenetic evidence shows that newborn children prefer schematic faces to stimuli that do not represent faces (Goren, Sarty, & Wu, 1975). Moreover, newborn children show longer visual pursuit of faces than of non-face stimuli (Johnson, Dziurawiec, Ellis, & Morton, 1991) and a preference for faces that look at them (Farroni, Csibra, Simion, & Johnson, 2002).

Secondly, the inversion effect is more pronounced with faces than with other kinds of stimuli (Yin, 1969). Thirdly, evidence in favor of the specificity of face analysis is also supported by lesion studies on patients affected by prosopagnosia, a deficit in face recognition (Farah, Levinson, & Klein, 1995). Besides, the existence of a disorder affecting only face recognition is itself evidence of the specificity of face treatment in the brain.

Models and theories about localization of face recognition

In 1986, Bruce and Young proposed a theoretical model of face recognition (Bruce & Young, 1986). This model is composed of a hierarchical sequence of four processing stages: 1. structural encoding to distinguish one face from others; 2. a face recognition unit to recognize familiar faces; 3. a person-identity node for the identification of the person and access to semantic information; 4. a retrieval node for personal names. The recognition unit accepts faces across a wide variety of postures and expressions (Hay & Young, 1982).

Burton and colleagues proposed an implementation with two modifications (Burton, Bruce, & Johnston, 1990): first, the person-identity node was redefined as a functional node enabling access to information rather than a node holding the information; second, the model was enriched with a semantic information unit containing particular semantic information about known people.

Other elaborations of the face recognition model considered a cognitive and an affective route for emotional face processing (Ellis & Lewis, 2001; Ellis & Young, 1990; Ellis, Young, Quayle, & De Pauw, 1997). More recently, Haxby (2000) presented a face perception model based upon two different processing pathways: one related to the perception of invariant face features and the other related to the perception of changeable face features (see fig. 1). Hierarchically, the core system is represented by the extrastriate visual cortex (visual analysis of faces) and is mediated by the fusiform gyrus for the representation of invariant aspects, and by a region in the superior temporal sulcus for the changeable aspects. The extended system is composed of regions subserving other cognitive functions, which are recruited by the core system to extract meaning; emotional faces, for example, recruit limbic regions associated with emotion processing.

Fig. 1. A model of the distributed human neural system for face perception (Haxby, 2000).

These models were conceived as a consequence of studies analyzing the spatial and temporal resolution of brain responses to human faces using different cognitive paradigms (different psychophysical parameters, different emotional faces, different attentional tasks and so on).

The fusiform face area (FFA) was identified in 1997 (Kanwisher, McDermott, & Chun, 1997) in a functional Magnetic Resonance Imaging (fMRI) study comparing brain activity for faces versus objects, which revealed a specific activation of this region of the fusiform gyrus in response to faces.

The existence of neurons responding selectively to faces is not in itself special (there are also selective responses to other types of stimuli), but the proportion of such neurons and their spatial organization are quite impressive (Logothetis and Sheinberg, in Barbeau, Joubert & Felician, 2008).

The treatment of emotions, the limbic lobe

The search for a cortical representation of emotions has led to the limbic system. In his 1937 paper, Papez claimed for the first time that the limbic lobe was involved in the perception of emotions. The limbic lobe includes a ring of deep-lying cortical structures around the brain stem: the cingulate gyrus, the parahippocampal gyrus and the hippocampal formation, comprising the hippocampus proper, the dentate gyrus, and the subiculum (Purves et al., 2004).

Paul MacLean later extended the concept of the limbic system to include parts of the hypothalamus, the septal area, the nucleus accumbens, neocortical areas such as the orbitofrontal cortex and, most importantly, the amygdala. More recent studies have shown that there are extensive connections between neocortical areas, the hippocampal formation and the amygdala (Aggleton, 2000). Because of its involvement in many functions (e.g. emotional treatment, learning, memory), the amygdala has distinct connections with other brain regions.

Concerning cerebral activation in response to emotional face processing, Morris and colleagues provided strong evidence for amygdala activation in response to fear (Morris et al., 1996).

Moreover, Adolphs showed that the evaluation of the emotional and motivational significance of facial expressions appears to be mediated by the amygdala and orbitofrontal cortex, while structures such as the anterior cingulate, prefrontal cortex and somatosensory areas are linked to the conscious representation of emotional facial expressions, for the strategic control of thought and action (Adolphs, 2003).

ERP correlates of emotional face processing

Electroencephalography (EEG)

Electroencephalography (EEG) is a technique for recording neuronal electrical activity. This can be done either by placing electrodes on the scalp (surface recording) or directly on the human cortex (intracranial EEG recording).

Not all neuronal cells contribute in the same way to the measured electrical activity. Indeed, deep structures such as the amygdala, hippocampus and thalamus do not contribute directly to the surface electroencephalogram, which mainly reflects the activity of cortical neurons.

Since the electroencephalogram is quite similar from one person to another, different electroencephalograms can be compared in order to detect anomalies (Purves et al., 2004).

This technique also allows us to measure electrical responses to a stimulus. An evoked potential is an electrical potential recorded from the human nervous system following the presentation of a physical stimulus (for instance, a light spot), while event-related potentials are elicited by higher processes (for instance, attention), which is the kind of response examined in this experiment.

As noted above, event-related potentials (ERPs) are the direct result of cognitive and physiological processes measured with the EEG technique.

An ERP is obtained by averaging the responses over many repetitions of the same semantic category (for instance, different faces representing the same emotion). In this way it is possible to obtain a signature of the response to a specific stimulus category, represented by a series of components (see fig. 2). If we compare different categories of the same stimulus type (for example, emotional faces representing different emotions), we can observe differences in the components resulting from different cerebral processing (Luck, 2005).
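The epoching-and-averaging step just described can be sketched in a few lines of Python. This is a minimal illustration with NumPy under assumed data shapes (channels × samples) and an invented function name, not the pipeline actually used in this study:

```python
import numpy as np

def average_erp(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked epochs to obtain an ERP.

    eeg    : (n_channels, n_samples) continuous recording
    onsets : sample indices of stimulus onsets for one category
             (e.g. all faces showing the same emotion)
    Returns a baseline-corrected (n_channels, n_times) average.
    """
    pre = int(round(-tmin * sfreq))    # samples kept before onset
    post = int(round(tmax * sfreq))    # samples kept after onset
    epochs = np.stack([eeg[:, s - pre:s + post] for s in onsets
                       if s - pre >= 0 and s + post <= eeg.shape[1]])
    erp = epochs.mean(axis=0)          # uncorrelated noise shrinks ~1/sqrt(n_trials)
    baseline = erp[:, :pre].mean(axis=1, keepdims=True)
    return erp - baseline              # subtract the pre-stimulus baseline
```

Averaging, say, 100 trials of the same semantic category attenuates uncorrelated background EEG roughly tenfold, which is what makes components like those in fig. 2 visible above the noise.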


ERP components

Researchers have developed a variety of methods for the statistical decomposition of ERPs. The simplest is principal components analysis (PCA), which consists of extracting the major sources of covariance (the ERP components) across multiple time points or spatial points (Donchin & Coles, 1988).
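As an illustration of this decomposition, PCA over a set of ERP waveforms can be written with a plain SVD. This is a sketch with NumPy and invented variable names; real analyses typically rely on dedicated toolboxes:

```python
import numpy as np

def erp_pca(erps, n_components=2):
    """PCA of ERP waveforms.

    erps : (n_observations, n_times) matrix, e.g. one averaged waveform
           per subject/condition at a given electrode.
    Returns (components, scores, explained_variance_ratio):
      components : (n_components, n_times) temporal waveforms carrying
                   the major sources of covariance
      scores     : (n_observations, n_components) loadings per observation
    """
    X = erps - erps.mean(axis=0)                   # center each time point
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var_ratio = S ** 2 / np.sum(S ** 2)            # share of covariance explained
    scores = U[:, :n_components] * S[:n_components]
    return Vt[:n_components], scores, var_ratio[:n_components]
```

If the data are truly generated by a few underlying component waveforms, the first few principal components capture nearly all of the variance, which is the sense in which PCA "extracts" ERP components.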

Fig. 2. Grand-average ERPs recorded by the right lateral occipital electrode. The two superimposed lines indicate different attentional conditions of the observer: the solid line represents the cerebral activity evoked when the stimulus was relevant, whereas the dotted line represents the cerebral response to the same stimulus when it was irrelevant (Zani & Proverbio, 2003, p. 28).

ERPs represent a useful tool to study the time course and the functional properties of emotional face processing stages, such as their automatism, specificity and sensitivity to attentional states (Eimer & Holmes, 2007).

Main components analyzed in this study

Though some ERP components are referred to by acronyms, most components are referred to by a letter indicating polarity (negative, N, or positive, P), followed by a number indicating either the latency in milliseconds or the component's ordinal position in the waveform.


N1 and N170

The peak of the N1 component is normally observed in a range of 150-200 msec. after the onset of the stimulus. The N1 is elicited by visual stimuli. The N1 deflection may be detected at most recording sites, including occipital, parietal, central, and frontal electrode sites (Mangun & Hillyard, 1991). When a stimulus is presented centrally, the N1 is bilateral (Wascher, Hoffmann, Sänger, & Grosjean, 2009). Attention is especially relevant to the processing of emotional stimuli, because emotional stimuli are believed to receive preferential attentional and perceptual processing. The N1 therefore provides a useful means of examining how emotion captures attentional resources (Zani & Proverbio, 2003). Its amplitude is likewise influenced by selective attention, and it has thus been used to study a variety of attentional processes, suggesting that attention acts as a sensory gain mechanism that enhances the perception of attended versus unattended stimuli (Luck, 2005; Rugg, Milner, Lines, & Phalp, 1987). Similarly, the valence of emotional stimuli has been found to influence the amplitude of the N1 (Delplanque, Lavoie, Hot, Silvert, & Sequeira, 2004).

The N170 component reflects the neural processing of faces. When the evoked potentials of faces are compared to those elicited by other visual stimuli, the former show increased negativity at 130-200 msec. after stimulus onset. This response is maximal over occipito-parietal electrode sites, which is consistent with a source localized in the fusiform and inferior-temporal gyri, and it normally displays a right-hemisphere lateralization (Rossion & Jacques, 2008). Many studies have shown that the N170 is modulated by emotional facial expressions (Blau, Maurer, Tottenham, & McCandliss, 2007).

P2 or P200

The P2 component peaks between 150 and 275 msec. Its scalp distribution, as measured by electrodes placed across the scalp, covers the centro-frontal and parieto-occipital regions.

The P2 reflects aspects of higher-order perceptual processing of visual stimuli, modulated by attention. One study suggests that the P2 indexes a form of selective attention that identifies meaningful stimuli through feature suppression (Mehta, Ulbert, & Schroeder, 2000). The P2 may thus index mechanisms for selective attention, feature detection (including color, orientation, shape, etc.) and other early stages of item encoding.

P3 or P300

The P300 is a positive deflection in voltage with a latency of roughly 300 to 600 msec. The signal is typically measured most strongly by electrodes covering the parietal lobe. In 1965, Sutton and colleagues found that when subjects were required to guess what the following stimulus would be, the amplitude of this "late positive complex" was larger than when they knew what the stimulus would be. The P300 can also be used to measure how demanding a task is in terms of cognitive workload (Donchin & Coles, 1988), and it has been implicated in creating conscious emotional experience.

Topographic map

The scalp distribution of the ERP recorded at the electrodes can be presented as a topographic map. A succession of topographic maps can show the temporal evolution of the activation fields with millisecond precision.

Red represents positivity and blue negativity (see fig. 3).

Fig. 3. Topographic map representing posterior positivity and centro-frontal negativity at 100 msec. after the presentation of a visual stimulus. Red represents positivity and blue negativity.


Source localisation

Interpreting EEG and ERPs almost always involves speculation about the possible locations of the sources inside the brain that are responsible for the activity observed on the scalp. The basic principle is that an active current source inside a finite conductive medium produces volume currents throughout the medium and leads to potential differences on its surface. The process of predicting scalp potentials from current sources inside the brain is generally referred to as the forward problem (see fig. 4). If the configuration and distribution of the sources inside the brain are known at every instant in time, and the conductive properties of the tissues are known everywhere within the volume of the head, the potentials everywhere on the scalp can be calculated from basic physical principles. Conversely, the process of predicting the locations of the sources of ERPs from measurements of the scalp potentials is called the inverse problem (see fig. 4). It turns out that, given a finite number of scalp sites at which the potentials are measured at some instants in time, an infinite number of source configurations can account for those measurements. The principle is to eliminate the impossible solutions (e.g. sources cannot be located in the ventricles of the brain), on the assumption that among the possible source configurations that can account for the measurements on the scalp, only one is correct (Koles, 1998).
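The two problems can be made concrete with a toy linear model. Here `leadfield` stands for an assumed matrix mapping source strengths to electrode potentials (real source localization uses realistic head models), and the minimum-norm solution shown is one classical way of picking a single answer out of the infinitely many that fit the data:

```python
import numpy as np

def forward_problem(leadfield, sources):
    """Forward problem: scalp potentials predicted from known sources."""
    return leadfield @ sources

def minimum_norm_inverse(leadfield, scalp, lam=1e-6):
    """Inverse problem: with fewer electrodes than sources the system is
    underdetermined, so we return the smallest-norm source configuration
    that reproduces the scalp measurements (Tikhonov-regularized)."""
    n_elec = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_elec)
    return leadfield.T @ np.linalg.solve(gram, scalp)
```

Any estimate returned this way reproduces the measured potentials almost exactly while generally differing from the true sources, which illustrates why physiological constraints (such as excluding the ventricles) are needed to single out the correct configuration.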

Fig. 4. Above: image representing the forward problem. Below: image representing the inverse problem.


The treatment of facial expressions

In this manuscript we are particularly interested in the treatment of emotions and, more specifically, in the treatment of facial expressions.

Ekman and Friesen reported that six emotions (anger, happiness, fear, surprise, disgust and sadness) are readily recognized across very different cultures; these are what we call the basic emotions (Ekman & Friesen, 1971). In this study we decided to analyze the treatment of angry, fearful, happy and neutral faces.

By observing the shape or posture of a person's facial features, we are able to guess which emotion that person is feeling at that moment.

According to the results of Pizzagalli and colleagues (1999), the first perceptual stage, in which the subject completes the "structural code" of the face, is thought to be processed separately from complex facial information such as emotional meaning. Many studies show the particularity of emotional face stimuli.

At a behavioral level, positive emotional faces appear to be easier to recognize than negative ones. Two studies show an advantage in the detection of happy faces compared to neutral and angry ones. In Leppänen and Hietanen's article (Leppänen & Hietanen, 2004), happy facial expressions were recognized faster than negative (disgusted, sad) ones. Further evidence that positive facial expressions are more easily processed is given by Hess, Blairy, and Kleck (1997), whose results showed greater accuracy in decoding positive facial expressions (happy faces) than negative ones (angry, disgusted and sad faces).

Emotional stimuli also lead to special brain processes and activations. An fMRI study (Gorno-Tempini et al., 2001) and a PET study (Damasio et al., 2000) showed the involvement of prefrontal cortex, occipito-temporal junctions and subcortical structures (amygdala, basal ganglia and insula) in processing emotional face stimuli. Indeed, Adolphs, Tranel, Damasio and Damasio (1994) used patients with bilateral amygdala lesions as evidence of the amygdala's implication in emotional face processing. The lesion does not affect the conscious ability to discriminate complex visual stimuli such as faces, which suggests that specific brain regions are involved in the recognition of emotional facial expressions. Gur and colleagues (2002) contributed an important study comparing a relevant judgment of facial expressions (identification of emotional valence) with an irrelevant affective judgment (age determination). This study reports greater amygdala, thalamic, and inferior frontal activation when facial expressions were relevant.

This result goes in the direction of Haxby's model (2000), which assumes the existence of two separate neural systems: one for the visual analysis of faces and the other for their emotional processing.

The time course of emotional facial expression processing is also distinctive. Streit's electroencephalographic studies (Streit, Wölwer, Brinkmeyer, Ihl, & Gaebel, 2000) support the hypothesis that the process of facial expression recognition starts very early in the brain, at approximately 180 ms after stimulus onset, slightly later than the face-selective activity reported between 120 and 170 ms, the N170 (Bentin, Allison, Puce, Perez, & McCarthy, 1996). Miyoshi, Katayama and Morotomi (2004) reported a modulation of the N170 by changes in emotional expression. Pourtois and colleagues showed an early negative deflection that primarily reflected category-selective perceptual encoding of facial information, whereas higher-order effects of face individuation, emotional expression, and gaze direction produced selective modulations in the same face-specific region during a later time period (from 200 to 1000 ms after onset). These results shed new light on the time course of face recognition mechanisms in the human visual cortex and revealed, for the first time, anatomically overlapping but temporally distinct influences of identity and emotional/social factors on face processing (Pourtois, Spinelli, Seeck, & Vuilleumier, 2010).

Another issue is the analysis of the specific emotional content of face stimuli. Smith and Lazarus (1990) suppose that subjects might be more emotionally involved by an angry expression (considered a high-arousal emotion) than by positive or neutral ones. Along the same lines, an EEG study by Marinkovic and Halgren (1998) demonstrated that emotional face stimuli activate the extrastriate cortex more than neutral face stimuli. In general, viewing a negative emotion should produce a more intense emotional reaction than viewing a positive one. This suggests separate brain processing for different emotions.


People with Huntington's disease and people suffering from obsessive-compulsive disorder show severe deficits in recognizing facial expressions of disgust, while people with restricted lesions of the amygdala are especially impaired in recognizing facial expressions of fear (Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998). Moreover, in the Adolphs study (Adolphs et al., 1994), most subjects with amygdala damage were impaired on several negative emotions, but no subject was impaired in the recognition of happy expressions. This dissociation implies that the recognition of certain basic emotions may be associated with distinct, non-overlapping neural substrates.

Nevertheless, other studies of facial expression recognition impairments do not show a category-specific deficit for the recognition of emotional expressions. For example, Sato and colleagues (Sato, Kochiyama, Yoshikawa, & Matsumura, 2001) demonstrated that faces with emotions (both positive and negative) elicit a larger N2 than neutral faces over the posterior temporal areas, but they found no significant difference between negative and positive emotions. In addition, Herrmann et al. (2002) compared expressions with different emotional valence (sad, happy and neutral) and failed to find emotion-specific ERP correlates for the three emotions. Against this background, it seems that structural information is processed separately from emotional information, but no specific ERP profile characterizes single emotional expressions.

This controversy about the effect of the emotional valence of stimuli on ERP correlates has led to a recent increase in studies focusing on other types of emotional face paradigms, as well as on lateralization effects, correct versus incorrect responses, repetition effects and attention.

Loughead, Gur and Elliott (2008) performed a study comparing the identification of different facial emotions, also distinguishing correct from incorrect responses. Overall, correct detection of angry and fearful faces was associated with greater activation than incorrect responses, especially in the amygdala and fusiform gyrus. The opposite was observed for happy and sad faces, where greater thalamic and middle-frontal activation prefigured incorrect responses. These results indicate the importance of controlling for the "correct versus incorrect response" variable.

Altogether, these results demonstrate that the neural system for emotional face perception in human beings is widely distributed and implicates an interactive network whose activity is distributed in time and space. The functional implications of these interactions are complex and remain to be clarified by improving experimental techniques and paradigms.

Emotional expression processing and attention

The influence between attention and emotion is mutual (Vuilleumier, 2005). Vuilleumier explains that emotional processes not only serve to record the value of sensory events, but also to elicit adaptive responses and modify perception. Recent research has begun to reveal the neural substrates by which sensory processing and attention can be modulated by the affective significance of stimuli. The amygdala plays a crucial role in providing attentional signals to sensory pathways, which can influence the representation of emotional events.

Eimer's studies (Eimer & Holmes, 2002, 2007) have shown that although selective ERP responses to emotional faces are triggered at very short latencies, they are modulated by attention (e.g. emotional faces facilitate spatial attention).

As noted before, different emotional faces increase positivity compared to neutral faces. The onset of this emotional expression effect was remarkably early, between 120 and 180 ms post-stimulus, across different experiments in which faces were presented both foveally and laterally, with or without non-face distracters (spatial attention paradigm). Similar emotional expression effects were found for all six basic emotions, suggesting that these effects are not primarily generated within neural structures specialized for the automatic detection of specific emotions.

When foveal faces were unattended, expression effects were attenuated, but not completely eliminated.

It is suggested that there are ERP correlates of emotional face processing that are independent of attentional orientation but are nonetheless strongly affected by attention.


Selective Attention

Most ERP studies of emotional face processing and attention have explored the effect of spatial attention, which has led to important conclusions.

Many researchers (Carlson, Reinke, & Habib, 2009; Pourtois, Grandjean, Sander, & Vuilleumier, 2004) found evidence that threatening emotional faces (angry, fearful) facilitate spatial attention. In particular, Carlson et al. (2009) suggest that fearful faces facilitate spatial attention through a neural network consisting of the amygdala, anterior cingulate and visual cortex.

A higher load on cognitive resources is therefore supposed to affect the brain processing of emotional face stimuli. In this study we want to analyze the effect of selective attention, which brings us to the following question: what happens in the brain when we try to pay attention to a specific object, looking at a cross without moving the eyes and ignoring all other objects?

Selective attention is a key cognitive mechanism that enables the observer to process relevant stimuli while ignoring irrelevant, distracting events (e.g. Pashler & Johnston, 1998). Some evidence has been reported for attention effects on very early sensory processing during stimulus selection based on a non-spatial feature: spatial frequency (Kenemans, Kok, & Smulders, 1993; Zani & Proverbio, 1995).

According to Posner's theories (Fernandez-Duque & Posner, 2001; Posner & Petersen, 1990), there are three different attention systems with different functions:

• The anterior attentional system (SAA);

• The posterior attentional system (SAP), with the dorsal (“where”) and the ventral (“what”) streams;

• The alerting system.

According to Hillyard and Anllo-Vento (1998), spatial attention is understood as a mechanism of gain control over information flow in extrastriate visual cortical pathways (V2, V3), starting about 80 msec. after stimulus onset (P1 component, 80-130 msec.). In addition, an ERP study presenting fearful faces in a spatial attention paradigm found enhanced target-evoked P1 amplitudes (Pourtois et al., 2004).


The N170 recorded in the primary visual area is also enhanced by attention (Fu, Greenwood, & Parasuraman, 2005). In the same direction, Schoenfeld et al. (2007) have shown that the striate cortex is activated by non-spatial selective attention 150 msec. after stimulus onset. Two years later, Mohamed, Neumann and Schweinberger (2009) concluded that the early stages of face processing indexed by the N170 strongly depend on selective attention.

The data provided by Proverbio, Del Zotto and Zani (2010) support the idea that object-based selective attention processes might also be carried out at the earliest processing stage within the striate visual cortex, similarly to what was most recently found for spatial attention (40-60 msec., at the level of the C1 component).

These results suggest that due to their social relevance, human faces may cause paradoxical selective and spatial attention effects on early visual ERP components.

Attention and visual awareness

The relationship between attention (in particular selective attention) and awareness is complex because there are many ways to understand the two concepts.

In this paper we take selective attention to be a key cognitive mechanism that enables the observer to process relevant stimuli while ignoring irrelevant, distracting events. Awareness is considered a useful measure to analyze conscious and unconscious perception of faces with the masking paradigm procedure. With low intensity and brief exposure, a target stimulus can be made unrecognizable when another stimulus is presented simultaneously, shortly before (forward masking) or shortly after (backward masking paradigm; Rolls & Heywood, 2004). This paradigm is used to investigate below-awareness responses to emotional perception, in which facial expressions are immediately followed by a masking face. Evidence for the unconscious perception of masked faces has been revealed in terms of subjective reports, autonomic activity, and functional brain imaging measures. According to Treisman's (1985) definition, the backward masking paradigm leads to a preattentive processing of visual information, which is performed automatically on the entire visual field, detecting basic features (colors, closure, line ends, etc.) of objects in the display. These simple features are extracted from the visual display in the preattentive system and later joined into coherent objects in the focused attention system. Preattentive processing is done quickly, effortlessly and in parallel, without any attention being focused on the display. This definition shows the link between the concepts of attention and awareness.

Even though the two concepts are close, Koivisto and Revonsuo (2007) showed that the earliest electrophysiological correlate of consciousness emerged independently of the manipulations of spatial and non-spatial attention. Thus, the electroencephalographic brain responses reflecting visual consciousness and attention are initially independent of each other. However, Koivisto and colleagues (Koivisto, Kainulainen, & Revonsuo, 2009) added that even though the correlate of phenomenal consciousness emerged independently of non-spatial selection, it is nevertheless strongly influenced by attention.

Visual awareness

Fear is an especially good emotion to use as a model, in particular for the unconscious processing of emotions. Ledoux (2000) argued that the core of the emotional system is a brain mechanism that computes the affective significance of a stimulus. This brain mechanism is part of what gives rise to the conscious experience of emotion, and it necessarily operates outside of conscious awareness. Ledoux suggested a subliminal model of fear perception, describing an unaware danger response in which attention is oriented toward the danger before a cortical representation is elaborated.

Bunce, Bernat, Wong and Shevrin (1999) demonstrated the existence of an expressive motoric response related to affect, operating in reaction to learned but unconscious events. They investigated the predictive validity of facial electromyograms (EMGs) in a subliminal conditioning paradigm. These results demonstrated the existence of an expressive motoric response related to affect, operating in response to a learned but unconscious event. Unconscious processing of facial expression can also be demonstrated in a clinical context, such as in the case of prosopagnosia. In most cases prosopagnosics appear to recognize familiar faces even though they fail to identify the persons verbally. Thus, these patients show an unconscious form of recognition that cannot be accessed consciously.

Moreover, Pegna, Landis and Khateb (2008) provided electrophysiological evidence for early non-conscious processing of fearful facial expressions, finding a stronger posterior negativity in the N170. These results about unconscious perception have led to the reflection that the effect induced by a perceived but not consciously elaborated emotional stimulus is critical for a great amount of neuropsychological research, on both normal and pathological subjects (Tranel & Damasio, 1985). More generally, facial expressions of emotions are considered unique in their ability to orient subjective cognitive resources, even when people are unable to process the information consciously. Secondly, the hypothesis was made that subjects are able to assign a semantic value to the emotional content of faces even in an unaware condition (Dimberg, Thunberg, & Elmehed, 2000; Wong & Root, 2003).

The detection of emotionally significant information has mostly been studied with fMRI measures. These methods need to be complemented with measures that provide insight into the temporal parameters of unconscious emotional comprehension, such as ERPs. ERPs are well suited to examine the time course of conscious versus unconscious stimulus elaboration at a very high temporal resolution (e.g. Shevrin, 2001). Furthermore, by comparing wave profiles, they furnish a valid measure of the qualitative nature of the emotional mechanisms, checking the resemblance of the underlying process for attentive and non-attentive emotional elaboration (Balconi & Lucchiari, 2007). By comparing ERP profiles in the conscious and unconscious conditions, it is possible to verify the similarity of the comprehension process.

Liddell et al. (2004) found two main ERP effects, the N200 and the P300, modulated by supraliminal or subliminal conditions. This result was recently confirmed by Balconi and Mazza (2009), who found a positive (P300) deflection, maximally distributed over the parietal regions, and a negative (N200) deflection, more localized over the frontal sites. Some differences between the two conditions were found in terms of quantitative modulations of the two peaks. Liddell et al. (2004) showed a modulation of the P300 effect by subliminal and supraliminal presentation.


Specifically, the amplitude of the late P3 increased in the supraliminal condition. Similarly, a P3 enhancement to fearful faces was observed on supraliminal but not subliminal trials (Kiss & Eimer, 2008).

There is therefore increasing evidence supporting the idea that significant affective processes can happen outside of consciousness, in agreement with Ledoux's theory (1996, in Calvo, 2005). It has been shown that the affective information contained in facial expressions is perceived involuntarily (Eastwood & Smilek, 2005) and that it is able to narrow the focus of attention automatically.

Unconscious stimulation produced a more delayed peak variation than conscious stimulation (Shevrin, 2001), which means that subliminal ERPs have a component structure similar to conventional supraliminal ERPs, but with a difference in the time of emergence of the ERP (a delay).

All these results suggest an unconscious elaboration of emotional faces and specific ERP effects which emerge early (as early as the N170 time window) after the onset of visual stimulation, as a consequence of the activation of posterior occipito-temporal and parietal networks.

The basic question about conscious and unconscious perception of emotional facial stimuli is whether perception is qualitatively different in the two kinds of situations. ERP studies are particularly well suited to address this question because of their temporal precision.


Objectives and hypothesis of the experiment

ERPs are important indexes of the unconscious and attentional elaboration of emotional faces. We decided to use a backward masking and a selective attention paradigm to study this phenomenon. This allows us to examine ERP and behavioral differences:

• For four basic emotions with different valence and arousal: fearful, angry, neutral and happy faces;

• For stimuli presented in a subliminal and supraliminal way, as index of awareness;

• For target compared to non target stimuli, as index of selective attention;

• For three regions of interest (ROIs; left, central and right) on the N1 and P2 components, as an index of lateralization.

Regarding the emotion process, we hypothesize:

• Faster reaction times (RTs) and higher accuracy for positive emotions (happy) compared to negative ones (angry, fearful);

• An increase of the N170 component for negative faces compared to positive ones.

Regarding the awareness mechanism, we hypothesize:

• Faster RTs and higher accuracy for stimuli presented supraliminally than subliminally, independently of the emotional valence;

• An effect on the two occipito-temporal components analyzed: N170, P2;

• An effect on the P300 parietal component.

Regarding the selective attention mechanism:

• An early effect on occipito-temporal components;

• A later effect on the P300 component, as an index of attentional effects.


CHAPTER 2: Experiments and methods

Participants

20 healthy volunteers took part in the experiment (age range 19-33, mean = 24.25, SD = 4.33). All gave informed consent according to the HUG ethics committee and were paid 30 Swiss francs. The group comprised 10 women (age range 19-33, mean = 23.7, SD = 4.37) and 10 men (age range 19-33, mean = 24.8, SD = 4.44).

Participants were mainly students at the University of Geneva, had normal or corrected-to-normal visual acuity, and were all right-handed (according to the Edinburgh Handedness Inventory; Oldfield, 1971). The laterality quotient of all subjects was above 0, which indicates right-handedness (min. = 8, max. = 20, mean = 13.9).

The exclusion criterion was a history of psychopathology or neurological disease, with the exception of one subject suffering from hypertension. Three participants had taken medication such as beta-blockers, ACE inhibitors and calcium antagonists (for the hypertensive subject), and two others Zyrtec and Spasmo Canulase. All these drugs are supposed to reduce reaction capacity. The participants had slept 7.33 hours on average; 7 subjects estimated the night as shorter than usual, one as longer than usual and the others as normal. The consumption of nicotine, tea or coffee was considered normal by 14 subjects, higher than usual by 2 subjects and lower than usual by 4 subjects. Three of the 20 participants reported smoking less than usual. 5 participants had consumed alcohol in the 24 hours prior to the experiment, but the time between their last consumption and the onset of the experiment was never less than 7 hours.

According to the literature reviewed by Staugaard on the influence of anxiety on the recognition and detection of emotional stimuli, anxiety is supposed to modulate behavioral and ERP responses to emotional stimuli. For this reason an anxiety test (Spielberger's Self-Evaluation Questionnaire) was administered before the EEG recording. No subject presented an anxiety level likely to influence the EEG results.


Material

The stimulus material was taken from Ekman and Friesen's set of pictures and other similar databases (fig. 5): black-and-white pictures of actors displaying happy, angry, fearful and neutral expressions. The stimuli were modified with Adobe Photoshop 11, eliminating hair, ears and the non-facial borders of the faces, in order to give an oval shape to all the faces. The background was black for every stimulus.

The same scrambled faces (fig. 5) were used as masks for the backward masking paradigm, which prevented visual access to the targets while preserving the same physical parameters (Di Lollo, Enns, & Rensink, 2000). A luminance analysis was made to standardize this variable and exclude luminance effects on face detection and recognition. The size of the square was 6 x 6 cm, the distance from the screen 114 cm and the visual angle 9'.
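The luminance standardization is not detailed in the text; a minimal sketch of one plausible approach (scaling each grayscale stimulus to a common mean luminance) could look like the following, where the random arrays are stand-ins for the face images:

```python
import numpy as np

def equalize_mean_luminance(images, target_mean=128.0):
    """Scale each grayscale image so that its mean luminance matches target_mean.

    `images` is a list of 2-D numpy arrays with values in [0, 255]. This is a
    simplified sketch: the thesis only states that a luminance analysis was
    performed, not the exact normalization method.
    """
    equalized = []
    for img in images:
        scaled = img * (target_mean / img.mean())
        equalized.append(np.clip(scaled, 0.0, 255.0))
    return equalized

# Stand-ins for two face stimuli with different overall brightness
rng = np.random.default_rng(0)
faces = [rng.uniform(40, 120, (256, 256)), rng.uniform(100, 220, (256, 256))]
out = equalize_mean_luminance(faces)
print([round(float(o.mean()), 1) for o in out])  # → [128.0, 128.0]
```

After scaling, both stimuli share the same mean luminance, so any detection difference cannot be attributed to overall brightness.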

There were 40 stimuli (20 male and 20 female faces) for each emotional condition (angry, fearful, happy and neutral). The total number of stimuli was 160, and each stimulus was presented 10 times, for a total of 1600 stimuli and 1600 masks. Stimuli were divided into ten sequences of 160 stimuli each. The sequences were matched and identical for each subject, but the presentation order was randomized:

• In each sequence, half of the stimuli were presented subliminally and in the other half supraliminally.

• In half of the sequences the subjects had to respond to positive emotions (happy and neutral) and in the other half, they had to respond to negative ones (fearful and angry).

• In half of the sequences they had to respond with their right hand and in the other half with the left hand.

All stimuli were converted into bitmap images and presented with the E-Prime™ software.
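The sequence structure described above (ten matched sequences of 160 stimuli, half of the trials subliminal in each sequence, target category and response hand counterbalanced across sequences) can be sketched as follows. The exact assignment rules are illustrative assumptions, since the counterbalancing scheme is not spelled out in full:

```python
import random

# Sketch of the counterbalancing scheme: 10 sequences x 160 stimuli;
# within each sequence half the trials are subliminal and half supraliminal;
# half of the sequences ask for responses to positive faces (happy, neutral),
# the other half to negative faces (fearful, angry); the response hand
# alternates across sequences. Assignment order is a hypothetical choice.

EMOTIONS = ["happy", "neutral", "fearful", "angry"]

def build_sequence(seq_index, rng):
    stimuli = [(emo, idx) for emo in EMOTIONS for idx in range(40)]  # 160 stimuli
    presentations = ["subliminal"] * 80 + ["supraliminal"] * 80
    rng.shuffle(stimuli)
    rng.shuffle(presentations)
    target = "positive" if seq_index < 5 else "negative"
    hand = "right" if seq_index % 2 == 0 else "left"
    return [{"stimulus": s, "presentation": p, "target": target, "hand": hand}
            for s, p in zip(stimuli, presentations)]

rng = random.Random(42)
sequences = [build_sequence(i, rng) for i in range(10)]
print(len(sequences), len(sequences[0]))  # → 10 160
```

This yields the 1600 trials (10 x 160) reported above, with the subliminal/supraliminal split balanced within every sequence.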


Fig. 5. Example of normal and scrambled faces representing an angry and a neutral face used in the experimental study.

Procedure

Pre-experimental study

Before initiating the EEG experimental study, we conducted a pre-experimental study to select the emotional faces that best represented the four emotions chosen for the EEG experiment (happy, neutral, angry, and fearful).

This was done because the emotional faces came from different databases. The 51 subjects (28 females and 23 males) viewed an E-Prime™ presentation of the four emotions (30 to 40 pictures per category) and had to choose among the six basic emotions according to Ekman and Friesen (1971): anger, happiness, fear, surprise, disgust and sadness.

EEG experiment

Subjects were seated comfortably in a moderately dark room with the monitor screen positioned approximately 114 cm in front of their eyes. Pictures were presented in randomized order in the center of a computer monitor. During the examination, participants were requested to minimize blinking. They were required to observe the stimuli during the behavioral task and ERP recording. In the subliminal condition, it was emphasized that the target faces would sometimes be difficult to see, but subjects were requested to concentrate as best as they could on the stimuli and to respond with as little delay as possible to the categories they were asked to attend to (positive or negative). A training test was run to familiarize them with the task.

Prior to the ERP recording, subjects were familiarized with the overall random order of all emotional stimuli presented in the subsequent experimental session. The images passed by very quickly.

The EEG experiment was divided into two evaluations:

Behavioral experiment

The behavioral part was the evaluation of correct versus incorrect responses in attended and unattended conditions (selective attention paradigm) and in subliminal and supraliminal conditions (awareness paradigm). During each presentation (ten parts in total), the subjects had to respond to a specific category, either positive (happy and neutral faces) or negative (angry and fearful faces).

There were four buttons, and the response types were matched between the hand and the button to push. Before each of the ten sequences, a screen explained whether they had to respond to the positive or negative faces, with which hand and with which button. Further, the experimenter verbally reiterated the procedure to follow. The subjects sometimes anticipated that they would be exposed to challenging masks and expressed concern about not being able to easily detect the emotions implied in the stimuli. However, when such a masked image did occur, they were asked to give an answer even if they were not entirely sure about the response (first impression). Moreover, although these concerns were legitimate, it should be noted that before the EEG experiment each subject was exposed to images representing the four emotions and a training session was provided. The experimenter could thus check whether the subject had understood the task well.

Fig. 6. Example of a sequence with the fixation cross and the presentation of fearful faces in a supraliminal (287 msec., mask 13 msec.) or subliminal (13 msec., mask 287 msec.) way. The participant had to respond to each face presented as quickly as possible; he/she had to determine whether the face was the target (positive: neutral or happy; negative: angry or fearful) by pushing the corresponding button during the inter-stimulus interval (ISI, 1500 msec.).


EEG recording (technique)

The EEG was recorded in a Faraday cage with a high-density Geodesic system (Electrical Geodesics, Inc., USA). The data were recorded continuously at a sampling rate of 1000 Hz from 256 equidistantly placed electrodes covering the scalp, with a vertex reference. Each impedance was kept below 50 kΩ.

Analyses were made with Cartool (version 3.40; http://sites.google.com/site/fbmlab/cartool). The recordings were filtered between 0.001 and 50 Hz. During averaging, epochs containing artifacts (electrical noise, muscular noise) or ocular movements were rejected. Epochs exceeding 100 µV were also eliminated. After averaging, the cheek electrodes were interpolated (going from 256 to 204 electrodes), followed by the electrodes that had been excluded before. Grand averages for each of the 16 conditions were calculated.
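The 100 µV amplitude criterion can be sketched as a simple rejection function. The actual pipeline used Cartool; this numpy version, run on synthetic data with an injected blink-like artifact, is only an illustration:

```python
import numpy as np

def reject_artifact_epochs(epochs, threshold_uv=100.0):
    """Drop epochs in which any channel sample exceeds +/- threshold_uv microvolts.

    `epochs` has shape (n_epochs, n_channels, n_samples). A simplified sketch
    of the amplitude-based rejection described above.
    """
    keep = np.all(np.abs(epochs) <= threshold_uv, axis=(1, 2))
    return epochs[keep], keep

rng = np.random.default_rng(1)
data = rng.normal(0, 10, (5, 4, 100))   # five clean epochs, 4 channels, 100 samples
data[2, 0, 50] = 250.0                  # inject a blink-like artifact into epoch 2
clean, mask = reject_artifact_epochs(data)
print(clean.shape[0], mask.tolist())    # → 4 [True, True, False, True, True]
```

Only the epoch containing the out-of-range sample is discarded; the remaining epochs would then enter the averaging step.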

The independent variables of the study are:

• The type of the emotion (fearful, angry, happy or neutral);

• The selective attention condition: whether the stimulus is a target (T; positive: happy and neutral, or negative: angry and fearful) or a non-target (NT);

• The presentation mode: whether the stimuli were presented in a subliminal (sub; 13 msec., mask 287 msec.) or supraliminal (sup; 287 msec., mask 13 msec.) way;

• Three regions of interest (ROIs) for the N170 and P2 components (temporo-occipital left, central and right) and one for the P300 component (parietal).

The dependent variables of the study are:

• The accuracy and the reaction times in the behavioral responses;

• Differences in the peak components (N170, P2, P3).

The controlled variables were:


• The gender, the age and the hand laterality.

The matched variables were:

• The order of the presentation of the stimuli which varied randomly

• The hand which activated the buttons

Subliminal/supraliminal presentation, the backward masking procedure

The short stimulus presentation in the preattentive (subliminal) condition prevented the subjects from having a clear cognition of the stimulus, while still allowing a semantic elaboration of the emotional faces. In the current study we employed an objective threshold for the preattentive condition, defined by an identification procedure as the stimulus duration at which the stimulus is perceived by the subjects no more than 50 % of the time (Liddell et al., 2004; Merikle, Smilek, & Eastwood, 2001). According to signal detection theory (SDT; Macmillan & Creelman, 1990), when detection sensitivity is at chance level, it is unlikely that there is conscious awareness of the stimulus. The behavioral study and a post-hoc debriefing confirmed that subjects were unable to detect the target stimuli in the subliminal condition. During the experiment we used a masking procedure: each facial stimulus (target) was presented for either 13 msec. (subliminal) or 287 msec. (supraliminal), followed by a scrambled face presented for 287 or 13 msec., respectively (Bràzdil et al., 1998 paradigm, in Liddell et al., 2004). The duration of each sequence was 5.5 minutes, for a total of about an hour of experimentation (including short pauses between sequences). Target and mask pairs depicted the same individual. In total there were 1600 target-mask pairs across the threshold conditions (each facial expression was presented forty times). The conditions were counterbalanced between subjects (Bernat, Bunce, & Shevrin, 2001).
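The chance-level check underlying the objective threshold can be illustrated with an exact binomial test. The trial counts below are hypothetical examples, not the study's data:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value (sum of the probabilities of all
    outcomes no more likely than the observed one). A pure-Python sketch of
    testing whether a detection rate differs from the 50 % chance level."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return min(1.0, sum(pr for pr in probs if pr <= observed * (1 + 1e-9)))

# Hypothetical counts: 45/100 detections is compatible with chance,
# whereas 70/100 is clearly above chance.
p_chance = binom_two_sided_p(45, 100)
p_above = binom_two_sided_p(70, 100)
print(p_chance > 0.05, p_above < 0.001)  # → True True
```

A non-significant p-value for the subliminal condition is what supports the claim that subjects had no conscious awareness of the masked targets.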


The ROIs

Depending on the component analyzed, different regions of interest (ROIs) were created from different electrodes.

N1 and P2 (occipito-temporal), P3 (parietal)

Three regions of interest (ROIs) were created in order to compare the left temporo-occipital posterior, right temporo-occipital posterior and middle occipital posterior regions across the three variables studied (emotional expression, type of presentation, selective attention).

Fig. 7. Three occipito-temporal ROIs used for the analysis of the N170 and P2 components. Right part: TP10, 191, 201. Central part: 122, 123, O1, 125, 126, 135, 136, Oz, 138, 147, O2, 157, 158, 1. Left part: 83, 93, TP9.

One region of interest (ROI) was created to observe the P3 component across the three variables studied (emotional expression, type of presentation, selective attention).

Fig. 8. Parietal ROI created for the analysis of the P300 component. The electrodes were: P1, 89, 100, 129, 130, P2, Pz, POz.


Analysis

During the analysis phase, only the correct responses were retained (Lougheah et al., 2008). All results were analyzed with Excel and Statistica 8. For the behavioral and ERP analyses, analyses of variance (ANOVAs) were applied; see the results chapter for details (number of factors and levels).

For the source localization we used Cartool (version 3.40). In particular, the inverse solution retained was the local autoregressive average (LAURA; Grave de Peralta et al., 2001).


CHAPTER 3: Results

Short forms:

1. ROI (R): Occipito-temporal Left (Lh), Occipito-temporal Central (C), Occipito-temporal Right (Rh)

2. Emotion (E): Angry (A), Happy (H), Fearful (F), Neutral (N)

3. Emotional Valence (EV): Positive (Pos), Negative (Neg)

4. Presentation (P): Subliminal (Sub), Supraliminal (Sup)

5. Selective Attention (A): Target (T), Non-target (NT)

6. Gender face (GF): Female (F), Male (M)

Behavioral results

Accuracy

The accuracy of facial expression discrimination was 45.17 % in the sub trials (-0.19 in z-score) and 87.27 % in the sup trials (1.32 in z-score). The binomial distribution shows that the probability that the sub trials are not at chance level is below 0.05 [p < 0.05].

A repeated-measures analysis of variance (ANOVA 2x2: P x EV) was applied. The analysis showed a significant P effect [F (1, 18) = 77.71, p < 0.001; mean values: sub = 46.46, S.D. = 4.47; sup = 87.08, S.D. = 2.42].

The z-score normalization also shows a significant P effect [F (1, 18) = 82.56, p < 0.001; mean values: sub = -0.15, S.D. = 0.15; sup = 1.23, S.D. = 0.12].
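The z-score normalization of accuracy used above can be sketched as follows. The per-subject numbers are invented for illustration, and the exact normalization procedure is an assumption, since the thesis does not specify it:

```python
import numpy as np

# Hypothetical per-subject accuracies (%) in the subliminal and supraliminal
# conditions; these numbers are illustrative, not the study's data.
sub = np.array([44.0, 46.5, 45.0, 47.2, 43.8])
sup = np.array([86.0, 88.5, 87.0, 89.2, 85.8])

# z-score normalization over all observations: each accuracy is re-expressed
# as its distance from the grand mean in standard-deviation units.
scores = np.concatenate([sub, sup])
z = (scores - scores.mean()) / scores.std(ddof=1)
z_sub, z_sup = z[:5], z[5:]
print(round(float(z_sub.mean()), 2), round(float(z_sup.mean()), 2))
```

On this scale, values near 0 indicate chance-level accuracy relative to the grand mean, with subliminal trials falling below it and supraliminal trials above, mirroring the pattern of the reported means.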

To analyze the E effect, an ANOVA 4x2x2: E x P x GF was applied.

A significant E effect was noticed [F (1, 18) = 8.32, p < 0.001; mean values: H = 70.13, S.D. = 3.13; N = 53.15, S.D. = 4.22; F = 71.53, S.D. = 3.65; A = 65.56, S.D. = 3.45]. In more detail, the post-hoc analysis in the Sup condition (LSD test) shows a significant difference between H and F (p < 0.05) and a tendency between H and A (p < 0.05).


An interaction effect between E and P was also significant [F (3, 54) = 6.12, p < 0.001] (see fig. 9).

Fig. 9. Interaction effect of the emotion (E: H = happy, N = neutral, F = fearful, A = angry) and presentation (Sub = subliminal, Sup = supraliminal) factors. The y-axis represents the percentage of correct responses for each emotion presented in the experiment.

Reaction time (RT)

To analyze the emotional expression effect, an ANOVA 4x2x2: E x P x GF was applied. The analysis showed a significant P effect [F (1, 16) = 17.97, p < 0.001; mean values: Sub = 660 msec., S.D. = 34.41; Sup = 590 msec., S.D. = 22.17].

An E effect was also significant [F (1, 18) = 19.96, p < 0.001; mean values: H = 600 msec., S.D. = 23.44; N = 700 msec., S.D. = 35.89; F = 600 msec., S.D. = 29.78; A = 600 msec., S.D. = 26.87]. In more detail, the post-hoc analysis (LSD test) shows a significant difference between N-H (p < 0.001), N-F (p < 0.001) and N-A (p < 0.001).

The interaction effect between E and P was not significant, but showed a strong tendency [F (3, 48) = 2.43, p = 0.08] (see fig. 10).

Fig. 10. Interaction effect of the emotion (E: H = happy, N = neutral, F = fearful, A = angry) and presentation (Sub = subliminal, Sup = supraliminal) factors. The y-axis represents the average reaction time (in msec.) for each emotion presented in the experiment.

Sub: H = 651 msec., S.D. = 28.80; N = 731 msec., S.D. = 44.76; F = 627 msec., S.D. = 39.5; A = 632 msec., S.D. = 34.09. In more detail, the post-hoc analysis (LSD test) shows a significant difference between N-H (p < 0.001), N-F (p < 0.001) and N-A (p < 0.001).

Sup: H = 549 msec., S.D. = 21.02; N = 669 msec., S.D. = 28.58; F = 574 msec., S.D. = 22.21; A = 567 msec., S.D. = 21.02. In more detail, the post-hoc analysis (LSD test) shows a significant difference between N-H (p < 0.001), N-F (p < 0.001), N-A (p < 0.001) and H-F (p < 0.05).

EEG results

N170 component (135-190 msec.)

At the N170 latency, an ANOVA 3x4x2x2: R x E x A x P was applied.

A significant R effect was found [F (2, 19) = 12.59, p < 0.001; mean values: Lh = -3.64 μV, S.D. = 0.41; C = -2.78 μV, S.D. = 0.31; Rh = -3.92 μV, S.D. = 0.4] (see fig. 11).

The analysis showed a significant E effect [F (3, 18) = 7.66, p < 0.001; mean values: H = -3.37 μV, S.D. = 0.39; N = -2.90 μV, S.D. = 0.32; F = -3.8 μV, S.D. = 0.31; A = -3.79 μV, S.D. = 0.36] (see fig. 11). Furthermore, the post-hoc analysis (LSD test) shows a significant difference between H and F (p < 0.05), H and A (p < 0.001), N and F (p < 0.05) and N and A (p < 0.001).

An A effect was also observed [F (1, 18) = 26.14, p < 0.05; mean values: NT = -3.11 μV, S.D. = 0.36; T = -3.44 μV, S.D. = 0.28] (see fig. 14, below).

A P effect was observed [F (1, 18) = 46.37, p < 0.001; mean values: Sub = -2.79 μV, S.D. = 0.34; Sup = -3.77 μV, S.D. = 0.31] (see fig. 14, above).

An interaction effect between E and P was significant [F (3, 18) = 3.22, p < 0.05] (see fig. 15).

In more detail, a post-hoc comparison analysis (LSD test) shows differences in the sub condition between N-A (p < 0.001), N-F (p < 0.001) and N-H (p < 0.05) (see fig. 12).
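The amplitude values entering these ANOVAs can be illustrated by extracting the amplitude in the 135-190 msec. window from a waveform. The synthetic data and the choice of mean (rather than peak) amplitude are assumptions, since the thesis does not state the exact extraction method:

```python
import numpy as np

def mean_amplitude(erp, times, t_min=0.135, t_max=0.190):
    """Mean amplitude (in microvolts) of an ERP waveform inside the N170
    window (135-190 msec.) used in the analyses above.

    `erp` is a 1-D array of microvolt values and `times` the matching time
    axis in seconds. A simplified sketch on synthetic data.
    """
    window = (times >= t_min) & (times <= t_max)
    return float(erp[window].mean())

# Synthetic waveform: a negative deflection peaking near 170 msec.
times = np.arange(0, 0.35, 0.001)  # 1000 Hz sampling, 0-350 msec.
erp = -4.0 * np.exp(-((times - 0.170) ** 2) / (2 * 0.015 ** 2))
print(round(mean_amplitude(erp, times), 2))
```

Averaging such a value per subject, ROI and condition produces the μV entries compared in the R, E, A and P effects reported above.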


Fig. 11. ERPs of the N1 and P2 components recorded at the right, left and central electrode sites. The x-axis represents time from stimulus onset (0 msec.) to 350 msec. The y-axis represents the amplitude (μV) averaged over the three occipito-temporal ROIs.


Fig. 12. ERPs of the N1 and P2 components recorded at the right, left and central occipito-temporal electrode sites in the subliminal condition. The x-axis represents time from stimulus onset (0 msec.) to 350 msec. The y-axis represents the amplitude (μV) averaged over the three occipito-temporal ROIs.


Fig. 13. ERPs of the N1 and P2 components recorded at the right, left and central occipito-temporal electrode sites in the supraliminal condition. The y-axis represents the amplitude (μV) averaged over the three occipito-temporal ROIs. The x-axis represents time from stimulus onset (0 msec.) to 350 msec.


Fig. 14. Above: ERPs (N1 and P2) recorded over the occipito-temporal lobe (three ROIs together) for the presentation effect (sub = subliminal, sup = supraliminal). The y-axis represents the amplitude (μV) averaged over the three occipito-temporal ROIs. The x-axis represents time from stimulus onset (0 msec.) to 350 msec.

Below: ERPs (N1 and P2) recorded over the occipito-temporal lobe (three ROIs together) for the selective attention effect (T = target, NT = non-target). The y-axis represents the amplitude (μV) averaged over the three occipito-temporal ROIs. The x-axis represents time from stimulus onset (0 msec.) to 350 msec.
