Thesis

Reference

An electroencephalographic approach to the processing of relevant stimuli in the human brain: the role of emotion, attention and awareness

TIPURA, Eda

Abstract

In this thesis, we used three approaches to investigate the neural correlates of the processing of relevant stimuli using electroencephalography (EEG). Time-frequency transforms performed on data from a cortically blind patient showed frontal slow-wave oscillations in response to faces and scrambled faces, supporting the existence of a singular pathway for processing visual stimuli in this neurological condition. Analyses performed on intracranial data revealed that in this case emotion does not modulate the parts of the amygdala that were targeted, and that the orbitofrontal cortex seems to better reflect the processing of emotion and stimulus awareness. Finally, we used a cueing paradigm in healthy participants to show that emotional faces do not capture attention when they are not consciously perceived. Taken together, these findings support previous evidence for a subcortical processing route. However, we also showed that top-down processing cannot be triggered in healthy participants if the cue is not consciously perceived.

TIPURA, Eda. An electroencephalographic approach to the processing of relevant stimuli in the human brain: the role of emotion, attention and awareness. Thèse de doctorat : Univ. Genève, 2017, no. FPSE 675

DOI: 10.13097/archive-ouverte/unige:97006
URN: urn:nbn:ch:unige-970069

Available at: http://archive-ouverte.unige.ch/unige:97006

Disclaimer: layout of this document may differ from the published version.


Section of Psychology

Prof. Olivier Renaud, co-director, Université de Genève
Prof. Alan Pegna, co-director, University of Queensland

An electroencephalographic approach to the processing of relevant stimuli in the human brain: the role of emotion, attention and awareness

THESIS

Presented to the
Faculty of Psychology and Educational Sciences of the Université de Genève
to obtain the degree of Doctor of Psychology

by

Eda TIPURA

from

Lausanne, Switzerland

Thesis No. 675

GENEVA, June 2017

Student No.: 05-428-149


Acknowledgements

While preparing this thesis, I had the good fortune to be surrounded by exceptional people, whom I would like to thank.

First of all, I would like to thank my two co-directors, whose complementary skills allowed me to combine so aptly the two fields that have fascinated me over the past years: data analysis and neuropsychology. Thank you, Olivier Renaud, for pushing me to surpass myself and for passing on your knowledge with such pedagogy. Thank you, Alan Pegna, for believing in me from the start and for transmitting, in such a contagious way, your passion for that mysterious organ, the human brain. Thank you both for being people I could trust at every stage of this thesis.

I warmly thank the members of the jury for their commitment.

Thank you, Cyril Pernet, for your involvement in the development of this thesis and for your help with the technical aspects of my data analyses. Thank you, Didier Grandjean, for taking part in the evaluation of this work and for giving me a taste for research during my master's thesis. Thank you, Holly Bridge, for agreeing to be part of this jury and for taking the time to evaluate this work.

Thank you to my colleagues at the MAD for supporting me, closely or from afar, throughout this work. Thanks to the former members: Guillaume Fürst, Florian Dufour, Nadège Jacot, Jonathan El-Methni, Boris Cheval and Audrey Bürki; and to the current ones: Sezen Cekic, Delphine Paumier, Elisa Gallerne, Marc Yangüez Escalera, Fabio Mason, Stephen Aichele, Marie-Hélène Descary, Paolo Chisletta and Julien Chanal. Thank you, Jaromil Frossard (aka jakemil), for your statistical skills and your humour. A big thank you to Catherine Audrin, Emilie Joly and Emmanuelle Grob for making this period of my life unforgettable and for allowing me to attach a positive valence to it, thanks to all the bursts of laughter that permeate the walls of the fourth floor. Thank you, Sandrine Amstutz, for your patience with our administrative setbacks, for the laughter through the wall that separates us, and for your commitment to making everything run smoothly during our time at the MAD.

Thank you, Andres Posada, for your help in programming my experiments and with the technical debugging related to the EEG.

Thank you, Nicolas Burra, for the discussions about lateralised potentials and for your advice.

I thank my friends who were there for me and supported me during this work.

Thank you, Anaëlle, Andreia and Olivia, for being such faithful friends. Stepping back from my thesis with you was essential. Thank you, Morgane, for being such a devoted friend. Thank you, Julien and Basile, for all your questions, which taught me, more or less, to popularise my thesis topic. Our philosophical discussions have been very inspiring over the past years. Thank you, Julie, for your friendship and your trust.

The biggest thank you goes to my family: my parents and my sister. Thank you for your unconditional support and love. No words could express the gratitude I feel towards you and for what you have allowed me to accomplish. Thank you, Mum and Dad, for being such encouraging parents. Thanks to you, I have learned to always move forward and never give up.

Thank you, Cyril, my love. Thank you for the proofreading, the corrections and the page layouts. Thank you for the interest you take in what I do. Thank you for your support at every stage of this thesis, and above all, thank you for always pushing me to believe in myself.

Finally, thank you, Lina, my little chick, for being such an exceptional little person. You do not realise it yet, but your presence during the last months of this thesis gave me all the perspective needed to carry out this work well. I look forward to seeing you realise how much your smile has become my definition of relevance.


Abstract

Because of the important information they convey, human faces are among the most relevant stimuli for individuals. The processing of pictures of faces has been widely investigated in psychology and neuroscience and has been shown to be mediated by task demands and stimulus properties, including emotional expression, awareness and attentional demands. These studies have shown that negative expressions in particular, such as fear or anger, lead to specific behavioural and neural outcomes. Moreover, these outcomes have also been observed when the stimulus is presented below the threshold of awareness, and they are strengthened when the stimulus is attended in cueing paradigms. At the neural level, the involvement of a subcortical processing route in the processing of relevant stimuli at different levels of awareness has been postulated.

In this thesis, we used three approaches to investigate the neural correlates of these processes using electroencephalography (EEG). Time-frequency transforms performed on data from a cortically blind patient showed frontal slow-wave oscillations in response to faces and scrambled faces, supporting the existence of a singular pathway for processing visual stimuli in this neurological condition. Analyses performed on intracranial data using a backward masking paradigm revealed that in this case emotion does not modulate the parts of the amygdala that were targeted, and that the orbitofrontal cortex seems to better reflect the processing of emotion and stimulus awareness. Finally, we used a cueing paradigm in healthy participants to show that emotional faces do not capture attention when they are not consciously perceived. Taken together, these findings support previous evidence for a subcortical processing route. However, we also showed that top-down processing cannot be triggered in healthy participants if the cue is not consciously perceived.


TABLE OF CONTENTS

INTRODUCTION AND OVERVIEW

THEORETICAL PART

1. FACE PERCEPTION
1.1 BRAIN STRUCTURES INVOLVED IN FACE PERCEPTION
1.2 ELECTROPHYSIOLOGICAL CORRELATES OF FACE PERCEPTION
2. EMOTIONAL FACE PERCEPTION
2.1 BRAIN STRUCTURES INVOLVED IN EMOTIONAL FACE PERCEPTION
2.2 ELECTROPHYSIOLOGICAL CORRELATES OF EMOTIONAL FACE PERCEPTION
3. UNCONSCIOUS PROCESSING OF EMOTIONAL STIMULI
3.1 AFFECTIVE BLINDSIGHT AND THE STUDY OF EMOTIONAL FACES AT A NON-CONSCIOUS LEVEL
3.2 PROCESSING OF SUBLIMINAL EMOTIONAL FACES IN HEALTHY SUBJECTS
3.3 ELECTROPHYSIOLOGICAL CORRELATES OF SUBLIMINAL EMOTION FACE PROCESSING
4. ATTENTION AND EMOTION
4.1 ELECTROPHYSIOLOGICAL CORRELATES OF SPATIAL ATTENTION
4.2 THE ROLE OF RELEVANCE AND AWARENESS
4.3 THE ROLE OF ANXIETY
5. SPECIFICITIES OF HUMAN BRAIN INTRACRANIAL RECORDINGS
6. OBJECTIVES

EXPERIMENTAL PART

STUDY 1. VISUAL STIMULI MODULATE FRONTAL OSCILLATORY RHYTHMS IN A CORTICALLY BLIND PATIENT: EVIDENCE FOR TOP-DOWN VISUAL PROCESSING
1. INTRODUCTION
2. MATERIALS AND METHODS
3. RESULTS
4. DISCUSSION
5. CONCLUSIONS
6. ACKNOWLEDGMENTS
7. SUPPLEMENTARY MATERIAL 1
8. SUPPLEMENTARY MATERIAL 2
9. APPENDIX 1 TO STUDY 1. EVENT-RELATED POTENTIALS DURING THE PROCESSING OF FACES IN THE GROUP OF CONTROL PARTICIPANTS
10. APPENDIX 2 TO STUDY 1. EVENT-RELATED POTENTIALS DURING THE PROCESSING OF SUBLIMINAL FACES IN THE GROUP OF CONTROL PARTICIPANTS

STUDY 2. DETECTION OF SUBLIMINAL AND SUPRALIMINAL EXPRESSION OF FEAR IN THE AMYGDALA AND THE ORBITOFRONTAL CORTEX: AN INTRACRANIAL EEG STUDY
1. INTRODUCTION
2. MATERIALS AND METHODS
3. RESULTS
4. DISCUSSION
5. CONCLUSION

STUDY 3. ATTENTION SHIFTING AND SUBLIMINAL CUEING: AN EEG STUDY USING EMOTIONAL FACES
1. INTRODUCTION
2. MATERIAL AND METHODS
3. RESULTS
4. DISCUSSION
5. CONCLUSIONS
6. SUPPLEMENTARY MATERIAL - INTERACTION PLOTS
7. APPENDIX TO STUDY 3

GENERAL DISCUSSION
1. INTEGRATION OF THE MAIN FINDINGS
2. THEORETICAL IMPLICATIONS
2.1 THE ROLE OF THE FRONTAL CORTEX IN THE PROCESSING OF RELEVANCE WITHOUT AWARENESS
2.2 ATTENTION AND CONSCIOUSNESS
3. LIMITATIONS
4. CONCLUSION

REFERENCES

RÉSUMÉ EN FRANÇAIS


OVERVIEW

The mechanisms underlying human vision have been investigated by neuroscientists interested in the neural circuitry linked to different aspects of visual perception. Typical questions raised in this context include, for example, how specific low-level properties of a stimulus affect behavioural and neural outcomes, leading to bottom-up modulations by stimulus characteristics, and how higher-level cognition can influence the perception of visual stimuli, with top-down processing operating as a modulator of sensory input. These questions reflect the two functions of attention in humans, whose brain capacities are limited. Indeed, one cannot attend to all of the sensory stimuli in the surrounding environment (Katsuki & Constantinidis, 2013). In everyday life, some of these stimuli capture our attention because of the saliency of their physical features, such as their colour, their shape, or a sudden sound they make. This capture leads to a prioritisation in the hierarchy of the sensory stimuli that must be processed at a given time: among all sensory stimulation, the salient stimulus will be processed preferentially and more deeply. This type of processing is referred to as bottom-up attention to sensory stimuli. On the other hand, when a stimulus is attended, its features are easier to process than those of unattended stimuli. For example, if one is asked to focus on a certain location in space, features of stimuli appearing at that location will be detected faster and more accurately than those of stimuli appearing at other locations. This second attentional type is called top-down processing.

In the field of cognitive neuroscience, the processing of relevant stimuli has been widely investigated (Sander, Grafman, & Zalla, 2003), and research has shown that this specific category of stimuli leads to an enhancement of both bottom-up and top-down attention. Indeed, a stimulus that is considered relevant for an individual leads to increased saliency and to preferential volitional orientation. Among the stimuli considered most relevant for individuals are human faces. Indeed, faces convey crucial information that is necessary for adapted social communication skills. Therefore, if we want to investigate behavioural and neural outcomes relative to the processing of relevant stimuli, we may use pictures of human faces.

The general aim of this thesis is to investigate the electrophysiological correlates associated with the processing of faces in the human brain. The modulation of this processing under different levels of awareness and cognitive load, and as a function of emotional expression, is central to this work; the interactions between face perception and these different aspects will therefore be discussed.

Overview of the thesis

This work is divided into three parts: a theoretical part, an experimental part, and a general discussion. In the first, theoretical part, we will review behavioural and brain imaging studies highlighting the specificities of the perception of human faces (chapter 1). The neural substrates of this preference will be highlighted and the timing of this early processing will be investigated. Faces are relevant per se, but their perception can be modulated by task demands, such as stimulus duration or working-memory load, or by other features, such as emotion. The modulation of face perception by emotional expression will therefore be discussed, with an emphasis on the neural circuitry sustaining this processing (chapter 2). The interaction between emotion and consciousness will be explored (chapter 3), and the way this interaction is modulated by attention will be discussed (chapter 4).

In the second, experimental part, the three studies that were conducted will be described. In study 1, data from a cortically blind patient will be presented. In this study, faces and scrambled faces were presented to the patient and the electroencephalogram (EEG) was recorded in order to test neural responses to relevant stimuli in the absence of awareness. Specifically, the involvement of a subcortical processing route was hypothesised. In study 2, intracranial electroencephalographic data from epileptic patients were recorded while they viewed images of faces expressing fear or neutral faces at different levels of awareness. Responses to emotional faces were contrasted with responses to neutral faces in order to test whether a spreading effect between the amygdala and the orbitofrontal cortex would occur when emotional faces are not consciously perceived. In study 3, based on the results of study 1, an attentional component was added to the experimental procedure in order to test whether top-down control by unconscious emotional faces would appear in healthy participants.

Finally, a general discussion will be presented, integrating the main findings of the experimental part, presenting the theoretical implications of this thesis, and addressing the limitations associated with this work.


THEORETICAL PART


1. Face perception

Among all the visual stimuli that continuously modulate an individual's perception of the world, human faces are one of the most relevant. Indeed, faces convey fundamental information regarding one's identity, intentions, emotions, and so on, and are therefore necessary for the construction and maintenance of social interactions (Little, Jones, & DeBruine, 2011). New-born infants stare longer at face-like stimuli than at other types of images (Goren, Sarty, & Wu, 1975; Johnson, Dziurawiec, Ellis, & Morton, 1991), placing faces as privileged stimuli in the study of human visual perception. Participants are also faster when asked to detect a face than other visual stimuli such as houses (Purcell & Stewart, 1986; Tottenham, Leon, & Casey, 2006), again suggesting that faces convey crucial information that needs to be processed promptly by individuals.

During an interaction, the face of the interlocutor is the most relevant and readily available piece of information used by the individual. Extracting this information from the face requires the processing of a myriad of face-related events and probably relies on the activation of several brain structures and on their interactions. It has been proposed that face detection relies on configurational processing – encoding the relationships between the different face parts and not only the shape of a face (as opposed to featural processing) – which distinguishes three types of face processing: first-order relations, holistic processing and second-order relations (Maurer, Le Grand, & Mondloch, 2002). According to Maurer et al. (2002), first-order relations refer to the recognition of a face based on its features and on the fact that a face is represented by two eyes arranged above a nose, itself sitting above a mouth. This type of processing allows a rapid distinction between face stimuli and non-face stimuli, and is quite easy since all faces are characterised by the same attributes.

Holistic processing (also named featural processing in the literature; Wang, Guo, & Fu, 2016) links the face features into a whole shape, leading to face individuation. This type of processing makes it difficult to detect features in isolation, as revealed by the composite face effect: when the upper and lower halves of two faces representing different individuals are combined, subjects are slower and make more mistakes when asked to recognise the identity of the top half of the face (Hole, 1994; Young, Hellawell, & Hay, 1987). Finally, the processing of second-order relations (also named configural face processing in the literature; Wang et al., 2016) is based on the distances between internal face features (eyes, nose, mouth) and would allow the recognition of a particular face.

This particular interest in stimuli representing human faces has led researchers to investigate how the brain supports the processing of relevant stimuli: neuroscientists seek to understand the timing and localisation of this processing and to disentangle the different aspects associated with face perception. For example, is identity encoded at early stages of processing, or does it rely on more sophisticated processes leading to a later evaluation of face identity? Is a face processed holistically, or do some structural features, such as the eye region, convey all the information needed for stimulus evaluation? Is there a specific brain region that collects all the data available from the visual input, and how does this region interact with the other systems of the brain during the processing of faces? These questions will now be addressed through a review of studies using functional imaging and electroencephalography to highlight the specificities of face processing in the human brain.

1.1 Brain structures involved in face perception

Single-unit recordings have shown that the superior temporal sulcus (STS) in the macaque brain contains neurons that respond specifically to faces (Gross, Rocha-Miranda, & Bender, 1972; Desimone, 1991; Perrett et al., 1991). This evidence has led researchers interested in face perception to ask whether a region of the human brain is specialised for this processing as well. Besides the fact that faces seem to be preferentially processed by individuals (Yin, 1969; Bruce, Doyle, Dench, & Burton, 1991), neuropsychological conditions in brain-damaged patients with specific face-processing dysfunctions have also motivated the investigation of brain areas linked to face perception (Damasio, Tranel, & Damasio, 1990; Behrmann, Winocur, & Moscovitch, 1992).

Bruce and Young (1986) proposed that two parallel processing routes are responsible for the processing of facial expressions and of identity. These routes would operate independently, such that expression can be processed without identity processing (Bauer, 1984; Breen, Caine, & Coltheart, 2000). Support for this model comes from prosopagnosic patients who are still able to recognise expressions (e.g. Damasio, Damasio, & Van Hoesen, 1982; Damasio et al., 1990) and from other patients with intact identity recognition who are unable to process specific facial expressions, such as disgust following insula damage (Calder, Keane, Manes, Antoun, & Young, 2000) or fear following amygdala damage (Adolphs et al., 1995). This model has been questioned by behavioural data showing that expression evaluation can be affected by familiarity (Schweinberger & Soukup, 1998) and that the learning of new face identities is facilitated by expression (Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Sansone & Tiberghien, 1994). Nevertheless, face perception seems to activate specialised brain regions, which have been investigated using functional magnetic resonance imaging (fMRI), a technique that allows the investigation of the brain structures involved in the processing of stimuli with high spatial resolution. Several brain regions have been highlighted during the processing of faces as compared to other types of stimuli, in studies using numerous experimental paradigms. In particular, greater activation of the ventral occipitotemporal cortex (especially the lateral fusiform gyrus) and of the inferior occipital gyrus is observed during the processing of faces. These regions are not activated in isolation, and they have often been reported to collaborate in order to categorise and bring to consciousness the concept of a face.

The model proposed by Bruce and Young (1986) was developed from observations made in behavioural experiments. Based on fMRI, PET and ERP studies, Haxby, Hoffman and Gobbini (2000) proposed an influential model in which a comprehensive representation of human faces is extracted through the joint activation of frontal, temporal and occipital cortices. According to this model, a core system in the visual cortex processes invariant and changeable aspects of faces. This system interacts with an extended system, in which other brain regions specialised in different aspects of face perception allow the completion of visual analysis. The extended system integrates features related to social communication, such as attentional processes, speech perception, emotion and identity-related information (see Fig. 1). The integration of all these aspects would allow face identification.

Figure 1: The distributed neural system for face perception. According to this model, two systems interact during face perception. The core system refers to visual analysis of the face.

The organisation of the visual system has been categorised into two distinct pathways emerging from the occipital cortex. The "what" pathway runs along the ventral stream and is responsible for object recognition; the "where" pathway runs through the dorsal stream and reflects the processing of spatial vision. This categorisation raised the question of whether a region specialised for faces is present along the ventral visual pathway. Haxby et al. (1994) highlighted that the ventral pathway processes several object categories, whose neural correlates are represented in distinct regions.

Kanwisher, McDermott and Chun (1997) showed activation of the lateral fusiform gyrus in response to faces as compared to various other types of stimuli, including scrambled faces, houses and human hands. This study showed that the activation of this region is not linked to visual attention, stimulus classification or the processing of human forms, highlighting its specificity for the processing of faces. Consequently, the lateral fusiform gyrus has been labelled "the fusiform face area". At the same time, another study used a continuously changing montage in which faces, common objects and non-objects were displayed, and showed bilateral activation of the fusiform gyrus when contrasting face with non-object processing. Moreover, when contrasting face with object processing, only the right fusiform gyrus showed significant activation (McCarthy, Puce, Gore, & Allison, 1997), suggesting a specificity of the right hemisphere in face perception.

The inferior occipital gyrus (IOG), also known as the occipital face area (OFA), seems, in collaboration with the fusiform gyrus and the superior temporal sulcus, to be a central neural substrate in the processing of faces. Indeed, the IOG is more active during the processing of faces than of other types of stimuli (Pitcher, Walsh, & Duchaine, 2011). In support of this observation, lesion studies have shown that damage to this brain region leads to a deficit in face recognition (prosopagnosia; Bouvier & Engel, 2006). However, it is not yet established whether this region is involved in early stages of face processing (as claimed by some authors based on the observation that the IOG is the most posterior region related to face processing; e.g. Pitcher et al., 2011) or in later stages shaping identity recognition (Rossion, Dricot, Goebel, & Busigny, 2011). Intracranial studies recording event-related potentials during the processing of faces in epileptic patients implanted directly in this brain region showed a specific activation for faces in the IOG within the time range generally observed during the processing of these stimuli (~170 ms) (Allison, Puce, Spencer, & McCarthy, 1999; Rosburg et al., 2010), suggesting that this region plays a crucial role in early stages of processing.

Other brain regions have also been implicated in face perception, such as the amygdala (Garvert, Friston, Dolan, & Garrido, 2014), involved in early stages of face perception, and the anterior temporal lobe (Von Der Heide, Skipper, & Olson, 2013), involved in face identification and memory. Neurons in the orbitofrontal cortex (OFC) that respond selectively to faces have also been reported in the macaque brain (Rolls, Critchley, Browning, & Inoue, 2006). It has been suggested that this network is activated differentially depending on task demands (Vuilleumier & Pourtois, 2007), but the way these different regions act together to form the concept of a face is still an open question. Specifically, the timing of activation of these different regions will now be investigated.


1.2 Electrophysiological correlates of face perception

The N170

Electroencephalography (EEG) allows the investigation of the processing of visual stimuli in the human brain with a very good temporal resolution; it is therefore possible to study the timing of the processing of faces. The first study addressing this issue highlighted a specific negative deflection appearing ~170 ms after the presentation of a face stimulus over occipito-temporal sites. This component has therefore been labelled the N170 (Bentin, Allison, Puce, Perez, & McCarthy, 1996), and its occurrence following face perception has been replicated in a great number of studies, establishing it as a specific marker of face perception. Its amplitude is larger over the right hemisphere, which is consistent with neuropsychological studies showing that prosopagnosia is more likely to appear following right-hemisphere damage than left-hemisphere damage (De Renzi, Perani, Carlesimo, Silveri, & Fazio, 1994). In healthy subjects, recognition is better and faster when faces are presented in the left visual field (projecting to the right hemisphere) (Hellige & Jonsson, 1985; Levine & Koch-Weser, 1982; Levine, Banich, & Koch-Weser, 1988). The N170 has also been observed following the presentation of face-like stimuli (Liu et al., 2016), suggesting its role in configural processing.
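As an aside on method: an ERP component such as the N170 is obtained by averaging many stimulus-locked epochs, so that phase-locked activity survives while background EEG cancels out. The sketch below is a minimal illustration of how this could be done, not code from the thesis; the array shapes, the baseline window and the 130-200 ms search window are assumptions made for the example.

```python
import numpy as np

def n170_from_epochs(epochs, times, baseline=(-0.2, 0.0), window=(0.13, 0.20)):
    """Average stimulus-locked epochs and measure the N170 peak.

    epochs : (n_trials, n_samples) EEG from one occipito-temporal channel, in microvolts
    times  : (n_samples,) time axis in seconds, with 0 = stimulus onset
    """
    # Subtract each trial's pre-stimulus mean, then average across trials:
    # only activity phase-locked to stimulus onset survives the average.
    b = (times >= baseline[0]) & (times < baseline[1])
    erp = (epochs - epochs[:, b].mean(axis=1, keepdims=True)).mean(axis=0)

    # The N170 is the most negative deflection in the ~130-200 ms window.
    w = np.flatnonzero((times >= window[0]) & (times <= window[1]))
    peak = w[np.argmin(erp[w])]
    return erp, times[peak], erp[peak]   # waveform, peak latency (s), peak amplitude (µV)
```

Comparing the returned amplitude and latency across conditions (e.g. faces versus scrambled faces, or upright versus inverted faces) is the kind of contrast reported in the studies cited in this section.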

It remains an open question whether identity is already encoded at this early stage of processing (Yovel, 2016). Bentin and Deouell (2000) compared responses to famous versus unfamiliar faces and observed no differences in the N170 potentials relative to these two types of stimuli, suggesting that the identification of faces is not reflected by this component and may arise later. Moreover, this component does not seem to reflect the encoding of head detection or the specific processing of internal face features (Eimer, 2000). Jacques and Rossion (2006) questioned this point of view using a continuous-stimulation paradigm in which a face was replaced either by another face of the same identity or by a face of a different identity. These authors showed that the N170 was increased when the face pairs represented different identities, suggesting that face discrimination can occur at a very early stage of processing, coinciding with face detection. Furthermore, some authors have claimed that face processing encoded in the N170 range is mainly associated with isolated characteristics of the face, as supported by the strong responses observed when eyes are presented in isolation (e.g. Bentin et al., 1996; Itier et al., 2007). On the other hand, Eimer (1998) showed that this component is rather sensitive to the processing of the whole face, by comparing its activation following the presentation of faces with and without eyes. In fact, it seems that faces are processed holistically, but that the eyes are necessary for the integration of face features in the N170 component (Itier et al., 2007).


The N170 is also sensitive to stimulus orientation. Interestingly, upright faces are better recognised than upside-down faces, a finding that is not observed with other types of stimuli (Yin, 1969). The amplitude of the N170 is larger for inverted than for upright faces, with a delayed latency (Rossion et al., 2000; Rossion & Gauthier, 2002), in line with the behavioural difference observed between these two types of stimuli. Interestingly, when upright and inverted faces are presented in the left visual field, the left-visual-field superiority discussed earlier disappears (Leehey, Carey, Diamond, & Cahn, 1978), highlighting the role of the right hemisphere in the inversion effect. EEG also allows the extraction of another interesting type of information, linked to the frequency domain, where specific modulations in response to face stimuli have been reported.

Brain oscillations

Over the past years, the study of cognitive processes has been widely linked to oscillations occurring in the human brain. The frequency domain of a signal can be investigated through electro- or magneto-encephalographic measures, and its content has been divided into low (delta range, <3 Hz; theta range, 4-7 Hz), mid (alpha range, 8-12 Hz; beta range, 15-25 Hz) and high (gamma range, 30-120 Hz) frequencies (Tallon-Baudry, 2009). Gamma oscillations, for example, have systematically been observed in cognitive tasks involving visual stimuli (e.g. Lutzenberger, Pulvermüller, Elbert, & Birbaumer, 1995; Tallon-Baudry, Bertrand, Delpuech, & Pernier, 1997). In the study of face perception, induced gamma-band activity (iGBA) has been observed during the processing of faces as compared to ape faces, human hands, buildings and watches (Zion-Golumbic, Golan, Anaki, & Bentin, 2008), and during the processing of faces as compared to mosaics and houses at early as well as later stages of processing in the right IOG, emphasising the role of this brain region in featural and configural processing stages (Sato et al., 2014). iGBA reflects neural synchronisation in high frequencies relative to a stimulus, but is not phase-locked to stimulus onset (Bertrand & Tallon-Baudry, 2000). Its latency varies from trial to trial, making averaging useless for the detection of any induced activity. The iGBA observed following face perception gives new insights into the neural correlates of the processing of these stimuli, suggesting that face processing engages a larger network than other visual stimuli (Zion-Golumbic et al., 2008).
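The evoked/induced distinction has a direct computational counterpart: averaging first and then computing power keeps only phase-locked (evoked) activity, whereas computing power trial by trial and then averaging also retains activity whose latency jitters across trials. The following sketch illustrates this under assumed inputs; it is not an analysis from the thesis, and it uses the common approximation of estimating induced power as total minus evoked power.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet(fs, freq, n_cycles=7):
    """Complex Morlet wavelet centred on `freq` Hz at sampling rate `fs`."""
    sd = n_cycles / (2 * np.pi * freq)           # Gaussian envelope SD, in seconds
    t = np.arange(-3.5 * sd, 3.5 * sd, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sd ** 2))

def evoked_and_induced_power(epochs, fs, freq=40.0):
    """epochs: (n_trials, n_samples) single-channel EEG; returns power time courses."""
    w = morlet(fs, freq)
    power = lambda x: np.abs(fftconvolve(x, w, mode="same")) ** 2

    evoked = power(epochs.mean(axis=0))                     # power of the average: phase-locked only
    total = np.mean([power(tr) for tr in epochs], axis=0)   # average of single-trial power
    induced = total - evoked                                # jittered-latency activity survives here
    return evoked, induced
```

With latency jitter across trials, the gamma burst cancels in the trial average (evoked power stays flat) but remains visible in the average of single-trial power, which is exactly why induced activity requires time-frequency analysis rather than plain ERP averaging.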

Taken together, these behavioural, neural and electrophysiological findings show the importance accorded to faces in the human visual system. The neural signature emerging as a specialised pathway for face processing reflects the evolutionary advantage of having a specific brain region devoted to processing the stimulus that most informs social inferences between individuals. Obviously, face perception interacts with other processes, such as emotion or awareness, leading to distinct brain regions being associated with the perception of different types of face stimuli, depending on low-level (for example, the physical features of the faces) or high-level (for example, the engagement of attention or the deployment of cognitive load) processes.


2. Emotional face perception

Faces represent the first filter to social interactions, and emotional expressions integrate important social and biological features linked to face perception (Öhman & Mineka, 2001). Emotion gives fundamental information about one's intentions; it is therefore necessary to be able to quickly recognise and categorise an emotion. Ekman and Friesen (1975) proposed a model of emotion based on Darwin's modular conception of emotion, distinguishing modules such as fear or anger (Darwin, 1872). According to Ekman and Friesen (1975), there are seven fundamental emotions – fear, anger, sadness, happiness, surprise, disgust and contempt – other affective expressions being derivatives of them.

In the field of experimental psychology, research has shown that emotional expressions modulate many aspects of social and cognitive processing, from attractiveness and evaluations of trustworthiness to facial recognition and identity discrimination. The intensity of a smile influences judgements of attractiveness (Golle, Mast, & Lobmaier, 2014), and a recent study showed that the emotion expressed by another face influences the evaluation of the attractiveness of a target face (Huang, Pan, Mo, & Ma, 2016). A face is even evaluated as less healthy when expressing negative emotions (Miriams et al., 2014).

Concerning judgements of trustworthiness, 10-year-old children, just like adults, evaluate angry cues as less trustworthy than happy cues, showing an influence of emotional faces early in development (Caulfield, Ewing, Bank, & Rhodes, 2015). In their study, Kaufmann and Schweinberger (2004) showed that participants were faster at recognising happy familiar faces, suggesting that identity and emotion interact and influence subsequent face discrimination. This effect has even been observed in patients with left or right temporal lobectomies, suggesting that the processing of emotional faces is modulated by the amygdala (Gallegos & Tranel, 2005). The discrimination of faces is improved for emotional versus neutral faces (Lorenzino & Caudek, 2015), and emotional faces are better remembered than neutral faces.

All these studies illustrate the importance of emotion in our everyday functioning through social interactions. The question of whether specialised brain regions process these emotions will now be explored, and the timing of this processing will be discussed.

2.1 Brain structures involved in emotional face perception

Single-cell recordings in the monkey brain have shown specific activations for face expression and face identity processing (Hasselmo, Rolls, & Baylis, 1989), with stronger responses to threat than to other emotions. In the human brain, emotion modulates the response to faces in the fusiform gyrus (Vuilleumier & Pourtois, 2007), with greater activation following expressions of fear (e.g. Vuilleumier, Armony, Driver, & Dolan, 2003; Vuilleumier, Richardson, Armony, Driver, & Dolan, 2004; Surguladze et al., 2003). A PET study reported correlations between fusiform and amygdalar responses to fearful faces, suggesting that cortical activation originates in the limbic area. The amygdala is strongly associated with the rapid processing of relevant stimuli (Sander et al., 2003). Evidence that this structure plays a key role in the processing of emotional events related to danger (particularly facial expressions of fear or anger) has been provided by numerous studies (e.g. Morris, Frith, Perrett, & Rowland, 1996; Phelps & LeDoux, 2005; Vuilleumier, Armony, Driver, & Dolan, 2001). There are at least two parallel routes involving the amygdala that are engaged in the visual processing of facial expressions (Adolphs, 2002). The first one is the well-documented visual pathway known to convey information in the mammalian brain, which projects from the retina to the lateral geniculate nucleus, and reaches the visual cortex and subsequently the temporal and parietal cortical regions. However, about 10% of the fibres from the retina follow an extrastriate pathway to the superior colliculus and pulvinar, and from there may reach the amygdala without accessing the striate cortex.

Human lesion studies have allowed us to determine the role played by the amygdala in a large array of emotional processes. Indeed, following bilateral amygdala damage, the recognition of emotional expressions (especially expressions related to negative affect) seems to be impaired, as demonstrated by several studies (e.g. Anderson & Phelps, 2000; Schmolck & Squire, 2001). Moreover, fMRI studies have highlighted strong activation of the amygdala while healthy subjects were processing facial expressions of fear compared to anger or neutral expressions (e.g. Breiter et al., 1996; Whalen et al., 2001). This suggests a specific role of this brain region in the processing of fear, as compared to other types of negative emotions.

However, a caveat is needed relative to the type of task used to observe this amygdala activation, since labelling a facial expression is not equivalent to passively viewing these expressions. Indeed, when participants are asked to label an emotion, the amygdala response measured with fMRI shows a deactivation (Hariri, Bookheimer, & Mazziotta, 2000), which is correlated with an attenuation of the autonomic changes generally occurring when performing this task (Kapler, Hariri, Mattay, McClure, & Weinberger, 2001). These observations led to the conclusion that the amygdala is activated when fearful facial expressions are processed during an implicit task, suggesting a role in automatic emotion processing. However, this activation can be attenuated or abolished when a cognitive task is engaged, reflecting the inhibition of this region by higher-level structures (Critchley et al., 2000). Interestingly, several studies have shown a lateralisation of this response, the right amygdala showing a greater implication in face processing (Phillips et al., 2001; Wright et al., 2001; Sato et al., 2011). This effect was also found during the perception of the same stimuli without awareness (Morris et al., 1999; Nomura et al., 2004).

Frontal regions influence the amygdala when the emotional task is accompanied by a cognitive component, and the OFC can in turn be modulated by the activity of the amygdala (Price et al., 1991; Rolls, 2004). In particular, the right OFC shows enhanced activation when fearful versus neutral faces are presented (Vuilleumier et al., 2001), and damage to that region may be accompanied by an impairment in the recognition of emotions (Hornak, Rolls, & Wade, 1996). Moreover, in contrast to the reactions of the amygdala, the OFC and the anterior cingulate cortex (ACC) have shown activation during the presentation of facial expressions of anger, as opposed to sadness, while subjects were engaged in a cognitive task (sex categorisation) (Blair, Morris, Frith, Perrett, & Dolan, 1999). These observations suggest a specific role of these frontal regions in the processing of facial expressions of anger.

2.2 Electrophysiological correlates of emotional face perception

Once the structures supporting a cognitive process have been identified, understanding the time course of their activation is crucial to determine the precise role of each brain area, and the literature on this temporal aspect has yielded conflicting results. The precise time course of the processing of emotional faces has not been systematically investigated, and some studies have highlighted a network involving occipital, temporal and frontal cortices (e.g. Krolak-Salmon, Hénaff, Vighetto, Bertrand, & Mauguière, 2004). According to Adolphs (2002), the processes involved in the recognition of emotion from facial expressions can be briefly summed up in two stages: an early perceptual processing phase and a subsequent recognition phase (Fig. 2).

The early processing is associated with the representation of an emotional stimulus at a low level, where features related to the shape, the colour or the configuration of a facial expression are processed. This processing involves occipital and temporal lobe cortices. Subsequently, the amygdala and the orbitofrontal cortex are engaged in order to recognise and label a specific emotion and place it in its context.

Figure 2: Timing of the processing of facial expressions. The three stages in emotional face processing, from stimulus onset, to perception, and finally recognition: fast perceptual processing at stimulus onset (120 ms); detailed perception, with an emotional reaction involving the body (170 ms); and conceptual knowledge of the emotions signalled by the face (> 300 ms). Adapted from Adolphs (2002).


In an ERP study, Allison, Puce, Spencer, and McCarthy (1999) found an early negativity (~200 ms post-stimulus) specific to faces when compared to other types of stimuli, emerging from the left and right fusiform and inferior temporal gyri. Moreover, a modulation of the N170 component by facial expressions has also been shown, with fearful faces eliciting larger amplitudes than neutral or surprised faces (Batty & Taylor, 2003). Three structures are implicated in the processing of facial expressions: the superior temporal sulcus (STS), the amygdala and the orbitofrontal cortex. MEG studies have reached the same conclusion: amygdala responses occur very rapidly in reaction to facial expressions of anger (Bayle, Hénaff, & Krolak-Salmon, 2009; Hung et al., 2010; Luo, Holroyd, Jones, Hendler, & Blair, 2007; Maratos, Mogg, Bradley, Rippon, & Senior, 2009). However, the precise timing of this activation is inconsistent across these studies, with reported latencies of ~20 ms, ~80 ms, ~100 ms and ~50-250 ms. One ERP study with epileptic patients implanted in occipital, temporal and frontal cortices (Krolak-Salmon et al., 2004) showed a rapid activation of the amygdala (~200 ms post-stimulus), followed by activation of the occipito-temporal, anterior temporal, and orbitofrontal cortices. Another study on epileptic patients showed activation of the amygdala with a peak at ~135 ms post-stimulus (Sato et al., 2011). This rapid activation of the amygdala has thus been shown to occur even before the activation of visual cortices (Klopp, Marinkovic, Chauvel, Nenov, & Halgren, 2000), suggesting an automatic response to threatening stimuli. These results confirm the observations highlighted above, namely that facial expressions conveying a negative affect are capable of activating the human amygdala and may bypass cortical visual areas.

While some studies show an early processing of facial expressions occurring in frontal and temporal brain areas (Batty & Taylor, 2003; Eimer & Holmes, 2002), others have shown a later involvement of fronto-central or temporal regions (Münte et al., 1998; Krolak-Salmon, Fischer, Vighetto, & Mauguière, 2001). This discrepancy reflects the difficulty of investigating these types of processes in the human brain, as well as the pertinence of using complementary brain imaging techniques with different types of signal analyses. Applied to data recorded from healthy controls and brain-damaged patients, these techniques allow an overall conclusion to be drawn about this cognitive phenomenon. The rapid amygdala response (~135 ms post-stimulus) to human facial expressions of fear reported by Sato et al. (2011) is related to an increase in gamma-band activity and is consistent with (i) animal studies showing the involvement of high-frequency bands in the processing of emotional cues (e.g. Oya, Kawasaki, Howard, & Adolphs, 2002; Luo et al., 2007; Popescu, Popa, & Paré, 2009), and (ii) human EEG studies suggesting the same gamma-band activation during the processing of human facial expressions (Balconi & Lucchiari, 2008; Balconi & Pozzoli, 2009) and of scenes conveying an emotional content (e.g. Keil et al., 2001).

These effects of emotion on face perception can be modulated by different experimental factors, leading to different thresholds of awareness. These modulations by task demands and perceptual features will be discussed in the next two chapters.


3. Unconscious processing of emotional stimuli

These effects of emotion on face perception have led researchers to question whether the unconscious perception of emotional stimuli might affect behavioural performance and neural responses as well. Fear conditioning has gained particular interest in this field, since it relies more on autonomous circuits detecting danger than on the explicit subjective experience of fear (LeDoux, 2000). Faces, being highly relevant stimuli for humans, have been used to test the unconscious perception of emotion. Stimuli that are biologically relevant (for example, those that signal danger) have to be processed quickly, and the activation of the amygdala during the processing of emotional faces has been interpreted as a "quick and dirty" route to rapidly detect signals of potential danger (Vuilleumier & Pourtois, 2007). Based on the phenomenon of affective blindsight and on the perception of subliminal stimuli in healthy subjects, the neural correlates of this processing will now be discussed.

3.1 Affective blindsight and the study of emotional faces at a non-conscious level

One of the first motivations that initiated the study of the non-conscious processing of affective signals in the human brain was the observation that affective blindsight patients are able to perceive emotional signals of fear despite the destruction of the visual cortex (De Gelder, Vroomen, Pourtois, & Weiskrantz, 1999). Blindsight arises in cortical blindness, the neurological state in which the striate cortex (V1) is damaged, resulting in blindness in the contralateral visual field. Despite this blindness, some patients have been able to discriminate features of visual events, such as motion direction or orientation (Weiskrantz, Barbur, & Sahraie, 1995). These patients thus perform a cognitive task in the absence of any explicit knowledge, illustrating a case of implicit processing in which awareness is dissociated from perception (Weiskrantz, 1991). One possible explanation of this phenomenon is the existence of a parallel visual pathway passing through the superior colliculus and the pulvinar. This pathway is believed to be able to process visual information even when the visual cortex is damaged (Weiskrantz, 1986).

These observations involving blindsight patients were first made on basic representations, like detecting a light spot in the blind field, relying on low-level processes. This raises the question of whether higher-level representations can take this non-conscious route as well. Interestingly, stimuli conveying an emotional message seem to represent a specific category that is processed non-consciously in blindsight patients. In a pioneering study by De Gelder et al. (1999), patients were able to guess, above chance level, the expression of emotional faces, even without conscious awareness.

Moreover, another study on a patient with bilateral destruction of the visual cortices (Pegna, Khateb, Lazeyras, & Seghier, 2005) showed that emotionally expressive faces can elicit brain responses, particularly in the right amygdala. As stated by LeDoux (2000), a subcortical pathway allows the processing of threatening stimuli very rapidly, without reaching the visual cortex, by passing through the thalamus before reaching the amygdala. This route is a plausible candidate for the processing of emotional stimuli presented in the absence of awareness, especially faces expressing a negative emotional valence (Morris, Öhman, & Dolan, 1999). In support of this theory, direct connections between the thalamus and the amygdala have been found in rodents (Cowey & Stoerig, 1991; Linke, De Lima, Schwegler, & Pape, 1999). This theory has been confirmed using a fear-conditioning paradigm in which an angry face was associated with a 95 dB white noise in healthy participants. fMRI results showed that for stimuli that were consciously seen, visual areas were activated. On the other hand, when the conditioned unseen faces were processed, the superior colliculus and the pulvinar were activated, and this activity was associated with that of the right amygdala.

Concerning the processing of faces without awareness, it is not yet established how the brain processes unaware emotional stimuli (Khalid, Finkbeiner, König, & Ansorge, 2013). Structures forming two subcortical processing routes have been proposed in the literature to account for this unconscious processing (see Tamietto & De Gelder, 2010). As described above, visual stimuli are known to project to the primary visual cortex through retinal fibres reaching the lateral geniculate nucleus of the thalamus. From there, visual information is conveyed to the striate and then the extrastriate cortex along the ventral and dorsal streams. However, a small portion of fibres project to the superior colliculus, the pulvinar, the amygdala, the substantia innominata and the nucleus accumbens (see Fig. 3a). The latter pathway is a non-visual pathway processing emotional stimuli, which includes the following brain structures: the locus coeruleus, the periaqueductal grey, the nucleus basalis of Meynert, the basal ganglia, the hypothalamus and the hippocampus (see Fig. 3b).

Each of these structures plays a different role in human cognition and some structures from these two pathways are proposed to interact during the processing of unconscious emotional signals. As mentioned above, the superior colliculus, the pulvinar and the amygdala seem to act in synergy in this processing.


3.2 Processing of subliminal emotional faces in healthy subjects

In healthy subjects, the effects of non-conscious representations have been investigated with experimental designs including backward masking and binocular rivalry. In the first paradigm, a neutral stimulus is presented to the subject immediately after the presentation of an emotional target stimulus, which thus appears at a non-conscious level (Stigler, 1910). Whalen et al. (1998) used this design to study the perception of masked faces (i.e. without awareness). In their paradigm, a face was flashed for 33 ms, immediately followed by a visible face presented for 167 ms. The flashed faces were not consciously perceived and represented either happy or frightened expressions; the visible stimuli always represented happy faces. Using fMRI, they observed that the amygdalae were significantly more activated by the flashed faces representing an expression of fear than by the happy faces.
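A practical detail behind such paradigms is that presentation durations are locked to the monitor refresh: on a 60 Hz display each frame lasts about 16.7 ms, so the 33 ms and 167 ms durations above correspond to 2 and 10 frames. The sketch below shows this arithmetic; it is an illustration only, and the 60 Hz refresh rate is an assumption rather than a detail reported for these studies.

```python
REFRESH_HZ = 60                  # assumed display refresh rate
FRAME_MS = 1000 / REFRESH_HZ     # one frame lasts ~16.67 ms

def frames_for(duration_ms: float) -> int:
    """Round a requested duration to a whole number of display frames."""
    return max(1, round(duration_ms / FRAME_MS))

for label, ms in [("masked target face", 33), ("visible mask face", 167)]:
    n = frames_for(ms)
    print(f"{label}: requested {ms} ms -> {n} frames = {n * FRAME_MS:.1f} ms on screen")
```

The same logic accounts for the duration ladder (16, 33, 66, 133 and 266 ms) used in the masking study discussed in section 3.3: at this refresh rate, each step corresponds to a doubling of the number of frames (1, 2, 4, 8, 16).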

It seems therefore that even if the emotional representations are not consciously perceived, activation of the amygdala may occur. Following this discovery, Morris, Öhman, and Dolan (1998) used PET to explore the route taken by the non-conscious stimuli before reaching the amygdala, and found that the subcortical route activating the superior colliculus was engaged. In summary, these studies have highlighted the brain structures involved in the perception of non-consciously perceived emotional stimuli (Morris et al., 1998; Pasley, Mayes, & Schultz, 2004; Whalen et al., 1998; Williams, Morris, McGlone, Abbott, & Mattingley, 2004), including more or less the same regions involved in the non-conscious perception of emotional stimuli in blindsight patients, namely the amygdala, the superior colliculus, the basal ganglia and the pulvinar.

Figure 3: The two pathways involved in unconscious emotional processing. (a) The visual pathway represents direct projections from the retina to the visual cortex through the LGN in the thalamus (Th; thick arrows). The alternative visual pathway represents the minority of fibres projecting to the superior colliculus (SC) and the pulvinar (Pulv), before reaching the extrastriate cortex (thin arrows). (b) The non-visual pathway involved in emotion processing consists of cortical and subcortical structures such as the amygdala (AMG), the substantia innominata (SI), the nucleus accumbens (NA), brainstem nuclei (the periaqueductal grey (PAG) and the locus coeruleus (LC)), the orbitofrontal cortex (OFC) and the anterior cingulate cortex (ACC). Taken from Tamietto & De Gelder, 2010.

Interestingly, the activity of these structures was enhanced when an emotional stimulus was not consciously perceived, as compared to the activity observed when the stimulus was consciously perceived (Anderson, Christoff, Panitz, Rosa, & Gabrieli, 2003).

Binocular rivalry refers to an effect of visual perception in which two different images are presented simultaneously, one to each eye, leading to the perceptual dominance of one image at a time while the other is suppressed (Blake & Logothetis, 2002). Using this paradigm with fearful faces versus images of chairs, Pasley, Mayes and Schultz (2004) showed an increased left amygdala activation following fearful faces, but no increase in the inferior temporal cortex (IT), a part of the visual system responsible for object recognition (Gallant, 2000). Fearful faces therefore bypass this high-level visual area, directly accessing the amygdala through a phylogenetically older visual system. In another study, Williams, Morris, McGlone, Abbott and Mattingley (2004) used fearful, happy and neutral faces that were systematically presented in binocular rivalry with the image of a house. They showed that extrastriate visual areas were only activated when faces and houses were consciously perceived, whereas fearful (versus neutral) faces activated the amygdala bilaterally in both the conscious and suppressed conditions. Happy faces, in contrast, elicited greater amygdala activation only when suppressed. The amygdala would therefore constitute a rapid, rough processing route, conveying basic non-specific information about the visual input.

3.3 Electrophysiological correlates of subliminal emotion face processing

When interested in the timing of the processing of emotional stimuli presented below the threshold of awareness, we may ask whether the non-conscious presentation of fearful facial expressions affects the face-specific N170 component as well. A fear-enhanced N200 was found to occur over fronto-central sites for subliminal presentations (Liddell et al., 2005), while the N400 and P300 were associated with supraliminal presentations. Kiss and Eimer (2008) found an earlier activation for the same condition (140-180 ms) over anterior sites. An ERP (event-related potential) study investigated the modulation of the N170 using the backward masking paradigm, where target stimuli composed of neutral, happy or fearful faces appeared at five different durations (16 ms, 33 ms, 66 ms, 133 ms and 266 ms), each followed by a neutral stimulus (Pegna et al., 2008). The task of the participants was to report whether or not they saw a fearful face. Behavioural results showed that the longer the duration of the target stimulus, the shorter the reaction times. As for the electrophysiological results, the subliminally presented fearful faces produced a stronger N170 than non-fearful faces, which was also the case for the supraliminal conditions. This activation was located over right temporal and temporo-occipital sites, again suggesting the involvement of the right amygdala in this type of processing.

When one is interested in the perception of faces under different levels of awareness, one may also ask how awareness modifies the electrical activity in the brain.

This has indeed been a central issue in many studies of brain processes. Synchronised gamma-band activity in the visual cortex has been associated with visual awareness (Crick & Koch, 1990; Sewards & Sewards, 1999): when neurons fire in synchrony at this frequency, awareness arises. Gamma-band oscillations have also been observed when a stimulus carries goal-conducive features (Grandjean & Scherer, 2008). Phase synchrony between beta and alpha frequencies has been linked to consciousness (Meador et al., 2002; Gross et al., 2004; Palva et al., 2005; Srinivasan et al., 1999; Doesburg et al., 2005) and is observed when participants report a subjective feeling towards a stimulus (Dan Glauser & Scherer, 2008). Theta-band activity has also been implicated, with induced activity during the recollection of personal events (Guderian & Düzel, 2005) and, in frontal regions, during the aware perception of words (Melloni et al., 2007). Prestimulus alpha synchronisation has also been shown to predict whether a stimulus will be seen or unseen (Mathewson, Gratton, Fabiani, Beck, & Ro, 2009).
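To illustrate how phase synchrony of this kind is commonly quantified, the sketch below computes a phase-locking value between two channels: each trial is band-pass filtered, the instantaneous phase is extracted with the Hilbert transform, and the consistency of the phase difference across trials is measured. This is a generic sketch on simulated data, not the specific pipeline used in the studies cited above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, band, sfreq):
    """Phase-locking value between two signals within a frequency band.

    x, y : arrays of shape (n_trials, n_samples)
    band : (low, high) band edges in Hz
    Returns the PLV per time point (values near 1 = strong synchrony).
    """
    b, a = butter(4, np.array(band) / (sfreq / 2), btype='bandpass')
    phx = np.angle(hilbert(filtfilt(b, a, x, axis=1), axis=1))
    phy = np.angle(hilbert(filtfilt(b, a, y, axis=1), axis=1))
    # Consistency of the phase difference across trials, per time point
    return np.abs(np.exp(1j * (phx - phy)).mean(axis=0))

# Example on simulated trials standing in for two electrodes
rng = np.random.default_rng(1)
sfreq, n_trials, n_samples = 256, 30, 512
x = rng.normal(size=(n_trials, n_samples))
y = x + rng.normal(scale=0.5, size=(n_trials, n_samples))  # partially coupled signal

plv_alpha = phase_locking_value(x, y, (8, 12), sfreq)
print(plv_alpha.mean())
```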

Gamma-band synchronisation has also been observed in the amygdala when participants consciously process emotional stimuli (Oya et al., 2002; Luo, Holroyd, et al., 2007), as well as in a distributed network including the amygdala and visual, prefrontal, parietal and posterior cingulate cortices during the processing of both supraliminal and subliminal emotional stimuli, suggesting that gamma-band activity does not only reflect consciousness but may also sustain emotional processing in some circumstances (Luo et al., 2009).


4. Attention and emotion

Studies investigating the interaction between attention and emotion were initially based on the well-known finding that attending to a location in space leads to greater accuracy and shorter reaction times when participants must discriminate a stimulus at that location (Posner, Snyder, & Davidson, 1980). Attention has been divided into two types: endogenous and exogenous. Exogenous attention refers to a bottom-up process in which an external cue captures attention, enhancing sensory processing at that location and thereby improving behavioural performance (see Fig. 4A, left side). Endogenous attention (also known as selective attention) refers to top-down processing in which, for instance, a central stimulus (an arrow) orients attention to a certain location in space, likewise leading to better and faster discrimination at that location (see Fig. 4A, right side).

When investigating endogenous attention, valid trials lead to faster reaction times than invalid trials, regardless of the interval between cue and target onset (the stimulus onset asynchrony, SOA; Posner, 1980) (see Fig. 4C). In the case of exogenous attention, the validity effect is SOA-dependent: at short SOAs, reaction times are faster for valid than for invalid trials, whereas at longer SOAs they are slower for valid than for invalid trials (see Fig. 4B). The validity effect at short SOAs has been interpreted as a facilitation of target processing through attentional capture (Cameron, Tai & Carrasco, 2002). The reversed effect observed at longer SOAs has been labelled "inhibition of return" (IOR; Posner, Rafal, Choate, & Vaughan, 1985): once attention has been withdrawn from a location that has just been explored, it is redirected to that location less efficiently. By preventing attention from immediately returning to previously explored locations, IOR acts as a facilitator of visual field exploration.

Figure 4: (A) The Posner paradigm with, on the left, a target preceded by a peripheral cue and, on the right, a target preceded by a central cue. (B) Reaction times as a function of SOA for valid and invalid trials with non-predictive peripheral cues. (C) Reaction times as a function of SOA for valid and invalid trials with central predictive cues. Reproduced from Chica, Bartolomeo and Lupianez, 2013.
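The dissociation between facilitation and IOR can be made concrete with a small analysis sketch. The numbers and column names below are illustrative simulations, not data from any of the studies cited; the point is simply how the validity effect (invalid minus valid reaction times) is computed per SOA, and how its sign reverses from short to long SOAs.

```python
import numpy as np
import pandas as pd

# Hypothetical single-subject trial table from a peripheral-cueing experiment
rng = np.random.default_rng(2)
n = 400
soa = rng.choice([100, 800], size=n)          # short vs long cue-target SOA (ms)
valid = rng.choice([True, False], size=n)     # cue at target location or not
# Simulated RTs: facilitation at 100 ms SOA, inhibition of return at 800 ms
rt = 350 + rng.normal(0, 30, n)
rt += np.where(valid & (soa == 100), -25, 0)  # valid faster at short SOA
rt += np.where(valid & (soa == 800), +20, 0)  # valid slower at long SOA (IOR)

df = pd.DataFrame({'soa': soa, 'valid': valid, 'rt': rt})
means = df.groupby(['soa', 'valid'])['rt'].mean().unstack()
# Validity effect = invalid minus valid; positive = facilitation, negative = IOR
print(means[False] - means[True])
```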

4.1 Electrophysiological correlates of spatial attention

Attention has been shown to modulate EEG components as early as 100 ms after target onset (P1), with effects extending to the N1 component at occipital and parieto-occipital scalp sites. The amplitude enhancement observed on these components is thought to reflect a selective amplification of sensory input (Hillyard & Anllo-Vento, 1996; Mangun, Hillyard, & Luck, 1993). However, these components reflect the amplification of sensory input contralateral to the attended position rather than the deployment of attention itself.

Therefore, electrophysiological responses time-locked to cue onset can inform us about the processes underlying the allocation of attention to a specific location. For this purpose, several components have been identified by computing the difference in amplitude between the two hemispheres, reflecting the deployment of attention to the right or to the left visual field; these are known as lateralized components. The most commonly observed lateralized component in the field of visual search is the N2pc (N stands for negativity; 2 for its timing, since it appears at ~200 ms post-stimulus; pc for its localisation, posterior-contralateral; Luck & Hillyard, 1994), which is generally larger over the visual cortex contralateral to the attended location. However, because the N2pc is by definition lateralized, it cannot be analysed with centrally presented stimuli; it can only be computed when stimuli are lateralized, which allows the contrast between contralateral and ipsilateral waveforms (see Luck, 2012).
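To illustrate this contralateral-minus-ipsilateral logic, the sketch below computes an N2pc-like difference wave from simulated epochs at a left and a right posterior electrode (PO7/PO8 are conventional choices in this literature): the contralateral and ipsilateral waveforms are averaged across target sides and then subtracted. All data and amplitudes here are placeholders, not values from any cited study.

```python
import numpy as np

rng = np.random.default_rng(3)
sfreq = 256
times = np.arange(-0.1, 0.5, 1 / sfreq)
shape = (50, times.size)                      # trials x samples, per condition

# Simulated baseline-corrected epochs, split by electrode and target side
po7_left_targets = rng.normal(size=shape)     # PO7 = ipsilateral to left targets
po8_left_targets = rng.normal(size=shape)     # PO8 = contralateral to left targets
po7_right_targets = rng.normal(size=shape)    # PO7 = contralateral to right targets
po8_right_targets = rng.normal(size=shape)    # PO8 = ipsilateral to right targets

# Inject a simulated contralateral negativity around 230 ms for illustration
bump = -2 * np.exp(-((times - 0.23) ** 2) / (2 * 0.03 ** 2))
po8_left_targets += bump
po7_right_targets += bump

# Average the two contralateral and the two ipsilateral waveforms, then subtract
contra = (po8_left_targets.mean(0) + po7_right_targets.mean(0)) / 2
ipsi = (po7_left_targets.mean(0) + po8_right_targets.mean(0)) / 2
n2pc = contra - ipsi                          # negative deflection expected ~200 ms

win = (times >= 0.18) & (times <= 0.28)
print(f"mean N2pc amplitude 180-280 ms: {n2pc[win].mean():.2f} (a.u.)")
```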

Several studies have also described components time-locked to cue onset in selective (endogenous) attention, that is, when the cue is presented at the centre of the screen (Eimer & Van Velzen, 2002; Hopf & Mangun, 2000; Jongen, Smulders, & Van Breukelen, 2007; Nobre, Sebestyen, & Miniussi, 2000). Three components have been reported in particular: the early directing attention negativity (EDAN), the anterior directing attention negativity (ADAN) and the late directing attention positivity (LDAP) (Harter, Miller, Price, Lalonde, & Keyes, 1989).
