
Being moved by music: literally and figuratively

SCHAERLAEKEN, Simon


SCHAERLAEKEN, Simon. Being moved by music: literally and figuratively. Thèse de doctorat : Univ. Genève et Lausanne, 2019, no. Neur. 259

DOI: 10.13097/archive-ouverte/unige:152541
URN: urn:nbn:ch:unige-1525417

Available at:

http://archive-ouverte.unige.ch/unige:152541


UNIVERSITÉ DE GENÈVE
FACULTÉ DE PSYCHOLOGIE ET SCIENCES DE L'ÉDUCATION

Professeur Didier Grandjean, directeur de thèse
Donald Glowinski, PhD, co-directeur de thèse

TITRE DE LA THÈSE

BEING MOVED BY MUSIC: LITERALLY AND FIGURATIVELY

THÈSE

Présentée à la Faculté de Psychologie et Sciences de l'Éducation de l'Université de Genève et de Lausanne pour obtenir le grade de Docteur en Neurosciences

par

Simon SCHAERLAEKEN

de Liège, Belgique

Thèse N° 259

Genève
Éditeur ou imprimeur : Université de Genève
2019


Acknowledgements

Let’s be real. This thesis has not been a one-man job. Neither literally nor figuratively (see what I did there?). I heard the acknowledgment section was the right opportunity to thank the people who contributed to this piece of work and helped me grow as a person. I want to sincerely thank everyone who offered their help and support in times of need. You have changed my life, more than I could have imagined. Throughout this PhD, I have met truly amazing people, and I cannot be grateful enough for the love and support I received. So let’s start at the beginning:

Didier.

Didier, as my primary supervisor, I want to thank you for your help, support, and supervision. You offered me all the freedom I could fathom while being present when I needed you. As our relationship deepened, I got not only to see you as my supervisor, but as a true mentor. Our conversations and drinks late at night will be forever remembered. Thank you for being the most flamboyant professor of all time, thank you for choosing me.

Donald, as my secondary supervisor, I want to extend the thanks I gave to Didier to you. Your help in times of need was always precious and our conversations always brightened my days. I was very lucky to choose a seat next to you when I did not know anybody around. I’m even more grateful to have been, for some time, part of your life and your family. In Rome or in Vevey, you made me feel like I was at home. You are family to me.

Coralie, Cyrielle, Leonardo, and the rest of the lab, you guys deserve another big "thank you". First of all, for making me an active part of the group. Second, for being awesome, for making me laugh, for our collaborations, for our lab meetings, for the dinners with Didier. Coralie, thank you for being my partner in crime in so many of these activities, thank you for being loud with me, thank you for being you. I will never forget ABIM together!

I want to thank many members of the CISA for their contributions and the help they offered during these four and a half years of PhD. The administration team, Daniela, Sandrine, Carole, and Marion, as well as Sylvain from IT, for their constant help and conversations. I’m very grateful I got supported by such an amazing team of incredibly competent people. Prof. David Sander, I want to thank you personally for the time and advice you gave me when I was deciding which path to choose, for answering my questions, for always being so nice and open. Giada, Alexandra, Guillaume, Augustin, Mario, you guys are just perfect! You conduct groundbreaking research and I hope that throughout our careers we meet again, and frequently. Finally, Heather, I want to tell you just how much you mean to me.

Our relationship is one of the pillars on which I based who I am. Your advice is always spot on, and I’m always impressed at how well you know me. Thank you for everything you have done for me. I’m a better person because of you.

Another major pillar of my life is my family. We might be kilometers apart but I always feel like we are close. You hear me complain, you hear me laugh. I know you are always here for me, just a phone call away. I thank you for giving me the space to grow into the independent person I am today. Your encouragement has allowed me to go places I never thought I would see. I hope you are proud of who I have become. Your support got me where I am today, but your love will forever push me to go further and do my very best. Thank you for your advice, thank you for your love.

I’m also very very grateful for my friends, and there are many I want to thank.

Edo, Fanfoue, Caro, from the EPFL team, Jon, Sloughi, Armelle, Robin, Thibaut, and my roommates, thank you for the good times spent together. Thank you for all of the burgers and experiences we’ve shared. Thank you for all the conversations that have helped me grow and made me laugh. To the people in Challenge, and especially my XVI committee, thank you for the awesome fun you brought into my life. I will forever cherish the thousands of adventures we had together. Thank you for making my PhD life so extraordinarily beautiful. The people at Pro Natura, and especially at the summer camp, Julien, Cookie, Ewa, Louise, thank you for making me understand where my true passion lies. Taking care of kids with you is not only a pure moment of happiness, it is also a life lesson. You people are truly amazing friends and you showed me a better way to live fully.

And then, there are three amazing human beings: Patoch, Pepa, and Oli. I love you three more than I have ever imagined I could love someone. You complete me.

You support me. You make me laugh and love. You are the world to me. I know for a fact that our relationship will last more than a lifetime. I’m so grateful I got the chance to meet you and spend so much time with you three. You are bigger than everything I have accomplished these four and a half years. I love you.

Finally, there are you, the readers and members of the jury. I know this thesis is pretty long, and if you are here, you’ve already read 0.6% of it. Thank you for your time and attention. I hope you enjoy the rest.


Abstract

Motion and space are crucial aspects of music understanding. Literally, visual kinematic cues such as the musical gestures associated with music production and beyond have been shown to significantly influence the appreciation of a performance. Figuratively, metaphors of motion and space dominate musicological writings, music education, and our conceptualization of the sonic features associated with music (e.g., pitch). However, there is, to our knowledge, no tool available to measure such metaphorical experiences. Moreover, factors impacting the production of musical gestures, such as the presence of an audience, as well as the perception of such gestures and the prediction of intentions, should be more thoroughly investigated. In this work, we provided evidence for (a) the impact of a virtual audience on both the production and the perception of music performance through motion features such as kinetic energy, and (b) the emergence of two networks in the musician's brain for predicting intentions, one linked to prefrontal and motor regions for the prediction of movements and another to parietal regions for the decoding of intentions. From the responses collected from a total of 754 participants, we also created (c) the Geneva Musical Metaphor Scales, comprising five scales: Flow, Movement, Force, Interior, and Wandering. (d) We described each scale in terms of musical emotions (e.g., Movement with Joyful activation), sonic features (e.g., Force with energy and rhythmic complexity), and entrainment (e.g., Interior is inversely connected to entrainment). Additionally, we showed that (e) exposure to such metaphor scales can influence music performance by modifying musical gestures (e.g., distance and kinetic energy) and results in higher levels of appreciation in musicians when the metaphor indication is congruent with the metaphorical content of a piece. These findings suggest that motion and space are essential aspects of music performance, both literally and figuratively, and provide a new tool for assessing the metaphorical content of music.


Résumé

Notre compréhension et notre appréciation de la musique sont fortement liées aux aspects de mouvement et d’espace. Littéralement, des études montrent que les indications visuelles telles que les gestes musicaux associés à la production de musique influencent de manière significative l’appréciation de la performance. Au sens figuré, les métaphores du mouvement et de l’espace dominent les écrits musicologiques, l’enseignement de la musique et notre conceptualisation des caractéristiques sonores associées à celle-ci (p. ex. la « hauteur » des notes). Cependant, à notre connaissance, aucun outil n’est disponible pour mesurer une telle expérience métaphorique. De plus, les facteurs ayant une incidence sur la production de gestes musicaux, tels que la présence d’une audience, ainsi que la perception de ces gestes et la prédiction de l’intention, devraient faire l’objet d’une recherche plus approfondie. Dans ce travail, nous avons mis en évidence (a) l’impact d’une audience virtuelle sur la production et la perception de la performance musicale par le mouvement, avec la modification de caractéristiques motrices telles que l’énergie cinétique, et (b) l’émergence de deux réseaux dans le cerveau du musicien pour prédire l’intention, un lié aux régions préfrontales et motrices pour la prédiction du mouvement et un autre aux régions pariétales pour le décodage des intentions. À partir des réponses recueillies auprès d’un total de 754 participants, nous avons également créé (c) les Geneva Musical Metaphor Scales, comprenant cinq échelles : Apesanteur, Mouvement, Force, Intériorisation et Déplacement. (d) Nous avons décrit chaque échelle en termes d’émotions musicales (par exemple, Mouvement et Activation), de caractéristiques sonores (par exemple, Force avec énergie et complexité rythmique) et d’entrainement (par exemple, Intériorisation est inversement reliée à l’entrainement). De plus, nous montrons que (e) l’exposition à de telles échelles peut influencer la performance musicale en modifiant les gestes musicaux (par exemple la distance et l’énergie cinétique) et entrainer une plus grande appréciation des musiciens lorsque l’exposition à ces métaphores est conforme au contenu métaphorique d’une pièce. Ces résultats suggèrent que le mouvement et l’espace sont des aspects essentiels de la performance musicale, au sens propre comme au sens figuré, et fournissent un nouvel outil d’évaluation du contenu métaphorique associé à la musique.


Contents

I State of the art 15

A Introduction 17

A.1 Preamble . . . 17

A.2 Impact of music on our society . . . 19

A.3 Musical training . . . 20

A.4 Musical performances . . . 22

A.5 Musical gestures . . . 24

A.6 Musical emotions . . . 25

A.7 Musical brain . . . 30

A.8 Musical meaning . . . 33

A.9 Goals . . . 37

II Studies 1-5 39

B Study 1 - Playing for a virtual audience: The impact of a social factor on gestures, sounds and expressive intents 41

B.1 Introduction . . . 43

B.1.a Controlling environmental variables in virtual reality . . . 43

B.1.b Music performance: from sound to gesture . . . 44

B.1.c Goal of this study . . . 48

B.2 Materials and Methods . . . 49

B.2.a Recording session . . . 49

B.2.b Rating experiment . . . 50

B.2.c Multi-modal Expressive Behavior Analysis . . . 51

B.3 Results . . . 52

B.3.a Proximal performance data analysis . . . 52

B.3.b Distal participant data analysis . . . 54

B.4 Discussion . . . 58


B.4.a Absence of an audience . . . 59

B.4.b Impact of the presence of a virtual audience . . . 60

B.4.c Interactive Virtual Environments as a tool to study musical performances . . . 62

B.5 Conclusions . . . 63

B.6 Supplementary Materials . . . 64

B.6.a Supplementary Material 1 . . . 64

B.6.b Supplementary Material 2 . . . 65

B.6.c Supplementary Material 3 . . . 66

C Study 2 - Frontoparietal, cerebellum network codes for intention prediction in altered perceptual conditions 67

C.1 Introduction . . . 69

C.2 Materials and Methods . . . 70

C.2.a Participants . . . 70

C.2.b Experimental Stimuli: Communicative intent task . . . 71

C.2.c Experimental procedure: Communicative intent task . . . 72

C.2.d Behavioral Data Analysis . . . 72

C.2.e Neuro-imaging Image Acquisition . . . 73

C.2.f Neuro-imaging Data Analysis . . . 73

C.3 Results . . . 76

C.4 Discussion . . . 80

C.5 Supplementary Materials . . . 82

C.5.a Supplementary Material 1 . . . 82

C.5.b Supplementary Material 2 . . . 83

C.5.c Supplementary Material 3 . . . 84

C.5.d Supplementary Material 4 . . . 85

C.5.e Supplementary Material 5 . . . 86

C.5.f Supplementary Material 6 . . . 87

C.5.g Supplementary Material 7 . . . 88

C.5.h Supplementary Material 8 . . . 89

C.5.i Supplementary Material 9 . . . 90

C.5.j Supplementary Material 10 . . . 91

D Study 3 - "Hearing music as...": Metaphors evoked by the sound of classical music 93

D.1 Introduction . . . 95


D.2 Study 1 . . . 100

D.2.a Materials and methods . . . 101

D.2.b Results and discussion . . . 102

D.3 Study 2 . . . 103

D.3.a Materials and methods . . . 103

D.3.b Results . . . 104

D.3.c Discussion . . . 108

D.4 Study 3 . . . 110

D.4.a Materials and methods . . . 110

D.4.b Results . . . 111

D.4.c Discussion . . . 113

D.5 Supplementary Materials . . . 126

D.5.a Supplementary Material 1 . . . 126

D.5.b Supplementary Material 2 . . . 127

D.5.c Supplementary Material 3 . . . 128

D.5.d Supplementary Material 4 . . . 129

D.5.e Supplementary Material 5 . . . 130

D.5.f Supplementary Material 6 . . . 131

D.5.g Supplementary Material 7 . . . 132

D.5.h Supplementary Material 8 . . . 133

E Study 4 - Linking musical metaphors and emotions evoked by the sound of classical music 135

E.1 Introduction . . . 137

E.2 Method . . . 141

E.2.a Participants . . . 141

E.2.b Materials . . . 142

E.2.c Procedure . . . 143

E.2.d Statistical analyses . . . 143

E.3 Results . . . 145

E.4 Discussion . . . 153

E.5 Supplementary Materials . . . 162

E.5.a Supplementary Material 1 . . . 162

E.5.b Supplementary Material 2 . . . 163

E.5.c Supplementary Material 3 . . . 164

E.5.d Supplementary Material 4 . . . 165


E.5.e Supplementary Material 5 . . . 166

E.5.f Supplementary Material 6 . . . 167

E.5.g Supplementary Material 7 . . . 168

E.5.h Supplementary Material 8 . . . 169

E.5.i Supplementary Material 9 . . . 170

E.5.j Supplementary Material 10 . . . 171

E.5.k Supplementary Material 11 . . . 172

E.5.l Supplementary Material 12 . . . 173

E.5.m Supplementary Material 13 . . . 174

E.5.n Supplementary Material 14 . . . 175

F Study 5 - Being moved: literally and metaphorically 177

F.1 Introduction . . . 179

F.2 Material and Methods . . . 183

F.2.a Production procedure . . . 183

F.2.b Evaluation procedure . . . 184

F.2.c Data analysis and statistics . . . 185

F.3 Results . . . 186

F.3.a Production of musical excerpts . . . 187

F.3.b Perception of musical excerpts . . . 188

F.4 Discussion . . . 194

F.5 Supplementary Materials . . . 200

F.5.a Supplementary Material 1 . . . 200

F.5.b Supplementary Material 2 . . . 201

F.5.c Supplementary Material 3 . . . 202

F.5.d Supplementary Material 4 . . . 203

F.5.e Supplementary Material 5 . . . 204

F.5.f Supplementary Material 6 . . . 205

F.5.g Supplementary Material 7 . . . 206

F.5.h Supplementary Material 8 . . . 207

F.5.i Supplementary Material 9 . . . 208

F.5.j Supplementary Material 10 . . . 209

F.5.k Supplementary Material 11 . . . 210

F.5.l Supplementary Material 12 . . . 211

F.5.m Supplementary Material 13 . . . 212

F.5.n Supplementary Material 14 . . . 213


III General discussion 215

G Discussion 217

G.1 Main theme . . . 217

G.2 Main findings . . . 218

G.3 Linking findings with past research . . . 221

G.4 Limitations . . . 228

G.5 Perspectives . . . 230

G.6 Conclusion . . . 232

IV Appendix 233

H Study 6 - Unfolding and dynamics of affect bursts decoding in humans 235

H.1 Introduction . . . 237

H.2 Materials and methods . . . 241

H.2.a Participants . . . 241

H.2.b Stimuli . . . 241

H.2.c Experimental Procedure . . . 242

H.2.d Statistical Analysis . . . 243

H.3 Results . . . 244

H.3.a Recognition Accuracy . . . 246

H.3.b Recognition Certainty . . . 247

H.3.c Linear and Nonlinear Recognition Curves . . . 247

H.3.d Impact of Acoustic Parameters . . . 251

H.4 Discussion . . . 255

H.4.a Recognition of Emotion . . . 255

H.4.b Shape Functions . . . 258

H.4.c Acoustic Features . . . 259

H.5 Conclusion . . . 261

H.6 Supplementary Materials . . . 262

H.6.a Supplementary Material 1 . . . 262

H.6.b Supplementary Material 2 . . . 263

H.6.c Supplementary Material 3 . . . 264

H.6.d Supplementary Material 4 . . . 265

H.6.e Supplementary Material 5 . . . 266


H.6.f Supplementary Material 6 . . . 267

H.6.g Supplementary Material 7 . . . 268

H.6.h Supplementary Material 8 . . . 269

H.6.i Supplementary Material 9 . . . 270

H.6.j Supplementary Material 10 . . . 272

H.6.k Supplementary Material 11 . . . 273

H.6.l Supplementary Material 12 . . . 279

I EMC Toolbox - Exploring and exporting motion features made simple 281

I.1 Introduction . . . 283

I.2 Data representation and toolbox philosophy . . . 284

I.3 Exploring and computing features with the toolbox . . . 285

I.3.a Reading data and preprocessing . . . 286

I.3.b Visualizing data . . . 287

I.3.c Computing features . . . 289

I.4 Conclusion . . . 296

Bibliography 296


List of Figures

A.1 Possible neurobiological mechanisms for the rehabilitative effect of music . . . 21

A.2 Neural correlates of music-evoked emotions . . . 32

A.3 Key brain areas associated with music processing . . . 34

B.1 Motion and acoustic features computed . . . 53

B.2 Impact of the interaction of the expressiveness and the presence of an audience on body features . . . 54

B.3 Interaction of the expressiveness and the presence of an audience on the perceived emotional intensity and the perceived authenticity . . . 56

B.4 Impact of the interaction of the computed features and the presence of an audience on the perceived emotional intensity and the perceived authenticity . . . 58

C.1 Behavioral and neural evidence of communicative intent decoding by musicians and matched controls . . . 78

D.1 Results of the exploratory factorial analysis. . . 106

D.2 Examples of musical profiles with the factors computed in the exploratory factorial analysis as fixed effects for the linear mixed model . . . 108

D.3 Confirmatory factor analysis on ratings of visual imagery expression in response to music . . . 115

E.1 Estimated binary ratings for GEMS and GEMMES based on the attributed affective content of the musical excerpts . . . 147

E.2 Polar graph of the estimated binary value of each metaphor based on the emotional content of the musical excerpts . . . 149

E.3 Polar graph of the estimated binary value of each metaphor based on the principal components of the acoustic and perceptual features associated with the musical excerpts . . . 150

11

(14)

E.4 Multiple regression using best subset selection . . . 152

E.5 Multi-dimensional scaling based on the Spearman correlations between every item of all scales and features . . . 155

F.1 Estimated values of each principal component based on the experimental conditions and the excerpts . . . 189

F.2 Estimated ratings for both liking and expressive intent based on the experimental conditions and the level of expertise of the participant . . . 191

F.3 Estimated ratings for both liking and expressive intent based on principal component . . . 192

F.4 Estimated odds ratio for GEMMES based on the instructions given to violinists . . . 194

F.5 Modulations of odds ratio for GEMMES by the principal components extracted from acoustic and motion features . . . 195

H.1 Unbiased recognition over time . . . 248

H.2 Certainty over time. . . 250

H.3 Impact of acoustic features on recognition across and over time. . . . 254

I.1 Skeleton of a violinist on a single time frame with the addition of the motion trajectories, speed and acceleration for the violin and hands . . . 288

I.2 Comparison of the time series of both wrist markers according to the three spatial dimensions . . . 288

I.3 Chronograph of the musician’s motion associated with a representation of the potency of the kinetic energy . . . 289

I.4 Euclidean norm of the first derivative (speed) applied to all dimensions for every marker . . . 290

I.5 Translational kinetic energy associated with each body segment (and not with markers, like the other features) . . . 291

I.6 Translational kinetic energy associated with each body segment . . . 292

I.7 Angles between different body parts . . . 293

I.8 Convex hull encapsulating the body parts. . . 293

I.9 Symmetry and fluidity. . . 294

I.10 Synchronization between multiple body parts . . . 295

(15)

List of Tables

D.1 Correlations Between Factors From the EFA . . . 105
D.2 Comparison of Confirmatory Factorial Analysis models . . . 114
D.3 Estimated Correlation Matrix for the Latent Variables of the CFA . . . 116
E.1 Correlation table between every item of every scale . . . 154
H.1 Percentages of recognition and confidence for each emotion across all stimuli . . . 245
H.2 Contrasts comparing Hu scores at last gate between emotions . . . 249
H.3 Comparison between polynomial fits and orthogonal polynomial contrasts for each emotion curve . . . 251
H.4 Contrasts comparing Hu scores at last gate between emotions . . . 252
I.1 Example of CSV summary table containing the computed features kinetic energy and convex hull for both imported files . . . 296


Part I

State of the art


Chapter A

Introduction

A.1 Preamble

Music is a fundamental part of our evolution. It can be traced back to preliterate cultures around the world and has been present ever since (Conard, Malina, & Münzel, 2009; Wallin, Merker, & Brown, 2000). It has changed over the centuries and acquired nuances through different styles and genres. It was expanded with each new instrument and continues to expand with every technological breakthrough, such as the invention of electricity or the computer. Music is part of ourselves; people enjoy playing music as well as listening to it. They compose music, synchronize their movements to the beat, dance, and let their voices harmonize with a plethora of different instruments. Music is a small part of everyone’s daily life (P. N. Juslin, Liljestrom, Laukka, Vastfjall, & Lundqvist, 2011), from the song played on the radio to the one stuck in your head, perpetually and for no apparent reason. Music can also transport us to unexpected places. Beyond your local band and concert venue, or Vienna and its magnificent opera, it also connects to imaginative places in your head (Osborne, 1981). Music makes you travel metaphorically. People report that music can carry meaning (Koelsch & Siebel, 2005), and this meaning can be transformed into a story. As a result, even without lyrics, it is not hard to imagine yourself in front of magnificent scenery or diving into a story line of romance and conflict. Similarly, with little effort, you can start to see music jump and fall, spin and grow, ascend or descend, and overall move in many different ways to many different places (Eitan & Granot, 2006; Rigas & Alty, 2005). Above all, music can make us feel (P. N. Juslin, 2009). It can create emotions and moods, from the sadness of a depressed cello to the excitement of a happy beat. Music can consequently be a teacher in matters of emotion and mood, from negative to positive. Following up, music can actually teach and help our communities in even more diverse ways. From music therapies (R. Blake & Bishop, 1994) and rhythmic training for dyslexic children (Flaugnacco et al., 2015; Overy, 2003) and Parkinson’s patients (Sihvonen et al., 2017), to the social aspects of concerts and dancing (Turino, 2008), to the incredible impact musical training has on multiple brain structures (Herholz & Zatorre, 2012), music can shape our world for the better.

In the last few decades, researchers have grown interested in understanding how music works. Research covers many areas of the musical experience, from the very basic appreciation of a tonality or the timbre of an instrument (e.g., Menon et al., 2002), to the emergence of complex emotions (P. N. Juslin, 2009), or the social cognition at play in duos, quartets, or orchestras (Keller, 2014). While many studies have also focused on the generation of musical gestures, the metaphorical aspect of music is often left aside. However, metaphors in music can be found at many levels, from the description of a series of notes and rhythms (Larson, 2012) to the communication between a composer and the musicians playing a particular piece (Pannese, Rappaz, & Grandjean, 2016). Metaphors are an essential part of this art form, and yet they are understudied and poorly understood. Which metaphors best describe the experience of music? How are those metaphors related to the different acoustic features that constitute musical pieces, or even to the evoked emotions? As we move with music, how can metaphors impact such gestures and invite the listener to imagine new ones? Finally, can you be trained to perceive these metaphors and thereby enhance your appreciation of musical pieces? What do musically trained professionals perceive in gestures, in expressive intents, and, in fine, in musical metaphors? Can we pinpoint the mechanisms and the brain structures responsible for such complex cognitive processes?

The present work sets out to fill part of the huge gap left to be explored in the world of musical metaphors, gestures, and perception. It starts by exploring the impact of musical training on the production and perception of gestures, and investigates the relationship between gestures and external factors such as the presence of an audience. It continues by introducing a new scale for appreciating musical metaphors in Western classical music and measures its relationships to emotions and, eventually, to musical gestures. Overall, it explains part of how music makes us move, how music makes us feel, and how music makes us think, in the hope of improving musical experience, teaching, and therapy, for the sake of everybody, musicians or not.


A.2 Impact of music on our society

Music has been around for a long time, originating very early in human history (Conard et al., 2009). Some evolutionary theories of music have placed it as a driver of the evolution of language (Wallin et al., 2000), cooperation, and social cohesion (I. Cross & Morley, 2010; Koelsch, Offermanns, & Franzke, 2010). Music can also be found in every part of the globe and in every known culture, with the tradition of mothers singing songs to their newborns (Trehub, 2003). In addition to being one of the first experiences we encounter in life through these lullabies, music carries important social functions. In his review, Koelsch listed seven social functions of music (Koelsch, 2014). Evidence suggests that music can improve social cognition, as when a listener tries to understand the musician’s or the composer’s intentions (Steinbeis & Koelsch, 2008), but also "co-pathy", when the individuals of a group feel empathy for each other, leading their emotional states to become more homogeneous. Co-pathy can increase the well-being of both the musicians creating music and the listeners listening to it (Koelsch et al., 2010). Music creates social contact by making people interact when playing together; it involves coordination of actions to synchronize with one another, and cooperation when multiple players are performing. Furthermore, music is important for communication, as much evidence points to an overlap between the brain regions implicated in the perception and production of music and those recruited for language, such as Broca’s area (Koelsch, 2012; Kunert, Willems, Casasanto, Patel, & Hagoort, 2015; Patel, 2008). Finally, music leads to enhanced social cohesion of groups (I. Cross & Morley, 2008), which brings many benefits, such as increased health and life expectancy when humans feel like they belong (Cacioppo & Patrick, 2008; House, 2001).
With neuro-technologies advancing at a rapid pace, scientists have highlighted the enhanced neuroplasticity of children playing music, leading to structural changes and functional improvements that can be carried into adulthood (Hyde et al., 2009; Skoe & Kraus, 2012; Zuk, Benjamin, Kenyon, & Gaab, 2014). In clinical populations (e.g., people with Parkinson’s disease, epilepsy, or multiple sclerosis), music-based interventions show evidence of supporting cognition, motor function, and emotional wellbeing (see the review by Sihvonen et al., 2017). The rehabilitative effect of music is attributed to multiple possible neurobiological mechanisms (Figure A.1). Music activates the dopaminergic mesolimbic system, responsible for the regulation of various executive functions, memory, attention, mood, and motivation (Salimpoor, Benovoy, Larcher, Dagher, & Zatorre, 2011). It induces an increased blood flow through the medial cerebral artery due to autoregulation (G. F. Meyer, Spray, Fairlie, & Uomini, 2014), associated with the widespread activation of various networks in the brain (Koelsch, 2014; Zatorre, Chen, & Penhune, 2007). Music also affects both cardiovascular and endocrine responses, with reduced serum cortisol levels and inhibition of cardiovascular stress reactions (Bradt, Dileo, & Potvin, 2013).

Music can also influence our decisions, by affecting our readiness to participate in social activities (Wood, Saltzberg, & Goldsamt, 1990), our willingness to help others (Fried & Berkowitz, 1979; A. C. North, Tarrant, & Hargreaves, 2004), or our intention to purchase (Bruner, 1990). Finally, music can elicit emotions, from mere arousal, chills, and ‘basic’ emotions to more ‘complex’ ones (P. N. Juslin, 2009). While many describe this as their primary motive for engaging with music (P. N. Juslin & Laukka, 2004), some even believe that it is music’s primary purpose (Cooke, 1959). These are only some of the avenues by which music changes the world.

A.3 Musical training

Music can be created in many different ways. The simple repetition of a sentence with accented parts can eventually be perceived as a melody (Deutsch, 2010). A child with no training is easily capable of creating a rhythm by clapping his hands.

Music by definition is "the science or art of ordering tones or sounds in succession, in combination, and in temporal relationships to produce a composition having unity and continuity" (Merriam-Webster). However, the everyday music most people listen to is usually performed by musicians. Musicians go through long and dedicated training to master their craft and to be able to skillfully create music. Learning to play an instrument is a highly complex task.

It involves the interaction of multiple modalities and requires the recruitment of higher-order cognitive functions. As a result, musical training leads to behavioral, functional, and structural changes with effects lasting from days to years (Herholz & Zatorre, 2012). Changes in brain regions involved in auditory processing are among the most characteristic effects of musical training. These changes are both functional (Elmer, Meyer, & Jäncke, 2011; Ohnishi et al., 2001; Tervaniemi, Just, Koelsch, Widmann, & Schröger, 2005) and structural (Bermudez, Lerch, Evans, & Zatorre, 2008; Gaser & Schlaug, 2003; Hyde et al., 2009). These changes occur at different levels of the auditory pathway, from the brainstem (e.g., Wong, Skoe, Russo, Dees, & Kraus, 2007), to primary and surrounding auditory cortices (e.g., Bermudez et al., 2008; Schneider et al., 2002), to areas involved in higher-order



Figure A.1 – Possible neurobiological mechanisms for the rehabilitative effect of music. Orange circles and yellow arrows represent the mesolimbic system, and the green circles represent the HPA axis. ACTH=adrenocorticotropic hormone. CRH=corticotropin-releasing hormone. HPA axis=hypothalamic-pituitary-adrenal axis. Adapted from Sihvonen et al. (2017).

auditory cognition (e.g., Lappe, Herholz, Trainor, & Pantev, 2008). Moreover, musical training leads to changes in regions supporting cognitive control (Moreno et al., 2011; Schulze, Müller, & Koelsch, 2011; Zuk et al., 2014) and coordinating fast motor movements (Münte, Altenmüller, & Jäncke, 2002). It also affects sensory-to-motor coupling mechanisms (Ellis et al., 2012; Pantev, Lappe, Herholz, & Trainor, 2009; Zatorre et al., 2007), both during actual instrumental practice and the mere perception of music (Zatorre et al., 2007). The motor network is significantly impacted by musical practice, with anatomical changes to the anterior corpus callosum (Schlaug, Jäncke, Huang, Staiger, & Steinmetz, 1995), the cerebellum (Hutchinson, Lee, Gaab, & Schlaug, 2003), as well as the motor and premotor cortex (Bermudez et al., 2008; Gaser & Schlaug, 2003). This impact on the motor network, however, is specific to the training received by the musician. Music performance implies greater recruitment of hand areas for instrumentalists like violinists and pianists (Lotze, Scheler, Tan, Braun, & Birbaumer, 2003) compared to singing, which involves representations of the vocal tract (Kleber, Birbaumer, Veit, Trevorrow, & Lotze, 2007).

A.4 Musical performances

Musical performances are the demonstration of everything acquired during such dedicated training. When a musician is performing, he or she puts to the test what was learned over the years. Since we engage with music because of its ability to elicit emotions (P. N. Juslin & Laukka, 2004), it is the musician's goal and responsibility to perform in an expressive manner. However, this is not an easy task because the communication of expressive content through music is a multilevel process. It can be studied from three different perspectives: the composer's message (Peretz, Gagnon, & Bouchard, 1998), the expressive intentions of the performer (Poli, 2004; Todd, 1995), and the listener's perceptual (Canazza, Poli, Rodà, & Vidolin, 2003) or physiological (Palmer, 1997) experiences. These three levels also appear in the model of how emotions are experienced designed by Scherer and Zentner (2001). In this model, the three levels are respectively called "structural features", "performance features", and "listener features". The authors added a fourth component: the "contextual features". The different parts of this model can be explained as follows.

Contextual features refer to certain aspects of the performance and/or listening situation. The location of the performance (e.g. a church or a concert hall) affects the quality of the experience. Similarly, the presence of disturbances such as a person talking next to you, or your neighbor mowing his grass, might hinder the appreciation of the performance.

The structural features are associated with the composer's message because the notes and rhythms originate directly from his/her hand. These features can be further subdivided into two categories: segmental and suprasegmental features. The former corresponds to the acoustic structure of individual tones, e.g. their duration, energy (amplitude), pitch (fundamental frequency), and timbre. The latter corresponds to systematic configurational changes in sound sequences over time, e.g. melody, tempo, harmony.

Listener features can be summarized in three categories: stable dispositions, transient listener states, and musical expertise. Stable dispositions include inter-individual differences in gender, age, personality traits, and socio-cultural factors, among other things. Transient listener states comprise temporally fluctuating states of the listener such as his/her motivation, mood, and concentration. Inducing a particular transient state, such as mood, in the listener prior to the evaluation of musical emotion has been shown to affect the subsequent appreciation of music (Cantor & Zillmann, 1973).

Performance features are of multiple types and once again can be linked to stable dispositions of the performer (e.g. attractiveness and attire (Howard, 2012; Wapnick, Mazza, & Darrow, 1998), reputation, facial expressions), expertise and ability (e.g. technical and interpretative skills), as well as transient performance states (e.g. interpretation, concentration, motivation).

At both the performance and listener levels, musical expertise plays a role in the appreciation of a musical piece. As training leads to a better understanding of the musical structure, a general appreciation of the skills and style of a fellow musician, and a heightened awareness of details that the untrained ear would miss, it also affects the evaluation of the emotional content of a piece (even at the level of brain activity: e.g., Dellacherie, Roy, Hugueville, Peretz, & Samson, 2011). Even so, the emotional meaning communicated by a music performance can be decoded regardless of the listener's level of musical expertise (P. Juslin, 2005; P. N. Juslin, 1997b). In fact, this ability is thought to be caused by implicit exposure to music leading to very high levels of sophistication in the appreciation of the art (Bigand & Poulin-Charronnat, 2006).

The performance features can be studied from two perspectives: the auditory experience and the visual experience. The auditory experience has been extensively studied. Performance expression from this perspective can be defined as "the small and large variations in timing, dynamics, timbre, and pitch that form the micro-structure of a performance and differentiate it from another performance of the same music" (Palmer (1997), p. 118). To achieve emotional expression, performers can precisely change key acoustic features such as tempo, dynamics, timing, timbre, and articulation (P. N. Juslin & Timmers, 2010). Musicians can replicate such patterns of timing and dynamics with high precision (Gabrielsson, 1987; Henderson, 1937; Shaffer, 1987). The visual component contributes greatly to the appreciation of a music performance (Bergeron & Lopes, 2009; Cook, 2008). Consequently, it should not be regarded as a marginal phenomenon, but rather as an important factor in the communication of meaning (for a meta-analysis, see Platz & Kopiez, 2012). Visual cues are essential to determine the expressiveness of a piece when audience members are not skilled at listening (J. W. Davidson & Correia, 2002) or in the absence of sensory information (J. W. Davidson, 1993). Even in the presence of auditory cues, visual kinematic cues have been found to affect the transduction of musical emotions (Chapados & Levitin, 2008; Vines, Krumhansl, Wanderley, & Levitin, 2006), the perception of emotional expression (Dahl & Friberg, 2007; J. W. Davidson, 1993; Vines, Krumhansl, Wanderley, Dalca, & Levitin, 2011; Vines et al., 2006), and the overall appreciation of the performance (Platz & Kopiez, 2012). For example, singers' facial expressions can influence the emotion conveyed through sound (W. F. Thompson, Russo, & Quinto, 2008). In fact, visual kinematic cues have even been shown to be a more reliable predictor of the winner of a piano competition than auditory cues (Tsay, 2014). They can also inform listeners about auditory cues by providing a range of information on musical structure (Vines et al., 2006), and on musical ideas and timing (Goebl & Palmer, 2009; Williamon & Davidson, 2002). With advances in motion-capture and video analysis techniques enabling the recording of performers' motion, Palmer (2012) concluded that this era was an excellent time to conduct research on performing musicians, and specifically on the visual aspects of performance (Palmer, 2012).
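To make the segmental acoustic cues discussed above (amplitude and fundamental frequency) concrete, the following minimal sketch, written for this text and not taken from the thesis, measures RMS energy and estimates F0 with a naive autocorrelation method on a synthetic tone. The sampling rate and frame length are arbitrary assumptions for the illustration.

```python
import math

SR = 8000  # sampling rate in Hz (assumption for this sketch)

def synth_tone(freq, dur=0.05, sr=SR):
    """Generate a unit-amplitude sine tone, a stand-in for one musical tone."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def rms_energy(frame):
    """Amplitude cue: root-mean-square energy of the frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def estimate_f0(frame, sr=SR, fmin=80, fmax=1000):
    """Pitch cue: fundamental frequency from the autocorrelation peak lag."""
    best_lag, best_corr = 0, -math.inf
    for lag in range(sr // fmax, sr // fmin + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sr / best_lag

tone = synth_tone(440.0)           # A4
print(round(rms_energy(tone), 3))  # 0.707, the RMS of a unit sine
print(round(estimate_f0(tone)))    # 444: 440 Hz quantized to integer lag 18
```

Real feature extractors (e.g. YIN-style pitch trackers) refine this idea with interpolation and normalization; the point here is only that the "segmental" cues of a tone are directly measurable quantities.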

A.5 Musical gestures

Given that visual information is crucial to the appreciation of a musical performance, one might wonder what types of visual cues created by the musician are capable of triggering such different responses from the audience.

While research teams have explored stable physical attributes such as attractiveness and attire (Howard, 2012; Wapnick et al., 1998), most of the research focuses on body movements and musical gestures. Musicians can produce multiple types of gestures, which have been subdivided into three categories: effective gestures, figurative gestures, and accompanying/ancillary gestures (Delalande, 1988). Effective gestures are responsible for producing the sound. A violinist moving his bow over the strings or a drummer hitting the snare drum with her stick are two examples. These gestures are highly linked to skill and expertise, as they must be mastered in order to perform correctly. Figurative gestures correspond to sonic gestures perceived by an audience that cannot be traced back to a physical movement, for instance, a change in the timbre of an instrument or in melodic balance. Lastly, accompanying or ancillary gestures are actual physical body movements that are not necessarily responsible for the production of sound. For instance, a musician could tap his foot in rhythm or make head movements as he/she is playing. They are not "technical" gestures. They are informative about the behavior of musicians and their mental state, presenting the musician's interpretation of a musical piece and allowing for the discrimination between different performances of the same piece (Dahl, Bevilacqua, & Bresin, 2010).

Depending on his/her instrument, each musician can use the body parts that are free to move to communicate expressive intents to the audience. For example, pianists use the head and upper torso (J. W. Davidson, 1994; M. R. Thompson & Luck, 2012; W. F. Thompson et al., 2008), while clarinetists and violinists can use their full body and even their instrument (Wanderley, Vines, Middleton, McKay, & Hatch, 2005). In all cases, an increase in the amount of movement is generally linked to a more expressive performance (J. W. Davidson, 1994; M. R. Thompson & Luck, 2012; W. F. Thompson et al., 2008). Consequently, body movement is seen as a prominent feature of the performance. In fact, African music has been defined as the "sonic product of action, that is, of human movements" (Baily, 1985), and therefore goes beyond the importance of sound to incorporate motion. These gestures can even be perceived as more pleasurable than the produced sounds in some cultures (Cook et al. (1990), pp. 5-7). Furthermore, another example of the strong link between movement and music can be found in children. It has been shown that infants and young children display bodily movement patterns specific to music (Zentner & Eerola, 2010). When observing the bodily movements of seven- and eight-year-old children in relation to music, researchers highlighted their ability to identify different musical concepts (e.g. duration, intensity) with kinetic patterns (Tafuri, 2017). Finally, the aforementioned motion-capture technologies have also given rise to music emerging from direct bodily motion without the intermediate use of an instrument. In this scenario, with the use of sensing devices, musicians are able to produce sound by simply moving their body. These unbound movements open a new field where all gestures (effective, figurative, and ancillary) can be mixed and "heard" in one performance.

A.6 Musical emotions

In addition to being clearly related to movement, music shows a strong ability to elicit emotions in humans. In that sense, listeners can be “moved” by music. They


perceive music as both expressive (Gabrielsson & Juslin, 2003; P. N. Juslin, 2013) and evocative of emotions (Dowling & Harwood, 1986). As mentioned before, this emotional side of the musical experience is the primary reason people engage with it (P. N. Juslin & Laukka, 2004). However, one should note that emotional episodes are neither reliably elicited by listening to music nor similarly distributed over the population. Indeed, by recent estimates, music seems to produce emotions in only about 55–65% of listening episodes on average, with wide inter-individual differences in overall prevalence (P. N. Juslin & Laukka, 2004; P. N. Juslin & Västfjäll, 2008). While some theorists have even claimed that music can only induce moods, affective states created by music are more generally seen as emotions. An emotion can be defined as "an event-focused, two-step, fast process consisting of relevance-based emotion elicitation mechanisms that shape a multiple emotional response: action tendency, autonomic reaction, expression, and feeling" (Sander (2013), p. 23). In the case of music, the induction process involves a specific ‘object’ (i.e. the music, or some specific cues in the music). The duration of the affective episode is relatively short (ca. 5–40 minutes; see Västfjäll, 2001). These episodes can create relatively intense feelings (e.g., P. N. Juslin, 2009; P. N. Juslin & Västfjäll, 2008). They include autonomic responses (Krumhansl, 1997) and cognitive changes (see review: Koelsch, 2014). Emotionally evocative music affects psychophysiological reactions (Bartlett, 1996; Krumhansl, 1997; Nyklíček, Thayer, & Van Doornen, 1997), including sexual arousal (Mitchell, Dibartolo, Brown, & Barlow, 1998), and psycho-motor performance (Pignatiello, Camp, & Rasar, 1986). It also affects action tendencies (Fried & Berkowitz, 1979) and various evaluative judgments such as physical attractiveness (May & Hamilton, 1980), advertisement (Gorn, Pham, & Sin, 2001), and probability of success or failure (Teasdale & Spencer, 1984). These episodes can be labelled with an emotion word. However, there is less consensus over the types of emotions elicited by music, and therefore over the associated measurement tools. Scherer (2004) proposed a first distinction between "utilitarian" and "aesthetic" emotions (Scherer, 2004). While the former includes emotions like fear or anger that serve an evolutionary purpose and prepare the individual to react to specific situations, the latter is related to the aesthetic characteristics of art and is usually associated with more complex emotions. While it is less likely that music induces anger or disgust, music is still capable of eliciting "basic" emotions (i.e. happiness, sadness; "basic" emotions being based on Matsumoto & Ekman, 2009). In fact, musical experience can create anything from mere arousal, chills, and "basic" emotions, to more ‘complex’ emotions (e.g. nostalgia, wonder), and even


‘mixed’ emotions (P. N. Juslin, 2009). In an attempt to characterize the emotions most frequently used to describe musical experiences, Zentner and colleagues (2008) created the Geneva Emotional Music Scales (GEMS) by successively asking participants to rate lists of emotions in the context of music listening and grouping the resulting terms together (Zentner, Grandjean, & Scherer, 2008). They ended up with nine emotion scales: joyful activation, nostalgia, peacefulness, power, sadness, tenderness, tension, transcendence, and wonder. This scale finds correspondence in another study, which found that the five most frequent musical emotions were happy-elated, sad-melancholic, calm-content, nostalgic-longing, and aroused-alert (P. N. Juslin, 2009).

Music therefore seems to elicit affective responses described by a more fine-grained range of positive emotions than negative ones (Zentner et al., 2008). However, it has been highlighted that despite the GEMS's ability to provide nuanced information about musically induced emotions, a dimensional model seems to be the most reliable and efficient way of collecting and describing musical emotions (Vuoskoski & Eerola, 2011). A dimensional or circumplex model proposes that emotions are represented on (usually two) continuous axes. These dimensions represent, for example, valence and arousal (Russell, 1980), valence and engagement (Watson & Tellegen, 1985), or tension and energy (Thayer, 1990). Some have even questioned the usual two-dimensional representations, arguing that when a theoretically based approach is adopted, four dimensions are necessary to represent similarities and differences in the meaning of emotion words (Fontaine, Scherer, Roesch, & Ellsworth, 2008). These dimensions would then be evaluation-pleasantness, potency-control, activation-arousal, and unpredictability. The debate is still ongoing and no definitive solution has been proposed so far, most likely due to the incredible variety of experiences linked to emotions and music.

Another important question is how these emotions are perceived and induced.

Perceived and felt musical emotions tend to be associated (Evans & Schubert, 2008; Hunter, Schellenberg, & Schimmack, 2010). For example, music perceived as happy will tend to create a positive, happy feeling in the listener, and similarly for a sad song (Garrido & Schubert, 2013, 2015). However, while the perception of emotional expression in music is a bottom-up, sonic-based mechanism, linked to listeners attributing emotional qualities to their auditory perception, the induction of emotion is a listener-focused phenomenon (Scherer, 2004). It is a personal experience relying more heavily on top-down contributions and largely affected by individual differences (Gabrielsson, 2001; Rentfrow & McDonald, 2010). In general, musical emotions are more strongly perceived than felt (Evans & Schubert, 2008; Hunter et al., 2010; Zentner et al., 2008). The recognition of emotions in music is based on basic acoustic cues as well as music- and culture-specific cues. Different emotions are associated with different patterns of acoustic cues, similarly to speech and prosody (see review: P. N. Juslin & Laukka, 2003, pp. 792-795). In fact, music performance largely shares the same emotion-specific patterns of acoustic cues with vocal expression (P. N. Juslin & Laukka, 2003). Happiness, for instance, is characterized by fast speech rate/tempo, high fundamental frequency (F0)/high pitch, as well as fast voice onsets/tone attacks. Not all acoustic cues carry the same weight when it comes to recognizing discrete emotions. Some cues, such as sound level and articulation, show higher ecological validity (P. N. Juslin, 2000). Even at a young age, emotions seem to be instinctively characterized by specific cues. When asking 5- to 8-year-olds to perform a song with different emotions, researchers found that they modulated basic acoustic cues such as loudness, tempo, and pitch in their performance (Adachi & Trehub, 1998). The induction of emotions, on the other hand, while also partly based on the sonic attributes of the music, relies more on listener-based mechanisms. Juslin (2008, 2013) developed a model comprising eight mechanisms that take part in inducing emotions (P. N. Juslin, 2013; P. N. Juslin & Västfjäll, 2008). The mechanisms included in this model are the following, ordered from the most bottom-up to the most top-down: (1) brain stem reflexes, (2) rhythmic entrainment, (3) evaluative conditioning, (4) emotional contagion, (5) visual imagery, (6) episodic memory, (7) musical expectancy, (8) aesthetic judgment.

Brain stem reflexes are activations of the central nervous system by sounds that meet certain criteria (e.g. high frequency, loud, noisy). They signal a potentially important and urgent event (Davis, 1984), can quickly increase arousal through the reticular system, and evoke feelings of surprise in the listener (P. N. Juslin, Harmat, & Eerola, 2014).

Rhythmic entrainment is a process by which two systems become synchronous with one another over time. In that sense, a powerful, external rhythm in the music can affect the internal rhythm of the body (e.g. heart rate), leading to the emergence of a feeling (Clayton, Sager, & Will, 2005). Four levels of entrainment can be distinguished in humans when regarding entrainment as an affect induction mechanism: perceptual, motor, social, and autonomic physiological (Vuilleumier & Trost, 2015). Perceptual entrainment is a process necessary to perceive periodic information in auditory input (e.g. beat). It creates a percept of periodicities and expectancies about what is going to happen next (for a review, see Nobre & Rohenkohl, 2014). Motor entrainment refers to the synchronization of movements to a specific beat, as when dancing or bouncing. Social entrainment occurs when playing in a group of musicians. It corresponds to the coordination in time of two or more people (for a review on interpersonal coordination, see Keller, 2014). Finally, autonomic physiological entrainment corresponds to the tendency of biological rhythms to entrain to external rhythms. However, an absence of "synchronization of body oscillators" has been observed multiple times (Ellis & Thayer, 2010), and some argue that more evidence is still needed to establish this mechanism as evocative of musical emotions (Koelsch, 2015).
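The core idea of entrainment, one oscillator being pulled toward the phase and frequency of another through coupling, can be illustrated with a generic toy simulation (a sketch for this text, not a model from the thesis): an internal oscillator with its own natural frequency, driven by a faster external beat, gradually locks onto the external frequency when the coupling is strong enough.

```python
import math

def entrained_frequency(f_ext=2.0, f_int=1.7, coupling=4.0,
                        dt=0.001, t_max=20.0):
    """Toy phase-oscillator model of entrainment (illustrative only).

    An internal oscillator with natural frequency f_int (think of a
    bodily rhythm, in Hz) is coupled to an external beat at f_ext.
    Returns the internal oscillator's instantaneous frequency (Hz)
    at the end of the simulation.
    """
    phi_int = 0.0
    rate = 2 * math.pi * f_int
    for n in range(int(t_max / dt)):
        phi_ext = 2 * math.pi * f_ext * n * dt
        # Kuramoto-style coupling pulls the internal phase toward the beat
        rate = 2 * math.pi * f_int + coupling * math.sin(phi_ext - phi_int)
        phi_int += rate * dt
    return rate / (2 * math.pi)

print(round(entrained_frequency(), 2))              # 2.0: locked onto the beat
print(round(entrained_frequency(coupling=0.0), 2))  # 1.7: no coupling, no lock
```

With zero coupling the internal rhythm keeps its natural frequency; with coupling exceeding the frequency detuning, it phase-locks to the external beat, which is the formal analogue of the "two systems becoming synchronous over time" described above.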

Evaluative conditioning is a process in which a particular song is paired repeatedly with a positive or negative emotion, so that when this song plays, the emotion is induced unconsciously. For example, if dancing makes you happy and you repeatedly dance to the same song, over time the music itself will eventually arouse happiness.

Emotional contagion is a process in which the listener perceives the emotion expressed by a musical piece and mimics this expression internally. The mimicry can occur through the embodiment of peripheral feedback from muscles (Niedenthal, Barsalou, Winkielman, Krauth-Gruber, & Ric, 2005), or through the activation of relevant emotional representations in the brain, leading to the induction of the same emotion. This process could be based on the mirror-neuron system, which remains to be proven in humans (Rizzolatti & Craighero, 2004).

Visual imagery refers to the ability to conjure up visual images while listening to music. The listener conceptualizes the musical structure through nonverbal metaphorical mappings grounded in bodily experiences (Kövecses, 2000; Lakoff & Johnson, 1980). Music has previously been shown to stimulate visual imagery (Osborne, 1980) and, in return, such imagery can enhance the emotional response to music (Band, Quilter, & Miller, 2001; Västfjäll, 2001).

Episodic memory corresponds to the evocation of a memory of a particular event in the listener's life by the music played (Baumgartner, 1992). This phenomenon is often linked to emotions such as nostalgia (Janata, Tomic, & Rakowski, 2007; P. N. Juslin & Västfjäll, 2008) and has been referred to as the ‘Darling, they are playing our tune’ phenomenon (see J. B. Davies, 1978).


Musical expectancy is a process in which emotions are induced by violations, delays, or confirmations of the listener's expectations about the music. These expectations are based on the listener's previous experiences of the same musical style (Pearce, Ruiz, Kapasi, Wiggins, & Bhattacharya, 2010). Musical emotions related to violations of expectancies include surprise (Huron (2006), p. 348, p. 356), anxiety (L. B. Meyer (1956), p. 27), and thrills (Sloboda, 1991).

Aesthetic judgement involves a multi-dimensional set of subjective criteria such as beauty, novelty, skill, and style, and depends on higher cognitive functions and domain-relevant knowledge. It is strongly influenced by cultural variables.

Such a multi-mechanism system implies that if one mechanism fails, the others can take over (P. N. Juslin, 2013). Similarly, when multiple mechanisms are activated at the same time, their conflicting outputs can lead to the creation of "mixed emotions" (Griffiths, 2004).
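The musical expectancy mechanism, in particular, is often cast computationally as statistical prediction (the line of work cited above by Pearce and colleagues). As an illustrative sketch only, and not a model from the thesis, a simple bigram model over pitches can quantify how "surprising" a continuation is: habitual continuations receive high probability and low surprisal, while out-of-style notes receive the opposite.

```python
import math
from collections import Counter, defaultdict

def train_bigrams(melody):
    """Count note-to-note transitions in a pitch sequence (MIDI numbers)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, nxt, alpha=1.0, vocab=128):
    """Shannon surprisal -log2 P(nxt | prev), with add-alpha smoothing."""
    c = counts[prev]
    total = sum(c.values()) + alpha * vocab
    return -math.log2((c[nxt] + alpha) / total)

# A toy "style": a C major arpeggio heard many times over
melody = [60, 64, 67, 72, 67, 64] * 20
model = train_bigrams(melody)

expected = surprisal(model, 60, 64)   # the habitual continuation after C
violated = surprisal(model, 60, 61)   # an unheard, out-of-style semitone
print(expected < violated)            # True: the violation is more surprising
```

High surprisal is the formal counterpart of a violated expectation; in the model above, repeated exposure to a style shapes the transition counts the same way implicit listening exposure is thought to shape a listener's expectations.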

A.7 Musical brain

The induction of musical emotions is in all cases the result of the integrated activation of multiple systems and brain regions. Exposure to pleasant and unpleasant music has been associated with activity in paralimbic and limbic regions including, among others, the hippocampus, nucleus accumbens, ventral striatum, and insula (Blood, Zatorre, Bermudez, & Evans, 1999; Brown, Martinez, & Parsons, 2004; Koelsch, Fritz, v. Cramon, Müller, & Friederici, 2006; Menon & Levitin, 2005). In general, it has been linked to structures implicated in the processing of reward and emotions (Blood & Zatorre, 2001). In fact, the pleasure induced by listening to music can be traced back to the activation of the dopaminergic component of the reward system (Blood & Zatorre, 2001; Salimpoor et al., 2011). More than just pleasure, the affective response to music also implies the activation of a network of limbic regions, including the amygdala, hippocampus, and hypothalamus, as well as paralimbic and neocortical regions, including the frontal pole, orbitofrontal cortex, parahippocampal gyrus, superior temporal gyrus/sulcus, cingulate cortex, and precuneus (Blood & Zatorre, 2001; Blood et al., 1999; Koelsch et al., 2006; Koelsch & Siebel, 2005; Menon & Levitin, 2005; Salimpoor et al., 2013). In parallel, these regions have previously been highlighted for their processing of emotional states and their evaluation of reward, especially in socially relevant cognition (Adolphs, 1999; Adolphs, Damasio, Tranel, Cooper, & Damasio, 2000; Adolphs & Tranel, 2003). Finally, in a meta-analysis including studies investigating the experiences of intense pleasure, happy or sad music, joy- or fear-evoking music, music-evoked tension, emotional responses to consonant or dissonant music, and music expectancy violations, Koelsch (2014) reported clusters of changes in activity in numerous core regions that underlie emotion (Koelsch, 2014). Although the studies involved were methodologically diverse, music appeared to produce neuronal activations in the superficial and latero-basal nuclei groups of the amygdala, the hippocampal formation, the right ventral striatum (including the nucleus accumbens) extending into the ventral pallidum, the head of the left caudate nucleus, the auditory cortex, the pre-supplementary motor area, the cingulate cortex, and the orbitofrontal cortex (Figure A.2).

However, many more regions are implicated in the experience of music than those of the reward and emotion network. Many brain regions and networks have been highlighted in the context of music listening, from the basic auditory pathways perceiving acoustic features to the motor network responsible for moving to the beat and playing music (Figure A.3).

Numerous critical components of music have been associated with specific brain regions, including pitch in the superior temporal cortex (Zatorre, Evans, & Meyer, 1994), harmony in the rostromedial prefrontal cortex (Janata et al., 2002) and left auditory cortex (Passynkova, Sander, & Scheich, 2005), rhythm in premotor cortex activation (Grahn & Rowe, 2009) and gamma-band activity in the auditory cortex (J. S. Snyder & Large, 2005), and timbre in the posterior Heschl's gyrus and superior temporal sulcus (Menon et al., 2002). Recognizing patterns in music by sequencing structural information and forming predictions typically involves the inferior frontal gyrus (Makuuchi, Bahlmann, Anwander, & Friederici, 2009). It is also recruited when processing temporal structure (Levitin & Menon, 2005) and violations of syntactic structure (Koelsch & Siebel, 2005; Maess, Koelsch, Gunter, & Friederici, 2001). Interestingly, congenital amusia, responsible for music perception deficits, is characterized by a disruption of the superior temporal gyrus and inferior frontal gyrus pathways (Hyde, Zatorre, & Peretz, 2010; Loui, Alsop, & Schlaug, 2009).

Motor representations and actions are ubiquitous in music. While movement timing is linked to the cerebellum, basal ganglia, and supplementary motor area (Zatorre et al., 2007), sensory-motor integration and motor imagery are associated with the premotor cortex (PMC) and precentral gyrus (Sammler et al., 2010; Zatorre et al., 2007). As a complex process, music performance usually requires interactions


Figure A.2 – Neural correlates of music-evoked emotions. A meta-analysis of functional studies that shows several neural correlates of music-evoked emotions. The analysis indicates clusters of activity changes reported across studies in the amygdala (local maxima were located in the left superficial amygdala (SF) and in the right laterobasal amygdala (LB)) and hippocampal formation (panel a), the left caudate nucleus and right ventral striatum (with a local maximum in the nucleus accumbens (NAc)) (panel b), pre-supplementary motor area (SMA), rostral cingulate zone (RCZ), orbitofrontal cortex (OFC) and mediodorsal thalamus (MD) (panel c), as well as in auditory regions (Heschl's gyrus (HG) and anterior superior temporal gyrus (aSTG)) (panel d). Adapted from Koelsch (2014).

between posterior auditory cortices and premotor cortices to integrate feedback and feedforward information and create a cognitive representation of the performance. It has also been proposed that the dorsal premotor cortex may be a crucial neural hub, due to its connectivity to both input and output systems, in order to integrate higher-order features of a sound with the appropriately timed and organized motor response (Zatorre et al., 2007). The understanding of another musician's gestures is supported by the suggested mirror neuron system in humans. This system has been proposed as a mechanism allowing the perceiver to understand the intention and meaning of a communicative signal by simulating the representation of that signal in his/her own brain. This system has been repeatedly demonstrated in the monkey brain (Rizzolatti & Craighero, 2004; Rizzolatti & Sinigaglia, 2010). Activity within the fronto-parietal mirror neuron system has been shown to be modulated by musical expertise (Bangert et al., 2006; Haslinger et al., 2005), dancing experience (E. S. Cross, Hamilton, & Grafton, 2006), and music-related motor learning (Calvo-Merino, Glaser, Grèzes, Passingham, & Haggard, 2004). The mirror neuron system may also provide a common neural mechanism for processing domain-general rules associated with language, actions, and music, which in turn can communicate meaning and human affect (Molnar-Szakacs & Overy, 2006). It would allow humans to interpret communicative signals, understanding motor actions and the underlying intentions associated with them. The proposal of a common neural mechanism for music, language, and motor functions is supported by evidence from studies of language disorders. For example, children suffering from dyslexia show deficits in reading and language abilities (Goswami, 2002; Tallal, Miller, & Fitch, 1993), in motor control (Fawcett & Nicolson, 1995; Wolff, 2002), as well as imprecise timing in the domain of music (Overy, 2003). Nowadays, one of the interventions offered to these children is music training, and more particularly rhythmic training, which has been shown to improve language skills, both in reading and speaking (Flaugnacco et al., 2015; Overy, 2003).

A.8 Musical meaning

In addition to expressing and eliciting emotional states, music is also thought to carry meaning, although not all scholars agree with this view (Kivy, 1991; Stravinsky, 1975). Some report that music possesses neither grammar nor meaning (Kivy, 2002). Others point at the ineffability of music, that is, the impossibility of putting it into words (Raffman, 1993). A growing body of evidence, however, supports the idea that musical information is perceived by humans as meaningful (Koelsch & Siebel, 2005). Music even conveys semantic meaning (Koelsch, 2009) and engages brain regions responsible for the processing of semantic information (see Koelsch, 2011, for a review). Meaning is an emergent property in the sense that it is a global phenomenon resulting from the combination of smaller phenomena, which collectively create something greater than the sum of its parts. Meaningfulness in music is not yet fully understood, but multiple avenues are currently being explored. Many scholars have searched, and are still searching, for musical meaning with the help of theories such as conceptual blending (Brandt, 2008; Fauconnier & Turner, 2008), image schemas (Brower, 2000; Saslaw, 1996), and conceptual metaphors (Lakoff & Johnson, 1980; Zbikowski, 2002).

Figure A.3 – Key brain areas associated with music processing. Areas identified from neuroimaging studies of healthy people. Although the figure displays the lateral and medial parts of the right hemisphere, many musical processes are largely bilateral (with the exception of pitch and melody processing, which are lateralised, with right-hemisphere activity being dominant). Adapted from Särkämö et al. (2013).

Music has long been associated with metaphors, and both musicians and writers seem to agree. The composer Leonard Bernstein (1976) remarked in his Harvard lectures that “music is a totally metaphorical language”, while the philosopher Roger Scruton simply wrote: “if we take away the metaphors of movement, of space, of chords as objects, of melodies as advancing and retreating, as moving up and down – if we take those metaphors away, nothing of music remains, but only sound” (Scruton, 1983, p. 106). Music and metaphors have been at the center of theoretical discussions (Adlington, 2003; Aksnes, 2002; Brower, 2000; Cumming, 2000; M. L. Johnson & Larson, 2003; Saslaw, 1996; Spitzer, 2003; Zbikowski, 1997, 2002) but also of anthropological studies (A. W. Cox, 1999; Feld, 1981; Perlman, 2004). Conceptual mappings, with the help of metaphors, have also become common in music theory (Spitzer, 2004; Zbikowski, 2002). Metaphors have been shown to affect cognition and sensory experiences (see Landau, Meier, & Keefer, 2010, for a review) as well as to provide a platform for a shared understanding of music’s affective quality (Pannese et al., 2016). In fact, metaphors may intervene at the inter- and intra-personal level.

The former results in a better communication of musical and affective meaning across individuals, from the composer to the musician to the listener; the latter possibly contributes to the transition between emotion perception and induction (Pannese et al., 2016). Musical meaning is attributed to a specific type of metaphor, the conceptual metaphor, based on Conceptual Metaphor Theory (CMT; Lakoff & Johnson, 1980). In this theory, Lakoff and Johnson see metaphors as a process of “mapping” between a target domain, which one tries to understand, and a source domain, which is well known to the instigator of the metaphor. In the case of music, these mappings feel natural, since subjective experiences (e.g., music, emotions) are often hard to describe in non-metaphorical terms (Lakoff & Johnson, 1999, p. 59). Conceptual metaphors thus allow for the expression of ideas that would be difficult to formulate literally and, at the same time, provide a compact, rich, and vivid way to communicate complex information (Ortony, 1975).

Mappings between music and language in the form of conceptual metaphors depend on image-schematic structures that are common to the two domains (Zbikowski, 2008). Image schemas are defined as “a recurring, dynamic pattern of our perceptual interactions and motor programs that gives coherence and structure to our experience” (M. Johnson, 1987, p. xiv). They provide a theoretical basis for describing music with metaphors grounded in embodied experience. However, since human nature is varied in many ways (e.g., languages, social circumstances, individual propensities), metaphors are deeply subjective and show great inter-individual differences (cf. Sinha & De López, 2000). Nevertheless, as observed by Zbikowski, it is unlikely that a participant would describe “the first note more like an apple, the second more like a banana” (Zbikowski, 2002, p. 70). Consequently, this suggests that there are constraints on the creation of a concept, and that some universality might exist. In that regard, conceptual metaphors are central to music (Zbikowski, 2008), since they combine both physiological (universal) and cultural (contextual) aspects of musical experience, acting both as a cognitive process and as a cultural process (Maccormac, 1985, pp. 5–6). A very good example of such cultural differences can be found in metaphors associated with pitch. In Western culture, since at least the Middle Ages, musical pitches have usually been described in terms of their disposition in space (a pitch can be “higher” or “lower” than another) (Zbikowski, 2008).

However, many different cultures describe pitches differently: “small” vs. “large”
