Supplementary material

Details and subjective ratings of auditory material

The twelve pieces of instrumental classical music used to evoke distinct emotions in our study are listed below. For each musical excerpt, the corresponding experimental label is given in parentheses.

Tender (TE) Excerpts

• (TE_1) Dvorak, Antonin: Symphony No. 9, Op. 95, "From the New World": II. Largo.

• (TE_2) Liszt, Franz: Bénédiction de Dieu dans la solitude.

• (TE_3) Fauré, Gabriel: Ballade for piano and orchestra, Op. 19.

Tense (INQ) Excerpts

• (INQ_4) Kurtag, György: String Quartet, Op. 1: II. Con Moto.

• (INQ_5) Bartok, Bela: Piano Sonata, BB 88, 1st movement.

• (INQ_6) Martinů, Bohuslav: String Quartet No. 3, H 183, Vivo.

Joyful (JOY) Excerpts

• (JOY_7) Delibes, Léo: Coppélia, Ballet in 3 Acts: 1st act, Prelude-Mazurka.

• (JOY_8) Mendelssohn Bartholdy, Felix: Violin Sonata in F, 3. Assai vivace.

• (JOY_9) Brahms, Johannes: Violin Concerto, 3rd movement.

Sad (SA) Excerpts

• (SA_11) Bach, Johann Sebastian: Musical Offering, BWV 1079, Canon a 2 per augmentationem.

• (SA_12) Schubert, Franz: String Quartet in D Minor, D. 810 "Death and the Maiden": II. Con moto.

• (SA_10) Shostakovich, Dmitri: String Quartet No. 8 in C Minor, Op. 110: I. Largo.

For each musical excerpt, ratings of the subjective affective experience evoked during listening were obtained from each participant after scanning. Results are shown in Supplementary Table 1. For both the valence and arousal dimensions, similar scores were found in the two age groups (p > .05).

Behavioural Data Analysis - Alerting and Orienting component

Our analysis of executive control (see main manuscript) was complemented by similar analyses for the other two attention components measured by the ANT, orienting and alerting. To this aim, accuracy (AC) and reaction times (RT) were computed for each participant in each group (young and older adults) and for each cue condition of the ANT (centre, double, spatial, and no cue), during each of the four emotion conditions (joy, tenderness, tension, and sadness). AC analyses were performed using Wilcoxon rank tests to compare group and cue effects. RT scores were analysed using mixed-model repeated-measures ANOVAs with the emotion and attention conditions of interest as within-subject factors (i.e. no cue and centre cue for alerting; centre, spatial valid, and spatial invalid cues for orienting), plus group as a between-subject factor. In separate analyses, emotion conditions were considered either as four distinct categories (joy, tension, sadness, tenderness) or along two basic dimensions (valence and arousal). Finally, as described by Fan et al. (2005), we calculated the efficiency of alerting (i.e. [mean RT no cue - mean RT centre cue]) and orienting (i.e. [mean RT centre cue - mean RT spatial valid cue], and [mean RT spatial invalid cue - mean RT spatial valid cue] for reorienting), which were analysed using separate ANOVAs. Because the alerting and orienting efficiency indices showed no difference as a function of group or emotion (p > .05), and because these attentional effects were relatively weak (Fan, McCandliss et al. 2002; Fan, McCandliss et al. 2005; Finucane, Whiteman et al. 2010; McConnell and Shore 2011), our findings related to alerting and orienting indices were not exhaustively investigated in the current study. All p-values were adjusted using Bonferroni corrections.
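For illustration, the efficiency indices defined above are simple RT differences between cue conditions. The following is a minimal Python sketch (not the code used in the study), assuming a hypothetical per-trial table with subject, cue, and rt columns already restricted to correct trials; the cue labels are likewise assumptions.

import pandas as pd

def attention_indices(trials: pd.DataFrame) -> pd.DataFrame:
    """Per-subject alerting, orienting, and reorienting indices (in ms)."""
    # Mean RT per subject and cue condition (correct trials assumed pre-filtered).
    mean_rt = trials.groupby(["subject", "cue"])["rt"].mean().unstack("cue")
    return pd.DataFrame({
        # Alerting efficiency: mean RT no cue minus mean RT centre cue.
        "alerting": mean_rt["no_cue"] - mean_rt["centre"],
        # Orienting efficiency: mean RT centre cue minus mean RT spatial valid cue.
        "orienting": mean_rt["centre"] - mean_rt["spatial_valid"],
        # Reorienting: mean RT spatial invalid cue minus mean RT spatial valid cue.
        "reorienting": mean_rt["spatial_invalid"] - mean_rt["spatial_valid"],
    })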

For the behavioural sub-study performed outside the scanner (see main text), we also included a silent control condition to confirm the direction of the effects of music on attention compared to the same task performed without music (n = 46 participants). Statistical analyses for the different attention components, including alerting and orienting, were computed as in the main study (for both AC and RT).

Behavioural Results - Alerting and Orienting component

AC and RT results are presented in Supplementary Table 2. While accuracy scores showed no effect of group or cue for alerting (p > .05), the analyses of orienting revealed that, overall, participants were more accurate with the centre cue (M = 97%; SD = 6.4; V = 135; p = .03) than with the spatial invalid cue (M = 98%; SD = 4.6). In addition, older adults were less accurate (M = 96%; SD = 6.5; W = 204; p < .03) than younger adults (M = 99%; SD = 1.1) across all orienting cue conditions (consistent with the executive control results).

For reaction times, as expected, we found a main effect of cue for both components, indicating response facilitation when participants were temporally (i.e. alerting) or spatially (i.e. orienting) informed about the upcoming target, with faster RTs (F(1,50) = 5.09; p = .02) in the centre-cue condition (M = 721 ms; SD = 146.15) than in the no-cue condition (M = 731 ms; SD = 147.72), as well as faster RTs (F(2,100) = 3.25; p = .04) with the spatial valid cue (M = 712 ms; SD = 148.28) than the spatial invalid cue (M = 727 ms; SD = 150.75). As predicted, a main effect of group was obtained for both the alerting (F(1,50) = 39.08; p < .001) and orienting (F(1,50) = 44.67; p < .001) components, due to older adults being slower (alerting: M = 796 ms; SD = 129.56; orienting: M = 791 ms; SD = 127.83) than younger adults (M = 605 ms; SD = 79.70 and M = 598 ms; SD = 84.64, respectively).

The ANOVA additionally revealed a main effect of emotion for both the alerting (F(3,150) = 2.72; p = .04) and orienting (F(3,150) = 2.87; p = .03) components, with faster responses during joyful compared to other music, similar to what was found for the executive component. Specifically, alerting effects showed faster responses during both joyful and tense music than during tender music, although this failed to survive Bonferroni correction (t(51) = -2.02; p = .24 and t(51) = -2.38; p = .12, respectively), whereas orienting effects showed faster responses (t(51) = -3.0; p = .02) only during joyful (M = 711 ms; SD = 154.09) compared to sad music (M = 729 ms; SD = 152.78). Replicating the executive control results, both the alerting (F(1,50) = 6.41; p = .01) and orienting (F(1,50) = 9.105; p = .004) components also showed a main effect of arousal, with faster RTs during high-arousing (alerting: M = 721 ms; SD = 145.24; orienting: M = 714 ms; SD = 144.08) than low-arousing music (alerting: M = 731 ms; SD = 148.39; orienting: M = 726 ms; SD = 150.32), but there was no main effect of valence (p > .05). A significant valence-by-group interaction (F(1,50) = 3.87; p = .05) was obtained for alerting only, reflecting slower RTs during pleasant than unpleasant music in older participants (M = 9 ms; SD = 30.73; t(49) = 2.32; p = .02), while younger participants exhibited the opposite pattern (M = -5 ms; SD = 16.73).

Finally, in the additional behavioural sub-study, we found no effect of emotion/music conditions on RT performance for the alerting component (p = .18). A main effect of emotion was found for the orienting component (p = .04), but post-hoc pairwise t-tests failed to survive Bonferroni correction.

fMRI Data analysis - Alerting and Orienting component

All analyses were similar to the approach used to assess the executive control component (see main text). The SPM design matrix comprised 12 main regressors of interest (onset of the visual stimulus with a 1700 ms duration), corresponding to the 4 emotion conditions crossed with 3 types of cue (no cue, double cue, and centre cue for alerting; centre, valid, and invalid cues for orienting), separately modelled and convolved with the canonical HRF to assess the alerting or orienting components, respectively. Movement parameters (six realignment columns) were also entered as covariates of no interest. The second-level SPM analysis was performed using a flexible-factorial design including the emotion categories and the relevant trial conditions, for each attentional component separately. The main effects of alerting and orienting were determined by comparing the different cue conditions (i.e. centre > no cue for alerting; spatial valid > centre or spatial invalid > spatial valid for orienting), and then contrasted between the two groups to determine any age-related effect. The influence of specific emotions on the corresponding activations was evaluated with the same inclusive masks defined by comparing one emotion, valence, or arousal condition relative to the other conditions, as performed for our analysis of executive control (see main text).
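As a rough illustration of how such regressors are formed (not the authors' SPM code), the sketch below builds one stimulus regressor by convolving a boxcar of 1700 ms events with a canonical double-gamma HRF; the TR value, HRF parameters, and onset handling are illustrative assumptions.

import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    # Double-gamma HRF: positive response peaking near 6 s minus a small undershoot near 16 s.
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def build_regressor(onsets_s, n_scans, tr=2.0, event_dur_s=1.7):
    # Boxcar covering each 1700 ms stimulus event, then convolved with the HRF.
    boxcar = np.zeros(n_scans)
    for onset in onsets_s:
        start = int(onset // tr)
        stop = int(np.ceil((onset + event_dur_s) / tr))
        boxcar[start:stop] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr))[:n_scans]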

Given the limited effects observed for the alerting and orienting components at a strict threshold (p < .05 FWE-corrected), for these conditions we report activations surviving a peak threshold of p < .001 uncorrected with a cluster size of > 50 contiguous voxels. Such a combined intensity and cluster-size threshold is considered restrictive enough to avoid false-positive activations (Lieberman and Cunningham 2009).
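The logic of this combined threshold can be illustrated with a short, assumption-laden sketch (not the SPM implementation): voxels exceeding the Z value corresponding to p < .001 are retained only if they belong to a cluster of more than 50 contiguous voxels.

import numpy as np
from scipy import ndimage
from scipy.stats import norm

def cluster_threshold(z_map, p_voxel=0.001, min_cluster=50):
    # Voxel-level cut-off: Z value corresponding to p < .001, one-tailed (about 3.09).
    above = z_map > norm.isf(p_voxel)
    # Label contiguous suprathreshold clusters and keep those larger than the extent cut-off.
    labels, n_clusters = ndimage.label(above)
    keep = np.zeros(z_map.shape, dtype=bool)
    for i in range(1, n_clusters + 1):
        if np.sum(labels == i) > min_cluster:
            keep |= labels == i
    return np.where(keep, z_map, 0.0)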

fMRI Results - Alerting and Orienting component

By contrasting the centre-cue vs no-cue condition to highlight the alerting component of attention, across both groups (p < .001 uncorrected with cluster extent > 50 voxels), we found activations only in the occipital cortex (MOG and IOG), presumably reflecting a sensory response to the visual cue, and in the posterior middle temporal gyrus (MTG) near the temporo-parietal junction. The effect of music showed increases at uncorrected thresholds only in the left MOG during sad music (x = -30, y = -91, z = -5, Z score = 4.21, cluster size = 42) and as a parametric function of negative valence (x = -33, y = -94, z = -5, Z score = 4.16, cluster size = 43), but there were no other modulations and no differences between the two groups.

The main effect of orienting was assessed by comparing the valid and invalid conditions to the centre-cue condition, but revealed no significant increases apart from uncorrected effects (p < .001 at voxel level, cluster extent > 50 voxels) for re-orienting (invalid cues) in the right frontal eye field (x = 27, y = -13, z = 70, Z score = 3.56, cluster size = 14) and right SPL (x = 18, y = 53, z = 22, Z score = 3.51, cluster size = 13). No influence of music was found on these components.

Analysis of musical features and their relation to emotion and attention effects

For completeness, we verified that emotion effects on brain activity during the task were not driven by acoustic differences between the different musical contents. This analysis was performed for the executive control component only, given that this component showed the strongest influence of musical emotions on both behavioural performance and brain activation patterns.

Based on previous literature (Quarto, Blasi et al. 2014; Trost, Frühholz et al. 2015), we selected a set of eight musical features representing four distinct dimensions (i.e. rhythmic, frequency/energy-related, timbre-related, and beat perception). The rhythmic dimension included tempo and event density. The frequency- and energy-dominant features contained inharmonicity, loudness (root mean square; RMS), and dissonance. The timbre-related features comprised brightness and number of attacks. Finally, pulse clarity indexed the perception of the beat.

For each of these features, audio scores were extracted from the whole set of musical excerpts (comprising three pieces for each of the four emotional categories) using the MIR toolbox (Lartillot and Toiviainen 2007) implemented in Matlab. High-level features that characterize "long-term" aspects of music (i.e., tempo, event density, and pulse clarity) were computed over relatively large successive time windows (i.e., 3.75 s each, corresponding to the mean duration of a single trial of the ANT task), while low-level features representing "short-term" acoustic information (i.e., inharmonicity, loudness, dissonance, and brightness) were extracted using shorter time windows (i.e., 50 ms) with 50% overlap between successive windows. Values extracted for the short-term features were then averaged to obtain a total of 12 values per excerpt. The audio scores generated by the MIR toolbox and subsequently entered in the parametric analysis thus quantified the mean value of each musical feature during each trial and each condition. Because all music excerpts had a duration of 45 seconds, there were 12 audio scores per musical excerpt for a particular musical feature during each task block. Detailed information about all acoustic features is presented below in this supplementary material, including mean audio scores for each musical stimulus (see Supplementary Table 3) together with a description of the feature characteristics (see Supplementary Table 4).
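The feature extraction itself was done with the MIR toolbox in Matlab; the following Python sketch only illustrates the windowing logic described above, namely how 50 ms short-term frames could be averaged into twelve 3.75 s trial-level scores for a 45 s excerpt. Variable names and the frame hop are illustrative assumptions.

import numpy as np

def trial_level_scores(frame_values, frame_hop_s=0.025, trial_dur_s=3.75,
                       excerpt_dur_s=45.0):
    # Average 50 ms short-term frames (50% overlap, i.e. 25 ms hop) within
    # successive 3.75 s trial windows, yielding 12 scores for a 45 s excerpt.
    frame_values = np.asarray(frame_values, dtype=float)
    frame_times = np.arange(len(frame_values)) * frame_hop_s
    n_trials = int(excerpt_dur_s / trial_dur_s)  # 45 / 3.75 = 12
    return np.array([
        frame_values[(frame_times >= k * trial_dur_s) &
                     (frame_times < (k + 1) * trial_dur_s)].mean()
        for k in range(n_trials)
    ])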

For each of the eight selected features, the audio scores were statistically compared between the four emotion categories using paired Wilcoxon rank tests (in R) with Bonferroni corrections, in order to identify candidate musical parameters that might have an impact on attentional processes. We found that five of the eight musical features, namely event density, pulse clarity, brightness, number of attacks, and inharmonicity, showed significant differences between emotion conditions that paralleled the effects found for attention performance, namely differences between joy and tenderness, or between joy and sadness. For brightness, audio scores were higher for joyful than tender music (V = 666; p < .001), but did not differ between joyful and sad music. Dissonance showed higher scores for joyful than sad music (V = 460; p < .04), but no difference between joyful and tender music (see Supplementary Table 3 and Supplementary Figure 1). The remaining four musical features (i.e., event density, pulse clarity, number of attacks, and inharmonicity) showed higher scores for joy than for both tenderness and sadness (V > 520; p < .001).
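A minimal sketch of this comparison is given below for one feature and one pair of emotion categories (the original tests were run in R; the data layout and the number of Bonferroni comparisons are assumptions for illustration only).

import numpy as np
from scipy.stats import wilcoxon

def compare_feature(scores_joy, scores_other, n_comparisons=2):
    # Paired Wilcoxon signed-rank test (e.g. joy vs tenderness for one feature),
    # with a simple Bonferroni adjustment of the p-value.
    stat, p = wilcoxon(np.asarray(scores_joy), np.asarray(scores_other))
    return stat, min(p * n_comparisons, 1.0)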

These five musical features were then selected for further fMRI analyses, in addition to two features related to the rhythmic/energy-dominant dimensions, namely tempo and loudness, since the latter have consistently been associated with arousal ratings in previous studies (Schubert 2004, Trost, Frühholz et al. 2015) and might therefore also contribute to the behavioural differences observed during high-arousing conditions. This analysis allowed us to determine how brain activation associated with the executive control component of attention varied as a function of acoustic features. Details about the fMRI analysis of musical features and results are described in the main text.

Tables

Supplementary Table 1. Ratings of the subjective affective experience evoked by the auditory material for the four emotions and both groups. Rating scales from 0 to 6 were used for valence (0 = low pleasantness, 6 = high pleasantness) and arousal (0 = very relaxing, 6 = very stimulating). For each emotion, mean scores are given for the valence (left) and arousal (right) dimensions, with standard deviation (SD) values in parentheses. Abbreviations: V+ = high valence; V- = low valence; A+ = high arousal; A- = low arousal.

Auditory Material Ratings

         Joy (V+/A+)           Tenderness (V+/A-)     Tension (V-/A+)        Sadness (V-/A-)
         Valence    Arousal    Valence    Arousal     Valence    Arousal     Valence    Arousal
Older    5.5 (0.8)  5.3 (1.2)  5.4 (0.8)  1.2 (1.5)   2.5 (1.8)  3.9 (1.1)   4.1 (1.7)  1.7 (1.5)
Young    4.6 (1.4)  5.3 (0.9)  4.6 (1.3)  1.0 (1.0)   1.9 (1.7)  4.3 (0.9)   3.7 (1.3)  1.4 (1.1)

Supplementary Table 2. Behavioural scores in the Attention Network Task for all three attentional components and both groups. For each attentional component, the upper rows (A) indicate average data for each trial and cue condition of interest, while the lower rows (B) indicate average data for each emotion condition, collapsing across the corresponding stimulus and cue conditions. Mean percentage correct (accuracy (%), left panel) and mean reaction times for correct responses (milliseconds (ms), right panel) are shown, with standard deviation (SD) values in parentheses.

Executive control

(A)      Accuracy (% (SD))                          Reaction times (ms (SD))
         Con          Inc           Neu             Con            Inc            Neu
Older    98 (3.6)     92 (15.4)     98 (3.5)        734 (120.6)    933 (160.5)    715 (124.1)
Young    99 (1.0)     98 (1.4)      99 (1.6)        577 (80.0)     676 (95.6)     556 (71.2)

(B)      Accuracy (% (SD))                                   Reaction times (ms (SD))
         Joy         Tend        Tens         Sad            Joy            Tend           Tens           Sad
Older    95 (8.9)    95 (8.8)    95 (11.1)    95 (8.3)       820 (137.5)    848 (142.8)    828 (134.0)    840 (133.9)
Young    98 (1.8)    99 (1.3)    99 (1.7)     99 (1.1)       616 (80.1)     628 (89.9)     628 (89.1)     634 (90.7)

Alerting

(A)      Accuracy (% (SD))                          Reaction times (ms (SD))
         No Cue       Centre Cue    Double Cue      No Cue         Centre Cue     Double Cue
Older    96 (7.1)     96 (7.7)      96 (6.7)        800 (132.4)    792 (129.3)    791 (129.2)
Young    99 (1.0)     99 (2.5)      99 (1.3)        611 (83.0)     600 (77.9)     604 (80.4)

(B)      Accuracy (% (SD))                                   Reaction times (ms (SD))
         Joy         Tend        Tens        Sad             Joy            Tend           Tens           Sad
Older    95 (8.7)    95 (8.2)    96 (8.8)    96 (6.0)        794 (136.8)    808 (137.0)    786 (125.9)    795 (130.1)
Young    98 (2.1)    99 (2.1)    99 (2.3)    99 (1.4)        598 (74.0)     608 (83.4)     604 (85.6)     612 (84.2)

Orienting

(A)      Accuracy (% (SD))                                      Reaction times (ms (SD))
         Centre Cue    Sp. Valid Cue    Sp. Invalid Cue         Centre Cue     Sp. Valid Cue    Sp. Invalid Cue
Older    96 (7.7)      96 (7.1)         97 (5.4)                792 (129.3)    780 (131.2)      801 (130.7)
Young    99 (2.5)      99 (1.6)         99 (2.1)                600 (77.9)     596 (95.1)       598 (81.2)

(B)      Accuracy (% (SD))                                   Reaction times (ms (SD))
         Joy         Tend        Tens        Sad             Joy            Tend           Tens           Sad
Older    96 (5.1)    96 (7.5)    96 (9.7)    96 (6.1)        785 (137.9)    796 (139.1)    782 (116.9)    801 (133.1)
Young    99 (1.4)    99 (3.0)    99 (2.4)    100 (1.0)       584 (79.3)     598 (78.0)     605 (87.2)     605 (95.1)

Supplementary Table 3. Audio scores. (A) Mean value (SD) for each of the eight musical features and each of the four emotions. (B) Statistical results (from paired Wilcoxon rank tests), with p-values only for the five musical features (i.e., event density, pulse clarity, attack numbers, brightness, inharmonicity, loudness, dissonance) whose audio scores showed significant differences between joyful and either sad or tender music. Only these five features were subsequently used as parametric regressors in the fMRI analysis. Other features did not differ between the music conditions that influenced attentional performance.

                  Audio scores (A)                                                      Statistical results (B)
                  Joy              Tension          Tend             Sad                Joy vs Tend          Joy vs Sad
Tempo             129.83 (45.47)   117.71 (36.35)   118.72 (37.56)   121.24 (36.41)     no sig.              no sig.
Event density     3.33 (1.01)      2.71 (1.27)      2.30 (0.84)      1.82 (0.84)        V = 541; p < .001    V = 560; p < .001
Pulse clarity     0.37 (0.12)      0.34 (0.18)      0.23 (0.08)      0.15 (0.04)        V = 613; p < .001    V = 666; p < .001
Attack numbers    23.25 (5.13)     20.52 (5.78)     17.61 (4.16)     18.25 (5.61)       V = 590; p < .001    V = 524; p < .001
Brightness        0.31 (0.10)      0.33 (0.15)      0.12 (0.04)      0.29 (0.07)        V = 666; p < .001    no sig.
Inharmonicity     0.43 (0.03)      0.41 (0.04)      0.40 (0.02)      0.36 (0.05)        V = 627; p < .001    V = 658; p < .001
Loudness          0.08 (0.03)      0.05 (0.02)      0.08 (0.04)      0.07 (0.05)        no sig.              no sig.
Dissonance        291.23 (222.10)  137.04 (130.12)  244.41 (216.05)  199.64 (266.51)    no sig.              V = 460; p = .04

Supplementary Table 4. Information represented by the eight acoustic features considered in our study, covering four major dimensions of music, as defined by the MIRtoolbox and the associated perceptual characteristics.

Dimension                    Name            MIRtoolbox function   Perceptual characteristics
Rhythm                       Tempo           mirtempo              Speed at which a piece of music is played
                             Event density   mireventdensity       Complexity of the piece; how many musical events (i.e., average note occurrence) are played in one time unit (sec)
Perception of the beat       Pulse clarity   mirpulseclarity       How clearly the rhythm is detectable in the musical piece
Timbre                       Attack numbers  mironsets             Number of note onsets or pulses in the piece, related to the […]
                             Brightness      mirbrightness         […]
Frequency / energy-dominant  Inharmonicity   mirinharmonicity      […] may reflect the unpleasantness of the sound
                             Loudness        mirrms                Information on the intensity of the music piece
                             Dissonance      mirroughness          Roughness, and supposedly the unpleasantness of the sound

Supplementary Table 5. Localization (MNI coordinates) and peak activation values (Z score) for brain areas engaged during attentional conflict (Inc > Con) or selective attention (Inc & Con > Neu) that also showed significant modulation by the musical features of interest. All peaks reported are significant at p < .05 after family-wise error (FWE) correction for multiple comparisons. Abbreviations: Con: congruent condition; Inc: incongruent condition; Lat.: hemisphere lateralisation. Z-score values refer to the activation maxima at the reported SPM coordinates.

Contrast       Region                      Lat.   p-value (FWE, peak)   Z score   x     y     z

Musical features on attentional conflict or selective attention

Tempo - positive association
Con + Inc      Superior Temporal gyrus     R      < .001                5.4       51    -16   -2
               Superior Temporal gyrus     L      < .001                5.1       -45   -28   4

Event density - positive association
Con + Inc      Superior Temporal gyrus     R      .007                  5.02      54    -13   4
               Superior Temporal gyrus     L      .014                  4.87      -42   -28   10
Event density - negative association
Inc > Con      Occipital gyrus - Cuneus    R      -                     4.10      12    -70   19

Pulse clarity - positive association
Con + Inc      Superior Temporal gyrus     R      < .001                6.38      54    -13   -2
               Superior Temporal gyrus     L      -                     6.16      -48   -22   1
               Middle Occipital gyrus      R      -                     3.64      30    -94   4
               Middle Occipital gyrus      L      -                     3.37      -30   -94   4
               Inferior Occipital gyrus    R      -                     3.55      27    -97   -5
               Inferior Occipital gyrus    L      -                     3.55      -21   -100  -5

Brightness - positive association
Con + Inc      Superior Temporal gyrus     R      -                     4.44      69    -31   1

Attack number - negative association
Con + Inc      Superior Temporal gyrus     L      -                     3.96      -57   -40   7

Loudness - positive association
Con + Inc      Superior Temporal gyrus     R      < .001                4.95      54    -13   -2
               Superior Temporal gyrus     L      < .001                5.14      -54   -19   7

Inharmonicity - negative association
Con + Inc      Superior Temporal gyrus     L      -                     3.42      -51   -13   -8

Figures

Supplementary Figure 1. Audio scores extracted for the five musical features of interest that showed significant differences between joyful and either sad or tender music. All graphs depict standard errors of the mean (SEM) and p-values (asterisks): * p < .05; ** p < .01; *** p < .001; n.s. indicates non-significant results.

References

Fan, J., B. D. McCandliss, J. Fossella, J. I. Flombaum and M. I. Posner (2005). "The activation of attentional networks." NeuroImage 26(2): 471-479.

Fan, J., B. D. McCandliss, T. Sommer, A. Raz and M. I. Posner (2002). "Testing the efficiency and independence of attentional networks." Journal of Cognitive Neuroscience 14(3): 340-347.

Finucane, A. M., M. C. Whiteman and M. J. Power (2010). "The effect of happiness and sadness on alerting, orienting, and executive attention." Journal of Attention Disorders 13(6): 629-639.

Lartillot, O. and P. Toiviainen (2007). "A Matlab toolbox for musical feature extraction from audio." International Conference on Digital Audio Effects.

Lieberman, M. D. and W. A. Cunningham (2009). "Type I and Type II error concerns in fMRI research: re-balancing the scale." Social Cognitive and Affective Neuroscience 4(4): 423-428.

McConnell, M. M. and D. I. Shore (2011). "Upbeat and happy: Arousal as an important factor in studying attention." Cognition & Emotion 25(7): 1184-1195.

Quarto, T., G. Blasi, K. J. Pallesen, A. Bertolino and E. Brattico (2014). "Implicit processing of visual emotions is affected by sound-induced affective states and individual affective traits." PLoS ONE 9(7): e103278.

Schubert, E. (2004). "Modeling perceived emotion with continuous musical features." Music Perception: An Interdisciplinary Journal 21(4): 561-585.

Trost, W., S. Frühholz, T. Cochrane, Y. Cojan and P. Vuilleumier (2015). "Temporal dynamics of musical emotions examined through intersubject synchrony of brain activity." Social Cognitive and Affective Neuroscience 10(12): 1705-1721.
