Hemispheric interactions during the pre-attentive change detection of different acoustic features in speech and non-speech sounds

C Jacquier1*, R Takegata2, S Pakarinen2, T Kujala2, R Näätänen2,3,4

1Laboratoire Dynamique Du Langage, CNRS UMR5596 and Université Lumière Lyon 2, France

2Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland

3Department of Psychology, University of Tartu, Estonia

4Center for Functionally Integrative Neuroscience, Aarhus University, Denmark

The human brain shows a functional hemispheric asymmetry for auditory processing: different acoustic features activate the left and right hemispheres differently, depending on their nature.

In addition, speechness (speech vs. non-speech) may also influence which hemisphere is dominant for processing a given feature. An anomalous asymmetry for a certain acoustic feature may therefore provide a clue to the source of deteriorated speech perception. It has been shown that dyslexic adults have an abnormal asymmetry of the efferent auditory pathways, which may underlie their difficulties in speech perception (e.g., in voicing discrimination) (Jacquier, 2008). The present study examined the effects of speechness (speech vs. non-speech) and of the ear of stimulation (left vs. right) on the pre-attentive change detection of four types of auditory features, using the mismatch negativity (MMN) as an index. Furthermore, speech and non-speech sounds were presented either alone (monaural condition) or in parallel (dichotic condition) to investigate possible interference effects.
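For illustration only, the sketch below (Python; not from the original study, and all names are assumptions introduced here) enumerates the resulting stimulation conditions: speech or non-speech stimuli delivered to the left or right ear, either alone (monaural) or with the other stimulus type in the opposite ear (dichotic).

```python
from itertools import product

# Illustrative enumeration of the stimulation conditions described above:
# speech or non-speech stimuli in the left or right ear, presented either
# alone (monaural) or together with the other stimulus type in the opposite
# ear (dichotic; the stimulus-ear mapping was reversed for half the session).
STIMULUS_TYPES = ("speech", "non-speech")
EARS = ("left", "right")

def opposite_ear(ear: str) -> str:
    return "right" if ear == "left" else "left"

def other_type(stim: str) -> str:
    return "non-speech" if stim == "speech" else "speech"

conditions = []
for mode in ("monaural", "dichotic"):
    for stim, ear in product(STIMULUS_TYPES, EARS):
        cond = {"mode": mode, "stimulus": stim, "ear": ear}
        if mode == "dichotic":
            # the other stimulus type occupies the opposite ear
            cond["parallel_stimulus"] = other_type(stim)
            cond["parallel_ear"] = opposite_ear(ear)
        conditions.append(cond)

for cond in conditions:
    print(cond)
```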

Speech and non-speech stimulus sequences each comprised a frequent sound (standard: S) and four types of infrequent sounds (deviants: D) that differed from the standard in one acoustic feature: duration, frequency, intensity, or vowel (or an equivalent temporal-spectral change in the non-speech stimuli). Standard and deviant stimuli alternated within a sequence (e.g., S D_frequency S D_duration S ...), according to the multi-feature paradigm developed by Näätänen et al. (2004). In the dichotic condition, speech and non-speech stimuli were presented alternately, with speech stimuli to the right ear and non-speech stimuli to the left ear; the stimulus-ear mapping was reversed for half of the experiment. In the monaural condition, either the speech or the non-speech stimuli alone were presented to a single ear.
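As a minimal sketch of how such an alternating multi-feature sequence could be generated (hypothetical Python; the feature labels, the per-cycle shuffling, and the function name are assumptions, not the authors' actual stimulation script):

```python
import random

# In the multi-feature paradigm (Näätänen et al., 2004), every other stimulus
# is the standard, and the intervening deviants cycle through the feature
# changes; here the deviant order is shuffled within each cycle.
DEVIANT_FEATURES = ["duration", "frequency", "intensity", "vowel"]

def multi_feature_sequence(n_cycles: int, seed: int = 0) -> list:
    """Return a label sequence like: S, D_frequency, S, D_duration, S, ..."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_cycles):
        deviants = DEVIANT_FEATURES[:]
        rng.shuffle(deviants)              # randomise deviant order per cycle
        for feature in deviants:
            sequence.append("S")           # a standard precedes each deviant
            sequence.append("D_" + feature)
    sequence.append("S")                   # end the sequence on a standard
    return sequence

if __name__ == "__main__":
    print(" ".join(multi_feature_sequence(n_cycles=2)))
```

Within each cycle every feature change occurs exactly once, which is what lets MMNs for several features be recorded within a single short sequence.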

Subjects watched silent films with subtitles and ignored the stimulus sounds.

Speechness enhanced the MMN amplitude for the vowel change but had no significant effect on the MMNs for the other features. In addition, the amplitude of the vowel-change MMN did not differ between conditions, although the overall MMN amplitude was significantly smaller in the dichotic than in the monaural condition. Whereas the ear of stimulation had no influence on MMN amplitude in the monaural condition, left-ear stimuli elicited larger MMNs than right-ear stimuli in the dichotic condition, although this effect did not reach significance. The observed effects may reflect (1) the robustness of the processing of phonologically important features and (2) differential hemispheric interactions in the processing of different acoustic features embedded in speech and non-speech sounds.
