
3.C. Native language benefit in speech comprehension

Previous research (Nabelek and Donahue, 1984; Takata and Nabelek, 1990; van Wijngaarden, Steeneken and Houtgast, 2002; Golestani, Rosen and Scott, 2009) has demonstrated that speech comprehension under adverse listening conditions is more effortful for non-native than for native listeners of a given language. More specifically, bilingual individuals are able to make better use of linguistic context in their native than in their non-native language under adverse listening conditions. Note that in this context the term “bilingual” refers to people who started speaking a second language after the age of three.

The use of contextual information in bilinguals has been extensively studied using the Speech Perception in Noise (SPiN) sentences (Kalikow, Stevens and Elliott, 1977). These are affirmative sentences in which the last word is of either high or low predictability (high or low cloze probability sentences). SPiN sentences are embedded in different levels of noise, and listeners are asked to identify the last word. The rationale of this paradigm is that, because listeners can usually predict the last word in the high cloze probability condition, it is identified faster and more accurately than in low cloze probability sentences, even when the signal-to-noise ratio is low.

Mayo, Florentine and Buus (1997) demonstrated that the age of second language acquisition plays a key role in understanding speech in noise in that language. The authors compared the performance of four groups of listeners (monolingual English speakers, bilinguals since infancy, bilinguals who had learnt English before the age of six, and “late”, post-puberty bilinguals) in identifying the last word of SPiN sentences of either high or low cloze probability. Results revealed a main effect of context, as listeners performed better for high than for low cloze probability sentences, but this difference in performance correlated negatively with the age of second language acquisition. Hence, contextual information plays a facilitatory role in understanding speech in noise, but only in the native language or, to a lesser extent, in non-native languages acquired very early in life.

Another related study (Bradlow and Alexander, 2007) tested native and non-native English speakers with high and low cloze probability sentences under two different conditions, clear and plain speech. Clear speech was characterized by very clear intonation, as if the speaker were addressing a person with hearing loss, while in the plain speech condition the speaker used a more conversational style. Results revealed that native listeners benefit from both acoustic (i.e. clear versus plain speech) and semantic (i.e. high versus low last-word predictability) information, while non-native listeners benefit from semantic information only in the clear speech condition. The authors propose that non-native listeners can also make use of context to facilitate speech comprehension, but only under optimal listening conditions.

Although it is clear that listening to speech in one’s native language facilitates the use of contextual information, thereby supporting speech comprehension, the question of which part of the context is used remains open. Given that sentences contain semantic, syntactic and prosodic information, it is difficult to investigate the role of each of these sources separately in a study using full sentences.

The role of semantics in language comprehension has been investigated in a semantic priming study by Bernstein, Bissonnette, Vyas and Barclay (1989). Using a visual retroactive semantic priming paradigm, the authors demonstrated that when a word (target), presented for a very limited amount of time, is followed by a visual mask and a semantically related word (prime), it is identified or recognized more accurately than when it is followed only by a mask. Interestingly, when the target is followed by a semantically unrelated prime, its identification deteriorates compared to the condition in which it is only followed by a mask. In other words, priming does not work only forwards, as in the experiments presented above, but also backwards, as a word can have an impact on the perception of a previously presented one.

In order to investigate the native language benefit in speech, Golestani, Rosen and Scott (2009) used a modified, “auditory” version of the backwards semantic priming paradigm. In this version, participants heard a word masked with noise (the “target”), followed by a semantically related or unrelated word (the “prime”) which was always clear (not embedded in noise). Their task was to select the target from two semantically related words, the target itself and a foil, in a two-alternative forced choice (2AFC) task taking place at the end of each trial (see Figure 2 below). The experiment included native French speakers with school knowledge of English, and took place in both English and French (i.e. the participants’ native and non-native language). The targets were mixed with speech-shaped noise at signal-to-noise ratios (SNRs) of -7 dB, -6 dB and -5 dB; decreasing SNR values were expected to make the stimuli harder to understand. An infinite-SNR condition, in which targets were not mixed with noise, was also included.
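To make the SNR manipulation concrete, embedding a word at a prescribed SNR amounts to rescaling the masker so that 10·log10(P_speech/P_noise) equals the desired value before summing the two signals. The sketch below illustrates this in Python; it is a minimal illustration under stated assumptions, not the study’s actual stimulus-generation code, and the function name, sampling rate and placeholder signals are invented (in particular, white noise stands in for true speech-shaped noise).

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it.

    Uses SNR(dB) = 10 * log10(P_speech / P_noise), with P the mean power (RMS^2).
    """
    noise = noise[: len(speech)]            # trim the masker to the word's duration
    p_speech = np.mean(speech ** 2)         # mean power of the target word
    p_noise = np.mean(noise ** 2)           # mean power of the unscaled masker
    # Required masker power: p_speech / 10**(snr_db / 10); scale amplitude accordingly.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: embed a (placeholder) 1 s word at the three SNRs used in the study.
rng = np.random.default_rng(0)
word = rng.standard_normal(16000)    # stand-in for a recorded word at 16 kHz
masker = rng.standard_normal(16000)  # stand-in for speech-shaped noise
mixtures = {snr: mix_at_snr(word, masker, snr) for snr in (-7, -6, -5)}
```

The infinite-SNR condition corresponds simply to presenting the word without adding any masker.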

Figure 2: Schematic representation of a trial in Golestani, Rosen and Scott (2009). The target word was embedded in noise, whose intensity varied between mini-blocks of 6 trials. It was followed by a semantically related or unrelated unmasked word (prime). At the end of each trial, a 2AFC screen appeared and participants had to recognise the target previously presented.

Results revealed that participants were more accurate in recognizing the target in their mother tongue than in their non-native language. What is more, recognition was better at higher SNR levels in both languages. Critically for the authors’ hypothesis, according to which participants would make better use of semantic context in their native language, semantic relatedness played a facilitatory role in the native language condition, with higher performance for targets followed by a semantically related word. The opposite effect was observed in the non-native language trials, in which participants performed better for targets followed by semantically unrelated than by related words (see Figure 3 below). This language by semantic-relatedness interaction suggests a benefit of the native language in the use of semantic context to understand speech in noise. The converse effect, observed in the English trials, although difficult to interpret, could be related to the organisation of the mental lexicon in bilinguals. For example, one could imagine that all the words composing a person’s vocabulary are grouped into clusters according to certain criteria, and that these criteria differ between the native and non-native languages.
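The interaction reported here is, in essence, a difference of differences: the relatedness effect computed within each language, then compared across languages. The short sketch below spells out that contrast; the accuracy values are invented placeholders chosen only to match the direction of the pattern described above, not the values reported by Golestani, Rosen and Scott (2009).

```python
# 2 (language) x 2 (semantic relatedness) difference-of-differences.
# Accuracies are invented placeholders; only the sign pattern follows the text.
accuracy = {
    ("native", "related"): 0.80,
    ("native", "unrelated"): 0.72,
    ("non-native", "related"): 0.55,
    ("non-native", "unrelated"): 0.61,
}

# Relatedness effect within each language (related minus unrelated).
native_effect = accuracy[("native", "related")] - accuracy[("native", "unrelated")]
nonnative_effect = accuracy[("non-native", "related")] - accuracy[("non-native", "unrelated")]

# The interaction term is non-zero because relatedness helps in the native
# language but hurts in the non-native one.
interaction = native_effect - nonnative_effect
print(round(native_effect, 2), round(nonnative_effect, 2), round(interaction, 2))
# -> 0.08 -0.06 0.14
```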


Figure 3: Performance in the native and non-native language as a function of semantic context in a retroactive semantic priming study (taken from Golestani, Rosen and Scott, 2009). Performance was significantly better for semantically related pairs in French (native language) and for semantically unrelated pairs in English (non-native language).
