Language comprehension in bilinguals

From the document Prediction in Interpreting (Page 13-20)

1 Literature Review

1.2 Language comprehension in bilinguals

In many ways, comprehension seems to happen completely effortlessly (Harley, 2014). Listeners convert acoustic input into meaning by decoding phonemes and parsing them into recognizable words, while processing the syntax (and thematic roles) of these words, and extracting the meaning of the utterance by integrating pragmatic, discourse and knowledge-based factors (Cutler & Clifton, 2000). This process takes place very quickly (Marslen-Wilson, 1973; Rayner & Clifton, 2009; Swinney, 1979).

Word recognition is an interactive lexical retrieval process in which knowledge about a whole word affects perception of its individual sounds (Frauenfelder & Tyler, 1987). Two central models developed in the 1980s have influenced speech perception theory: the cohort model (Marslen-Wilson & Tyler, 1980) and the TRACE model (McClelland & Elman, 1986). According to the cohort model, as a word is spoken, the brain activates a “cohort” of possible word-level hypotheses. The word is recognized at the point at which it becomes unique from its near neighbours, that is, when the initial sequence of segments is common to that word and no other. This model gives bottom-up processing precedence over top-down processing, as context only becomes relevant once a cohort of words has been activated. The TRACE model of word recognition (McClelland & Elman, 1986), by contrast, allows for greater top-down processing through an interactive-activation account of word recognition, in which recognition at the word level feeds back to phoneme-level recognition. This model thus accounts for the Ganong effect, whereby categorization at the word level is used to inform categorization of ambiguous phonemes (Ganong, 1980). More recent models add further nuance, placing more emphasis on the effects of semantics (Zhuang, Randall, Stamatakis, Marslen-Wilson, & Tyler, 2011) and of orthographic and phonological neighbourhoods (Grainger, Muneaux, Farioli, & Ziegler, 2005) in word recognition. However, both the cohort and TRACE models focus on word recognition, and leave aside how sentences are parsed or interpreted by comprehenders.

But comprehension does not stop at word recognition. The comprehension process is incremental (see Rayner & Clifton, 2009, for a review), and studies of both reading (Altarriba, Kroll, Sholl, & Rayner, 1996; Ehrlich & Rayner, 1981; Federmeier & Kutas, 1999; Frisson, Rayner, & Pickering, 2005) and listening (Huette, Winter, Matlock, Ardell, & Spivey, 2014; Spivey, Tanenhaus, Eberhard, & Sedivy, 2002; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995) provide evidence that context, both linguistic and extra-linguistic, influences comprehension. In fact, comprehenders do more than simply integrate information into a preceding context and their own world knowledge: they also predict what they are about to hear (see Section 1.4).

Crucially, these theories of comprehension are largely based on processes thought to take place when native speakers listen to their native language. As indicated above, however, interpreters generally work from a non-native source language (their L2).

Language comprehension in L2 also involves word identification, parsing, semantic-syntactic representation and text representation and integration (Kroll & De Groot, 2005), just as it does in L1. Just as in L1 comprehension, different stages of this process may interact with one another (Dijkstra & van Heuven, 2002).

Importantly, however, models of comprehension in bilinguals and multilinguals must consider the potential co-activation of two or more lexicons during comprehension. The BIMOLA model for bi- and multilingual word recognition (Grosjean, 1988, 1997) is modelled on the TRACE model. It assumes that different languages are stored in separate lexicons, but that both languages share the lowest feature level and begin to separate into different networks at the phoneme and word level (Grosjean, 1988). In contrast, in their BIA+ model, Dijkstra and van Heuven (2002) propose that language activation is non-selective at the orthographic, phonological and semantic levels (in the case of interlingual homographs, both representations are activated, but these representations may nonetheless be stored separately for each language). The model accounts for top-down effects (lexical, syntactic or semantic) on word identification, as well as on the extent of activation of each language.

However, both languages are assumed to be active, to some extent, all the time, as it does not seem possible to suppress one reading of an interlingual homograph (Dijkstra & van Heuven, 2002). The SOMBIP model (Li & Farkas, 2002) also assumes an integrated lexicon, but takes into account differences in proficiency between bilinguals, as well as changes due to learning, thus proposing a more nuanced picture of both the separate and interactive nature of a bilingual’s two languages. The BLINCS model (Shook & Marian, 2013) accounts for further variability by considering how long-term features of bilingualism, such as age of acquisition or proficiency, and short-term features, such as recent exposure, might affect activation of languages. The BLINCS model also accommodates the integration of visual context during language comprehension.

All of these models account in some way for the parallel activation of a bilingual’s two languages. The extent to which, and when, both of a bilingual’s languages are activated has also been the subject of empirical research, and remains a matter of debate. For instance, Thierry and Wu (2007) found that Chinese–English bilinguals associated words in English when the Chinese translations of these words had related forms (e.g., huo che/train and huo tui/ham), even though the words were not related in English. This suggests that even in a monolingual English context, Chinese was activated, and thus that lexical access is non-selective. However, a non-selective account is not the only way to explain these findings: they could be due to the way in which word associations are transferred from L1 to L2 during learning, with traces of L1 remaining within the L2 lexicon (Costa, Pannunzi, Deco, & Pickering, 2019).

Another question relates to the extent to which semantic and syntactic processing is shared between languages. Kroll and Stewart (1994) suggest that links are stronger between L1 lexical items and concepts than between L2 lexical items and concepts, and that, at least some of the time, conceptual links from L2 are made by first accessing the L1 translation equivalent and then the concept. Their model supposes that with increasing L2 proficiency, L2 conceptual access becomes increasingly independent from L1. The model can be used to explain translation asymmetry: translation from L2 to L1 may take place more quickly than the reverse, because the L2 lexical item links directly to the L1 lexical item, whereas translation in the opposite direction is conceptually mediated. However, the picture is more complex, because switching from L2 to L1 has been shown to take longer than the reverse (Meuter & Allport, 1999). These findings have been explained by an inhibition account, according to which a bilingual’s stronger language must be more strongly inhibited, and is thus more difficult to re-activate (Green, 1998; Meuter & Allport, 1999).

Further, there is now evidence that L1 and L2 are also routinely linked at the semantic level. Priming studies have consistently found that lexical decisions are made faster when a word is presented after a semantically related word, regardless of whether the prime and target are presented in the same language or in different languages (see Francis, 2005). Even across languages with different scripts, and in masked priming studies2, robust priming effects have been shown with an L1 prime and an L2 target (Gollan, Forster, & Frost, 1997; Jiang & Forster, 2001).

Hoversten and Traxler (2016) investigated whether semantic cues could influence lexical activation in bilinguals by examining semantic sharing in the context of a sentence. Participants read sentences in English that were semantically constraining for either the English or the Spanish meaning of an interlingual homograph, for instance “pie” (meaning foot in Spanish). One sentence was semantically constraining for the English meaning (the congruent condition), e.g., “While eating dessert, the diner crushed his pie accidentally with his elbow.” The other sentence context was constraining for the Spanish meaning of the noun (the incongruent condition), e.g., “While carrying bricks, the mason crushed his pie accidentally with the load.” The authors found that bilinguals did not initially appear to activate the Spanish meaning of the interlingual homograph when reading in English (as measured by reading time on the interlingual homograph), even when sentence constraints encouraged this. However, the authors suggested that shorter overall reading times in bilinguals for the incongruent sentences could be due to the bilinguals integrating the Spanish meaning of the homograph later in processing, suggesting that the Spanish meaning did become available. This study lends support to a nuanced view of language activation during bilingual comprehension, in which both language environment and context influence the strength of activation at different points in the time course of lexical access.

2 In masked priming studies, the prime is displayed on the screen for such a short time period (e.g., 60 ms) that participants are not consciously aware of it.

There is also evidence that syntax and grammar may be shared across languages during comprehension. De los Santos, Boland and Lewis (2020) investigated whether grammar is shared between a bilingual’s two languages. Participants read word pairs which were either grammatical or ungrammatical, and either in the same language (English or Spanish) or in mixed languages (English and Spanish). Participants read the second word (a noun) more quickly in the grammatical pairs than in the ungrammatical pairs. Crucially, the grammaticality effect was found across both the same-language and mixed-language conditions, indicating that syntactic representations are language-independent and supporting a shared view of syntax. Studies of dialogue also show that syntactic structures may be primed across languages (e.g., Hartsuiker, Pickering, & Veltkamp, 2004; see Section 1.3 for further discussion).

Studies of parsing preferences also support a shared view of syntax. For instance, Dussias (2001, 2003, 2004) has shown that parsing preferences in an L2 can affect parsing of the L1. Dussias (2003) exploited the fact that, broadly speaking, native English speakers have low attachment preferences, whereas native Spanish speakers have high attachment preferences (the picture is slightly more complex; for a review, see Pickering & Van Gompel, 2006). Dussias (2003) had participants read sentences such as “Peter fell in love with the daughter of the psychologist who studied in California” in English or Spanish. English monolingual speakers tended to attach “who studied in California” to the psychologist, whereas Spanish monolingual speakers attached the relative clause to the daughter. However, Spanish-English bilinguals exhibited low-attachment preferences even when reading in Spanish, suggesting that the L2 can affect the L1 at the grammatical level.

More recent evidence also suggests that sentence-level constraints may influence bottom-up cross-language activation. Chambers and Cooke (2009) had native English speakers listen to constraining and non-constraining sentences in their L2, French, while looking at four objects, one of which had the preferred name “chicken” (poule in French), and one of which had a preferred English name (pool) that was an interlingual homophone of the French target word, in the sentence “Marie va nourrir/décrire la poule” [Marie will feed/describe the chicken]. When participants heard the constraining verb (feed), they looked at the target object (chicken) and rarely considered the interlingual homophone (pool). This suggests that where there is greater prediction (top-down processing), language activation is more selective (see also FitzPatrick & Indefrey, 2010).

In sum, top-down and bottom-up processes interact in language comprehension in both bilinguals and monolinguals. In bilingual comprehension, there is evidence of cross-language activation in word recognition, even in a monolingual context, and certainly in a bilingual context such as may be found in interpreting. There is also evidence of shared representations at the syntactic and semantic levels. Further support for language sharing comes from the language production literature.
