
4 Study 2. Do interpreters learn to predict differently during training?

4.1.4 Prediction as a pre-training aptitude

Of course, it may be that enhanced predictive processing is an aptitude that students possess before training. For instance, Gerver et al. (1984) found a correlation between students’ scores on a cloze test pre-training and their exam results post-training (although students saw the entire text at once). Similarly, Pöchhacker (2014) found that undergraduate students’ performance (as measured by reaction time and accuracy) on an online variation of a cloze test correlated with their exam results for an introductory course in interpreting, a result partially replicated by Chabasse and Kader (2014). Students listened to a recorded speech and “filled in the gaps” when they heard a beep (although note that prediction was not the only skill measured by this test, which also considered the number of synonyms that students were able to provide). Pöchhacker (2014) also found that students already admitted to the interpreting programme performed significantly better than untrained BA students (which may be related to the development of skills during training, but which could of course still be due to aptitude).

4.2 The current study

Prediction is a key component of simultaneous interpreting. We found evidence that predictive processing during an interpreting task is related to professional practice in simultaneous interpreting (see Chapter 3). However, it remains unclear whether professional interpreters predicted earlier and more because of their training and professional experience, or whether they self-select into the profession because they have an inherent aptitude for predictive processing.

Although there is some evidence to suggest that training in simultaneous interpreting leads to behavioural and neural changes in interpreting trainees, the exact nature of these changes, and their relevance for the simultaneous interpreting process, have not been clearly demonstrated. Some longitudinal studies on simultaneous interpreting have measured behavioural skills thought to sub-serve the skills involved in simultaneous interpreting. However, it is not always clear how, on the one hand, the different processing domains measured by behavioural tasks actually sub-serve the interpreting task (Seeber, 2015), and, on the other hand, how the behavioural tasks relate to the domain that they are supposed to measure (García et al., 2019). Meanwhile, other longitudinal studies have taken neural measures and demonstrated that training in simultaneous interpreting leads to neural changes in areas of the brain linked to language processing. However, these neural studies do not show how, or whether, these neural differences lead to changes in language processing during the task of simultaneous interpreting.

In this study, we isolated one recognised component of the interpreting task, prediction, and explored whether it changes with training. We employed an eye-tracking paradigm which made it possible to measure prediction implicitly while trainees were engaged in a simultaneous interpreting task (cf. Seeber, 2015).

Our design is based on Ito et al. (2018) and on our first study. Participants listened to sentences containing a highly predictable word in English, and simultaneously interpreted these sentences into their “A” language. They viewed visual arrays containing three distractors (pictures of objects unrelated to the predictable word) and a picture of either a target object (e.g., camel) or an unrelated object (e.g., barrel). If participants fixate the image of the target object (camel) more than the image of the unrelated object (barrel) before hearing the predictable word, this would demonstrate that interpreting trainees predict upcoming words. One group of participants took part in the study twice, once before and once after two semesters’ training in simultaneous interpreting, and listened to and viewed a different stimuli set at each session. If interpreting trainees fixate the image of the target object earlier, and more, after training in simultaneous interpreting than before training, this would suggest that interpreting training leads to better predictive processing.
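To make this measure concrete, the sketch below shows one way such a pre-onset comparison could be computed from fixation data. The data layout and column names are hypothetical assumptions for illustration; this is not the analysis pipeline actually used in the study.

```python
# Minimal sketch (assumed data layout): proportion of looks to the critical object
# in the window before predictable-word onset, per participant and condition.
# Assumed columns: participant, trial, condition ("target"/"unrelated"),
# time_ms (relative to predictable-word onset), fixated_object ("critical"/"distractor"/None).
import pandas as pd

def pre_onset_fixation_proportions(samples: pd.DataFrame, window_ms: int = 1000) -> pd.DataFrame:
    pre = samples[(samples["time_ms"] >= -window_ms) & (samples["time_ms"] < 0)].copy()
    pre["on_critical"] = (pre["fixated_object"] == "critical").astype(float)
    # Average within trials first, then across trials per participant and condition.
    by_trial = pre.groupby(["participant", "condition", "trial"])["on_critical"].mean()
    return by_trial.groupby(["participant", "condition"]).mean().unstack("condition")

# If trainees predict, the "target" column should exceed the "unrelated" column before
# the predictable word is heard; a training effect would appear as a larger (or earlier)
# difference in the post-training session.
```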

4.3 Methods

4.3.1 Participants

Twenty-three students completing the Masters in Conference Interpreting at the University of Geneva participated in the study. Ten students were tested before and after two semesters of training in simultaneous interpreting. Thirteen students were tested only before two semesters of training in simultaneous interpreting. All students had passed entrance exams before entry to the programme, which were designed to ensure that all students had sufficient language proficiency (language skills are not taught during the Masters course) and an aptitude for conference interpreting. In addition, all participants completed a language background questionnaire (see Table 1), based on the LEAP-Q questionnaire (Marian et al., 2007). Participants also provided information about their professional background.

The Masters in Conference Interpreting comprises three semesters, each spanning 14 weeks, with consecutive interpreting taught during the first semester, and simultaneous interpreting added in the second and third semesters. In the second and third semesters, students receive at least 10 hours per week of formal teaching in simultaneous interpreting, participate in practice groups for an additional 6-8 hours a week, and also practise independently. Students’ performances are assessed by means of continuous assessment (including grades during the second semester) and a final examination at the end of the third semester.

All students in the study interpreted from English into their “A” language. Students’ A languages varied: French (6), German (5), Spanish (5), Italian (4) and Russian (3). Three students were studying English as a B language, meaning they also worked simultaneously back into English. Two students worked back into English in the consecutive mode only. The other 18 students had English classed as a C language in their combination, meaning they worked from English into their A language but never back into English. Four students had some form of prior training in simultaneous interpreting. Students were 27.26 years old on average (SD: 5.26, Range: 21 – 40). They spoke between three and eight languages in total (mean: 4.65, SD: 1.37). Around half of the students held a Bachelors as their highest completed degree (47.8%), and the other half had also completed a Masters (52.2%).

Student interpreters (n = 23)

Background
  Age (yrs)                                       27.26 ± 5.28 (Range: 21 – 40)
  Languages spoken                                4.65 ± 1.37 (Range: 3 – 8)
  Highest level of education                      Bachelors 47.8%; Masters 52.2%

English language
  Age (yrs) of acquisition                        8.35 ± 4.10 (Range: 0 – 15)
  Age (yrs) of fluency                            14.57 ± 5.35 (Range: 3 – 22)
  Age reading proficiency acquired                14.00 ± 4.31 (Range: 6 – 22)
  Age written proficiency acquired                15.17 ± 4.38 (Range: 6 – 25)

Exposure to English
  Years living in an English-speaking area        1.95 ± 5.02 (Range: 0 – 24)
  Years spent in an EN school/work environment    3.27 ± 4.93 (Range: 0 – 18)
  Years living in an EN family environment        1.91 ± 6.09 (Range: 0 – 24)
  Current exposure (%)                            1.43 ± 9.27 (Range: 10 – 50)

Self-rated English proficiency
  Speaking                                        4.04 ± 0.71
  Reading                                         4.61 ± 0.50
  Listening                                       4.61 ± 0.50

Table 1: Information from the language background questionnaire (values are mean ± SD).

4.3.2 Stimuli

We used the same set of 32 experimental sentences as in Study 1, but divided the stimuli set into two sets of 16 experimental sentences (see Appendix 3). Each sentence was paired with a visual array containing three distractor objects and one of two critical objects.

Critical objects appeared in each of the four quadrants equally frequently, following a Latin square design. As in Study 1, experimental sentences each contained a highly predictable word (e.g., camel, in “The traveller went to the desert because he wanted to ride a camel and go exploring.”) at varied positions in the sentence, but never sentence-finally. Cloze ratings for the experimental sentences were taken from Study 1. Mean cloze probabilities were 89.1% for Stimuli Set 1 and 92.3% for Stimuli Set 2. The mean position of the critical word was 9.25 (SD: 1.41) in Stimuli Set 1, and 10.4 (SD: 3.44) in Stimuli Set 2. Properties of the two sets of experimental sentences are summarised in Table 2.
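As an illustration of how such a Latin-square rotation can be implemented, the sketch below cycles each item’s critical object through the four quadrants across four presentation versions. The number of versions and the indexing scheme are assumptions for illustration; this is a schematic reconstruction, not the script used to build the materials.

```python
# Schematic Latin-square rotation of critical-object position: across four versions,
# each item's critical object cycles through the four quadrants, so that within any
# single version each quadrant hosts the critical object equally often.
QUADRANTS = ["top_left", "top_right", "bottom_left", "bottom_right"]

def critical_quadrant(item_index: int, version: int) -> str:
    return QUADRANTS[(item_index + version) % len(QUADRANTS)]

# e.g., item 0 is top_left in version 0, top_right in version 1, and so on;
# within one version, each quadrant is used for 4 of the 16 items.
```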

There were also two sets of 16 filler sentences. These sentences were paired with the same visual stimuli as the experimental sentences, but the quadrants in which the objects appeared were varied. Filler sentences mentioned one of the distractor objects 50% of the time. Together with the experimental sentences, in which the predictable word corresponded to a picture in the array 50% of the time (i.e., in the target condition), this meant that sentences mentioned one of the objects in the visual array on 50% of trials overall.

Each of the visual arrays contained four objects: a critical object and three distractor objects. In the target condition, the critical object corresponded to the predictable word (e.g., camel). In the unrelated condition, the critical object (e.g., barrel) was semantically unrelated to the predictable word, and its English name did not overlap phonologically with the predictable word.

Each stimuli set contained 16 experimental and 16 filler sentences (a total of 32 items), and 16 arrays each containing four images. Experimental images (target/unrelated objects) were counterbalanced in each stimuli set, resulting in two experimental lists for each stimuli set. Each experimental list contained two half-lists, each made up of the 16 visual arrays paired with eight experimental and eight filler sentences. Visual arrays paired with experimental items in one half-list were paired with fillers in the other half-list, and vice versa. This resulted in two different sets of items and four different experimental lists within each stimuli set.
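The counterbalancing described above can be made explicit with a short sketch. The array identifiers and the exact assignment rule below are illustrative assumptions; the point is simply that every visual array serves as an experimental item in one half-list and as a filler in the other, and that the target/unrelated pairing is swapped between the two experimental lists.

```python
# Schematic reconstruction of one stimuli set: 16 visual arrays, each paired with an
# experimental sentence in one half-list and with a filler sentence in the other.
# The target/unrelated assignment is swapped between the two experimental lists.

def build_experimental_list(list_id: int) -> list[dict]:
    half_1, half_2 = [], []
    for array_id in range(16):
        condition = "target" if ((array_id // 2) + list_id) % 2 == 0 else "unrelated"
        experimental = {"array": array_id, "type": "experimental", "condition": condition}
        filler = {"array": array_id, "type": "filler", "condition": None}
        if array_id % 2 == 0:            # this array is experimental in half-list 1 ...
            half_1.append(experimental); half_2.append(filler)
        else:                            # ... and a filler in half-list 1 otherwise
            half_1.append(filler); half_2.append(experimental)
    return half_1 + half_2               # 32 trials: 16 experimental, 16 filler

list_a = build_experimental_list(0)
list_b = build_experimental_list(1)      # same items, target/unrelated swapped
```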

                                      Stimuli Set 1     Stimuli Set 2
  Mean length (ms)                    10006 ± 1450      9743 ± 2148
  Mean predictable word onset (ms)    6468 ± 1837       6086 ± 2026
  Mean predictable word offset (ms)   7084 ± 1827       6664 ± 2073
  Mean cloze value (%)                89.1 ± 12.8       92.3 ± 11.1

Table 2. Properties of experimental sentences (values are mean ± SD).

4.3.3 Procedure

Ten of the students were tested at two time points, once before and once after having received two semesters of training in simultaneous interpreting. Thirteen students were tested once, before receiving training in simultaneous interpreting. The procedure was always the same. Students were assigned either to a group which received Stimuli Set 1 before training and Stimuli Set 2 after training, or to a group which received Stimuli Set 2 before training and Stimuli Set 1 after training. Students’ A language was taken into account when assigning them to a group, such that the different A languages were represented to the same extent in both groups (as far as possible).

Before the experiment began, participants read and signed an informed consent form approved by the FTI Ethics Committee. The experiment started with a picture familiarization task. Participants saw all objects appearing in the experiment in an automatically generated randomized order. The objects were shown on the screen one at a time above a caption showing their English name. Participants were instructed to look at the objects and listen to their English names, so that they could name the objects later. After that, they were asked to name each object using the English name that had been provided.

Before training (n = 23), the mean object naming accuracy was 97.62% (SD: 0.04%). After training (n = 10), the mean object naming accuracy was 98.75% (SD: 0.01%). Incorrectly named objects (2.38% before training; 1.25% after training) were shown again, and the experimenter prompted participants who did not accurately name the object on the second viewing.

In the eye-tracking experiment, participants were seated in front of a computer in the experimental laboratory (LaborInt) of the Interpreting Department at the University of Geneva, in a set-up identical to that of Study 1. Participants’ dominant eye was tracked.

Participants were asked to listen to and simultaneously interpret the sentences into their A language. They then had to judge whether the sentence had mentioned any of the objects shown on the display by answering the question: “Did the sentence mention any of the pictures?”. Students answered by pressing either 1 for “Yes” or 2 for “No” on the keyboard.

After the instructions, the eye-tracker was calibrated using the nine-point calibration grid.

The experiment started with two practice trials, after which participants were given a chance to ask questions. The experimenter also checked whether participants were interpreting the sentences simultaneously. The eye-tracker was then recalibrated before participants began the experiment. For more details of the experimental procedure, please see the depiction in Figure 1 (below), and refer to Section 1.3.3. The session lasted about 30 minutes.

Figure 1. Trial sequence for the experimental trial: “The traveller went to the desert because he wanted to ride a camel and go exploring.” Participants pressed the space bar to launch the drift correct. They then saw a blank screen for 500ms, and the screen remained blank as the sentence began. 1000ms before predictable word onset, they saw a display containing four images, one of which was the critical image (here, camel). The display remained until sentence offset. Participants then viewed a blank screen for 4000ms, followed by a screen displaying a question, which participants answered by pressing 1 for yes and 2 for no. The same image was also paired with the filler sentence “He could hear what they were saying because the door had been left open.”
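For readers who prefer a schematic view, the sketch below encodes the trial timeline described in the caption as a simple event list, using the Stimuli Set 1 means from Table 2 as example timings. It is a schematic reconstruction of the sequence, not the presentation script used in the experiment.

```python
# Schematic reconstruction of the trial timeline in Figure 1. Times are in ms from
# sentence onset; per-item values would come from the stimulus annotations (cf. Table 2).

def trial_timeline(predictable_word_onset_ms: float, sentence_offset_ms: float) -> list[tuple[str, str]]:
    return [
        ("drift correct", "until space bar press"),
        ("blank screen", "500 ms, then sentence playback begins (screen stays blank)"),
        ("display onset", f"{predictable_word_onset_ms - 1000:.0f} ms after sentence onset "
                          "(i.e., 1000 ms before the predictable word)"),
        ("display offset", f"{sentence_offset_ms:.0f} ms after sentence onset (sentence offset)"),
        ("blank screen", "4000 ms"),
        ("question", "until keypress: 1 = yes, 2 = no"),
    ]

# Example with the Stimuli Set 1 means (predictable word onset 6468 ms, length 10006 ms):
for phase, timing in trial_timeline(6468, 10006):
    print(f"{phase:>14}: {timing}")
```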

4.4 Results

4.4.1 Comprehension question accuracy

The mean accuracy for comprehension questions in the experimental trials before training (n = 23) was 96.7% (SD: 4.1%). The mean accuracy for comprehension questions in the experimental trials after training (n = 10) was 99.4% (SD: 2.5%). Incorrectly answered trials

