A quantitative and qualitative analysis of peer feedback

SERRANO SOLER, Micaela


SERRANO SOLER, Micaela. A quantitative and qualitative analysis of peer feedback. Master : Univ. Genève, 2018

Available at:

http://archive-ouverte.unige.ch/unige:131151


MICAELA SERRANO SOLER

A quantitative and qualitative analysis of peer feedback


Thesis submitted to the Faculty of Translation and Interpreting for the award of the MA in Conference Interpreting

Thesis supervisor: Kilian Seeber. Juror: Manuela Motta

July 2017


STUDENT INFORMATION:

Micaela Serrano Soler

Faculté de Traduction et d'Interprétation, Université de Genève

40, boulevard du Pont-d'Arve, CH-1211 Genève 4, Switzerland


TABLE OF CONTENTS

1. INTRODUCTION ... 12

1.1. Overview ... 12

1.2. Research Question ... 13

2. OBJECTIVES AND AIMS ... 14

2.1. Overall Objective ... 14

2.2. Specific Aims ... 14

3. BACKGROUND AND SIGNIFICANCE ... 16

3.1. Is interpreting a natural talent? ... 16

3.2. The path to expertise ... 16

3.2.1. What is deliberate practice? ... 18

3.3. Deliberate practice applied to conference interpreting... 19

3.4. Developing metacognitive skills ... 19

3.4.1. Goal-oriented practice ... 19

3.4.2. Autonomous and group learning ... 20

3.5. Feedback in deliberate practice ... 21

3.5.1. Trainer feedback and peer feedback ... 21

3.5.2. Complementarity of trainer and peer feedback ... 22

3.5.3. Peer feedback and self-assessment ... 23

3.5.4. Drawbacks of peer feedback ... 24

3.5.5. Conclusion ... 24

4. RESEARCH DESIGN AND METHODS ... 25

4.1. Overview ... 25

4.2. Population and Study Sample ... 25

4.3. Sample Size and Selection of Sample ... 25

4.4. Sources of Data ... 26

4.4.1. Corpus ... 26

4.4.2. Survey ... 28

4.5. Collection of Data ... 30

4.6. Data Analysis Strategies ... 30

4.6.1. Peer feedback analysis sheet ... 30

4.7. Ethics and Human Subjects Issues ... 32

4.8. Timeframes ... 32


5. DATA AND RESULTS ... 33

5.1. Data presentation ... 33

5.2. Corpus Data and Analysis ... 33

5.2.1. Aspects prior to interpreting ... 33

5.2.1.1. Personal objective... 33

5.2.1.2. Complexity of the original speech ... 34

5.2.1.3. Preparation ... 36

5.2.2. Content ... 37

5.2.2.1. Main ideas and details ... 37

5.2.2.2. Logical structure and links ... 38

5.2.2.3. Terminology ... 39

5.2.3. Delivery ... 39

5.2.4. Confidence ... 40

5.2.5. Pace of the interpretation ... 41

5.2.6. Voice and contact with audience ... 42

5.2.7. Form and style ... 43

5.2.7.1. Use of target language ... 43

5.2.7.2. Register ... 44

5.2.8. Advice and suggestions... 44

5.2.8.1. Note taking strategies ... 45

5.2.8.2. Possible suggestions and solutions ... 46

5.2.9. Comments and observations ... 47

5.3. Peer feedback questionnaire results and analysis ... 48

5.3.1. General approach to peer feedback ... 48

5.3.2. Prioritization in peer feedback ... 48

5.3.2.1. Aspects prior to interpreting ... 49

5.3.2.2. Content ... 52

5.3.2.3. Delivery ... 55

5.3.2.4. Form and style ... 59

5.3.2.5. Advice ... 61

5.3.3. Adapting peer feedback to the receiver ... 62

5.3.4. Peer feedback and learning process ... 63

5.3.5. Reaction to peer feedback ... 64

5.3.6. Peer feedback progression (questions 11 and 12) ... 65

5.3.7. Peer feedback in comparison to other kinds of feedback (questions 6 and 7) ... 65


5.3.8. Comments and observations ... 66

6. COMPARED ANALYSIS ... 67

6.1. Overview ... 67

6.1. Aspects prior to interpreting ... 68

6.1.1. Personal objective in peer feedback ... 68

6.1.2. Complexity of original speech ... 72

6.1.3. Preparation ... 73

6.1.4. Main ideas and details compared to logical structure and terminology ... 74

6.1.5. Delivery ... 78

6.2. Form and style ... 84

6.2.1. Use of target language ... 84

6.2.2. Register ... 85

6.3. Solutions and advice ... 86

6.3.1. Note taking strategies ... 86

6.3.2. Possible solutions and suggestions ... 88

7. CONCLUSION... 90

7.1. Overview ... 90

7.2. Findings ... 90

7.2.1. Autonomous and group learning ... 90

7.2.2. Goal-setting ... 91

7.2.3. Metacognition ... 92

7.2.4. Perspectives ... 93

7.3. Conclusions ... 93

8. BIBLIOGRAPHY ... 94

9. APPENDIXES ... 98

Appendix 1: Corpus (feedback transcripts) ... 98

Appendix 2: Peer feedback analysis sheets ... 165

Appendix 3: Peer feedback analysis sheets summary... 200

Appendix 4: Expected levels of progress for consecutive interpreting ... 207

Appendix 5: L'entraînement supervisé – un guide pratique ... 209

Appendix 6: Summary of questionnaire results ... 214

Appendix 7: Peer feedback questionnaire template ... 250

Appendix 8: Peer feedback analysis sheet template ... 256


Appendix 9: Peer feedback categories ... 259

Appendix 10: Plan d'études ... 261


CHARTS INDEX

Chart 1 - Personal objective mentions in corpus ... 34

Chart 2 - Complexity of original speech mentions in corpus ... 35

Chart 3 - Preparation mentions in corpus ... 36

Chart 4 - Main ideas and details mentions in corpus ... 37

Chart 5 - Logical structure and links mentions in corpus ... 38

Chart 6 - Terminology mentions in corpus ... 39

Chart 7 - Confidence mentions in corpus... 40

Chart 8 - Pace mentions in corpus ... 41

Chart 9 - Voice and contact with audience mentions in corpus ... 42

Chart 10 - Use of target language mentions in corpus ... 43

Chart 11 - Register mentions in corpus ... 44

Chart 12 - Note taking strategies mentions in corpus ... 45

Chart 13 - Possible suggestions and solutions mentions in corpus ... 46

Chart 14 - Personal objectives in survey ... 49

Chart 15 - Complexity of original speech in survey ... 50

Chart 16 - Preparation in survey ... 51

Chart 17 - Main ideas and details in survey ... 52

Chart 18 - Logical structure and links in survey ... 53

Chart 19 - Terminology in survey ... 54

Chart 20 - Voice projection in survey ... 55

Chart 21 - Confidence in survey ... 56

Chart 22 - Eye contact with audience in survey ... 57

Chart 23 - Pace in survey ... 58

Chart 24 - Use of target language in survey... 59

Chart 25 - Register in survey ... 60

Chart 26 - Possible solutions and suggestions in survey ... 61

Chart 27 - Note taking in survey ... 62

Chart 28 - Compared data on personal objective ... 68

Chart 29 - Compared data on complexity of original speech ... 72

Chart 30 - Compared data on preparation ... 73


Chart 31 - Compared data on main ideas and details ... 74

Chart 32 - Compared data on logical structures and links ... 75

Chart 33 - Compared data on terminology ... 76

Chart 34 - Compared data on voice ... 79

Chart 35 - Compared data on confidence ... 79

Chart 36 - Compared data on contact with audience ... 80

Chart 37 - Compared data on pace ... 81

Chart 38 - Compared data on use of target language ... 84

Chart 39 - Compared data on register ... 85

Chart 40 - Compared data on note taking strategies ... 86

Chart 41 - Compared data on possible solutions and suggestions ... 88

IMAGES INDEX

Image 1 - Peer feedback information ... 31

Image 2 - Grid example of peer feedback analysis sheet ... 31

Image 3 - Student tracker feedback ... 70


TABLE INDEX

Table 1 - Type of feedback given in each training environment ... 23

Table 2 - Transcriptions sorted by feedback date ... 26

Table 3 - Feedback divided by source and target languages in corpus ... 27

Table 4 - Titles of corpus speeches ... 28

Table 6 - Compared analysis grid ... 67

Table 5 - Fragment from Consecutive ELPs (appendix 4) ... 82


ABSTRACT

Interpreting students are expected to practice regularly to maintain and improve their interpreting skills. A considerable part of their preparation is built upon metacognition: their ability to reflect upon their own performance in order to identify challenges and systematically apply solutions. One of the tasks of interpreter trainers, therefore, is to teach their students how to critically assess and evaluate their own performance. Students can, however, also rely on another powerful tool that helps them assess their output: the feedback of fellow students, or "peer feedback". This MA thesis analyses the role of peer feedback as one of the metacognitive tools used in the skill acquisition of conference interpreting at the University of Geneva's Faculty of Translation and Interpreting. Thanks to a corpus of peer evaluations of consecutive interpretations carried out by a group of students during the second semester of the Master's programme in conference interpreting, and a survey on their experience of peer feedback, this thesis provides an insight into how interpreting students use peer feedback to acquire and develop their skills.


1. INTRODUCTION

1.1. Overview

Interpreter training is eminently practical. An online comparison of interpreter training programmes across the world shows that many of them focus on interpreting practice. The EMCI (European Masters in Conference Interpreting) core curriculum establishes that interpreting students in member programmes shall develop their interpreting skills by devoting time to group practice of simultaneous and consecutive interpreting as well as to other self-directed interpreting study. Of the 400 class contact hours that programmes are required to offer, a minimum of 75% is to be devoted to interpreting practice.

Students of interpreting rely heavily on metacognition, that is, on their own perception of their performance (Arumi, Dogan & Mora Rubio, 2006). As a consequence, a large part of this practice is based on the conscious analysis of their performance. To improve as interpreters, students need to be aware of where the focus of their training sessions should lie. At the FTI, several tools are available to help students understand the quality of their performance, what should be improved and how to solve difficulties: feedback from teachers, both in class and in the student tracker overview tool [1] in the Virtual Institute [2], as well as their assessments. The advice of teachers is invaluable, as they are already professional interpreters with a great deal of experience.

Nonetheless, this is not the only kind of feedback received. In the MA in conference interpreting at the University of Geneva, 6 to 8 hours a week are dedicated to what is called "entraînement supervisé" ("supervised practice groups", henceforth "ES" from the French), in which students, working in groups organized around their language combinations, take turns giving each other speeches and interpreting them into their active languages. Although the curriculum establishes that, once or twice a month depending on the semester, one of the teaching assistants for the Master attends the session to supervise its functioning and provide feedback to the students on their performance, most of the time the students work alone and give each other feedback on their peers' performance themselves. They do so by following criteria set by the department in a practical guide provided at the beginning of the programme.

[1] Online tool in the FTI's online student portal in which teachers provide students with assessments of their interpreting performance in class, a grade where applicable, and a request for action suggesting what they could do to improve their skills. The tool allows students to keep a record of all assessments and requests for action throughout the Master's programme.

[2] Online platform used by the FTI's interpreting programme through which students can access various learning tools: didactic material, course descriptions, study regulations, class preparation, sound and video recordings, an assessment overview with their grades, online forums to communicate with each other and with their teachers, and an online peer feedback tool, among others.

1.2. Research Question

This thesis is set up as an analysis of the feedback students receive from their fellow students. Peer feedback can be defined as a retrospective evaluation provided by one student to another about their performance in class (Arumi et al., 2006). There is literature on deliberate practice in interpreting (Setton & Dawrant, 2016; Tiselius, 2013; Motta, 2013, 2011), and while peer feedback is often mentioned as an important facet of interpreter training, most of the research focusing specifically on peer feedback belongs to other domains of education, such as foreign language learning or writing courses (Nelson & Schunn, 2008; Zhang, 1995).

What does peer feedback bring to interpreting students specifically? A more targeted look into the matter is necessary to understand its repercussions on the training of conference interpreters. This project aims to provide a better understanding of peer feedback as it occurs in real life, as well as a firmer grasp of the metacognitive processes that lie behind this pedagogical method.


2. OBJECTIVES AND AIMS

2.1. Overall Objective

The overall objective of this MA thesis is to provide a broad analysis of peer feedback in the field of interpreter training. I will collect and analyse information on peer feedback, organizing and summarizing it to provide a theoretical basis for the topic in a single document. I will then attempt to provide an overview of peer feedback in interpreter training and its advantages for skill acquisition.

I will analyse both the criteria that interpreting students apply when giving their own evaluations and how they perceive peer feedback. Finally, I will try to discern possible ways of improving peer feedback that would help Master's students during their ES sessions.

2.2. Specific Aims

The first part of this thesis consists of a literature review on peer feedback. First, I will define the term and give an overview of its evolution in the pedagogical field. As a starting point, I will collect existing research on peer feedback and its impact on the learning process, its advantages and disadvantages, and the differences between peer and trainer feedback. Although most of the research into this topic is not in the field of interpreting, this thesis will give an overview of the use of peer feedback in interpreter training, trying to gather what has been written on the subject in one place. This paper will also detail the criteria for peer feedback used during ES sessions at the University of Geneva.

Secondly, this thesis will include an analysis of the peer feedback of interpreting students during different phases of their training. To this end, I will compile a corpus of peer evaluations given during ES sessions by a group of first- and second-year students in the interpreting programme of the Faculty of Translation and Interpreting of the University of Geneva. These evaluations will be classified by language pair and by expected level of progress. Once the corpus is complete, I will analyse the content, focus, wording and evolution of the evaluations to identify patterns and understand what peer feedback entails in conference interpreting training. Moreover, I will draw a comparison with trainer feedback.

I will complement the results of the corpus with a brief survey on the perception that interpreting students have of the impact of peer feedback on their skill acquisition. It will focus on perceived usefulness, personal experience, evolution and possible room for improvement.

The analysis of both the corpus and the survey results will provide an insight into the role of peer feedback in interpreting degrees and lay the foundations for further research into the topic.


3. BACKGROUND AND SIGNIFICANCE

3.1. Is interpreting a natural talent?

Interpreter training is the path chosen by many aspiring interpreters to reach their goal of becoming experts in the field, but it did not start out that way. AIIC recommends systematic training as the surest way to expertise and surveys training programmes to promote best practices. Interpreting, that is, "conveying a message spoken in one language into another language" (AIIC, 2012), has been a human practice whenever two or more groups of people speaking different languages have needed to communicate.

Nonetheless, it was not until the 20th century that it started to become the profession it is nowadays (Blandino, 2014). At the beginning of the 1900s, the loss of influence of the French language and the establishment of English as an official language of diplomacy led to the birth of conference interpreting: the roots of consecutive interpreting can be traced back to the Paris Peace Conference (Baigorri Jalón, 2000). Since formal training in conference interpreting as a profession did not yet exist, the organisers of that conference had to employ personnel who happened to speak both English and French and who had good general knowledge (Baigorri Jalón, 2000).

The first interpreters were therefore, in the vast majority, polyglots: people who had lived in the diplomatic world and had had the opportunity to learn different languages (Gaiba, 1998).

Something similar happened when the simultaneous mode of interpreting was officially born during the Nuremberg Trials of 1945-1946, out of the need to interpret the proceedings quickly into several languages (Baigorri Jalón, 2000). Interpreters learnt "on the job", charting unexplored territory. Conference interpreting nowadays is, however, an ability developed after years of intense training and professional experience (Mackintosh, 2011).

3.2. The path to expertise

Expertise is defined as a "series of outstanding achievements under different circumstances" (Ericsson & Smith, 1991, p. 2), which, according to Ericsson and Smith, reflects a set of acquired skills in a given domain resulting from the acquisition of specific knowledge and methods. It takes years of training and practice, not only talent (Ericsson, 1993). As in many other specialized domains, such as music or competitive sports, becoming an expert is a long and time-consuming process.

Interpreting trainees need to master many abilities, including active listening, memorizing, anticipating, summarizing, chunking, and paraphrasing (Expected Levels of Progress, appendix 4). Ericsson estimates that it takes around ten or more years of preparation for novices to attain exceptional performance in any given field, from writing to music and sports (Ericsson, 1999). For example, to join AIIC, interpreters need to have worked for at least 150 days, at least 50 in each of their language pairs, and be sponsored by at least two AIIC members in each of their language pairs who have worked with them and can vouch not only for their language and interpreting skills but also for their respect of AIIC's Code of Ethics and Professional Standards (AIIC, 2014). Motta (2011) also points out that, as in many other academic fields, learning continues after graduation.

Regarding interpreter training, universities that adhere to the EMCI consortium share a common curriculum and their students are taught the same skills, but the duration of the programmes can differ, ranging from nine months to two years depending on the pace each school decides to set for its students. In the case of the University of Geneva, there are over 600 class contact hours with trainers and over 400 hours of self-training spread over three semesters and 7 modules, according to the FTI's Plan d'études [3] (appendix 10). These are complemented by the hours that students spend in group practice; together, they are expected to allow students to perform at the basic professional level required of them by the end of their third semester, which the FTI's Expected Levels of Progress set out as the final goal (appendix 4). That is why group and individual practice play such an important role in the learning process, and why they were introduced into the curriculum through ES sessions and through EVITA (ETI Virtual Interpreter Trainer Archives) [4]. Research shows that not all practice leads to maximal performance (Ericsson, Krampe, & Tesch-Römer, 1993). In fact, practice must be deliberate, that is to say, it must fulfil a series of requirements for it to be truly effective and eventually allow novices to become experts.

[3] The number can vary depending on the language combination of each student.

[4] EVITA is a digital archive, accompanied by a reflective journal, at the FTI's Virtual Institute, to which students can upload and download speeches and interpretations and receive feedback on their performance from their peers and teaching assistants (Motta, 2013).


3.2.1. What is deliberate practice?

According to Ericsson, Krampe, and Tesch-Römer (1993), deliberate practice is "extensive engagement in relevant practice activities supervised by teachers and coaches" (p. 392). Ericsson (1997) explains that practice becomes deliberate, firstly, when learners are motivated and willing to exert effort to improve their performance. In addition, practice must be constant, usually coached, and made up of specially designed tasks to improve the trainees' performance. These tasks should be followed by immediate knowledge of and feedback on the results. Finally, it is imperative that trainees are both open to and willing to integrate feedback to the best of their ability (Ericsson, Krampe, & Tesch-Römer, 1993).

As per the theories of deliberate practice, becoming an expert is not a short process: aspiring interpreters have a long road ahead of them, one that extends well beyond completion of the MA. When graduates go out into the real world, they must have the tools to assess themselves and to learn from their mistakes and from their colleagues. During their training, they need to be taught the skills for autonomous learning (Horváth, 2005), and so learn to practice effectively both by themselves and with others. This way, once they leave university behind, they can continue walking the path to expertise and keep improving away from classrooms and trainers. In other words, they need to be able to engage in deliberate practice. But how can deliberate practice be applied to interpreter training?


3.3. Deliberate practice applied to conference interpreting

During the last few decades, interpreting scholars have tried to explain how the theories of expertise can be used to understand the acquisition of interpreting expertise. Although this project deals with peer feedback, peer feedback does not exist in a vacuum, and before focusing on it, it is important to briefly recall how deliberate practice has been tackled in interpreting studies.

Performance improvements in interpreting are progressive, as in other academic fields, because acquiring and improving interpreting skills requires regular, targeted practice. Taking this into account, Motta (2011) proposes a framework to specify what deliberate practice means in interpreter training at the FTI. These guidelines emphasize the importance of metacognition, goal-oriented practice, and autonomous and group learning in interpreter training.

3.4. Developing metacognitive skills

Metacognition can be described as trainees' knowledge and regulation of their own cognition and mental processes (Shannon, 2008). Motta (2011) lays out a framework proposal for the MA in conference interpreting at the FTI, based on the deliberate practice approach and on peer feedback, designed to help trainees become more aware of their own performance and, therefore, able to diagnose their difficulties and find strategies to use during practice sessions.

As is common in interpreting studies, students are advised to record their performances so that they can listen to themselves and to their peers. By analysing their own rendition, they can try to identify their problems and strengths, develop self-evaluation skills, and find inspiration in their peers' work. Keeping track of their own progression can also be useful, whether by re-listening to and analysing recordings of their performances, by keeping an interpreting journal that documents the objectives and results of each training session, or by relying on their peers' and teachers' feedback. Seeing their own progress is very motivating for students, and it also helps the teacher verify which strategies work, making it easier to guide students over various obstacles.

3.4.1. Goal-oriented practice

Deliberate practice needs to be goal-oriented. In interpreting, this means that programmes need to set goals for their students. In the long term, it means having clear milestones so that students know where they need to be at any given point and what they need to practice. At the University of Geneva, it means providing students with expected levels of progress that they can use to structure their group training sessions if they so wish.


During classes, it means giving students a clear objective to work towards during each practice session and at each point in their training through the Expected Levels of Progress (appendix 4). During group training sessions, being goal-oriented means that the trainee should focus on only one aspect of their performance, such as presentation, links or main ideas. As part of this approach, Motta (2011) suggests that trainees set their own objective for each practice session. At the University of Geneva, group members share their objectives for their interpretations at the outset of ES sessions so that peers can tailor feedback to the specific aim of the session. A similar approach is used for EVITA training, where students are required to set a goal before uploading (and starting) their interpretation.

Since interpreting is a highly complex activity, picking one specific difficulty to focus on can make practice more manageable, and it is easier, and therefore less discouraging, to see improvements or to find solutions and strategies for that one aspect. By setting different objectives throughout the programme, students ensure that they work on different facets of interpreting during their training, such as content, style, presentation or delivery (Clifford, 2015).

3.4.2. Autonomous and group learning

An interpreting programme based on deliberate practice should be learner-centred and collaborative (Motta, 2013). Students should be encouraged to share their knowledge and to learn by interacting with their peers. This interaction involves observing their peers at work, discussing problems and strategies, trying to reach common solutions and, of course, giving peer feedback, which plays a key role in their training. To do so, students need to be invited to work together regularly and given the opportunity to do so whenever possible: this is what the University of Geneva does through ES sessions.


3.5. Feedback in deliberate practice

Feedback plays a key role in deliberate practice. Ericsson (1993) says that “in the absence of adequate feedback, efficient learning is impossible and improvement only minimal even for highly motivated individuals” (p. 367), but why is that and how does it apply to interpreting?

Receiving feedback gives trainees the opportunity to understand their errors and their strengths, while also providing them with suggestions that could be useful to circumvent whatever complications they encounter. It is logical to deduce that, if this self-awareness were lacking, students would be doomed to repeat the same mistakes and, as per Ericsson (1993), progress would be minimal. Feedback is therefore one of the tools students can rely on to ensure that they are not engaging in aimless practice and to keep themselves on the right track.

At the University of Geneva's FTI, students receive feedback from three main sources: their teachers, their peers and themselves, that is, through self-assessment. This section will try to discern the differences between peer feedback and teacher feedback, and between peer feedback and self-assessment.

3.5.1. Trainer feedback and peer feedback

Trainer or teacher feedback in conference interpreter training can be defined as "the immediate or retrospective corrections, assessment and advice and the objective description of a performance given to the student, by the teacher, about his or her performance with the aim of improving it in the future" (Arumi et al., 2006, p. 20), while "peer feedback" is defined as a "retrospective evaluation of one or more student/s provided to the other students about their performance in class, which ultimately contributes to the student's metacognitive analysis to ensure improvement" (ibid., pp. 22-23).

Since it is an evaluation, peer feedback should ideally also contain "correction, assessment or advice" and "a description of the performance" (ibid., p. 17). The difference between the two lies in the position of trainers as experienced professionals and, for some of them, in their preparation as instructors [5]. Teachers' insight is especially valuable because, as experienced professionals who were once interpreting students themselves, they may have had to overcome some of the problems the student faces, using strategies and techniques that they have probably been employing for years. The teacher's insights can help students identify goals and recognize their strengths and weaknesses, so that they can set new objectives and understand what is expected of them, especially in relation to the curriculum (Arumi et al., 2006).

[5] The FTI offers a Master of Advanced Studies in Interpreter Training that aims to provide professional interpreters with the theoretical and pedagogical background to teach interpreting at university level (University of Geneva, 2017).

3.5.2. Complementarity of trainer and peer feedback

Teacher and peer feedback are complementary for several reasons. Firstly, students can use teacher feedback to learn how to evaluate their own performance and that of their peers, modelling their feedback on the assessments received from teachers and assistants. As will be shown in the corpus, students often refer to their teachers' advice and insights in their peer assessments and self-assessments. This can be very enriching in classes where students with different language combinations come together, because it allows them to share what they have learned in their own classes and gives them indirect access to new perspectives, advice and insights from other instructors.

Peer feedback also complements teacher feedback in that it is easier for fellow students to relate to each other, since they all follow the same academic programme. From my perspective as a student, while trainer feedback almost always identifies the problems and errors of an interpretation, it is not always as accurate regarding their underlying causes or, at least, these are not always expressed in a way the student can relate to. Fellow students, helped by how well they get to know each other personally through the long hours spent together in and out of class, and by their similar experiences, often have a good grasp of the personal circumstances behind a particular mistake or bad habit, even if they are unfamiliar with the underlying processes of interpreting. This trust, and their position as students on an equal footing, also makes them perhaps more willing to admit their shortcomings (unfamiliarity with concepts or lexicon, lack of preparation, etc.) without fear for their grades, compared to when they receive similar feedback from their teachers.

Be that as it may, even if students have a better grasp of the causes and are more willing to open up about their failings with their peers, they are not always able to offer an effective solution, which is why teacher and peer feedback are complementary. The table below shows the types of feedback given in each training environment. Outside of general classes, teacher feedback for a given performance comes from a single person. During ES sessions, students receive peer feedback from different people with different backgrounds, areas of expertise and language combinations.


                        Teacher feedback    Teaching Assistant feedback    Peer feedback
Master classes          Yes                 No                             No
Language pair classes   Yes                 No                             No
ES                      No                  Yes (once a month)             Yes
Evita                   No                  Yes (once a month)             Yes

Table 1 - Type of feedback given at each training environment

For instance, the ES group in this project is made up of five people from four different countries, with four different mother tongues and complementary language combinations. In the best-case scenario for a given speech and performance, peer feedback allows the student to receive assessments from very different points of view depending on the language combinations at play: firstly, from the original speaker, who can judge the content and whether the intended message in the source language was understood; secondly, from “pure clients”, who did not understand the original and can focus on the speech in the target language, giving feedback on presentation and style without comparing it to the original; thirdly, from students who share the interpreter’s mother tongue and can assess the quality, style and nuances of the language that may escape non-native speakers; and lastly, from people who listened to and understood both speeches and can compare the original to the interpretation. In that sense, peer feedback is a truly enriching experience that is very different from the context of language pair classes.

3.5.3. Peer feedback and self-assessment

Peer feedback serves to encourage student involvement in the learning process (Arumi et al., 2006, pp. 22-23). When students are tasked with commenting on their classmates’ performance, they are forced to reflect on what makes a good interpretation and how the performance could be improved. This kind of self-reflection is very useful because it allows students to reflect on their own progress. It contributes to a growing awareness among trainees of the cognitive processes behind interpretation. In addition, it encourages trainees to extrapolate what they learn from their classmates to


their own interpretations and, later, to use the insights gained to enrich their own self-assessment.

3.5.4. Drawbacks of peer feedback

One difference between trainer and peer assessment is that teachers, as instructors, have been trained in pedagogical methods to help the student learn. Some peers, on the other hand, do not know how to formulate useful comments on others’ performances and tend to start out with no experience of how to give feedback on an interpretation (Arumi et al., 2006, pp. 22-23). This drawback is more evident at the start of the programme, since a by-product of receiving and giving feedback is learning how to give more targeted, useful feedback. Arumi, Dogan and Mora (2006) suggest circumventing this problem by guiding the student on the assessment criteria and the level of competence they comment on. Another option is to provide students with peer assessment sheets to ensure the relevance of the feedback they provide and to help develop their “judgment skills” (Adams, 2008).

Nevertheless, there is one possible disadvantage of peer assessment that has not yet been mentioned. Although the close relationship between peers may make them better able to relate to their classmates’ struggles, it can also make their assessment more subjective (Arumi et al., 2006, pp. 22-23), since they may be overly lenient so as not to hurt each other’s feelings or to avoid drawing criticism upon themselves. When trainees are close, they may let personal bias soften or change the feedback they provide. Conversely, if some of them do not get along, they may not always put aside personal rivalries and give each other a fair, unbiased assessment. Naturally, instructors can be subjective too, but their role as teachers helps them keep more distance and be more objective than students.

3.5.5. Conclusion

Peer feedback allows students to practice interpreting with their peers in a space where they feel more comfortable making mistakes and learning from them. It complements the feedback and assessment they receive from their teachers, allowing them to put it into practice, learn from their peers and support each other through the learning process. Through a corpus of peer feedback and a survey, I will try to analyse how peer feedback is conducted in an interpreting practice group.


4. RESEARCH DESIGN AND METHODS

4.1. Overview

The objective of this project is to carry out a quantitative and qualitative analysis of peer feedback amongst interpreting students. As pointed out in the introduction, this analysis will try to show how peer feedback is given between trainees and to discern what effect it has on the way, and the pace at which, students develop their interpreting skills within the framework of the FTI’s MA in Conference Interpreting. This could eventually help create peer feedback guidelines that may increase effectiveness and could be explored and expanded in future research projects. There are two sources of data for this MA thesis: a peer feedback corpus and a follow-up survey.

4.2. Population and Study Sample

The focus of this MA thesis is to analyse peer feedback in interpreter training. The population studied are interpreting students from the FTI who use peer feedback as a part of their formal training to develop their professional skills.

4.3. Sample Size and Selection of Sample

Considering that recording a feedback corpus is an intensive project requiring interpreting students to collaborate over a long period of time, the selection of the study sample is restricted on several levels. The corpus data was gathered over three months, from March 2015 until May 2015, from one of the ES training groups of the 2014-2016 intake of the MA in Conference Interpreting of the University of Geneva, composed of five students including the author.

This group worked with English, French, German and Spanish, both as source and as target languages. Furthermore, since the small size and the language combination of the group mean that the students are either in the booth or delivering the speech during simultaneous interpreting practice, they are seldom able to listen to each other and receive feedback on those performances. It was therefore deemed better to restrict the sample of peer feedback to the occasions on which they were practicing consecutive interpreting, as this would ensure consistency regarding the elements judged (see Appendix 8), allow feedback to be gathered more regularly during the semester, and provide enough data for the analysis.

The survey sample comprises the same group, excluding the author: the four remaining students were asked to give their own view on how the peer feedback received and given impacted their learning process while


they were completing their MA by answering an anonymous online questionnaire using Google Forms technology.

4.4. Sources of Data

In order to gather the necessary data, two research methods were chosen: a corpus of transcribed feedback and a survey. The combination of these methods is very enriching because, while the corpus is based on observation, the survey is based on the students’ perception of their feedback. Comparing the two gives us a unique opportunity to look at peer feedback from two different points of view and to draw conclusions based on the similarities and differences in the results.

Their combined results will make it possible to carry out the desired analysis of the peer feedback in a set group of conference interpreting students.

4.4.1. Corpus

In this case, the speech material in question is a transcription of the oral feedback given amongst a group of interpreting students of the University of Geneva during their second-semester ES sessions after consecutive training, from February 2015 until May 2015. However, technical issues that persisted through February, March and April meant that there was no usable data from the first month and that the feedback samples salvaged from March and April are limited. Thus, the bulk of the feedback was recorded during May 2015.

March: 5 (2)  |  April: 1 (2)  |  May: 1 (2), 5 (4), 14, 18 (3)

Table 2 - Transcriptions sorted by feedback date

In total, by the end of the semester, there were 16 usable, non-corrupt audio files of varying sizes, containing peer feedback on speeches covering a wide array of topics and language combinations, as shown in table 3.


After this selection process, all the recordings were transcribed, omitting names and off-topic conversation to preserve the privacy of the participants. Where possible, they were split into separate transcriptions so that one piece of feedback could be analysed at a time. By the end of the process, the corpus contained 14 pieces of feedback of varying length, transcribed and examined using the peer feedback analysis sheet.

The language combinations of the transcribed peer feedback are very varied, with interpretations from and into French, English, German and Spanish, as table 3 below shows.

          English   Spanish   French   German
Source    4         0         6        4
Target    3         5         3        4

Table 3 - Feedback divided by source and target languages in corpus

In addition, the feedback analysed corresponds to 11 different speeches, 8 of them original and 3 relay interpretations, in which the students are faced with different difficulties and styles and must adapt to them.

Speech titles:

Économie formelle et informelle à la CIT
EPU
Famille Le Pen
Kolumbien
Kolumbien (relay)
Health care workers
Health care workers (relay)
Shadow economy
Informal economy
Human Rights Council
Human Rights Council (relay)

Table 4 - Titles of corpus speeches

Based on this material, it is possible to examine how the students’ feedback relates to the expected levels of progress set by the Interpreting Department. Although the transcriptions of the recordings are objective and based only on real examples of peer feedback, their interpretation could be subjective.

To reduce the risk of bias, an analysis sheet based on the expected levels of progress was created to ensure that all recordings are judged and analysed following the same criteria. An annotated blank version of this analysis sheet can be found under the section “Data Analysis Strategies”; moreover, the transcriptions of the recordings used to fill it in are included in appendix 1.

The peer feedback analysis sheets were completed following the categories set out in appendix 9 and the models in appendices 2 and 3. As with the transcriptions, the 14 analysis sheets (see appendix 2) were input into a Google Forms document so that they could be analysed jointly in the same format as the questionnaire. A summary of the results generated with Google Forms is available in appendix 3. The following section contains a detailed analysis of the results, broken down by category and subcategory as specified in the aforementioned tables.

4.4.2. Survey

The survey takes the form of a questionnaire answered by the same students whose feedback was recorded, transcribed and analysed. It provides an insight into how the students perceived their training sessions and the feedback given, as well as how they think it helped them develop their interpreting skills. The results of the survey, i.e. the responses to the questionnaire, are based on the subjects’ experiences, memories and opinions. These answers reflect the participants’ point of view and provide a better understanding of their reasoning and learning experience; as opinions, they will always be subjective.


The questionnaire is composed of 12 questions that address how students regard the feedback they give and receive and how they perceive its impact on how they learn. It does so by asking them about their general approach to feedback (question 1), which aspects they consider more or less important in feedback (questions 2 and 5), how they adapt peer feedback to the receiver (question 3), how it influences the way they learn (questions 4 and 8), how they react to it (questions 9 and 10), how they feel it has changed during the course of their studies (questions 11 and 12) and how they think peer feedback compares to feedback from other sources (questions 6 and 7).


4.5. Collection of Data

As discussed above, the data has been collected using two different methods: a corpus and a survey.

The data for the corpus was collected through audio recordings of the peer feedback. The survey took the form of an online questionnaire sent to the participants who allowed me to record their peer feedback for the corpus.

At the end of this section, there is a preliminary example of the questionnaire sent using Google Forms. The questionnaire contains sixteen questions, all of which refer to peer feedback given during consecutive ES sessions. It is divided into three sections: questions 1 to 7 refer to how students give peer feedback, the process they follow, how they perceive their contribution to the group depending on different factors, and the rationale behind it; questions 8 to 12 deal with how trainees receive peer feedback; and questions 13 to 16 are meant to provide an insight into the respondents’ perception of peer feedback in general and how it has evolved since they started their training. All three sections combine two types of questions: open questions that allow respondents to formulate an answer in their own words (Groves, Fowler, Couper, Lepkowski, Singer, & Tourangeau, 2011), as these have been shown to produce fewer missing data and more elaborate answers (Keush, 2014), and rating-scale questions structured similarly to the peer feedback analysis sheet, which allow me to contrast the survey answers with the corpus data. The results of the corpus and of the survey serve as the basis for the analysis.

4.6. Data Analysis Strategies

As the collected data come from two different methods, a corpus and a survey, each must be analysed in a way that ensures consistency. To render the data more accessible, I created an “analysis sheet” for each of them, based on the department’s expected levels of progress for consecutive interpreting (appendix 4) and included in appendix 2.

4.6.1. Peer feedback analysis sheet

The purpose of the first section of the analysis sheet is to gather information to identify each piece of feedback: transcription number, date of the feedback, original language, target language, title of speech and number of people providing feedback for the performance.


Image 1 - Peer feedback information

All the peer feedback analysis sheets are filled in manually one by one, checking the wording used by the participants and trying to detect references to the main categories and subcategories being discussed.

The questionnaire is designed to try to grasp what importance participants attach to certain criteria when providing their feedback.

Since, under normal circumstances, feedback providers do not always openly state how much importance they attach to each point they raise, or organise their points hierarchically, the analysis sheets try to quantify how often each subcategory comes up in the individual pieces of feedback and how in depth it is discussed each time. Therefore, there are three rows in each grid, ranging from “not mentioned at all” to “mentioned in detail”, as seen in the image below.

Image 2 - Grid example of peer feedback analysis sheet

This allows for a retrospective comparison of how important students perceive a point to be versus how much they focus on it when giving actual feedback to their peers.
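As an illustration of how this quantification can be carried out, the sketch below tallies analysis-sheet entries into percentage distributions of the kind reported in the results section. The sheet structure, subcategory name and values are invented for the example; the real sheets follow the categories defined in appendix 9.

```python
from collections import Counter

# Hypothetical illustration of the tallying step: one dict per analysed
# piece of feedback (14 in this corpus), mapping a subcategory to the
# level at which it came up.
LEVELS = ["Not mentioned", "Mentioned in passing", "Mentioned in detail"]

sheets = (
    [{"Personal objective": "Mentioned in detail"}]
    + [{"Personal objective": "Not mentioned"}] * 13
)

def distribution(sheets, subcategory):
    """Percentage of sheets at each mention level for one subcategory."""
    counts = Counter(sheet[subcategory] for sheet in sheets)
    return {level: round(100 * counts[level] / len(sheets), 1) for level in LEVELS}

print(distribution(sheets, "Personal objective"))
# With 1 of 14 sheets marked "Mentioned in detail", this yields
# 92.9% not mentioned, 0.0% in passing and 7.1% in detail.
```

With the sheets entered into a spreadsheet or Google Forms export, the same counts can of course be produced there; the point is simply that each reported percentage is a mention count divided by the 14 analysed pieces of feedback.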

Below, the different aspects of peer feedback examined in this project are explained. The analysis sheet is based on them, as are questions two and five of the questionnaire. They can be divided into five categories for the analysis, but this grouping is not shown in the questionnaire, to allow the participants to answer freely and to avoid making the task too time-consuming.

Several categories have been established based on the Consecutive Interpreting ELPs for the academic year 2014-2015, which can be found in appendix 4: aspects prior to interpreting, content, delivery, form/style and possible solutions/suggestions. All categories are broken down into different


subcategories, which are listed and defined in Appendix 9; the complete Peer Feedback Analysis Sheet can be found in Appendix 8.

4.7. Ethics and Human Subjects Issues

To ensure confidentiality of the corpus data, the recordings remain anonymous: instead, a transcription of the peer feedback is provided in which all sensitive information, such as names or any personal data, is omitted, and each participant is assigned a code. Moreover, the participants signed a voluntary consent form providing them with the necessary information regarding data confidentiality and privacy, so that they understood the principles to which the study adheres (see Appendix 11). The recordings were made during regular ES sessions.

Furthermore, I belong to the ES group studied and also give feedback when my input is necessary. To ensure objectivity, even though my comments are included in the transcription so that the conversation is complete, they are not taken into account in the analysis. The comments given on my performances do form part of the analysis, since I am not in control of the feedback given to me. I have, however, discarded all recordings in which I give the bulk of the feedback, which applies to most interpretations of Spanish speeches, as I am the only native Spanish speaker in the group.

As for the survey, it was sent online through Google Forms and was anonymous: nobody knows who answered each form, although answers were restricted to one per person.

4.8. Timeframes

The corpus data was gathered from March to May 2015 and all the participants belonged to the 2014-2016 intake of the University of Geneva MA in Conference Interpreting. Initially, recordings were to span February to May, with a similar number each month, but due to technical problems with the recording equipment, some of the recordings made during the first three months were unusable, and the issue was not resolved until mid-April. This is why the majority of the transcriptions are from May 2015 and there are fewer examples from March and April. The questionnaire was sent, once approved, in February 2016, after the participants had finished the January exams and had a complete view of their experience of the MA.


5. DATA AND RESULTS

5.1. Data presentation

As explained in the methodology section, this project draws upon two sources of data: a feedback corpus analysed using a specially designed template, the “peer feedback analysis sheet”, and a follow-up survey in the form of an online questionnaire taken by all the participants in the peer feedback corpus. This section presents the data gathered with each method separately, before analysing them together in order to compare the actual feedback with the students’ perception of it as expressed in their answers. The full summary of the corpus data can be found in appendix 3, the transcriptions it is based on in appendix 1, and the questionnaire results summary in appendix 6.

5.2. Corpus Data and Analysis

5.2.1. Aspects prior to interpreting

This field evaluates how many times peer feedback focused on factors preceding the performances.

5.2.1.1. Personal objective

As seen in the chart below, “personal objective” is specifically mentioned and discussed in only 7.1% of the evaluated feedback samples.


Chart 1 - Personal objective mentions in corpus: not mentioned 92.9%, mentioned in passing 0%, mentioned in detail 7.1%

5.2.1.2. Complexity of the original speech


Chart 2 - Complexity of original speech mentions in corpus: not mentioned 28.6%, mentioned in passing 42.9%, mentioned in detail 28.6%

In comparison, “complexity of the original speech”, understood here as the level of difficulty the students perceived in the original speech, is mentioned in 71.4% of cases.

This subcategory is often used to put the performance into perspective, both for the person receiving peer feedback and for the person providing it, either by deeming the original easier to take on or by clarifying the difficulties it posed for the interpreter, for example: “P4.: Moi, j'ai trouvé en tout cas pour le deux que c'était beaucoup mieux organisé, ça avait l'air beaucoup plus logique que mon discours. Donc déjà ça c'est très bien.” (appendix 1).


5.2.1.3. Preparation

Chart 3 - Preparation mentions in corpus: not mentioned 64.3%, mentioned in passing 28.6%, mentioned in detail 7.1%

“Preparation” is discussed in 35.7% of the feedback samples. The cases in which it is mentioned mostly refer to occasions when further preparation would have been key to overcoming certain difficulties (example 1), as well as to solving doubts that come up for members of the group (example 2) (appendix 1).

Preparation, example 1:

“L.: (...) You could have noted that at the end of my speech I didn’t talk about the report, because you read the report.

J.: Well, not all of it, unfortunately.”

Preparation, example 2:

“P5.: Est-ce que en français tu sais quand il faut dire BIT ou OIT ?

A.: BIT c'est a priori c'est quand tu parles vraiment du secrétariat et donc de tout ce que fait le Secrétariat, s'ils publient un rapport, a priori c'est le BIT, mais... c'est assez difficile. Je suis tombé sur un document qui expliquait justement pour les traducteurs vers le français quand utiliser BIT ou OIT parce que...

P5.: Il faut un document pour l'expliquer !”

5.2.2. Content

5.2.2.1. Main ideas and details

Chart 4 - Main ideas and details mentions in corpus

As the chart above shows, “main ideas” and “details” are mentioned in every piece of feedback (92.9% in detail, 7.1% in passing). The practical guide provided to students at the beginning of the MA programme (appendix 5) informs them that the juries at the final examination will evaluate them on three criteria, the first of which is “la fidélité (restitution complète, précise, claire, sans ajouts ni omissions)”. In the same document, students are encouraged to base the peer feedback they provide on accuracy, structure, language, presentation and posture (appendix 5).

Again, the first two concepts students are instructed to consider relate to content, and specifically to “main ideas and details”, so it is logical that their comments focus on accuracy and fidelity to the original speech. In fact, the corpus shows that long portions of feedback are devoted to detailing chronologically what was missed or distorted, as well as explaining


underlying concepts that were not understood from the original speech or were not rendered accurately in the interpretation, as can be seen in the fragment below:

“When I talked about the sharia law, actually in my speech that was just an example. I said there is always the, there's problems with the Islamic States, because they sometimes block debates. (..) And in your speech it was all about the sharia law.” (appendix 1).

5.2.2.2. Logical structure and links

Chart 5 - logical structure and links mentions in corpus

Although “logical structure and links” are also a main part of the content, they are not alluded to as ubiquitously as main ideas and details: they are mentioned in detail in 35.7% of instances and in passing in 21.4%, whereas in 42.9% of instances they are not mentioned at all. Links are the backbone of a speech: they are the elements that connect main ideas and therefore determine its overall meaning (Jones, 2014). If students are choosing, as seems to be the case, to prioritise content in their feedback, then it is interesting to note the gap between how often they comment on the restitution of main and secondary ideas and how often they comment on the relationships that tie those ideas together.


5.2.2.3. Terminology

The last aspect in the content category, “terminology”, is mentioned 85.7% of the time. It should be noted that, although terminology is one part of the speech that could in theory be prepared if the topic is known in advance, feedback on terminology shows no relation to feedback on preparation in this corpus.

Chart 6 - Terminology mentions in corpus: not mentioned 14.3%, mentioned in passing 35.7%, mentioned in detail 50%

5.2.3. Delivery

Delivery is a combination of presentation and posture: the impression somebody could form of a performance without knowing what the original says. Although it is admittedly subjective, some aspects come into play recurrently. For the purposes of this MA thesis, the focus is on “voice, confidence, pace and contact with audience” (see Appendix 5). According to the corpus analysis, confidence is the delivery aspect mentioned most often in peer feedback, followed by pace, while contact with audience and voice are not mentioned in any of the recorded feedback examples.


5.2.4. Confidence

Chart 7 - Confidence mentions in corpus

Out of the 14 feedback samples, confidence was mentioned five times (four in detail, one in passing) and not mentioned at all in the remaining nine. In the instances in which they mention confidence, students relate how convincing the performance was to how sure of themselves the interpreters seemed.


5.2.5. Pace of the interpretation

Chart 8 - Pace mentions in corpus

The pace of the interpretation is mentioned in 21.4% of instances, only 7.1% of them in detail. When it is mentioned, students use it to refer to the duration of the interpretation and to whether it was longer or shorter than the original speech.


5.2.6. Voice and contact with audience

Chart 9 - Voice and contact with audience

As chart 9 indicates, neither “voice” nor “contact with audience” was mentioned in any of the recorded and analysed peer assessments.

