

2.3. Data collection

Both quantitative and qualitative data were collected in 2019, as in previous cycles. Overall scores were downloaded directly from the Moodle gradebook and the VAM, and statistical analysis was performed in Excel, as shown in Table 1 below.

In addition, voluntary end-of-course student satisfaction surveys (supplementary materials, part B) were administered using the Questionnaire module in Moodle. Qualitative data from students was also collected through these surveys, while the Moodle Forum module collated qualitative data from the teacher focus group by recording weekly comments in a teaching journal.
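The statistical analysis itself was done in Excel; for illustration only, the same aggregation could be scripted as in the minimal pandas sketch below, which assumes a hypothetical gradebook export scores_2019.csv with per-rater score columns (all file and column names are illustrative, not taken from the actual course).

    # Minimal sketch of the Table 1 aggregation, assuming a hypothetical
    # gradebook export with columns: student, presentation, self_score,
    # peer_score, teacher_score (all names illustrative).
    import pandas as pd

    df = pd.read_csv("scores_2019.csv")

    # Average score (out of 100) for each rater type across presentations 1-5.
    averages = df[["self_score", "peer_score", "teacher_score"]].mean()

    # 'Variance with teacher' in the sense of Table 1: each rater type's
    # mean difference from the teacher's score.
    self_diff = (df["self_score"] - df["teacher_score"]).mean()
    peer_diff = (df["peer_score"] - df["teacher_score"]).mean()

    print(averages)
    print(f"Self variance with teacher: {self_diff:+.1f}")
    print(f"Peer variance with teacher: {peer_diff:+.1f}")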

Table 1. Self, peer, teacher, and overall assessment average scores (out of 100) with teacher variances

[Table 1 body garbled in extraction; recoverable structure: presentation average scores (presentations 1-5) by year, with columns 2014 (n=55~63), 2017 (n=34~49), and 2019 (n=34), and rows for self, peer, and teacher average scores together with self and peer variances from teacher scores.]

* In 2017, a timetable change required peer assessment to be dropped in order to reduce student workload.


Figure 1. VAM rubric and comment feedback interface

3. Discussion

Results from the online learner assessment scores are consistent with those of previous years. Students continued to score themselves lower than teachers on post-performance assessment tasks across all presentations. Averaged over the five presentations, self assessments were 8.2-9.0% lower than teacher assessments in the respective years, compared with a difference of only 1.3-1.5% for peer assessments.

Students did not try to inflate their scores; rather, they graded themselves more severely than their teachers did. This is consistent with the general tendency of Japanese students to rate themselves modestly (Hinkelman & Cotter, 2018). Because of this high variance between teacher and self ratings, self assessment scores were given a lower weighting (20%) than teacher scores (80%).
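As a worked example of this weighting (a sketch only; the function and example values below are illustrative, with the 20/80 split taken from the text):

    # Overall score under the 20% self / 80% teacher weighting described above.
    def overall_score(self_score: float, teacher_score: float) -> float:
        return 0.2 * self_score + 0.8 * teacher_score

    # e.g. a student who rates themselves 66 but receives 75 from the teacher:
    # 0.2 * 66 + 0.8 * 75 = 13.2 + 60.0 = 73.2
    print(overall_score(66, 75))  # 73.2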

The 2019 student satisfaction surveys (supplementary materials, part B) show that 92% of students responded positively (agree or strongly agree) to watching their own videos, 77% valued rating their own presentations, and 73% found value in classmates rating their presentations, which shows strong support for using the VAM tool for assessment and learning. 92% of students also regarded feedback from the teacher as helping them improve their presentations, which could reflect perceived teacher expertise, experience, or the comparatively more detailed rubric feedback given by teachers compared to classmates. Interestingly, the highest rate on the survey, 96%, came from students agreeing that watching their classmates' live presentations helped them improve their own.
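Rates like these follow directly from the Likert responses; a minimal sketch is shown below, assuming a hypothetical survey export survey_2019.csv with one column per item (file, column, and label names are illustrative).

    # Share of positive responses (agree or strongly agree) per survey item,
    # assuming each CSV column holds one item's Likert labels (illustrative).
    import pandas as pd

    responses = pd.read_csv("survey_2019.csv")
    positive = {"agree", "strongly agree"}

    for item in responses.columns:
        rate = responses[item].str.strip().str.lower().isin(positive).mean()
        print(f"{item}: {rate:.0%} positive")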

4. Conclusions

Over the ten cycles (years) of action research on this oral presentation course, the evolution of post-performance video watching, along with self and peer assessment, has proven to be a successful formative tool. This most recent 2019 cycle has been no different, with results showing that the VAM draws students into a more learner-focused mode of assessment, putting Gardner's (2012) theory of 'assessment for learning' into practice. Students reported that being part of the assessment process through using the tool had helped them improve for future performances. Taking the role of 'evaluators' by using the VAM ultimately requires students to first revisit the presentations by viewing the videos, then go through the cognitive process of scoring and giving feedback to their peers and to themselves, and finally reflect on all feedback received. We can also see that, as part of the assessment process, a complex rubric with specific criteria can be understood and used by intermediate-level students, in this case working in their L2, to evaluate video-recorded student performances in an oral presentation course. Although some cultural modesty was evident, students took the task seriously enough not to purposefully score themselves or their peers higher than their teachers did, nor to assign grades carelessly for lack of motivation or time. The convenience of being able to use the VAM during class or out of class, and the ability to create rubrics matched to the assessment criteria and level of the students, may have played a part in this.

It is our view that future cycles of this research need to concentrate on determining the most appropriate rubric language and length to match learners, and on investigating whether students themselves have ideas on how they would like to participate in the evaluation process.

5. Acknowledgments

We would like to thank all the teachers who have contributed to the teaching and curriculum of this course, the students themselves who took part in 'assessment for learning', and finally the plugin designers and programmers who have continually updated this tool for ever-changing video formats and standards.

6. Supplementary materials

https://research-publishing.box.com/s/w4ts3e0auk2pw6p60n8krb59tod4sxrd

References

Gardner, J. (2012). Assessment and learning. Sage. https://doi.org/10.4135/9781446250808

Hinkelman, D., & Cotter, M. (2018). Balancing real-time vs. post-performance feedback for EFL presentation classes. In P. Clements, A. Krause & P. Bennett (Eds), Language teaching in a global age: shaping the classroom, shaping the world. JALT.

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122.

Rian, J. P., Hinkelman, D., & Cotter, M. (2015). Self-, peer, and teacher rubric assessments of student presentation videos. In P. Clements, A. Krause & H. Brown (Eds), JALT2014 Conference Proceedings (pp. 688-697). JALT.

Rian, J. P., Hinkelman, D., & McGarty, G. (2012). Integrating video assessment into an oral presentation course. In A. Stewart & N. Sonda (Eds), JALT2011 Conference Proceedings (pp. 416-425). JALT.