Peer marking of talks in a small, second-year biosciences course
Abstract

Peer assessment is one way to motivate students to take responsibility for their learning by increasing engagement with the assessment task (Orsmond, 2004). Peer marking extends the learning possibilities of an assessment task by facilitating “learning by teaching”: students learn not just from their own effort, but from the efforts of those whose work they are assessing (Topping, 1998). The current study reports the implementation and evaluation of peer marking of short talks by second-year science students. Peer assessment is of particular relevance here. First, the objectives of the assignment explicitly state that the target audience for the talk is “your colleagues in this class”, so peer marking validates the assessment. Second, one aim of the assessment was to develop strong scientific oral communication skills, so peer marking encouraged students to engage with multiple talks and hence better understand the components of a good talk. Previously, two staff members marked each talk and the average of their marks was the final mark. Now, each talk is marked by a single staff member and by the other students, using a detailed rubric designed specifically for the assessment. The final mark for each talk comprises 50% staff mark and 50% average peer mark. The staff mark thus balances any concerns about the peer mark, while the peer mark contributes to fairness by maintaining multiple markers for every talk. Students’ experience of the peer marking exercise was recorded by a short survey administered after the talk marks were returned. Students were generally satisfied with the process (certainly there were no complaints) and found the rubric useful.
However, they were ambivalent about whether peer marking increased their engagement with the talks: some reported that marking talks forced them to pay more attention, while others stated that having to mark a talk distracted them from learning anything by listening to it. Nevertheless, some students reported that marking the work of others helped them understand what was required, and predicted the experience would be of benefit when they next had a similar assessment task. Comparison of peer marks with staff marks shows that staff and peers generally ranked talks in the same order, indicating that the assessment was reasonably reliable. However, talks marked highly by the staff marker received an average peer mark lower than the staff mark, while talks marked low by the staff marker received an average peer mark higher than the staff mark. This was because peer marks were generally more clustered than staff marks: the difference between the highest and lowest peer marks across talks was less than the corresponding difference for staff marks. Overall, implementation of peer marking of talks was smooth, the marks are sufficiently reliable for a low-stakes assessment piece, and students were either positive or neutral; talks in these courses will continue to be marked by peers plus a single staff member.