Assessing the quality of peer feedback in an online peer learning community
Keywords:
Peer review, evaluative judgement, learning community

Abstract
A goal of higher education is the development of students' evaluative judgement, that is, the capacity to make judgements about the quality of work done by oneself and others. Over-reliance on teachers' judgements, for example by focusing on grades, can inhibit students from developing this ability. Peer review is one approach that can develop evaluative judgement, but an issue that may prevent its implementation is the perceived quality of peer feedback. We used an online platform, PeerWise, in a second-year genetics course to investigate the quality of peer feedback and whether evaluative judgement was evident. PeerWise enables students to write, comment on, answer and rate multiple-choice questions. We found it necessary to allocate some marks as an incentive for participation, but 5% of the course grade resulted in high engagement with minimal intervention from the convenor. This set-up allowed students to make and share judgements anonymously in a low-stakes context. Maximum marks were awarded for writing four accurate questions, covering at least three of the four content modules, and submitting four meaningful comments. This resulted in 959 questions and 1529 comments from 248 students. A qualitative analysis of all comments led to the identification of five criteria used to assess questions: knowledge, clarity, complexity, engagement and explanation. Students demonstrated the ability to make complex and reasonable judgements about question quality, invoking multiple criteria. They made constructive suggestions for improvement and reflected on their own learning, consistent with the development of evaluative judgement.