Are students reading my feedback? Using a feedback analytics capture system to understand how large cohorts of biomedical science students use feedback

Kirsten Zimbardi, Andrew Dekker, Andrea Bugarcic, Kay Colthorpe, Prasad Chunduri, Judit Kibedi, Lesley Lluka, Craig Engstrom, Peter Worthy, Phil Long

Abstract


Feedback is one of the most potent teaching strategies known to produce student learning gains (Hattie, 2009). However, the provision of feedback has been identified as one of the weakest elements of university practice (Graduate Careers Australia, 2012). Although there are many theoretical frameworks for improving feedback provision (Hattie & Timperley, 2007; Nicol & Macfarlane-Dick, 2006; Sadler, 2010), little is known about how students actually use feedback (Jonsson, 2013). Many authors contend that students commonly ignore feedback (Boud & Molloy, 2013), with some empirical evidence that students do not collect or read written feedback (Sinclair & Cleland, 2007), or ignore it when they do not understand what it means (Still & Koerber, 2010). The increasingly widespread adoption of online marking and feedback tools facilitates students’ access to their feedback, but until now there has been no systematic characterisation of the patterns of student access to this feedback, nor of how such access impacts their subsequent performance (Ellis, 2013).

We have developed, and extensively trialled, a Feedback Analytics Capture System (FACS, previously called UQMarkUP) which synthesises large-scale data on digital feedback provision, how students access feedback, and changes in students’ academic performance (Zimbardi et al., 2013). Specifically, FACS captures detailed information about the audio, typed and hand-drawn annotations that markers insert in situ in electronic assessment submissions, and the marks awarded across a variety of systems, including detailed criteria-standards rubrics. FACS also collects detailed information about how students access this feedback, logging the timing and nature of every mouse click a student uses to interact with the feedback-embedded document.
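To illustrate the kind of click-level access record such a system might log, the following is a minimal hypothetical sketch; the field names and structure are our own illustration, not the actual FACS schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record of one student interaction with an annotated report.
# Field names are illustrative only; they do not reflect the real FACS schema.
@dataclass
class FeedbackAccessEvent:
    student_id: str        # anonymised student identifier
    report_id: str         # which assessment document was opened
    annotation_type: str   # e.g. "audio", "typed", or "hand-drawn"
    action: str            # e.g. "open", "play", "scroll", "close"
    timestamp: str         # ISO 8601 UTC time of the mouse event

def make_event(student_id: str, report_id: str,
               annotation_type: str, action: str) -> FeedbackAccessEvent:
    """Create a timestamped access event for later aggregation."""
    ts = datetime.now(timezone.utc).isoformat()
    return FeedbackAccessEvent(student_id, report_id, annotation_type, action, ts)

event = make_event("s001", "level1_report1", "audio", "play")
print(asdict(event)["action"])  # "play"
```

Aggregating timestamped events of this shape per student and per report is what would allow the frequency, duration, and timing analyses reported below.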

In this exploratory study, we investigated the frequency, timing, and patterns in how students access their feedback. Analyses of FACS data from laboratory reports submitted for summative assessment in two biomedical science courses at level 1 (n=1781 students) and level 2 (n=389), in Semesters 1 and 2, 2013, revealed that the vast majority of students opened their feedback. In the level 1 course, 93% of students opened their feedback on Report 1, 92% on Report 2, 87% on Report 3 and 85% on Report 4. In contrast, far fewer students in the level 2 course opened their feedback, and fewer students opened Report 1 (68%) than Report 2 (82%). Although a similar declining pattern existed for how long level 1 students had their feedback open (Report 1: 12±8 hours; Report 2: 3.4±1.6 hours; Report 3: 2.1±1.4 hours; Report 4: 43±7 minutes), in the level 2 course the pattern reversed, with longer interaction on Report 1 (5.6±0.6 hours) than Report 2 (1.2±0.3 hours). The number of students accessing feedback surged 1-2 days after feedback release, followed by a persistent tail of students accessing the feedback over the subsequent two months. In this context of undergraduate biomedical science laboratory assessments, students are not only collecting and reading their feedback, but interacting with it extensively. There may also be maturational, course-specific, and interaction effects that shape feedback use; these require further exploration as we expand this feedback analytics approach across a broader range of educational contexts.
