BEYOND THE GRADEBOOK: ASSESSING LARGE STUDENT COHORTS AND REIMAGINING THE FUTURE OF ASSESSMENT IN THE AGE OF AI

Authors

Keywords:

Large-scale teaching, artificial intelligence, assessment design, student engagement, academic integrity

Abstract


How does academic integrity shift in an era where text generated by Artificial Intelligence (AI) is increasingly difficult to distinguish from student-authored work? How do we design assessments that remain meaningful and engaging for thousands of students? The University of Melbourne's first-year subject Today's Science, Tomorrow's World, Australia's largest undergraduate science subject, has provided a unique opportunity to investigate these questions at scale. Since launching in 2022, the subject has enrolled over 13,000 students across Semesters 1 and 2, supported each year by more than 70 tutors and 25 academic staff.


Our results draw on diverse datasets, including Turnitin AI detection and plagiarism scores, Learning Management System (LMS) engagement metrics, student reflections, feedback surveys, tutor evaluations, and assessment rubric data. We examined trends in engagement, academic integrity, and student experience to identify practical implications for large-scale assessment. Rubric-level analysis explored how AI-assisted submissions performed across different types of assessment criteria, from those requiring factual explanation to those demanding originality and critical thinking. Several patterns emerged across semesters. While AI detection tools provided useful signals, manual review was essential to ensure fair interpretation. Online engagement metrics showed varied alignment with the quality and originality of submitted work, suggesting a more complex relationship than expected. Structured reflection and AI-based interpretation tasks encouraged deeper thinking but proved most effective when supported by clear scaffolding and active tutor guidance. What we have learned continues to inform our approach to assessment, workshop facilitation, and tutor training in large cohorts.


Our findings offer practical insights drawn from thousands of student assessments, supporting both academic integrity and the broader learning experience. They contribute to the continuous improvement of reflective tasks, active learning activities, AI-integrated exercises, and group investigations. This work aligns with the need for thoughtful AI integration within LMS-supported systems (Alotaibi, 2024), and supports research advocating for ethical, equitable, and sustainable approaches to AI use in science education (Almasri, 2024; Kamalov, Santandreu Calonge & Gurrib, 2023). Our ongoing goal is to foster meaningful learning habits and develop assessment and engagement models that prioritise process over product.


Published

2025-09-22