New Metrics for Analysing Multiple-choice Questions
A Window into Examination Design and Curriculum Alignment
DOI: https://doi.org/10.30722/IJISME.32.05.004

Abstract
While the ideal of constructive alignment in curriculum is well established, and the importance of evaluating learning and teaching is well known, evaluating assessments remains a complex task, and gaps can arise between learning outcomes, learning activities, and assessments.
Our study outlines an innovative way of analysing multiple-choice question (MCQ) examinations that reveals possible weaknesses in examination design and gaps in curriculum alignment. Individual examination questions are analysed using the traditional metrics of the Discrimination Index (DI) and Difficulty Index (DIFF), combined with two novel metrics: the Grade Inversion Score (GIS) and the Association with Total Score (ATS).
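For readers unfamiliar with the two traditional metrics, a minimal sketch follows, assuming their standard textbook definitions (DIFF as the proportion of correct responses; DI as the difference in item facility between upper and lower total-score groups, conventionally the top and bottom 27%). The paper's exact computation, and its novel GIS and ATS metrics, are defined in the full text, not here.

```python
# Sketch of the two traditional item-analysis metrics named in the abstract,
# using standard textbook definitions (an assumption; the paper's own
# computation may differ in detail).

def difficulty_index(item_scores):
    """Proportion of students answering the item correctly (0 = hard, 1 = easy)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, group_frac=0.27):
    """Difference in item facility between top and bottom scoring groups.

    Students are ranked by total examination score; the upper and lower
    groups each contain group_frac of the cohort (27% is conventional).
    """
    n = len(total_scores)
    k = max(1, int(n * group_frac))
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    p_upper = sum(item_scores[i] for i in upper) / k
    p_lower = sum(item_scores[i] for i in lower) / k
    return p_upper - p_lower

# Hypothetical cohort: 1/0 correctness on one MCQ item, plus exam totals.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [95, 88, 82, 40, 76, 35, 30, 70, 45, 25]
print(difficulty_index(item))            # 0.5 (half the cohort correct)
print(discrimination_index(item, totals))  # 1.0 (item separates groups well)
```

A DI near zero (or negative) flags a question that stronger students answer no better than weaker ones, which is the kind of design weakness the abstract's analysis is intended to surface.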
Focusing on examination marks from a data science examination at The University of Sydney, we perform two investigations. First, we identify poorly designed questions using DI, DIFF, GIS and ATS. Second, as multiple questions emerge as deficient on these metrics, we explore specific areas of the curriculum where course material may be misaligned with the learning outcomes.
Our analysis provides a simple visual way for examiners to inspect the validity of an examination design and encourages an evidence-based approach to exploring curriculum alignment using multiple-choice assessments.