New Metrics for Analysing Multiple-choice Questions

A Window into Examination Design and Curriculum Alignment

Authors

  • Diana Warren, University of Sydney
  • Joshua Pople, University of Sydney
  • Rashika Agarwal, University of Sydney

DOI:

https://doi.org/10.30722/IJISME.32.05.004

Abstract

While the ideal of constructive alignment in curriculum design is well established, and the importance of evaluating learning and teaching is widely recognised, evaluating assessments remains a complex task, and gaps can arise between learning outcomes, learning activities and assessments.

Our study outlines an innovative way of analysing multiple-choice question (MCQ) examinations that reveals possible weaknesses in examination design and gaps in curriculum alignment. Individual examination questions are analysed using the traditional Discrimination Index (DI) and Difficulty Index (DIFF), combined with two novel metrics: the Grade Inversion Score (GIS) and the Association with Total Score (ATS).
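The abstract does not reproduce the formulas for these metrics. DIFF and DI do, however, have standard psychometric definitions (the proportion of students answering an item correctly, and the difference in that proportion between high- and low-scoring groups), so a minimal sketch of those two is given below, assuming a binary students-by-questions score matrix; the function names and the conventional 27% group split are illustrative assumptions, and the novel GIS and ATS metrics are omitted because their definitions appear only in the full paper.

```python
import numpy as np

def difficulty_index(responses: np.ndarray) -> np.ndarray:
    """DIFF: proportion of students answering each question correctly.

    `responses` is a (students x questions) matrix of 0/1 item marks;
    higher values indicate easier questions.
    """
    return responses.mean(axis=0)

def discrimination_index(responses: np.ndarray, group_frac: float = 0.27) -> np.ndarray:
    """DI: classic upper-lower discrimination index for each question.

    Proportion correct among the top-scoring students minus the
    proportion correct among the bottom-scoring students; the 27%
    split is a common convention, not necessarily the paper's choice.
    """
    totals = responses.sum(axis=1)                 # each student's total mark
    order = np.argsort(totals)                     # students sorted low to high
    n_group = max(1, round(group_frac * responses.shape[0]))
    lower = responses[order[:n_group]]             # bottom group's answers
    upper = responses[order[-n_group:]]            # top group's answers
    return upper.mean(axis=0) - lower.mean(axis=0)
```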

Focusing on marks from a data science examination at The University of Sydney, we perform two investigations. First, we identify poorly designed questions using DI, DIFF, GIS and ATS. Second, since several questions emerge as deficient on these metrics, we explore specific areas of the curriculum where the course material may be misaligned with the learning outcomes.

Our analysis provides a simple visual way for examiners to inspect the validity of an examination design and encourages an evidence-based approach to exploring curriculum alignment using multiple-choice assessments.
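The abstract does not specify the visualisation itself; one plausible rendering, sketched below on simulated marks, is a per-question scatter of DIFF against DI, with questions falling below a rule-of-thumb DI floor of 0.2 (a common psychometric convention, not a threshold taken from the paper) flagged for inspection.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated 0/1 marks: 200 students, 30 questions of varying difficulty.
rng = np.random.default_rng(0)
p_correct = rng.uniform(0.3, 0.9, size=30)
responses = (rng.random((200, 30)) < p_correct).astype(int)

# DIFF and DI as in the sketch above, inlined for self-containment.
diff = responses.mean(axis=0)
totals = responses.sum(axis=1)
order = np.argsort(totals)
n = max(1, round(0.27 * len(totals)))
di = responses[order[-n:]].mean(axis=0) - responses[order[:n]].mean(axis=0)

fig, ax = plt.subplots()
ax.scatter(diff, di)
ax.axhline(0.2, linestyle="--", color="grey")    # rule-of-thumb DI floor
for q, (x, y) in enumerate(zip(diff, di)):
    if y < 0.2:                                  # label questions below the floor
        ax.annotate(f"Q{q + 1}", (x, y))
ax.set_xlabel("Difficulty Index (proportion correct)")
ax.set_ylabel("Discrimination Index")
ax.set_title("Per-question DIFF vs DI")
plt.show()
```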

Author Biographies

  • Joshua Pople, University of Sydney

Summer Scholar and Honours Student, School of Mathematics and Statistics.

  • Rashika Agarwal, University of Sydney

Summer Scholar, School of Mathematics and Statistics.

Published

17-09-2024

Section

Research Articles