DISTRACTING DISTRACTORS: EXPLORING THE ROLE OF OPTIONS IN A MULTIPLE-CHOICE QUESTION EXAM

Authors

  • Rashika Agarwal, University of Sydney
  • Di Warren
  • Joshua Pople

Keywords:

Distractors, Multiple-Choice Questions, Assessment

Abstract

While basing an assessment on multiple-choice questions (MCQs) makes it efficient to deliver and mark, designing the questions is surprisingly complex. Choosing distractors that are educationally rich, rather than arbitrary alternatives, requires care and rigour. However, the literature on distractor design is fairly limited (Gierl et al., 2017), with existing methods focusing on distractor efficiency and plausibility.


Our study aims to categorise multiple-choice options (the correct answer and the distractors) based on their popularity among students of different grades (F, P, CR, DI, HD), so that we can understand how each option functions in context. Using results from the final examination of a large first-year data science course (N = 1234), we propose three methods of labelling distractors: Proportional Option Labelling (POL), Weighted Option Labelling (WOL), and Weighted Distractor Labelling (WDL). A non-functioning distractor (NFD) category is defined for options chosen by less than 5% of the cohort.
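As a rough illustration of the NFD criterion and of breaking option popularity down by grade band (a minimal sketch only, not the POL/WOL/WDL definitions from the full paper; the data and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical responses to one question: each row is a student's grade band
# and the option (A-D) they selected.
responses = pd.DataFrame({
    "grade":  ["HD", "DI", "CR", "P", "F", "HD", "P", "F"],
    "option": ["A",  "A",  "B",  "C", "D", "A",  "B", "C"],
})

NFD_THRESHOLD = 0.05  # options chosen by < 5% of the cohort are non-functioning

# Overall selection rate of each option across the whole cohort
option_rates = responses["option"].value_counts(normalize=True)
non_functioning = option_rates[option_rates < NFD_THRESHOLD].index.tolist()

# Popularity of each option within each grade band (each row sums to 1)
by_grade = pd.crosstab(responses["grade"], responses["option"], normalize="index")

print("Non-functioning distractors:", non_functioning)
print(by_grade.round(2))
```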


For each method, we study student performance, including how students at different grade levels select options, revealing the different types of confusion they experience. Moreover, this provides insight into the effect of changes to the curriculum or teaching style, by tracking whether students avoid "weak" distractors over time. Finally, the framework can be used to flag poorly designed questions when the correct answer is not primarily selected by HD/DI students, since we expect most high-performing students to answer correctly.
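A minimal sketch of such a flagging rule, assuming a simple criterion (the correct option should be the most popular choice among HD/DI students); the data, function name, and threshold choice are illustrative, not the paper's exact procedure:

```python
import pandas as pd

# Toy responses to one question (hypothetical data)
responses = pd.DataFrame({
    "grade":  ["HD", "HD", "DI", "CR", "P", "F"],
    "option": ["A",  "B",  "B",  "C",  "D", "D"],
})

def flag_question(df: pd.DataFrame, correct_option: str) -> bool:
    """Flag the question if HD/DI students' most popular choice is not the key."""
    high = df[df["grade"].isin(["HD", "DI"])]
    if high.empty:
        return False
    return high["option"].value_counts().idxmax() != correct_option

# Here HD/DI students favour option B, so a question keyed "A" would be flagged.
print(flag_question(responses, correct_option="A"))  # True
```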


Comparing the three methods reveals their relative strengths and limitations, and suggests more complex approaches, such as identifying the features of a particular distractor category or finding patterns across questions to explore potential relationships with question difficulty. Our analysis enables a deeper understanding of MCQs and of student behaviour across proficiency levels, supporting improved question design and allowing us to learn more about the cohort.

Published

2025-09-22