Reliability of grading using a rubric versus a traditional marking scheme in statistics



assessment, rubrics, reliability, statistics


Assessment grading in statistics and mathematics has often been approached in an ad hoc manner, using marking schemes that attach marks to specific steps of a model solution and often make no explicit reference to assessment criteria. An alternative approach is to grade using rubrics. Rubrics are widely recognised to offer several advantages for assessment, but research on the reliability of grading with rubrics is equivocal and has mostly been conducted in less quantitative disciplines. We present a direct comparison of the reliability of grading a written statistics assignment using a rubric versus the traditional marking scheme approach. Using a Bayesian statistical analysis, we find that both methods yield similar levels of inter-rater and intra-rater reliability.