Trialling criterion-referenced marking in an undergraduate statistics unit
Abstract
Feedback on trials of a criterion-referenced marking approach was sought from both markers and the students assessed. Preliminary results indicate that the feature of a marking rubric most valued by markers is the availability of very detailed, task-specific descriptors; students identified feedback as the most valuable feature. The context of this study is an assessment task in a large, first-year statistics unit that involves basic statistical analysis and the submission of a written scientific report with a formal structure. The workload of marking these reports is considerable and involves many tutors. There are two purposes for developing a criterion-referenced marking rubric. The first is tutor focused: the rubric aims to (i) standardise the marking, (ii) provide consistency between markers, and (iii) increase the efficiency of marking. The second is student focused: the rubric (i) signals to students, before the assessment task, what is expected of them, and (ii) provides feedback to students about their individual achievement afterwards. Critique from tutors after each trial informed the continuing refinement of the rubric. Detailed descriptors were developed for each achievement level, specific to each part of the task and to each assessment criterion. Tutor feedback on the detailed rubric has been extremely positive. Marking has been quicker and more consistent between tutors than in previous semesters. These improvements appear to be due to a common acceptance of what was expected at each level of achievement; less individual tutor decision making on standards was observed. Tutors also used the rubric to give feedback to students by circling the descriptors appropriate to each student's work.
Published: 2014-09-04
Section: Abstracts