Do science students graduate knowing what they know and don’t know: The case of quantitative skills

Authors

  • Kelly E. Matthews
  • Peter Adams
  • Merrilyn Goos

Abstract

In the sciences, the undergraduate curriculum has often come under scrutiny from scientists such as Carl Wieman and Jo Handelsman for being content-focused, with rote-learning assessment that does not prepare graduates for the scientific workforce. There has been an international push to reform undergraduate science education to better align curricula with the capabilities required of modern scientists. In Australia, the focus has been on degree program curricula through the Science Threshold Learning Outcomes, which are intended to guide holistic curriculum development so that students graduate with the skills, knowledge and attributes needed of modern-day scientists. Agreeing on science graduate outcomes was an essential first step for the Australian science higher education sector. As Beverley Oliver's Assuring Graduate Outcomes guide for the Office for Learning and Teaching indicates, the challenge now is to assess these outcomes. Assessing such outcomes is difficult because graduate-level learning outcomes are complex, inextricably linked with disciplinary contexts, and instruments do not exist for most outcomes.

For students to be motivated, engaged and ready for the workforce, whether in a science-related career or not, we need to understand what students can do and what they know they can do. In the context of a particular graduate learning outcome, we pose the question: do students actually know what they know? We initiated a pilot study with 211 final-year biomedical science students to investigate students' ability to effectively self-assess their acquisition of quantitative skills. The Quantitative Skills Assessment of Science Students instrument was developed to gather these data; it comprised questions drawn from existing performance assessment tasks developed through National Science Foundation-funded projects (ARTIST, CAOS and MathBench). In total, the instrument included 35 questions across mathematical (10 questions) and statistical (25 questions) topics. The questions were further organised into sub-topics that examined students' understanding of quantitative skills typical of the biosciences (e.g. serial dilutions, probability, metric conversions, correlation and causation). After completing the questions for each sub-topic, students were asked to indicate their level of task-specific confidence on a four-point Likert scale. Bandura's task-specific self-efficacy theory was adopted to measure students' self-assessment via the confidence scale.

We explored the alignment between students' performance and confidence, drawing on Sadler's notion of evaluative expertise to interpret the results. Performance scores and confidence indicators were categorised, and each student was allocated to one of four categories: high performance-high confidence, high performance-low confidence, low performance-low confidence and low performance-high confidence. Overall, approximately half of the students fell into one of the two aligned categories, high performance-high confidence or low performance-low confidence, suggesting they possessed reasonable evaluative expertise (i.e. they self-assessed effectively). Analysis by sub-topic showed wider distributions across the categories. Findings will be presented along with broad implications for how the science higher education sector might begin to tackle the challenge of assessing graduate outcomes and assuring that graduates are aware of the learning outcomes gained during their undergraduate degree programs.
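
The abstract does not report the cut-off values used to split performance and confidence into "high" and "low". The short Python sketch below illustrates one way such an allocation could be computed; the 50% performance cut-off and the confidence cut-off of 3 on the four-point scale are hypothetical assumptions for demonstration, not values taken from the study.

```python
# Illustrative sketch only: the cut-offs below are assumed, not reported in the paper.

def categorise(performance: float, confidence: float,
               perf_cutoff: float = 0.5, conf_cutoff: float = 3.0) -> str:
    """Allocate a student to one of the four performance-confidence categories.

    performance: proportion of the 35 questions answered correctly (0.0-1.0)
    confidence:  mean self-reported confidence on the four-point scale (1-4)
    """
    perf = "high performance" if performance >= perf_cutoff else "low performance"
    conf = "high confidence" if confidence >= conf_cutoff else "low confidence"
    return f"{perf}-{conf}"

def is_aligned(category: str) -> bool:
    """Aligned categories (high-high, low-low) indicate effective self-assessment."""
    return category in ("high performance-high confidence",
                        "low performance-low confidence")

# Hypothetical example: a student scoring 80% with mean confidence 2.5 falls
# into the misaligned high performance-low confidence cell.
cat = categorise(0.8, 2.5)
print(cat, "| aligned:", is_aligned(cat))
```

Under this scheme, the study's headline result corresponds to roughly half of the 211 students landing in the two aligned cells.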

Published

2014-09-04