Assessing AI: the academic integrity of using artificial intelligence (ChatGPT) in assessment
Keywords: ChatGPT, scientific reports, assessment, academic integrity
Writing scientific reports is a key skill that must be taught and developed in our undergraduate students. In December 2022, awareness of widely and readily available artificial intelligence language models such as ChatGPT rose sharply. This prompted academics teaching in the tertiary sector to reconsider how they could continue to assess skills such as scientific writing in an environment where these programs can generate large blocks of coherent text within minutes. To address this issue, we created a first-year assessment in which students critiqued an introduction for a scientific report written by ChatGPT (GPT-4) and then wrote their own results and discussion sections for that report. We synthesize how students critiqued the ChatGPT-written introduction with regard to structure, scientific writing style, use of scientific literature, and relevance of subject matter. With the exception of scientific writing style, our academic marker consensus group (n=21) judged that all other parts of the introduction could be improved. Many students appeared unable to critique the introduction appropriately and found most of its components satisfactory. We also discuss the academic integrity issues arising from the apparent inappropriate use of ChatGPT by some students in their discussion sections, which we identified through fabricated references in their reference lists. Given that artificial intelligence language models such as ChatGPT are not going away, we use our reflections on this assessment to propose a new way forward for teaching first-year undergraduate science students to be more skeptical in their use of ChatGPT for scientific report writing.