EMBRACING, NOT AVOIDING, GENERATIVE AI TO ENHANCE SCIENTIFIC WRITING EDUCATION
Keywords:
Assessment, Generative AI, Scientific Writing, Feedback
ABSTRACT
PROBLEM
The rise of generative AI (GenAI) tools is reshaping how students in higher education approach writing and feedback. While these tools have powerful potential to support student learning, they also pose significant challenges to the authenticity and effectiveness of traditional assessment practices. Studies indicate that the use of GenAI in assessment is largely undetectable (Scarfe et al., 2024), which places renewed responsibility on educators to design assessments that both assure learning and model ethical engagement with emerging technologies.
PLAN
Integrating GenAI into assessment design acknowledges its widespread use and helps create a more equitable learning environment, ensuring all students have access to, and support in using, these tools effectively and ethically. A process-focused assessment that requires students to use GenAI to edit their writing provides both an opportunity to teach effective and ethical GenAI use and a more transparent window into student thinking to assure learning outcomes.
ACTION
Educating students on the ethical use of AI through a ‘teach me’ approach to improving scientific writing formed the basis of a series of workshops in a third-year Cell and Molecular Biology class, built on a six-week laboratory project. The workshops and associated assessment deliberately shifted the focus from a final product to the process of learning through prompting, revising, and reflecting. In the first stage, students produced a short scientific report in pairs and provided feedback to other groups based on a simple rubric. The second stage involved prompt engineering at various ‘levels’ of complexity (‘do’, ‘evaluate’ and ‘teach’) to obtain GenAI feedback on scientific writing. Lastly, students compared and reflected on human and GenAI feedback and edited their scientific report accordingly. The submitted assessment included a human feedback summary, GenAI prompting logs, GenAI transcripts, a revised report with tracked changes, and a critical reflection on human versus GenAI feedback.
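For illustration, prompts at the three levels might take forms such as the following (hypothetical examples, not the exact wording used in the workshops): a ‘do’ prompt asks the model to perform the task (“Rewrite this paragraph to improve its clarity and scientific tone”); an ‘evaluate’ prompt asks for judgement against criteria (“Assess this paragraph against the attached rubric and identify its strengths and weaknesses”); and a ‘teach’ prompt asks the model to coach rather than correct (“Explain the principles of scientific writing this paragraph breaches and guide me to improve it myself, without rewriting it for me”).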
REFLECTION
The majority of students reflected that GenAI provided better feedback than their peers, particularly at the contextualised ‘teach me’ level, and commented that they would use this method again in future writing assessments. Integrating workshops alongside practicals provided the scaffold to teach students how to use GenAI effectively and ethically, improving confidence and equity while maintaining assessment integrity. One area identified for improvement was the peer-marking component, which would benefit from a more clearly defined rubric to improve the quality of human feedback and the diversity of marks. It would also be beneficial to embed this design in other subjects across degree programs at UOW and beyond, and to increase the specificity of the student survey to strengthen the data collected.
REFERENCES
Scarfe, P., Watcham, K., Clarke, A., & Roesch, E. (2024). A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study. PLoS ONE, 19(6), e0305354. https://doi.org/10.1371/journal.pone.0305354