Leveraging ChatGPT to develop plain language health information

Authors

  • Julie Ayre University of Sydney
  • Olivia Mac University of Sydney
  • Kirsten McCaffery University of Sydney
  • Brad McKay University of Sydney
  • Mingyi Liu University of Sydney
  • Yi Shi University of Sydney
  • Atria Rezwan University of Sydney
  • Adam Dunn University of Sydney

Abstract

Background: Most health information does not meet the health literacy needs of our communities. Tools such as ChatGPT may help health information providers more easily develop health information that is written in plain language. This study aimed to investigate the capacity of ChatGPT to produce plain language versions of health texts.

Methods: ChatGPT was prompted to ‘rewrite the text for people with low literacy’ across 26 texts from reputable health websites. Researchers captured three revised versions of each original text and assessed them for grade reading score, the proportion of the text containing complex language (%), the number of instances of passive voice, and subjective ratings of key messages retained (%).

Results: On average, original texts were written at Grade 12.8 (SD=2.2) and revised to Grade 11.0 (SD=1.2), p<0.001. Original texts contained on average 22.8% complex words (SD=7.5%) compared to 14.4% (SD=5.6%) in revised texts, p<0.001. Original texts had on average 4.7 passive voice constructions (SD=3.2) compared to 1.7 (SD=1.2) in revised texts, p<0.001. On average, 80% of key messages were retained (SD=15.0). Original texts that were more complex showed greater improvement than less complex original texts.

Conclusions: This study used multiple objective assessments of health literacy to demonstrate that ChatGPT can simplify health information while retaining most key messages. Human oversight is needed to ensure safety, accuracy, completeness, and effective application of health literacy guidelines.
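The abstract does not state which software computed the readability metrics, but grade reading scores of this kind are typically Flesch-Kincaid-style formulas, and "complex words" are often defined as words of three or more syllables. As a rough, hypothetical sketch (the syllable heuristic and thresholds below are assumptions, not the study's actual method), such metrics can be computed as:

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discounting a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # treat a trailing 'e' as silent
    return max(n, 1)  # every word has at least one syllable


def readability_stats(text: str) -> dict:
    """Return a Flesch-Kincaid-style grade level and % of complex words.

    'Complex' here means three or more syllables (an assumed threshold).
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    # Flesch-Kincaid grade level formula
    grade = (0.39 * (len(words) / len(sentences))
             + 11.8 * (sum(syllables) / len(words))
             - 15.59)
    complex_pct = 100 * sum(1 for s in syllables if s >= 3) / len(words)
    return {"grade": round(grade, 1), "complex_pct": round(complex_pct, 1)}
```

For example, a plain sentence such as "The cat sat." scores far lower than a clinically worded one such as "Cardiovascular complications necessitate immediate hospitalization." Published readability tools use more refined syllable counting, so exact scores will differ from this sketch.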

Published

2025-01-23

Section

Oral Presentations