Safe and responsible AI in health services
Abstract
Artificial Intelligence (AI) technologies have the potential to transform care delivery, yet their adoption in health service organisations remains slow. A major challenge to the successful implementation and meaningful adoption of AI applications is their governance at the health service level. Robust governance is crucial not only to ensure safe and effective deployment but also to foster clinician trust, which is essential for driving adoption and improving care delivery and patient outcomes. Currently, there is no guidance for Australian health services on how to govern the safe and responsible implementation and use of AI. While numerous theoretical frameworks for AI ethics exist, there is limited literature on how these frameworks are operationalised in health services. Compounding this, organisations vary in technical maturity, and AI encompasses diverse computational reasoning methods, including traditional and generative AI, which can be applied across a wide range of clinical and non-clinical domains. This project developed and tested a framework for the governance of AI applications at the health service level, piloting it on three real-world AI use cases. It is based on a collaboration between Macquarie University, Alfred Health and the Digital Health CRC. The presentation will cover the development and integration process of the AI governance framework, which provides oversight of AI applications at the health service level and covers research, clinical and non-clinical operational applications across the Alfred Health network. This network includes three hospital campuses, large-scale community programs and 18 statewide services.
Published
2025-09-29
Section
Oral Presentations