Salt Lake City-based Huntsman Mental Health Institute has published a framework designed to guide the ethical development and deployment of AI systems used in healthcare.
The framework — Scalable Agile Framework for Execution in AI, or SAFE AI — was published in the Journal of Medical Internet Research, according to a March 12 news release from the institute, which is part of University of Utah Health. It is intended to help organizations integrate ethical safeguards directly into AI development workflows.
The framework was developed in collaboration with healthcare AI partners and provides guidance for small and medium-size enterprises building medical AI technologies. It aims to help organizations identify and mitigate potential bias in AI systems before the tools are used in patient care.
SAFE AI also includes processes for monitoring "bias drift," evaluating performance across patient subgroups and communicating the limitations of AI tools to clinicians.
“AI is increasingly shaping how clinicians make decisions in mental health care, from crisis triage to treatment recommendations,” Warren Pettine, MD, a researcher at the institute and senior author of the publication, said in the release. “With SAFE AI, we provide a roadmap that ensures these systems are not only effective but also fair, transparent, and continuously monitored.”
