The National Academy of Medicine released a framework in May 2025 to guide the development and use of trustworthy, human-centered AI across healthcare, including telehealth and behavioral health settings.
“An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action” outlines 10 principles — engaged, safe, effective, equitable, efficient, accessible, transparent, accountable, secure and adaptive — and broader commitments to advancing humanity and ensuring equity, according to a Feb. 24 report. While not regulatory, the framework is intended to align stakeholders, such as developers, health systems, regulators and clinicians, around expectations for responsible AI use.
The release comes as AI tools — such as risk-screening algorithms, clinical documentation aids, conversational agents, and scheduling and triage systems — become more common in telehealth and behavioral health settings.
Here are seven suggestions from the organization for behavioral health leaders to consider:
- Identify whether and where AI is being used in screening, documentation, scheduling or patient communication.
- Revise consent forms to describe AI use and clinician oversight.
- Ask vendors for documentation on clinical validation, bias testing and monitoring processes before integrating AI into care.
- Monitor outcomes and patient satisfaction to identify disparities or unintended effects associated with AI tools.
- Support ongoing education related to AI ethics, governance and regulation.
- Account for equity and access through implementation strategies when deploying AI-enabled tools, particularly in telehealth and behavioral health settings.
- Ensure AI supports rather than replaces clinical judgment, maintaining strong human oversight, particularly in sensitive areas such as substance use treatment.
