The FDA’s Digital Health Advisory Committee met to address generative AI in patient-facing mental health applications, building on prior 2024 discussions on broader AI-enabled medical devices.
Here are nine things to know, according to a Nov. 6 report from the agency:
- The committee meeting focused on generative AI-enabled digital mental health medical devices — such as “AI therapists” — that may diagnose or treat psychiatric conditions.
- Nearly 58 million adults in the U.S. have been diagnosed with mental illness. The share of patients with mental health diagnoses rose 39.8% from 2019 to 2023, according to the report.
- Most AI-powered mental health tools on the market are not regulated by the FDA; many are marketed as wellness apps rather than medical devices.
- As of the meeting date, the FDA had not authorized any AI-enabled medical devices for mental health use, though it has authorized more than 1,200 AI-enabled devices in other areas.
- Generative AI mental health devices pose novel risks, including hallucinations, biased content and symptom worsening, especially without human oversight.
- Devices can reach the market through the premarket approval, 510(k) or De Novo pathways, depending on risk. Digital therapeutics and diagnostics typically are prescription devices.
- Evaluating performance requires rigorous testing, especially for chatbot-like products. Postmarket monitoring for misuse is critical to safety.
- The FDA is exploring Predetermined Change Control Plans to allow safer updates to adaptive AI models without full re-review, while maintaining guardrails.
- The FDA asked the committee for guidance on best practices for evidence, trial design, over-the-counter use, integration of diagnostics and therapeutics, and safety risks for children.
