State regulation of mental health and AI: 4 things to know


A total of 143 bills across all 50 states were found to affect mental health-related AI, but only 28 explicitly mentioned mental health; the remainder had substantial or indirect implications. Just 20 of the bills were enacted across 11 states as of May 2025, according to a legislative review published in JMIR Mental Health.

Researchers from Boston-based Beth Israel Deaconess Medical Center analyzed legislation introduced in all 50 states between Jan. 1, 2022, and May 19, 2025. Using the LegiScan database, they screened 793 bills referencing AI and health-related terms. Bills were categorized by their relevance to mental health AI using a four-tiered system and tagged by topic. Legally trained reviewers conducted the final reviews and quality control to ensure consistency.

Outliers include California, with 19 bills fulfilling the inclusion criteria, and the 12 states with none: Oregon, Michigan, Kansas, Tennessee, Idaho, Iowa, Delaware, Arizona, Wisconsin, West Virginia, Wyoming and South Dakota. 

Here are four things to know:

  1. Four common policy areas appeared across the proposed legislation: professional oversight, harm prevention, patient autonomy and data governance. However, most state efforts were fragmented and often did not address mental health use cases directly, according to the researchers. 
  2. Only 16 of the reviewed bills addressed malpractice or liability for harms caused by mental health AI systems. For example, North Carolina’s HB 934 places responsibility on clinicians, even when they have limited control over how the tools function, according to the review. 
  3. While 96 bills included some language around transparency or consent, only four required informed consent specifically for mental health AI. A small number called for repeated or interactive disclosures in clinical settings. 
  4. Few bills offered data privacy protections tailored to mental health AI tools. Most existing frameworks, including HIPAA, do not cover AI-generated data from wellness apps or chatbots, and only a handful of laws addressed user rights to access, delete or limit the use of such data — leaving clinicians and patients vulnerable to data misuse.

