3 states regulating AI and mental health


From requiring companies to disclose that AI chatbots are not human to restricting the technology’s use in treating students, the following states have passed laws and created agencies to address AI’s role in mental health.

Here are three states and their responses:

  1. A New York law that goes into effect Nov. 5 will require tech companies to release a disclaimer stating their AI bots are not human. Users who turn to a bot for emotional support and express suicidal thoughts will be prompted to reach out to the country's 988 support line or another network. Companies that fail to comply will be fined.
  2. Nevada adopted a statewide policy on the ethical use of AI in November. Since then, a law focusing on mental health and AI restricts behavioral healthcare providers from using AI systems while treating patients. It also bans the programming of AI to act as a mental health professional and prohibits school counselors and psychologists from using AI to perform their work.
  3. Utah in July formed the state Office of Artificial Intelligence to focus on AI policy, regulation and innovation. The office plans to address the mental health crisis across the state, radio station KUER reported July 9. The office launched an AI laboratory program that focuses on creating regulatory solutions for AI applications to ensure community safety while still encouraging innovation. In addition, a new state law requires tech companies to disclose that AI is not human.
