New coalition aims to set AI standards in mental healthcare 

New York City-based Spring Health is supporting a new industry coalition that aims to establish universal standards for the ethical and clinically safe use of artificial intelligence in mental healthcare. 

The AI in Mental Health Safety and Ethics Council includes leaders from academia, healthcare, technology and employee benefits, according to an Oct. 1 news release. Its mission is to guide the responsible development of AI tools by creating an open-source, automated framework for evaluating the safety, efficacy and ethics of mental health applications, to be called the "Validation of Ethical and Responsible AI in Mental Health," or VERA-MH.

Council members include experts from Stanford, Yale, Dartmouth, the University of Münster, Charité – Berlin University Hospital, Harvard, Microsoft, JPMorgan Chase, Kaiser Permanente, UnitedHealthcare and Evernorth Health Services, the news release said.

The initiative provides urgently needed guardrails to ensure AI tools are clinically validated, Spring Health CMO Mill Brown, MD, said. Council member Nina Vasan, MD, said the effort aims to protect patients while enabling responsible innovation. 

The council will meet regularly to refine VERA-MH and release the framework for community use.
