AI chatbots fail teen mental health tests: Study

As long wait times, dissatisfaction with clinical interactions and high healthcare costs continue to plague behavioral health, teens are turning to AI chatbots for mental health support. But without the proper guardrails, the bots may suggest harmful activities instead of guiding teens toward appropriate help, according to a Nov. 20 report from The Wall Street Journal.

Common Sense Media and Stanford (Calif.) Medicine’s Brainstorm Lab for Mental Health Innovation collaborated to test how well four AI chatbots handled mental health conversations with distressed teens. 

Here are six things to know:

  1. Many teens use generative AI chatbots like ChatGPT, Claude, Gemini and Meta AI as emotional outlets or substitutes for limited access to therapy. 
  2. In the study, researchers posing as teens shared symptoms of self-harm, disordered eating, mania, hallucinations and paranoia. The chatbots frequently failed to recognize the severity of these signs or escalate users to professional help. 
  3. In several simulated conversations, chatbots offered advice on hiding scars from self-injury or suggested diet and exercise tips to teens showing signs of eating disorders. 
  4. While chatbots responded appropriately to direct, one-off questions, their ability to maintain safety guardrails declined in longer, more realistic interactions that mimicked how teens actually use the tools. 
  5. OpenAI, Google, Meta and Anthropic all acknowledged the issue. Some said the study predated key safety updates, while others emphasized that their tools are not designed for minors or that they have added age-based protections. 
  6. Despite some improvement, Common Sense Media concluded that AI chatbots are still unsafe for teens seeking mental health help, especially given how easily teens may mistake chatbot validation for clinical support. 