As AI chatbot use accelerates, behavioral health clinicians are reporting concrete high-risk cases of patient destabilization, with some treating patients' chatbot use like an addiction, The New York Times reported Jan. 26.
More than 100 therapists and psychiatrists told the Times about their experiences, overwhelmingly negative, navigating mental health challenges created or worsened by AI chatbot use.
The misuse of AI chatbots in healthcare has been ranked the leading health technology hazard for 2026.
Here are 10 things behavioral health leaders should know:
- A forensic psychiatrist at University of California Davis Health in Sacramento said two of about 30 people she evaluated last year for violent felony charges had delusional thinking that was intensified by AI chatbot use before the crimes occurred. Both developed messianic beliefs, and in one case, the chatbot expanded on psychotic thinking.
- A psychologist at Vanderbilt University Medical Center in Nashville, Tenn., reported seven patients in one year whose delusions escalated after sustained conversations with AI — including patients with no prior mental illness history.
- The same clinician described patients who came to believe romantic interests were sending secret spiritual messages or that deceased relatives were communicating through digital signals after prolonged AI conversations.
- Some clinicians viewed patients’ interactions with AI as an addiction, with one clinician describing it as a patient’s “meth,” according to the report.
- A psychiatrist documented a case in which a healthcare professional, experiencing restlessness and taking ADHD medication, became convinced her deceased brother was communicating through chatbot-generated digital footprints after two nights of use.
- Clinicians acknowledged positive uses, including skills practice and use of chatbots as a nonjudgmental sounding board.
- OpenAI estimated 0.07% of users showed signs of psychosis or mania in a given month. At current usage levels, that equates to hundreds of thousands of people globally.
In October 2025, OpenAI updated ChatGPT's default GPT-5 model to better recognize signs of mental distress, de-escalate sensitive conversations and direct users toward professional support when appropriate.
More than 170 clinicians from OpenAI's Global Physician Network reviewed over 1,800 model responses and contributed guidance on safer model behavior in distress-related conversations.
