AI psychotherapy tools raise automation, risk questions: Study


Salt Lake City-based University of Utah researchers outlined a framework for automation — defined as when machines perform tasks humans have previously done — in psychotherapy, identifying varying levels of AI involvement in care delivery. 

The framework categorizes automation into four levels: scripted systems that deliver prewritten content via chatbots; AI tools that evaluate therapists; AI systems that assist therapists with suggestions; and autonomous agents that provide therapy directly to patients, according to an April 7 news release from the university.

The group examined how each level of automation carries different degrees of risk and utility, particularly around consent, responsibility and potential errors.

The team is focused on using AI to support — rather than replace — clinicians, particularly in evaluating and training therapists. The researchers are partnering with SafeUT, a statewide crisis line, to develop tools that analyze counseling sessions and provide feedback to improve care.

Researchers highlighted applications such as note-taking tools that maintain notes across sessions and systems that “analyze sessions and provide feedback to clinicians,” while noting risks including AI systems that “fabricate information, encode biases and respond unpredictably.”

