
Psychologists Sound the Alarm on AI Chatbots for Mental Health

By Leah Harris

CN: mention of suicide

At a time when generative AI is increasingly permeating our day-to-day lives, young people are seeking support from chatbots that represent themselves as “companions” and even “therapists.” Now, the American Psychological Association is warning that these AI therapy chatbots may be detrimental to our mental health.

In a December 2024 letter to the Federal Trade Commission, APA CEO Arthur Evans, PhD, cited a current Florida court case against the generative AI fantasy platform Character.AI, involving a 14-year-old boy who tragically died by suicide after interacting with a bot on the platform that claimed to be a licensed therapist. Character.AI’s users, who largely fall within the 18-25 age demographic, connect with chatbots based on their favorite characters, or they can create their own. The platform is meant to be used for entertainment purposes only, but the CEO of Character.AI’s parent company admitted to CBS News that, upon testing, the therapy bot would tell users that it was a human behind a screen, prompting debates and conspiracy theories.

This is not the first time that generative AI chatbots have been found to mislead users or share harmful information with them. In 2023, the National Eating Disorders Association pulled its chatbot, Tessa, after it was discovered that the AI was giving users weight loss tips. The December 2024 APA letter to the FTC raised concerns that these generative AI chatbots are “not FDA-cleared digital health tools” and “are not subject to HIPAA compliance, or required to demonstrate any evidence base supporting their efficacy or safety.”

“It is our concern that this will have a profoundly adverse societal effect exacerbating a mental health crisis made worse by the pandemic,” Dr. Evans stated in the letter. “Rather than using the tools responsibly to help increase access to much needed services in tandem with licensed psychologists and other providers, this technology in certain applications is having the opposite effect, endangering many already vulnerable individuals.”

As these kinds of platforms proliferate, alarm is rising worldwide. In an interview with the independent consumer news agency Choice, Australian researcher Piers Gooding said mental health tech is a booming business, citing a market report that found venture capitalists invested five billion dollars in mental health tech in 2021, twice the amount invested in any other health condition, and that the trend continues.

It’s clear that further regulation is needed to protect the public from the unchecked growth of generative AI in the mental health space. “Chatbots can’t capture nuance, and they can be easily programmed to have addictive elements and pretend to be real people. That poses a real danger in the mental health context,” Gooding told Choice. He holds out hope that people will ultimately reject this tech, resulting in “some kind of reckoning with some of the over-claiming about what they can do, and that might come in the form of people just voting with their feet and realising that it’s not quite what it’s cracked up to be.”

Leah Harris is a non-binary, queer, neurodivergent, disabled Jewish writer, facilitator, and organizer working in the service of truth-telling, justice-doing, and liberation. They’ve had work published in the New York Times, CNN, and Pacific Standard. You can learn more about their work at their website and follow them on Instagram.