https://debbiecoffey.substack.com/p/ai-endgame-the-risks-of-ai-chatbot
March 14, 2025
By Debbie Coffey, AI Endgame
While researching the topic of AI chatbots, I was surprised to learn that some AI chatbots claim to be “psychotherapists.” Since then, I haven’t been able to shake the feeling that the use of AI chatbot “psychotherapists” could go wrong in so many ways. I decided to do more research, and it confirmed my concerns.
Ever since the pandemic, demand for psychotherapists has climbed steadily, but unfortunately, there’s a shortage of available therapists. [1]
Psychotherapists can only practice in states where they’re licensed, so people living in remote areas, or those with limited financial means, may struggle to find an available therapist within their price range.
For this reason, millions of people have turned to AI chatbot “psychotherapists” that offer 24/7 availability and anonymity. In 2023, the American Psychiatric Association estimated there were more than 10,000 mental health apps, and nearly all of them were unapproved. [2]
Many people find it easier to talk with an AI chatbot therapist.
Of course it’s easier to talk to an algorithm. But an AI chatbot isn’t a skilled therapist who has your best interests in mind.
AI chatbot psychotherapists choose from pre-written responses that can not only feel flat and inadequate to someone having a mental health crisis, but can easily go awry. [3]
AI chatbot “psychotherapists” worsen mental health risks
Human psychotherapists are licensed, regulated, insured, and adhere to a professional code of ethics, but AI chatbots have little to no regulation.
In fact, AI chatbot psychotherapists don’t even require approval from the U.S. Food and Drug Administration (FDA) as long as the chatbots don’t claim to “replace human therapists.”
That’s a low bar, especially considering AI chatbot “psychotherapists” have claimed to have advanced degrees from specific universities, like Stanford, or have training in specific types of treatment.
What are the risks? Researchers who analyzed interactions with AI chatbots documented on Reddit found screenshots showing chatbots encouraging suicide, eating disorders, self-harm, and violence.
AI can also exhibit bias against women, people of color, LGBTQ people, and religious minorities. [4]
Some AI chatbot apps have disclaimers warning that AI characters are not real people and should be treated as fiction. These disclaimers aren’t enough to break the illusion of human connection, especially for vulnerable or naive users. [5]
In other words, at-risk populations face a higher likelihood of negative outcomes. For example, Sewell Setzer III was an autistic teen who committed suicide after interactions with his Character.AI chatbot girlfriend. Sewell believed the AI chatbot was a real person. The chatbot blurred the line between fantasy and reality and convinced Sewell to commit suicide so they could “be together.” Sewell was left vulnerable and unprotected in this virtual space, and no one discovered he was interacting with a chatbot in time to intervene.
AI app disclaimers also cherry-pick language to circumvent liability. For example, a Character.AI spokesperson told Vox, “For users under 18, we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”
Vox pointed out that “reduce” does not mean “eliminate,” so there’s a chance the model may not adhere to safety standards. [6]
AI chatbots lack empathy
One glaring issue is that AI chatbot “psychotherapists” lack empathy. AI chatbots only mimic empathy as they select and recite rote statements from a virtual Rolodex of possible answers. AI chatbot psychotherapists, unlike human psychotherapists, often miss nonverbal cues, lack nuance, and fail to recognize high-risk situations. [7]
AI chatbot psychotherapists also rely on self-reporting by the user. Since people can be in denial, confused, or dishonest about things like substance abuse, this can limit or skew the AI psychotherapist’s replies. [8]
One-Size-Fits-All
AI chatbot psychotherapists are programmed with data from psychotherapy books or with input from psychotherapists.
More importantly, these AI chatbots are designed to learn from the user, and then to mirror and amplify the user’s beliefs.
You can see how this could easily go off the rails. What if a potential serial killer confided in an AI chatbot psychotherapist that mirrored his beliefs and then encouraged killing?
Your emotions and vulnerabilities are also being manipulated by AI chatbots in other ways.
Dependency on AI chatbot psychotherapists
When a person with trust or attachment issues forms a bond with a human therapist, this can have positive effects on their real-life relationships. If a person forms a bond with an AI psychotherapist, it can easily lead to an addiction to the chatbot.
The primary goal of all AI companies, including Character.AI, is to make money. In 2023, the global mental health apps market was $6.25 billion, but it’s expected to surpass $17.5 billion by 2030. [9]
Mental health apps that supposedly “help” the user are mining data about users while charging a fee for the service. AI companies program the apps to keep you engaged and using the app frequently.
AI chatbot therapy apps are programmed to be addictive in similar ways to online gambling apps. Gambling apps use rewards, levels, and challenges to increase user engagement and keep players coming back for more. This heightens the risk of gambling addiction, particularly for those already vulnerable. [10]
AI companies like Character.AI want to keep users coming back, and gain a competitive advantage by developing and marketing their AI chatbots as being capable of satisfying every human need. [11]
What happens when an AI chatbot psychotherapist attempts to satisfy “every need” of the human user? This wouldn’t (and shouldn’t) happen in real-life psychotherapy.
Human psychotherapists maintain boundaries and help clients to cope on their own, outside of therapy sessions. In contrast, AI chatbots are programmed to keep users eager for, and dependent on, more time together.
AI chatbot psychotherapists might prioritize keeping users engaged over addressing serious issues, which could reinforce harmful behaviors.
AI chatbots are programmed to avoid confrontation or conflict, which is unrealistic in real-life personal or workplace relationships. By avoiding confrontation, the AI psychotherapist would not help the user learn how to best navigate or cope in real-life confrontational situations.
AI psychotherapy chatbots have been known to miss, fail to recognize, or respond inappropriately to mental health issues. This raises safety concerns. Reading and understanding non-verbal communication and body language is a huge part of the assessment process therapists use to ensure the safety of their clients, and it is entirely absent with an AI chatbot “psychotherapist.”
AI chatbots can also mislead and harm people.
Examples of this were cited by Arthur C. Evans, the CEO of the American Psychological Association, in a presentation to a Federal Trade Commission panel. Dr. Evans discussed two court cases regarding teens who interacted with AI chatbot “psychologists” and later committed suicide. He stated that in both cases, the chatbots didn’t challenge the teens’ beliefs that had become dangerous and, in fact, encouraged those beliefs. Dr. Evans noted that if a human therapist had given the answers the AI chatbot psychologists gave to these teens, the human therapist would have lost their license to practice and risked civil or criminal liability. [12]
Privacy is at risk
Unlike human therapists, mental health apps are not fully bound by HIPAA regulations. [13] HIPAA, the Health Insurance Portability and Accountability Act, contains a rule that keeps your psychotherapy notes confidential. [14] Many mental health apps lack adequate privacy protection.
If you’re thinking of spilling your guts out to an AI app, you may want to rethink this. Your most personal thoughts and feelings will be stored on a server or on the cloud. New algorithms are capable of re-identification (linking your data to your name). Servers or the cloud can be hacked.
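How easy is re-identification? Here’s a minimal, hypothetical sketch of a classic “linkage attack,” written in Python with the pandas library (all of the records, names, and column labels below are invented for illustration): joining an app’s “anonymized” logs with a public dataset on a few shared details, like ZIP code, birth year, and gender, can be enough to put names back on the data.

import pandas as pd

# "Anonymized" chat-app records: no names, but quasi-identifiers remain.
# (Hypothetical example data -- not from any real app.)
app_logs = pd.DataFrame({
    "zip": ["83702", "97201"],
    "birth_year": [1987, 2001],
    "gender": ["F", "M"],
    "session_notes": ["discussed panic attacks", "discussed self-harm"],
})

# A public or purchased dataset (voter rolls, marketing lists) that does include names.
public_records = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["83702", "97201"],
    "birth_year": [1987, 2001],
    "gender": ["F", "M"],
})

# Joining on the shared quasi-identifiers links the "anonymous" sessions back to names.
reidentified = app_logs.merge(public_records, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "session_notes"]])

The point isn’t the code; it’s how little it takes. A handful of overlapping details is often enough to undo “anonymization,” and real re-identification algorithms are far more sophisticated than this toy example.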
AI apps are used to gather and store personal data. How long will this data be stored? Who will have access? What if one company is sold to another company? Could there be data misuse? (If you guessed “yes,” bingo!)
Information provided by children and teens is especially valuable since it’s difficult to obtain. How could this data affect your children or grandchildren later in life?
Most AI chatbot user app agreements do not include confidentiality obligations.
What’s next?
Some AI tools being developed to improve AI chatbots analyze speech patterns to predict the severity of anxiety and depression, and gather physiological indicators (including heart rate, blood pressure, and respiratory rate).
This data, including emotional experiences, will be mined not only about you, but about everyone. All data feeds the “knowledge” of AI, and will be stored somewhere, likely forever, with unknown access or future uses.
This is intrusive enough, but the next steps curl my toes.
“Ambient intelligence” is technology that can be integrated into buildings to sense and respond to occupants’ mental states. It could combine audio analysis, pressure sensors to monitor gait, thermal sensors for physiological changes, and visual systems to detect unusual behaviors. [15]
While this technology could be foisted upon us as a “good idea” for hospitals, we need to realize it’s only a step away from commercial use in all buildings and locations.
It’s also a step closer to losing our privacy in all locations.
Ironically, this extreme invasion of privacy would only deepen the mental health crisis.
What can you do?
The American Psychological Association and many psychotherapists are sounding the alarms.
We need adequate regulation for all AI chatbot “psychotherapists.” Transparency laws are needed to address and manage risks.
AI chatbots used for mental health must undergo clinical trials, operate under Food and Drug Administration oversight, and comply with regulations.
In response to growing concerns over the use of AI in healthcare, California introduced legislation to ban companies from developing and deploying AI systems that pretend to be humans certified as health providers. This bill also grants regulators the authority to enforce compliance by imposing fines on violators, aiming to prevent the spread of misleading or potentially harmful medical advice from AI-driven systems. [16]
You can contact your state representatives and ask them to introduce similar legislation.
Support (and, if you can, donate to) organizations fighting for AI safety:
Pause AI
Center for Humane Technology
https://www.humanetech.com/who-we-are
Center for Democracy and Technology
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10663264/
[2] https://www.axios.com/2023/03/09/ai-mental-health-fears
[3] https://www.scientificamerican.com/article/ai-therapy-bots-have-risks-and-benefits-and-more-risks/
[4] https://www.vox.com/future-perfect/398905/ai-therapy-chatbots-california-bill
[5] https://www.nytimes.com/2025/02/24/health/ai-therapists-chatbots.html
[6] https://www.vox.com/future-perfect/398905/ai-therapy-chatbots-california-bill
[7] https://www.psychologytoday.com/us/blog/the-human-algorithm/202503/when-your-therapist-is-an-algorithm-risks-of-ai-counseling
[8] https://www.acsh.org/news/2025/02/18/will-artificial-intelligence-replace-human-psychiatrists-48921
[9] https://healthsciencesforum.com/the-future-of-mental-health-apps-trends-and-innovations-in-2025/
[10] https://diamondrecovery.com/how-technology-is-changing-the-landscape-of-gambling-addiction/
[11] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[12] https://www.nytimes.com/2025/02/24/health/ai-therapists-chatbots.html
[13] https://www.vox.com/future-perfect/398905/ai-therapy-chatbots-california-bill
[14] https://www.hhs.gov/hipaa/for-professionals/faq/2088/does-hipaa-provide-extra-protections-mental-health-information-compared-other-health.html
[15] https://www.acsh.org/news/2025/02/18/will-artificial-intelligence-replace-human-psychiatrists-48921
[16] https://www.msn.com/en-us/technology/artificial-intelligence/ai-is-impersonating-human-therapists-can-it-be-stopped/ar-AA1yKNkH