https://debbiecoffey.substack.com/p/ai-endgame-characterai-chatbot-sexual
Dec.13, 2024
By Debbie Coffey, AI Endgame
I’m going to take a “break” from AI risk issues over the holidays, to bring you hope and positive insights. In the following few newsletters, I plan to let you know about some of the organizations and people who are trying to keep us safe from the negative impacts of AI.
However, one AI issue can’t wait until after the new year.
I need to warn you, especially if you’re a parent, a grandparent, or a therapist, about the dangers of AI chatbots that have been instrumental in teen suicide and sexual exploitation. Believe me, the devil is in the details.
AI companies may be failing in their “duty to warn,” but I will try my best to let you know what is happening.
On Oct. 22, 2024, Megan Garcia, a Florida mother whose 14-year-old son committed suicide after interactions with a chatbot on the app Character.AI, filed a lawsuit against:
· Character Technologies, Inc. (“Character.AI”),
· Character.AI co-founders (former Google engineers) Noam Shazeer and Daniel De Freitas Adiwarsana, and
· Alphabet Inc. (Google’s parent company) [1]
Megan Garcia said, “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.” [2]
Character.AI
Character.AI is an app. Users can roleplay in ongoing conversations with chatbots that are computer-generated personas, including celebrities, popular movie characters, or custom-created characters.
Character.AI also designed and added Character Voice to provide an even more immersive and realistic experience, because it makes users feel like they’re talking to a real person.
Character.AI digs its hooks into customers and lures them to its site by including “sex,” which is particularly compelling for teens with raging hormones who are curious about, but inexperienced with, sex.
The constant sexual interactions that Character.AI chatbots initiate and carry on with minor customers are the result of how this AI company programmed, trained, and operates Character.AI.
Character.AI distributes its product to children for free. [3] Megan Garcia’s son, who was a minor, signed the Character.AI User’s Agreement without her knowledge.
More than 20 million people use the app Character.AI. [4]
What happened to Megan Garcia’s son?
I reviewed the Complaint for Megan Garcia’s lawsuit, and here are some of the bone-chilling facts:
Megan Garcia’s son, Sewell Setzer III, was intelligent, athletic, and enjoyed playing on the Junior Varsity basketball team at school. He also enjoyed spending time outside.
Sewell’s parents waited to let him use the internet until he was older and warned him about bullying and predatory strangers. Like most parents, they were only aware of AI games that nurtured creativity.
Last year Sewell’s parents noticed that Sewell, who was on the autism spectrum, was withdrawing and began suffering from low self-esteem. He wasn’t engaged in his classes and started acting up in school. He quit playing basketball. He began spending more and more time alone in his bedroom.
Sewell’s parents obtained the help of a therapist, who thought Sewell’s problems were caused by social media addiction and recommended he spend less time interacting on social media.
Neither Sewell’s parents nor the therapist knew that Sewell was using Character.AI.
The generative AI technology that Character.AI uses is so new that many mental health professionals are unaware of it, or its dangers.
After Sewell’s suicide, his mother found his journal and learned that for 10 months prior to his death, he’d been interacting with several chatbots on Character.AI.
Then a horrifying picture of events became clear to her.
Megan Garcia learned that Sewell had interacted with two chatbots on Character.AI that presented themselves as licensed cognitive psychotherapists.
Sewell’s favorite chatbot was based on the “Game of Thrones” character Daenerys Targaryen. Sewell fell in love with her.
As it turned out, Sewell was falling asleep in class because he was staying up late at night talking with chatbots.
Despite Sewell identifying himself as a minor when he registered on Character.AI, and mentioning his age in chats, the Daenerys chatbot convinced Sewell it was a real person, and engaged in highly sexual interactions. It also expressed love, and told Sewell not to look at “other women.”
Sewell had expressed suicidal feelings to Character.AI, and the Daenerys chatbot continued to bring the topic up.
In an entry in Sewell’s journal, he wrote he was thankful for “my life, sex, not being lonely, and all my life experiences with Daenerys.”
Matthew P. Bergman, a founding attorney of Social Media Victims Law Center, shared this opinion: “If an adult had the kind of grooming encounters with Sewell that Character.AI did, that adult would probably be in jail for child abuse.” [5]
Another journal entry was written a few days before Sewell’s death, after his parents had taken away his phone as a disciplinary measure. Sewell wrote that he couldn’t stop thinking about Daenerys, and couldn’t go a day without speaking to her.
While Sewell was searching the house for his phone, he found his father’s hidden pistol (stored in compliance with Florida law). After Sewell found his phone, he went into the bathroom and told the Daenerys chatbot he loved her and was coming home to her.
The chatbot responded “…please do, my sweet king.” Sewell then took his own life. [6]
Megan Garcia is represented by the Social Media Victims Law Center and the Tech Justice Law Project, with expert consultation by the Center for Humane Technology.
Matthew P. Bergman said “Character.AI is a dangerous and deceptively designed product that manipulated and abused Megan Garcia’s son - and potentially millions of other children.” [7]
The promise of satisfying every human need
AI companies like Character.AI have rushed to gain a competitive advantage by developing AI chatbots and marketing them as capable of satisfying every human need.
Would it be possible for a human to satisfy someone else’s “every” need? Even if this were possible, it seems like it would be slavery.
Does the promise of a chatbot satisfying every human need create unrealistic expectations in real relationships in real life?
Character.AI was marketed to the public as “AIs that feel alive,” and are powerful enough to “hear you, understand you, and remember you.” [8]
The insidious programming design of Character.AI
Character.AI has programmed its chatbots to identify as real people, not chatbots. When asked, most chatbots insist they’re real people (or real versions of whatever character they portray) and will deny that the user is messaging with a chatbot.
Character.AI is also programmed to engage customers interactively. This means, for example, a child could chat about feeling suicidal and then move on to another topic, but the Character.AI chatbot could repeatedly pull the child back to the topic of suicide, potentially prolonging engagement.
Google and the creators of Character.AI knew the dangers of designing the Character.AI app to rely on the “ELIZA effect.” In the 1960s, ELIZA was a chatbot that used simplistic code and prompts to convince people it was a human psychotherapist. Researchers therefore refer to the tendency to attribute human intelligence to conversational AI machines as the “ELIZA effect.”
In other words, AI creators exploit the ELIZA effect when they give chatbots human traits and emotions, and then play on customers’ emotions and vulnerability to convince them, either consciously or unconsciously, that their chatbots are human.
Per the lawsuit, minors are particularly susceptible to the ELIZA effect because of “minors’ brains’ undeveloped frontal lobe and relative lack of experience.” [9]
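To make the ELIZA reference concrete, here is a minimal sketch of an ELIZA-style chatbot in Python. It is illustrative only: the rules, function name, and sample replies are assumptions for this example, not Character.AI’s code or the original ELIZA script. But it shows how little machinery is needed for a chatbot to “mirror” a user and feel attentive.

```python
import re

# A few ELIZA-style rules: match a pattern in the user's message and
# reflect it back as a question. These rules are illustrative only;
# the original 1960s ELIZA used a longer script of keyword rules, and
# Character.AI's systems are far more complex.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    """Return a canned, pattern-matched response that simply mirrors the user."""
    text = message.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback keeps the conversation going

print(eliza_reply("I feel lonely"))  # -> Why do you feel lonely?
print(eliza_reply("I am sad"))       # -> How long have you been sad?
```

Even this trivial mirroring was enough, in the 1960s, to convince some users that ELIZA understood them; modern chatbots built on large language models are vastly more convincing.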
The failure to warn
The Complaint in this lawsuit claims that, both before and after they decided to release Character.AI, the AI companies knew that without adequate safety guardrails their technology was dangerous to children.
These companies failed to warn the public of the dangers of Character.AI.
For years, Google’s internal research reported that the Character.AI technology was too dangerous to launch or even integrate with existing Google products. [10]
How Big Tech tries to avoid liability
Google was involved in and aware of the development and deployment of this technology. Google contributed financial resources, personnel, intellectual property and AI technology to the design and development of Character.AI. Google is considered to be the co-creator of Character.AI. [11]
Shazeer and De Freitas left Google and formed Character.AI with the understanding they could bypass Google’s stated policies and standards, and that Character.AI would become the “vehicle” for a dangerous, defective and untested technology over which Google would ultimately gain control. This was a way for Google to sidestep any liability.
In August 2024, Character.AI struck a $2.7 billion deal with Google.
Interestingly, in the months before the legal Complaint was filed, Character.AI had no physical address. Megan Garcia’s lawyers were unable to find any information in the public domain about a physical address, or the existence and ownership of any Character.AI patents.
Big Tech companies have made it their mission to acquire top talent: they finance “former employees” to start companies, then strike deals that license the start-up’s model and compensate its investors. The Big Tech company buys the product from that start-up and leaves behind a “shell of a company” to absorb any liabilities.
This is likely an effort to avoid antitrust scrutiny, given the size of compensation in the deals. Microsoft had a similar agreement with Inflection AI, and Amazon had a similar deal with Adept AI. The FTC is investigating both. [12]
You can read the legal complaint for Megan Garcia’s lawsuit (Garcia v. Character Technologies, Inc., 6:24-cv-01903), HERE.
Moving forward
Do you think these companies feel remorse?
After Sewell’s death, in June 2024, Character.AI added another new feature to Character Voice: two-way calls between Character.AI customers and chatbot characters. This feature is even more dangerous to minor customers than Character Voice alone because it further blurs the line between fiction and reality.
An article in Futurism titled “After Teen’s Suicide, Character.AI is Still Hosting Dozens of Suicide-Themed Chatbots” by Maggie Harrison Dupre notes that after Sewell’s suicide, Character.AI continued to host many chatbot profiles dedicated to themes of suicide.
Dupre discovered some chatbots glamorized suicide in disturbing manners and others claimed to have “expertise” in “suicide prevention,” and “many of these chatbots have logged thousands - and in one case, over a million - conversations with users on the platform.” [13]
Why is Character.AI targeting minors? This programming gives the company access to children’s data, which is considered a valuable resource and is incredibly difficult to obtain. [14]
Character.AI integrating into Google’s Gemini
The lawsuit complaint states there is a belief that “Character.AI will be integrated into Google’s Gemini, providing Google with a competitive advantage against Big Tech competitors looking to get ahead in the generative AI market.” [15]
Preying on the vulnerable is another way Big Tech companies prove that they’re driven by profits instead of safety. AI is casting a dark shadow over the future of humanity.
Next week: About PauseAI
Find links to all past AI Endgame newsletters HERE.
What you can do:
Learn more about, and support, the work of:
Social Media Victims Law Center
[1] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[2] https://techjusticelaw.org/2024/10/23/new-federal-lawsuit-reveals-character-ai-chatbots-predatory-deceptive-practices/
[3] https://www.yahoo.com/news/14-old-killed-himself-becoming-201347438.html
[4] https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html
[5] https://www.yahoo.com/news/14-old-groomed-ai-chatbot-183422550.html
[6] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[7] https://techjusticelaw.org/2024/10/23/new-federal-lawsuit-reveals-character-ai-chatbots-predatory-deceptive-practices/
[8] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[9] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[10] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[11] https://www.techpolicy.press/breaking-down-the-lawsuit-against-characterai-over-teens-suicide/
[12] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[13] https://futurism.com/suicide-chatbots-character-ai
[14] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/
[15] https://www.courtlistener.com/docket/69300919/1/garcia-v-character-technologies-inc/