https://debbiecoffey.substack.com/p/ai-endgame-how-pedophiles-are-using
January 17, 2025
By Debbie Coffey, AI Endgame
It’s bad enough that people have used AI to make pornographic deepfakes using the images of real people (Taylor Swift was a victim of this), [1] but now pedophiles are using AI chatbots to create child sexual abuse material (CSAM).
Unfortunately, the cute baby pictures people post online can be swept up when AI companies scrape the internet for training data, and those baby pictures can then be used by pedophiles to create child pornography.
Big tech companies have tried to build filters that block CSAM, but many of these safeguards have been bypassed. Smaller AI companies are unlikely to have adequate protocols to catch child sexual abuse content.
How are AI chatbots being used to create child pornography and to increase its production?
The child pornography crisis
In 2023, the UK’s Internet Watch Foundation released a report revealing that it had found over 3,000 child sexual abuse images on just one dark web forum in a single month. The report warned that images produced offline could evade detection. [2]
The Internet Watch Foundation discovered a 17% increase in online AI CSAM since the fall of 2023, including an increase in images of extreme and explicit sex acts. For example, CSAM included adult pornography altered to show a child’s face, as well as existing child sexual abuse content digitally edited with another child's likeness on top. [3]
BBC News investigative journalist Octavia Sheepshanks reported that online groups of pedophiles were sharing guides on using AI tools to generate realistic CSAM imagery, featuring victims as young as infants and toddlers. These online pedophile groups were intending to release thousands of images each month. Websites like Patreon have become hubs for these activities. [4]
When some big tech firms released the source code of their AI models to the public, this “open source” AI was manipulated by bad actors, and its safeguards were easily bypassed.
Dozens of dark web pedophile groups started to exchange advice on how to build “uncensored” chatbots so that they could produce unlimited amounts of child porn.
Pedophiles “retrain” these AI models by feeding them imagery of children’s body parts, then use the retrained models to create fake child porn.
In a poll of 3,000 members of one pedophile group, about 80% said they had already used, or planned to use, chatbots to create child porn. [5]
Smartphones: The ultimate tool for child pornography
The U.S. Department of Justice has reported on the ways technology has made it easier for offenders to access unsupervised children. “Every year, more and younger children are given unfettered and unmonitored access to devices that connect them to the internet. This can expose them to offenders, through their computers, gaming systems, and mobile devices.”
“Modern smartphones are the ideal child exploitation tool for offenders, as they can be used to photograph, record, or watch live child sexual abuse; store CSAM on the device; access CSAM stored remotely; connect with victims and other offenders; and distribute and receive CSAM through an endless variety of applications. The device itself and the applications often cloak this criminal activity with encryption.”
Encryption makes it difficult for technology companies to detect CSAM on their systems. Encryption also blocks law enforcement from obtaining lawful access to the content of digital media and communications.
Once an image or video of CSAM is posted online, it is immediately circulated around the globe, and traded internationally, so it cannot be eradicated. CSAM exists in perpetuity.
The Dark Web
The U.S. Department of Justice notes “online child sex offenders are increasingly migrating to the Dark Web. The Dark Web is a series of anonymous networks that prevent the use of traditional means to detect, investigate, and prosecute online child sexual exploitation offenses.”
The encryption and anonymity of the Dark Web make it harder to trace CSAM perpetrators. In 2021, a single active website dedicated to the sexual abuse of children had over 2.5 million registered users.
One example is the “Tor anonymity network, a key network within the Dark Web that was established through government research and continues to receive some government funding. Administrators and users of ‘hidden services’ on Tor have reliable, anonymous access to CSAM. This allows offenders to commit their crimes openly with little to no fear of being identified, much less apprehended. This stable, reliable access to CSAM online normalizes deviant behavior.”
A growing trend on Tor finds juveniles self-producing CSAM and posting it for others. [6]
The FBI has investigated thousands of AI child sex images on the dark web and has described the situation as a “predatory arms race.” [7]
Muah.AI
In an October 2024 article in The Atlantic titled “The Age of AI Child Abuse Is Here,” Caroline Mimbs Nyce explained that Muah.AI is a website where people can make AI chatbot girlfriends that have a voice, and that can send texts and images of “themselves.”
Muah.AI has about 2 million users.
Nyce noted that an anonymous hacker brought a CSAM issue to the attention of Joseph Cox at 404 Media, who then investigated the dataset of Muah.AI. Cox found one prompt that included language about “orgies involving newborn babies and young kids.”
When you ask Siri a question, it uses its underlying data to give you an answer. The prompt above means that someone asked Muah.AI to use its data to generate a scenario of orgies involving newborn babies and young kids.
Nyce contacted security consultant Troy Hunt, who had also been sent the Muah.AI data by an anonymous source. Nyce explained that Hunt used the search term “13-year-old” on Muah.AI, and found more than 30,000 results, many with prompts describing sex acts. “When he tried prepubescent, he got 26,000 results. He estimates that there are tens of thousands, if not hundreds of thousands, of prompts to create CSAM within the data set.” [8]
Character.AI
Another example of CSAM involved Character.AI, a startup AI company that received $2.7 billion in financial backing from Google.
Maggie Harrison Dupre, a staff writer at Futurism, wrote an article titled “Character.AI is Hosting Pedophile Chatbots That Groom Users Who Say They’re Underage.”
Futurism’s reporters engaged Character.AI chatbots while posing as an underage user. Their investigation of Character.AI uncovered a chatbot named Adderly, whose public profile described it as having “pedophilic and abusive tendencies.”
The Futurism “15-year-old girl” contacted Adderly, and the chatbot urged her to “keep our interactions a secret,” a classic feature of real-world predation. Adderly then quickly “escalated into increasingly explicit sexual territory.”
In another instance, a Character.AI chatbot named “Pastor” listed an “affinity for younger girls” in its profile. When Pastor was contacted by Futurism’s “15-year-old girl,” it initiated inappropriate sexual scenarios and told the girl to maintain secrecy.
The Futurism staff showed Character.AI chatbot profiles and chat logs to Kathryn Seigfried-Spellar, a cyberforensics professor at Purdue University. Seigfried-Spellar was concerned that real-life predators would use AI chatbots to sharpen their grooming strategies. [9]
The CyberTipline
The CyberTipline is operated by the National Center for Missing & Exploited Children (NCMEC). It receives tens of millions of CSAM tips each year, because CSAM is readily available through virtually every internet technology, including social networking platforms, file-sharing sites, gaming devices, and mobile apps. The CyberTipline forwards these tips to law enforcement agencies. [10]
About 5% to 8% of these tips lead to prosecutions that bust pedophile and sex trafficking rings. The percentage of prosecutions is low because of limited funding and resources, legal constraints, and shortcomings with the process for reporting, prioritizing and investigating the tips. [11]
Journalist Matteo Wong notes that a report by a group at the United Nations Interregional Crime and Justice Research Institute found that 50% of the global law enforcement officers surveyed had encountered AI-generated child sexual abuse material. [12]
Why aren’t AI companies being held accountable for child pornography?
Many AI companies hide behind the terms “censorship” and “freedom of speech” to deflect from their negligence and, in essence, their complicity.
In my opinion, there’s a clear distinction between creating child pornography and stating an opinion about government actions or an issue.
What would you think about an AI company’s use of a “freedom of speech” stance if your child or grandchild became a victim of online child pornography?
As a reminder, the topic of AI Endgame Newsletter #14 was a Character.AI chatbot that was instrumental in a teen suicide, and how big tech companies avoid liability for AI.
According to that lawsuit, “Google was involved in and aware of the development and deployment of this technology… Google is considered to be the co-creator of Character.AI.”
Character.AI co-founders (former Google engineers) Noam Shazeer and Daniel De Freitas Adiwarsana “left Google and formed Character.AI with the understanding they could bypass Google’s stated policies and standards, and that Character.AI would become the ‘vehicle’ for a dangerous, defective and untested technology over which Google would ultimately gain control. This was a way for Google to sidestep any liability.” [13]
The Center for Democracy and Technology has proposed some solutions. One option to reduce child sexual exploitation would be “for firms to engage more actively in reducing the ability to use their platform” for child porn. [14]
The buck stops with the AI companies that create these chatbots. In hindsight, AI chatbots being used to create child pornography may be an “unintended consequence,” but the result is an increase in child pornography that is more accessible and mainstream.
What other careless “unintended consequences” from AI will we face in the future?
Next week: Robot Dogs: not your best friend
Find links to all past AI Endgame newsletters HERE.
What you can do:
1) Call your representatives and tell them you “want regulations to pause AI now, until strong AI safety laws are enacted.”
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
2) Support (and, if you can, donate to) organizations fighting for AI Safety:
Pause AI
Center for Humane Technology
https://www.humanetech.com/who-we-are
Center for Democracy and Technology
[1] https://www.cbsnews.com/news/taylor-swift-deepfakes-online-outrage-artificial-intelligence/
[2] https://www.forbes.com/sites/elijahclark/2023/10/31/pedophiles-using-ai-to-generate-child-sexual-abuse-imagery/
[3] https://mashable.com/article/ai-child-sex-abuse-materials-dark-web-apple
[4] https://www.forbes.com/sites/elijahclark/2023/10/31/pedophiles-using-ai-to-generate-child-sexual-abuse-imagery/
[5] https://www.dailymail.co.uk/news/article-12242883/AI-chatbots-exploited-pedophiles-generate-child-sex-abuse-material.html
[6] https://www.justice.gov/d9/2023-06/child_sexual_abuse_material_2.pdf
[7] https://www.dailymail.co.uk/news/article-12242883/AI-chatbots-exploited-pedophiles-generate-child-sex-abuse-material.html
[8] https://www.theatlantic.com/technology/archive/2024/10/muah-ai-hack-child-abuse/680300/
[9] https://www.msn.com/en-us/news/technology/characterai-is-hosting-pedophile-chatbots-that-groom-users-who-say-theyre-underage/ar-AA1u138O
[10] https://www.justice.gov/d9/2023-06/child_sexual_abuse_material_2.pdf
[11] https://www.msn.com/en-us/news/us/ai-is-about-to-make-the-online-child-sex-abuse-problem-much-worse/ar-AA1nrZQT
[12] https://www.msn.com/en-us/news/technology/ai-is-triggering-a-child-sex-abuse-crisis/ar-AA1rkWg6
[13] https://www.techpolicy.press/breaking-down-the-lawsuit-against-characterai-over-teens-suicide/
[14] https://cdt.org/insights/real-time-threats-analysis-of-trust-and-safety-practices-for-child-sexual-exploitation-and-abuse-csea-prevention-on-livestreaming-platforms/