https://debbiecoffey.substack.com/p/ai-endgame-how-extremists-are-using
January 11, 2025
By Debbie Coffey, AI Endgame
I planned to write about a different topic this week, but recent events captured my attention. You may have heard about these events on the news, so I wanted to give you additional information about how they relate to AI.
On Jan. 1, 2025, a Cybertruck exploded outside the main entrance of the Trump International Hotel in Las Vegas. Las Vegas Sheriff Kevin McMahill said the driver, Matthew Livelsberger, an Army Green Beret from Colorado, shot himself in the head prior to the detonation of the blast that injured seven people outside the Trump hotel. [1]
The same day, U.S. Army veteran Shamsud-Din Jabbar drove a rented pickup truck into a crowd during the New Year celebration. Before this incident, Jabbar used Meta's Ray-Ban smart glasses, powered by Meta AI, to conduct detailed reconnaissance of his target area. [2]
(photo: Indiatoday.in)
Six days prior to the explosion in Las Vegas, as reported by investigative reporter Dell Cameron in a WIRED magazine article titled “Before Las Vegas, Intel Analysts Warned That Bomb Makers Were Turning to AI,” Cameron reveals Livelsberger had used AI to research ways to turn the rented Cybertruck into a big explosive device. [3]
According to Sheriff McMahill, Livelsberger used Open AI’s ChatGPT to calculate precise explosive quantities, research firework procurement channels, and explore methods for purchasing phones without leaving digital traces. [4]
Cameron notes, “…documents obtained by WIRED show that concerns about the threat of AI being used to help commit serious crimes, including terrorism, have been circulating among US law enforcement. They reveal that the Department of Homeland Security has persistently issued warnings about domestic extremists who are relying on the technology to ‘generate bomb-making instructions’ and develop “general tactics for conducting attacks against the United States.”
“…federal intelligence analysts say extremists associated with white supremacist and accelerationist movements online are now frequently sharing access to hacked versions of AI chatbots in an effort to construct bombs with an eye to carrying out attacks against law enforcement, government facilities, and critical infrastructure.”
“In one example provided by an intelligence analyst, a user is shown requesting details on “the most effective physical attack against the power grid.” [5]
The use of AI for terrorism isn’t limited to domestic terrorism.
How are extremists using OpenAI’s ChatGPT and similar AI to obtain directions for acts of violence?
Big tech companies have tried to filter out content that could be used for violent acts, but bad actors (people with harmful intent) have found ways to get around this, including:
Direct prompts
Direct prompt injections happen when someone tries to make AI behave in an unintended way, for example, by making it spout hate speech or give harmful answers.
Indirect prompts
Bad actors can create indirect prompt injections, by creating a PDF or website (a “third party”) that contains hidden instructions for the AI system to follow. When the AI system analyzes this concealed information, it can make the AI behave in unintended ways. [6]
Jailbreaks
AI jailbreaks are techniques that can cause the failure of guardrails, designed to prevent AI systems from producing harmful content or executing malicious instructions. [7]
How could AI chatbots enable terrorists?
According to a report by West Point’s Combating Terrorism Center, “AI can be used to generate and distribute propaganda content faster and more efficiently than ever before. This can be used for recruitment purposes or to spread hate speech and radical ideologies. AI-powered bots can also amplify this content, making it harder to detect and respond to.”
Per the Global Network on Extremism and Technology (GNET), AI chatbots can be used as a source to create terror manuals. For example, Bing Chat and ChatGPT could provide step-by-step information to terrorists on ways to protect their operations, like detailed instructions on how to efficiently remove online traces and information and which types of software may mitigate the risk of being detected by law enforcement. AI chatbots could also be used to help generate simple scripts to allow the removal of data tracking features of operating systems, and to learn how to avoid content takedowns. [8]
GNET also warns that combining ChatGPT with deepfake technology could enable the production of prolific amounts of artificially generated extremist content. [9]
A 2024 article in the Homeland Security News Wire noted, “In February, a group affiliated with al-Qaeda announced it would start holding AI workshops online The Washington Post reported.
Later, the same group released a guide on using AI chatbots.” [10]
AI is developing so rapidly, that in our efforts to learn about the risks of AI and our efforts to minimize and contain the risks, it can feel like we’re running up a sand dune and taking one step forward, then falling two steps back.
Read all AI Endgame newsletters HERE.
What you can do:
Ask your representatives to hold AI companies accountable and to tell them you “want regulations to pause AI now, until strong AI safety laws are enacted.”
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
Support (and if you can, make donations to) organizations fighting for AI Safety:
Pause AI
Center for Humane Technology
https://www.humanetech.com/who-we-are
The Center for AI Safety
[1] https://www.cnn.com/2025/01/02/us/tesla-cybertruck-trump-hotel-wwk-hnk/index.html
[2] https://decrypt.co/300074/how-criminals-used-chatgpt-meta-ai-us-terror-attacks
[3] https://www.wired.com/story/las-vegas-bombing-cybertruck-trump-intel-dhs-ai/
[4] https://decrypt.co/300074/how-criminals-used-chatgpt-meta-ai-us-terror-attacks
[5] https://www.wired.com/story/las-vegas-bombing-cybertruck-trump-intel-dhs-ai/
[6] https://www.wired.com/story/generative-ai-prompt-injection-hacking/
[7] https://aisecuritycentral.com/ms-ai-jailbreaks/
[8] https://gnet-research.org/2023/12/15/artificial-intelligence-as-a-terrorism-enabler-understanding-the-potential-impact-of-chatbots-and-image-generators-on-online-terrorist-activities/
[9] https://gnet-research.org/2023/02/17/weapons-of-mass-disruption-artificial-intelligence-and-the-production-of-extremist-propaganda/
[10] https://www.homelandsecuritynewswire.com/dr20240716-how-islamic-state-uses-ai-to-spread-extremist-propaganda