https://debbiecoffey.substack.com/p/ai-endgame-ai-and-autonomous-weapons
July 11, 2025
By Debbie Coffey, AI Endgame
If you saw the 2019 movie “Angel Has Fallen,” you might remember the scene where a Secret Service agent (Gerard Butler) guards the U.S. President (Morgan Freeman), who’s fishing on a boat on a lake. Suddenly, a swarm of drones, deployed by a rogue private military contractor, flies overhead, kills all the Secret Service agents on the shore, and tries to assassinate the President.
When I watched this scene six years ago, I was surprised, but not overly concerned, by the concept of drones being used as weapons.
Two years after the movie was released, in March 2021, the first use of an autonomous weapons system was documented, and by June 2021, drone swarms were being used in combat. [1]
The war in Ukraine has been described as a testing ground for a “drone war,” because so many types of drones have been introduced and used. [2]
Autonomous weapons
Autonomous weapons, formally known as Lethal Autonomous Weapons Systems (LAWS), use AI to identify, select, and kill human targets without any human intervention.
Autonomous weapons are mostly drones or robots, and are also referred to as “killer robots” or “slaughterbots.” [3]
Autonomous weapons differ from unmanned weapons. An unmanned military drone, for example, is operated remotely by a human operator who makes the decision to kill a target. An autonomous weapon, by contrast, is controlled by AI algorithms that select the targets to kill without a human operator.
“Autonomous weapons are pre-programmed to kill a specific ‘target profile.’ When the weapon is deployed, the AI searches for that ‘target profile’ using sensor data, such as facial recognition. When the autonomous weapon finds a match to a perceived target, it fires and kills.” [4]
It can take a human operator minutes to remotely locate, identify, and kill targets, whereas AI autonomous weapons shorten the “decision cycle” to only milliseconds. [5]
In other words, autonomous weapons can “decide” to kill humans in the blink of an eye.
Drone Swarms
Drone swarms are now increasingly used in warfare. Hundreds or thousands of autonomous drones can be used to “overwhelm defenses, execute coordinated strikes, and disrupt vital infrastructure.”
Militaries also now continually test, update, and deploy new countermeasures in an effort to defend against swarms of drones. [6]
A drone swarm can communicate using a single “distributed” brain, and assemble into complex formations, then regroup into new ones. [7]
Autonomous drone swarms don’t distinguish civilian targets from military targets, and are capable of causing mass harm.
It’s hard to imagine all the ways drones could be used. A 2022 report revealed that a Chinese military contractor had used a drone to deploy a robot dog armed with a machine gun. This means drones can also deliver autonomous weapons “from ground-to-air, air-to-sea, or sea-to-land.”
Autonomous weapons aren’t necessarily complex. For example, a sentry gun is an autonomous weapon that automatically aims and fires at targets detected by its sensors. [8]
This sure makes a “No Trespassing” sign a moot point.
Ways autonomous weapons could go wrong
Autonomous weapons could be used by terrorists.
Autonomous weapons could be hacked.
Autonomous weapons systems are unpredictable. They’re actually programmed to be unpredictable in order to remain one step ahead of enemy systems. [9]
AI systems can pick up biases from the data sets that are used to train them. For instance, Elon Musk’s chatbot Grok recently made antisemitic comments and praised Hitler on X. [10]
Autonomous weapons can select individuals to kill, based solely on sensor data, using facial recognition or other biometric information. Autonomous weapons could be used to target groups of people based on age, gender, race, ethnicity, or religion. In other words, autonomous weapons could be used to target specific groups of people, and could lead to ethnic cleansing and genocide.
The non-profit organization Future of Life Institute posted an interesting 8 minute video on this topic:
Slaughterbots and the urgent fight to stop them
An example of how autonomous weapon sensor data can be tricked was revealed by a Ukrainian drone pilot, who told a journalist that “Russian forces had attacked their positions by disguising themselves in Ukrainian uniforms. Russian troops have also used dozens of civilians as human shields and conducted operations using civilian vehicles or while dressed in civilian clothing.” [11]
Fast and easy
Autonomous weapons use algorithms to make split-second decisions, can be made more cheaply and produced at a larger scale than conventional weapons, and are hard to detect and safe to transport. All of this can lead to proliferation.
If humans are removed from a battlefield, it could lower the threshold to war because military actions would likely be more politically acceptable, and wars would be easier to start and maintain. This could lead to a rapid escalation of war.
The low cost and ease of making and transporting autonomous weapons also mean they’ll end up on the black market, in the hands of terrorists, dictators, or criminals. This would make it possible for one “bad actor” to release an arsenal of autonomous weapons as, in effect, a weapon of mass destruction (WMD).
This proliferation, escalation, and risk of WMDs could destabilize governments worldwide. [12]
AI superintelligence and autonomous weapons
We’re on the brink of AI superintelligence, the point at which AI becomes smarter than humans and we could lose control of it.
It would only take one “jailbreak” of the AI system of an autonomous weapon for something disastrous to happen. Jailbreaks are ways AI systems can be tricked into bypassing the guardrails put in place by their creators, and they're often shared by online communities. [13]
Even without jailbreaks, benign goals could also lead to catastrophic outcomes if AI superintelligence forms an incorrect understanding of what its creators want. As the AI system's capabilities grow, even small misalignments between its objectives and ours could lead to unintended goals and significant deviations. [14]
AI superintelligence could make it especially risky to have drone swarms or robots that communicate using one AI “brain” that could go off the rails and “decide” to kill people.
These stakes are much higher than a little jailbreak glitch with ChatGPT.
We can’t run or hide from this new reality.
Anduril
The demand for military drones is surging.
Anduril Industries, financially backed by Palantir co-founder Peter Thiel, was founded by Palmer Luckey in 2017, and is valued at $14 billion. [15]
Anduril has already sold autonomous weapons to approximately 10 countries. It has contracts with the U.S. Department of Defense, the U.S. Department of Homeland Security, the Australian Defence Force, the United Kingdom's Ministry of Defence, and other government agencies. Anduril’s drones have been used by Ukraine in its war with Russia. [16] [17]
Last year, Anduril began production of autonomous underwater drones to be used by the U.S. Navy (and other countries/agencies) for subsea and seabed warfare, and other uses. [18]
As if our oceans weren’t in enough peril already, with the Atlantic Meridional Overturning Circulation verging on collapse, undersea warfare may well kill off the remaining whales, dolphins, and sea life. [19]
It’s clear that the AI technologies and industries supposedly used to “protect” us are actually, and simultaneously, destroying nature as we know it, violating our privacy, and putting our lives at great risk.
When algorithms decide who lives or dies
Algorithms are incapable of comprehending the value of human life, lack compassion, and operate without human judgment, so they should not be allowed the power to “decide” who lives and who dies. [20] This is both an ethical and a human rights issue.
If the algorithm of an unpredictable autonomous weapon makes the “decision” to use lethal force, who will be held responsible, and accountable, for the damage caused by its use of force?
The United Nations Secretary-General believes autonomous weapons should be prohibited by international law.
I agree. Although it may seem that the ship has sailed, the United Nations is working on international treaties to regulate autonomous weapons.
The organization Autonomous Weapons reports that “out of 195 countries, 129 (66%) are in favor of legally binding instruments while only 12 countries (6%) oppose the idea with another 54 (28%) remaining undecided.”
Per Autonomous Weapons, it seems that the U.S. stance is “existing international humanitarian law, along with national efforts to implement it, adequately address the challenges posed by autonomous weapons systems.” [21]
Uh, I don’t think so.
It’s time to call your lawmakers.
Find links to all past AI Endgame newsletters HERE.
What you can do:
1) Support (and if you can, make donations to) organizations fighting for the oversight of autonomous weapons and for AI Safety:
Pause AI https://pauseai.info/
Future of Life Institute https://futureoflife.org/
Autonomous Weapons https://autonomousweapons.org/
2) Let your Congressional representatives know that you want them to support international oversight of autonomous weapons and AI safety regulations.
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
[1] https://autonomousweapons.org/
[2] https://fsi.stanford.edu/sipr/content/lethal-autonomous-weapons-next-frontier-international-security-and-arms-control
[3] https://thehill.com/policy/defense/4225909-why-the-pentagons-killer-robots-are-spurring-major-concerns/
[4] https://autonomousweapons.org/
[5] https://www.msn.com/en-us/news/world/how-ukraine-s-attack-drones-and-ai-driven-warfare-are-redrawing-the-frontlines-in-kursk-and-beyond/ar-AA1HT9j9
[6] https://smallwarsjournal.com/2025/01/28/the-future-of-warfare-autonomous-technologies-in-modern-conflict/
[7] https://pmc.ncbi.nlm.nih.gov/articles/PMC10030838/
[8] https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1154184/full
[9] https://futureoflife.org/aws/10-reasons-why-autonomous-weapons-must-be-stopped/
[10] https://www.wired.com/story/grok-antisemitic-posts-x-xai/
[11] https://ai-frontiers.org/articles/how-ai-is-eroding-the-norms-of-war
[12] https://futureoflife.org/aws/10-reasons-why-autonomous-weapons-must-be-stopped/
[13] https://www.ai-frontiers.org/articles/can-we-stop-bad-actors-from-manipulating-ai
[14] https://www.ai-frontiers.org/articles/why-racing-to-artificial-superintelligence-would-undermine-americas-national-security
[15] https://dnyuz.com/2025/05/03/see-the-anduril-drones-that-are-taking-ai-driven-warfare-to-new-heights/
[16] https://www.npr.org/2024/07/09/nx-s1-4985981/oculus-ai-weapons-ukraine-palmer-luckey
[17] https://www.dispatch.com/story/news/2025/01/16/anduril-drone-company-defense-tech-contractor-peter-thiel-plans-ohio-factory/77717316007/
[18] https://www.anduril.com/article/anduril-to-open-large-scale-production-facility-for-autonomous-underwater-vehicles/
[19] https://www.sciencealert.com/its-confirmed-a-major-atlantic-ocean-current-is-verging-on-collapse
[20] https://futureoflife.org/aws/10-reasons-why-autonomous-weapons-must-be-stopped/
[21] https://autonomousweapons.org/global-perspectives-on-regulation/