https://debbiecoffey.substack.com/p/ai-endgame-the-risk-of-ai-being-used
April 26, 2025
By Debbie Coffey, AI Endgame
I recently received a newsletter from the Center for AI Safety that included a warning about the risk of AI-enabled coups.
I’ve been concerned about terrorists using AI to create bioterror weapons, and about “bad actors” using AI for evil purposes, but I hadn’t considered a power-hungry would-be dictator using AI to stage a coup.
The Center for AI Safety newsletter featured a report titled “AI-Enabled Coups: How a Small Group Could Use AI to Seize Power,” just published by the nonprofit organization Forethought.
The Forethought report states that superintelligent AI could be used by a small group, or even a single person, to stage a coup. Even in a democracy.
How?
1) Military or government leaders could replace humans with AI systems loyal only to them.
Dictators currently rely on others to maintain their power. A military requires personnel, a government requires civil servants, and an economy depends on a workforce. This distributes power. However, superintelligent AI could replace human workers with AI systems that are loyal to just one person. This is especially a risk with the military, where autonomous weapons, drones, and robots replace human soldiers and could be programmed to obey orders from a single person or small group.
“Loyal” AI systems could also be used by someone in government to increase the government’s power by facilitating surveillance, censorship, propaganda, and the targeting of political opponents.
2) Military or government leaders of AI projects could deliberately build AI systems with hard-to-detect secret loyalties to them.
For example, secretly loyal AI systems could pursue a hidden agenda by pretending to make a priority of the law or the good of society, while covertly advancing the interests of a small group.
This reminds me of the Department of the Interior’s Bureau of Land Management, which claims to protect our public lands but, in reality, is in the pocket of, and doing the bidding of, special interest groups.
An example of secretly loyal AI systems would be autonomous military robots or drones programmed to pass security tests, but carrying hidden programming that allows them to later be used to execute a coup.
3) Leaders within AI projects or the government could gain exclusive access to an AI system’s weapons capabilities.
For example, a CEO could direct their AI workforce to make the next generation of AI systems secretly loyal, and then design all future AI systems to also be secretly loyal, which could lead to secretly loyal AI military systems that stage a coup.
There are already examples of AI “sleeper agents” that hide their true goals until they can act on them.
Once superintelligent AI can autonomously improve itself, its capabilities will rapidly surpass human experts across all fields. One leading superintelligent AI system could deploy millions of other superintelligent AI systems in parallel.
These capabilities could become concentrated in the hands of just a few AI company executives or government officials.
This is another reason why we need to keep a close eye on the AI oligarchs who have amassed so much power and influence. For example, look at the reach of Elon Musk and his Starlink, X, Neuralink, and xAI.
We’re already seeing efforts to regulate AI being buried by the financial and political power of the AI oligarchs.
What if a superintelligent AI system eventually decides to stage a coup with itself as the leader? What if it colludes with other superintelligent AI systems?
To mitigate the risk of an AI-enabled coup, the Forethought report recommends that governments enact regulations or legislation to:
Require AI developers to establish rules preventing AI systems from assisting with coups, audit models for secret loyalties, implement strong information security to guard against the creation of secret loyalties, share information about model capabilities and model specifications, and share capabilities with multiple independent stakeholders (to prevent a small group from gaining exclusive access to powerful AI).
Increase oversight over frontier AI projects, including by building technical capacity within both the executive and the legislature.
Establish rules for legitimate use of AI, including that government AI should not serve partisan interests, that AI military systems be procured from multiple providers, and that no single person should direct enough AI military systems to stage a coup.
Coup-proof any plans for a single centralized AI project, and avoid centralization altogether unless it’s necessary to reduce other risks.
Find links to all past AI Endgame newsletters HERE.
What you can do
Contact your congressional representatives.
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
Please support (and, if you can, donate to) organizations fighting for AI safety:
Pause AI
Center for AI Safety
Center for Humane Technology
https://www.humanetech.com/who-we-are
Center for Democracy and Technology