AI Endgame: Anthropic’s new AI makes it easier for terrorists to create bioweapons or engineer a pandemic
Newsletter #36
https://debbiecoffey.substack.com/p/ai-endgame-anthropics-new-ai-makes
By Debbie Coffey, AI Endgame
Well, I’m glad that I had a chance to go camping for a few days, because now I have to write about one hair-raising prediction about the risks of AI that has quickly become a reality.
I received a newsletter from the Center for AI Safety discussing Anthropic’s newest AI models, Claude Opus 4 and Claude Sonnet 4, which were released last week.
The Center for AI Safety noted “Anthropic’s Chief Scientist Jared Kaplan told TIME that malicious actors could use Opus 4 to ‘try to synthesize something like COVID or a more dangerous version of the flu - and basically, our modeling suggests that this might be possible.’”
The Center for AI Safety gave an example: “one researcher showed that Claude Opus 4's WMD safeguards can be bypassed to generate over 15 pages of detailed instructions for producing sarin gas.”
Sarin gas is a colorless, odorless, highly toxic nerve agent that can cause suffocation and death within minutes of exposure.
This takes the term “letting the cat out of the bag” to a whole new level. Did you ever imagine that we’d be facing these increased threats?
The double-edged sword
Many AI models are dual-use. In other words, AI is useful for beneficial research, but could also be misused in extremely harmful ways.
Until now, virology knowledge has largely been limited to a small number of experts. But new AI models outperform human virologists even in their own areas of expertise, including complex virology procedures, and this has lowered the barriers to the development of biological weapons. [1]
“It’s not just Opus 4: several frontier models outperform human experts in dual-use virology tests.” [2]
A report issued by the Center for AI Safety and SecureBio notes examples of this, and states “OpenAI's o3… outperforms 94% of expert virologists when compared directly on question subsets specifically tailored to the experts' specialties.” [3]
Scheming and deception
And as if things couldn’t get any worse, “…Apollo Research found an early Claude Opus 4 version exhibited ‘scheming and deception,’ and advised against its release.” [4]
Oh great, Claude can hide what it’s doing.
Rolling back safety on weapons of mass destruction (WMD)
One very troubling fact is that Anthropic rolled back its own safety and security commitments before releasing Opus 4.
This means Anthropic paved the way for the release of Opus 4, despite knowing its dangerous capabilities. Although Anthropic came up with a plan for safety protections, many members of the public think these “protections” are insufficient.
“Anthropic says it implemented internal fixes; however, it doesn’t appear that Anthropic had Apollo Research re-evaluate the final, released version.” [5]
The fox guarding the henhouse
So much for relying on AI companies to self-police, huh? The excuse is that these companies “face pressure” to “rush releases” of their AI models. However, releasing AI without strong guardrails could have deadly consequences.
We need AI regulations. Now.
Why don’t we already have strong federal regulations? Regulatory capture. This occurs when special interest groups “buy” or co-opt policymakers, political groups, or regulatory agencies (which were created to act in the public’s interest) to further their own ends. Regulatory capture enables companies to dictate the rules (or lack thereof), even if what benefits them is harmful to the public.
One example that comes to mind: Monsanto and other chemical companies fund the Biotechnology Innovation Organization (BIO), an association that represents the interests of the biotechnology industry. BIO not only sends lawmakers mark-ups of pending legislation, but also writes legislation itself and sends it to lawmakers to submit.
As you can clearly see, regulatory capture could lead to corruption within government.
Anthropic
As an interesting aside, Amazon has invested billions of dollars into Anthropic.
Dario Amodei (who previously served as a Vice President at OpenAI) is a co-founder and the CEO of Anthropic. Anthropic’s Board of Directors includes Dario Amodei, Daniela Amodei, Yasmin Razavi, and Jay Kreps. Netflix Chairman and co-founder Reed Hastings also just came on board. [6]
PauseAI
There are actions you can take right now to try to stop or lessen the risks of AI.
PauseAI, a grassroots organization doing amazing work on a shoestring budget, sent an action alert that I’m sharing directly with you below.
My thanks to the many PauseAI volunteers around the world.
PauseAI US Nationwide Action Workshop
Shape AI's future in your state. RSVP now.
Join our Action Workshop this Friday, May 30th @ 6pm ET / 3pm PT
AI policy is like the Wild West, with companies racing to build smarter-than-human Artificial Intelligence and lawmakers failing to keep up. Meanwhile, leading researchers have warned that powerful AI could cause global catastrophe if left unchecked.
We desperately need guardrails to prevent development of the most dangerous AI models. While the federal government’s attitude toward AI regulation remains uncertain, state governments could play a critical role in preventing AI danger. Bills in New York, Illinois, and other states point a way forward for the rest of the country to follow.
Join our online action workshop to learn more about state-level AI policy – and how you can convince your state lawmakers to lead on this crucial issue.
Sign Our Petition!
Want to take action today before the workshop? Read and sign our petition now and join the many voices already taking a stand for a global treaty.
Find links to all past AI Endgame newsletters HERE.
What you can do:
1) Let your Congressional representatives know that you want AI regulations.
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
2) Support (and, if you can, donate to) organizations fighting for AI Safety:
Pause AI
Center for AI Safety
[1] https://www.ai-frontiers.org/articles/ais-are-disseminating-expert-level-virology-skills
https://www.virologytest.ai/
[6] https://www.nbcchicago.com/news/business/money-report/anthropic-appoints-netflix-chairman-reed-hastings-to-ai-startups-board-of-directors/3755365/