AI Endgame: Will lax AI safety regulations by the Trump Administration doom humanity?
Newsletter #11
November 22, 2024
https://debbiecoffey.substack.com/p/ai-endgame-will-lax-ai-safety-regulations
By Debbie Coffey, AI Endgame
Since the new Trump Administration will bring many changes, I’ve been wondering how it plans to balance AI development with AI safety.
As discussed in AI Endgame newsletters #1 and #2, over 600 AI experts have warned that there’s a very real and imminent risk that AI superintelligence could lead to human extinction.
The AI “arms race”
Despite this great risk to humanity, AI safety regulations are lagging. The biggest obstacle to strong AI safety regulation is the global AI “arms race” (particularly between the U.S. and China). [1]
The nation that controls the future of AI is likely to have unrivaled economic and military power. [2]
AI is a matter of national security because AI technology is increasingly used in warfare. AI can be used for cyber weapons (software, viruses, or intrusion tools that can disrupt critical infrastructure such as military defense systems, communications, electric power, smart grids, financial systems, and air traffic control), autonomous drone swarms, and autonomous weapons (weapons that select and engage targets without human intervention). [3]
Countries want to dominate AI technology for national security and economic reasons. And it’s no wonder that AI companies are motivated to be the first to develop AI technologies and dominate the market: they want to rake in trillions of dollars.
Let’s “follow the money”
It’s estimated that AI could add over $15 trillion to the global economy by 2030. By then, China is projected to see an estimated 26% boost to gross domestic product (GDP), and North America an estimated 14.5% boost, a combined gain of about $10.7 trillion, or almost 70% of the projected global economic gains. [4]
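If you’d like to check that arithmetic, here’s a minimal sketch in Python. The per-region dollar figures (roughly $7.0 trillion for China and $3.7 trillion for North America, against a $15.7 trillion global total) are my assumptions drawn from the PwC study cited above, not numbers stated in this newsletter:

```python
# Back-of-the-envelope check of the PwC projections cited above.
# The dollar figures (in trillions) are assumptions taken from PwC's
# "Sizing the prize" report, not from this newsletter itself.
global_gain = 15.7        # projected total AI boost to the global economy by 2030
china_gain = 7.0          # projected gain for China (a ~26% boost to its GDP)
north_america_gain = 3.7  # projected gain for North America (a ~14.5% boost)

combined = china_gain + north_america_gain
share = combined / global_gain

print(f"Combined China + North America gain: ${combined:.1f} trillion")
print(f"Share of global gains: {share:.0%}")  # ~68%, i.e. "almost 70%"
```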
As AI use increases in all aspects of our lives, the profits of countries and private companies will continue to increase.
With trillions of dollars on the line, AI safety is taking a back seat.
This is especially alarming because in 2024, the U.S. State Department issued a report warning that AI could become uncontrollable. [5]
Regulation is needed because most AI is in the hands of private companies. The U.S. government funds or owns very few key AI technologies. Although the Department of Defense and other agencies funded the start of most AI development, private companies have controlled and pushed AI development for the past decade. [6]
We’re standing at the edge of the cliff of AI superintelligence: AI that will be smarter than humans, and that we could lose control of. Lax regulations at this critical point in time could easily lead to human extinction.
What are the odds?
President-elect Trump has voiced concern that deepfakes could trigger a nuclear war. However, when I searched online for his thoughts about the risk of AI causing human extinction, I could only find one reference, described as: “Trump ‘gestured’ to the idea that an AI system could ‘go rogue’ and overpower humanity.” [7]
Elon Musk (who’s been joined at the hip with President-elect Trump) is aware of the risks of AI and has warned that AI could cause human extinction. [8] Musk estimated a 20% chance of this happening, which is much lower than the estimates of many other AI experts. (Even so, 20% is a huge risk.)
For example, Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, estimates the risk of AI causing human extinction is 99.999999%. [9]
To put this into perspective: if someone warned you there was “only” a 20% chance that the food on your plate was poisonous, and that one bite would kill you, would you eat it? Would you feed this food to your kids or grandkids?
Elon Musk also acknowledged that superintelligence (the point where we could lose control of AI) will likely be reached by next year (2025). [10]
No wonder Musk has been building rockets and wants to colonize Mars.
What are current U.S. AI safety regulations & policies?
In October 2024, the White House released the first National Security Memorandum on AI, “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence” [11] and a fact sheet. [12]
The priorities of this strategy are:
· The United States must lead the world’s development of AI.
· There must be appropriate safeguards while achieving national security objectives.
· The U.S. must work to advance international AI governance to manage AI risks.
This Memorandum also has a provision to prohibit the deployment of nuclear weapons without human oversight.
On that last point, we do have a bit of good news (if you want to call it that). On Nov. 16, President Biden met with Chinese President Xi Jinping, and they agreed that any decision to use nuclear weapons should be controlled by humans, not by artificial intelligence. [13]
Other U.S. AI Safety actions have included:
In October 2023, President Biden signed an Executive Order that required AI developers to share safety test results with the government, set standards to help ensure that AI systems are safe, and add protections against the use of AI to engineer dangerous biological materials. [14]
In February 2024, the U.S. Department of Commerce formed the Artificial Intelligence Safety Institute Consortium (AISIC). [15]
Will the Trump Administration turn a blind eye to AI safety?
What can we expect from the new Trump Administration? President-elect Trump has a history of cutting regulations. [16]
The Brookings Institution speculates that the Trump Administration will relax agency regulation on AI issues, taking “a less risk-averse approach than is reflected in the Biden administration’s October 2024 National Security Memorandum on AI.”
This means there will be less focus on AI safety. With the threat of human extinction looming, it’s urgent that you tell your government representatives to study the risks of AI, push for AI safety regulations, and make AI safety a priority in all negotiations with other countries.
The Brookings Institution also predicts that “President Trump will repeal the AI Executive Order (EO) issued by President Biden in October 2023” and that “[t]he incoming administration may also have a different posture regarding voluntary ‘soft law’ frameworks.” [17]
“Soft law” means that instead of legally binding regulations, there would be only voluntary recommendations, likely enforced through “self-policing.” Think about how well self-policing has worked with Boeing.
The Brookings Institution also speculates that the Trump Administration may reduce antitrust actions related to AI (making the tech giants even more powerful), and may continue or even expand Biden-era export controls on AI.
Rep. Jay Obernolte, a California Republican, and the only member of Congress with a master’s degree in artificial intelligence, believes one problem is that most lawmakers don’t even know what AI is. Obernolte stated, “Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what A.I. is.” [18]
The risks of AI superintelligence causing human extinction are more pressing than all other issues being addressed in Congress (since the other issues won’t matter if we’re all dead).
Even if congressional representatives do have an inkling about the dangers of AI, it’s hard for legislators and regulators to keep up with the rapid development of AI technology. By the time a congressional committee can hold hearings or draft legislation, AI will likely already be several generations ahead. [19]
Vice President-elect J.D. Vance, whose political career has been bankrolled by tech billionaire Peter Thiel (a co-founder and chairman of Palantir), does not support AI regulation. [20]
For now, our hopes for any AI safety “guardrails” may be riding on Elon Musk, who has the ear of President-elect Trump.
The risk of AI superintelligence causing human extinction is the most critical issue in human history
As much as we’d like to stick our heads in the sand and ignore the risks of AI, these risks are happening here and now.
If you consider how people were up in arms about Covid vaccines and wearing masks, it’s ironic there’s almost a total blackout on social media about the risk of AI causing human extinction.
Who would ever have imagined that we’d reach a point in history where nuclear war would seem like the lesser of two evils (compared to AI superintelligence causing human extinction)?
I’m writing this newsletter to help you keep a close eye on AI issues in the news. Please share important AI information with others and help spread the word.
NEXT WEEK: Undersea Datacenters Boost the Industrialization of Oceans
Find links to all past AI Endgame newsletters HERE.
(NOTE: After writing this article, I received a newsletter from the Center for AI Safety and was relieved to learn there are a handful of other people (besides Musk) in Trump’s inner circle who are concerned about AI safety. You can read this Center for AI Safety newsletter HERE.)
What you can do:
1) Call your representatives and tell them you “want regulations to pause AI now, until strong AI safety laws are enacted.”
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
2) Support (and if you can, make donations to) organizations fighting for AI Safety:
Pause AI
Center for Humane Technology
https://www.humanetech.com/who-we-are
The Center for AI Safety
[1] https://www.msn.com/en-us/news/technology/biden-administration-fast-tracks-ai-national-security-citing-global-arms-race-with-china/ar-AA1sUGP9
[2] https://www.forbes.com/sites/drewbernstein/2024/08/28/who-is-winning-the-ai-arms-race/
[3] https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world
[4] https://www.pwc.com/gx/en/issues/artificial-intelligence/publications/artificial-intelligence-study.html?src_trk=em669a9ac9d8fbb0.441422611426182161
[5] https://www.cnn.com/2024/03/12/business/artificial-intelligence-ai-report-extinction/index.html
[6] https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html
[7] https://time.com/7174210/what-donald-trump-win-means-for-ai/
[8] https://www.businessinsider.com/elon-musk-20-percent-chance-ai-destroys-humanity-2024-3?op=1
[9] https://pauseai.info/pdoom
[10] https://www.businessinsider.com/elon-musk-20-percent-chance-ai-destroys-humanity-2024-3?op=1
[11] https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
[12] https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/
[13] https://www.npr.org/2024/11/16/nx-s1-5193893/xi-trump-biden-ai-export-controls-tariffs
[14] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[15] https://www.nist.gov/news-events/news/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated-ai
[16] https://www.brookings.edu/articles/examining-some-of-trumps-deregulation-efforts-lessons-from-the-brookings-regulatory-tracker/
[17] https://www.brookings.edu/articles/ai-policy-directions-in-the-new-trump-administration/
[18] https://www.nytimes.com/2023/03/03/technology/artificial-intelligence-regulation-congress.html
[19] https://www.forbes.com/sites/drewbernstein/2024/08/28/who-is-winning-the-ai-arms-race/
[20] https://www.nytimes.com/2024/07/17/technology/vance-ai-regulation.html