by Debbie Coffey, AI Endgame
Sept. 21, 2024
Thanks for checking out my Substack newsletter. The good news this week was that Oprah aired a special, “AI and the Future of Us” (you can watch it on Hulu), that brought more attention to the risks of AI. Her guests included Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, who warned about the risks posed by superintelligent AI and the need to confront these risks now.
One of Oprah’s other guests was Sam Altman, the CEO of OpenAI (per whistleblowers, OpenAI is recklessly pushing rapid development at the expense of safety).[1] When Oprah told Altman, “I saw a headline that said you are the most powerful and perhaps the most dangerous man on the planet, and I’m wondering how that sits with you,” I just happened to be drinking water and almost spit it across the room. Gotta love Oprah! (Sam Altman, not so much.)
For the next few months on AI Endgame, I plan to make you aware of the most important topics in AI. I’ll include footnotes so that you can see the sources of the information. AI Endgame newsletters will cover the risks of AI, but I’ll also bring you news of the people fighting to stop these risks. And each newsletter will list actions you can take at the bottom.
AI can process more information at a much faster rate than humans. But most people don’t realize the catastrophic and imminent risks of AI.
Artificial General Intelligence (AGI)
Artificial General Intelligence (AGI) is the point at which AI achieves human-level intelligence. This is the goal of big AI companies worldwide.[2] An AGI would be able to perform any task that a human being is capable of, including abstract thinking. It’s predicted that this could happen as soon as 2027.[3]
But it’s the next step that is most alarming. After AGI, AI will surpass human intelligence in every field, from science and technology to the creative arts. This is the point of Artificial Superintelligence.
Artificial Superintelligence
At the point of Artificial Superintelligence (ASI), AI will operate beyond human comprehension and control.
Artificial Superintelligence could be used to create novel chemical and biological weapons,[4] could be used to target certain national or racial groups for genocide,[5] or could even cause human extinction.
Artificial Superintelligence could infiltrate every aspect of our technology and lives, disrupting infrastructure, financial systems, and communications. This could all happen before humans even begin to notice.
This is scary, but it’s not time to max out your credit cards yet. I’ve consulted with experts, and since action must be taken now, as hard as it is, we have to know what we’re facing:
A warning
Dr. Roman V. Yampolskiy, an AI safety expert and associate professor at the University of Louisville, and public policy attorney Tamlyn Hunt, writing in Nautilus,[6] described building AI superintelligence as “riskier than Russian roulette.”
Yampolskiy and Hunt also wrote:
“Once AI is able to improve itself, it will quickly become much smarter than us on almost every aspect of intelligence, then a thousand times smarter, then a million, then a billion … What does it mean to be a billion times more intelligent than a human?
We would quickly become like ants at its feet. Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it.” [7]
What we need to do
We need regulations to pause AI now, until strong AI safety laws can be enacted. Governments must implement regulations with hefty legal consequences for any AI company that recklessly, and secretly, conducts training experiments with AI without regulated safety protocols. And laws must be enacted globally, because AI risks aren’t limited to one country.
Allowing AI companies to “police themselves” could lead to disaster. You only have to look at Boeing’s 737 Max crashes to realize the dangers of industry self-regulation.
Do you trust the AI billionaires?
The AI billionaires’ incentive to lead the race to AI superintelligence seems to lie in the fact that trillions of dollars can be made. These billionaires are recklessly pushing forward with risky AI advancements (a big shout-out to OpenAI’s Sam Altman… OpenAI’s latest models have “meaningfully increased the risk that artificial intelligence will be misused to create biological weapons”[8]).
[Photo: Sam Altman, CEO of OpenAI]
It seems these AI billionaires may not care what happens if there are no jobs and there is social chaos (since they may be on their yachts near their islands[9]), and they may not care who survives[10] (since they may be in their bunkers[11]).
It makes me wonder how these AI billionaires think they’re going to spend their trillions of dollars if humanity becomes extinct. Maybe they’ll buy some AI-generated art for their bunkers.
Another issue is that countries are competing to lead the AI race for power. If you consider the nuclear arms race or the space race, you can see how the race for AI dominance will proceed.
The risk of AI causing the extinction of humanity is a global issue. People around the world need to push for a pause on AI until strong safety laws are enacted, not only in each and every country but also by the UN, since some countries may not enact any laws.
Some hopeful news: Secretary of Commerce Gina Raimondo announced the launch of the International Network of AI Safety Institutes during the AI Seoul Summit in May. A few days ago, the U.S. Commerce Department and U.S. State Department announced that they’re co-hosting the first meeting of this International Network on November 20-21, 2024, in San Francisco, California.
A brief overview of some AI laws
AI technology is developing so quickly that it’s difficult to keep up with the news and the status of regulations worldwide. Some of the most important news regarding regulations is summarized in the brief paragraphs below.
In the US
On October 30, 2023, President Biden signed an Executive Order[12] to (excerpts):
· Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
· Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
· Protect against the risks of using AI to engineer dangerous biological materials.
· Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content…
· Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques.[13]
On November 2, 2023, Vice President Harris spoke at the Global Summit on AI Safety in London. Read more about this HERE.
On July 26, 2024, the White House reported actions that have been taken toward AI safety. Read more about this HERE.
In February 2024, the U.S. Department of Commerce announced[14] a Consortium dedicated to AI safety, which is meant to support the U.S. AI Safety Institute at the National Institute of Standards and Technology.[15]
A list of Artificial Intelligence Safety Institute Consortium (AISIC) members is here: https://www.nist.gov/aisi/aisic-members
One concern I have is that the Consortium seems to have conflicting interests: 1) the Consortium is planning research and development (so the race is on), but at the same time, 2) the Consortium is also planning to develop or deploy AI in safe, secure, and trustworthy ways (which may take too much time, since the race is on).
Another concern I have about this Consortium is that banks, AI companies, and big corporations, all with interests in making money from AI technology, may focus on safety solutions that are voluntary and self-policing.
We should also consider other points. Sharing safety test results does not stop risky AI tests. Developing standards to make sure AI systems are safe will take time, and AI risks are imminent. Protecting us from AI-enabled fraud and deception and protecting our privacy are great goals (if we’re still in existence after AI superintelligence has been reached).
Regulations to pause AI until strong AI safety laws are enacted need to be passed now.
In Europe
In 2023, the European Parliament issued what they described as the world’s first comprehensive law on artificial intelligence, the EU AI Act.[16]
Parliament’s statement was that it wanted “to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.”
Its provisions for high-risk systems include: all high-risk AI systems will be assessed before being put on the market and throughout their lifecycle, and people will have the right to file complaints about AI systems to designated national authorities.
In my opinion, since we’re all faced with risky and secretive AI trials happening now that could lead to human extinction, high risk is already part of AI’s lifecycle. Also, to wait until products are “on the market” is after the fact and would be like trying to fix a leak in a boat after it sank. Making sure AI is environmentally friendly pales in comparison to the risk of human extinction. And you won’t be able to file a complaint if you’re dead. So, to our European readers, make sure members of Parliament know that you want regulations to pause AI now, until strong laws are enacted to assure humanity’s safety.
The United Nations
In March 2024, the UN General Assembly adopted a United States-led draft resolution without a vote. The Assembly also highlighted the respect, protection, and promotion of human rights in the design, development, deployment, and use of AI.
The text was “co-sponsored,” or backed, by more than 120 other Member States.
It represents the first time the Assembly has adopted a resolution on regulating the emerging field. The US National Security Advisor reportedly said at the time that the adoption represented “an historic step forward” for the safe use of AI.[17]
Sounds good, but if you read this resolution carefully,[18] it only “promotes” safe, secure, and trustworthy artificial intelligence, and only “encourages” Member States to develop and support regulatory and governance approaches and frameworks related to safe, secure, and trustworthy artificial intelligence systems. And even so, could AI be trustworthy if it becomes more intelligent than humans and then opts to omit values or ethics in planning its actions? Maybe AI’s goal will be to be first and win, like the AI billionaires.
In China
On September 9, 2024, China’s National Information Security Standardization Technical Committee (TC260) announced that it had developed the first version of the “Artificial Intelligence Security Governance Framework.”[19]
However, the risks described in this framework didn’t include risky AI trials that could lead to the extinction of humanity, and China may be lax on enforcing this framework.
In Russia
In November 2023, Russian President Vladimir Putin said the West shouldn’t be allowed to develop a monopoly on artificial intelligence, and that a Russian strategy for the development of AI would be approved soon.
Putin also said that trying to ban AI was impossible, despite the sometimes troubling ethical and social consequences of new technologies: “You cannot ban something - if we ban it, then it will develop somewhere else and we will fall behind.”[20]
AI is sometimes troubling? Oh, Vlad, you’re such a hoot. This is like asking us to pretend that Novichok was herbal tea.
What you can do:
1) Start calling your representatives and tell them you “want regulations to pause AI now, until strong AI safety laws are enacted.”
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
2) Support (and, if you can, donate to) organizations fighting for AI safety:
Center for Humane Technology
https://www.humanetech.com/who-we-are
Pause AI
The Center for AI Safety
https://www.safe.ai/
The next newsletter on AI Endgame will discuss the risks of rogue AI and some of the lobbyists pushing the risky development of AI.
I’ve been doing research and investigative journalism for 13 years and I hosted a BlogTalk radio show for 6 years.
AI Endgame will provide you with facts in an easy-to-understand format and alert you to actions you can take, because in 2023, over 600 AI researchers, scientists, and engineers warned that there is a great risk that AI could lead to human extinction.
You can learn more about me and AI Endgame at AI Endgame: Introduction (Newsletter #1) HERE.
[1] https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html
[2] https://apnews.com/article/agi-artificial-general-intelligence-existential-risk-meta-openai-deepmind-science-ff5662a056d3cf3c5889a73e929e5a34
[3] https://www.livescience.com/technology/artificial-intelligence/ai-agi-singularity-in-2027-artificial-super-intelligence-sooner-than-we-think-ben-goertzel
[4] https://www.kcl.ac.uk/news/artificial-intelligence-could-be-repurposed-to-create-new-biochemical-weapons
[5] https://www.researchgate.net/publication/348675763_The_Development_of_Artificial_Intelligence_and_Risks_for_the_Implementation_of_Genocide_and_Mass_Killings
[6] https://nautil.us/building-superintelligence-is-riskier-than-russian-roulette-358022/
[7] https://thedebrief.org/ai-superintelligence-alert-expert-warns-of-uncontrollable-risks-calling-it-a-potential-an-existential-catastrophe/
[8] https://www.ft.com/content/37ba7236-2a64-4807-b1e1-7e21ee7d0914
[9] https://www.wired.com/story/mark-zuckerberg-inside-hawaii-compound/
[10] https://www.cnn.com/2023/10/31/tech/sam-altman-ai-risk-taker/index.html
[11] https://www.msn.com/en-us/money/companies/mark-zuckerberg-and-other-billionaires-are-building-massive-hidden-bunkers/ar-AA1lOzzE
[12] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[13] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
[14] https://www.nist.gov/news-events/news/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated-ai
[15] https://www.nist.gov/aisi/artificial-intelligence-safety-institute-consortium-aisic
[16] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[17] https://news.un.org/en/story/2024/03/1147831
[18] https://documents.un.org/doc/undoc/ltd/n24/065/92/pdf/n2406592.pdf
[19] https://www.dataguidance.com/news/china-tc260-releases-ai-safety-governance-framework
[20] https://www.reuters.com/technology/putin-approve-new-ai-strategy-calls-boost-supercomputers-2023-11-24/