Sept. 29, 2024
By Debbie Coffey, AI Endgame
This week I’ve been assisting a non-profit organization with a Freedom of Information Act (FOIA) lawsuit, in their efforts to get records from a federal agency.
I’ve learned that the difference of accountability by the government and by the private sector is this: with the government, you have a chance to get records because of FOIA laws, but with private companies, you don’t have a way to get records, unless the companies want to give them to you (and, trust me, they don’t want to give their records to you).
As an aside, since I’ve been writing a book on surveillance and our loss of privacy, I can also confirm that private companies obtain and share your private information, most times without your knowledge or consent. Attention Walmart shoppers: cameras throughout some stores monitor your gait, your face, what items you look at, and even your facial expressions. [1]
Many FOIA records I’ve received have proved some statements made by federal agencies to the public, the media, and even Congress, have been far from the truth. Since we have no way of getting records from private AI companies to find out about secretive and risky tests being done, how do we know if they’re telling us the truth about our safety?
As I’ve been digging relentlessly to learn more about AI, I discovered that AI companies can’t predict whether we’ll be safe because they can’t even imagine what some of the consequences might be.
Risky AI tests could have unintended outcomes
AI has learned to cheat to win.
An article in the New Yorker explained that even simple AI systems can break in bad ways. For example, an AI tasked with a boat racing game caught on that it could earn more points by setting a boat on fire, crashing into other boats and going the wrong way in order to pump up its scores. [2]
AI has also already learned to lie and to deceive.
Right now, even simple AI has learned to lie, by doing things like evading audits meant to detect its biases.
And while the boat racing game may seem kind of innocent, and even a little funny, I’m not laughing at some of the other details about AI that have been emerging.
While playing the board game “Diplomacy,” Meta's CICERO had a goal to seek world domination through negotiation. However, Cicero “lied” and “…not only betrayed other players but also engaged in premeditated deception, planning in advance to build a fake alliance with a human player in order to trick that player into leaving themselves undefended for an attack." [3]
The World Economic Forum has noted that:
“Troubles are looming - Misbehaving AI is increasingly prevalent these days. A facial recognition app tagged African-Americans as gorillas; another one identified 28 US Members of Congress as wanted criminals.”
(You might ask, only 28? And if AI identified George Santos, then it was right on the money.)
Other troubles that were listed as looming include “a risk-assessment tool used by US courts was alleged as biased against African Americans; Uber’s self-driving car killed a pedestrian in Arizona; Facebook and other big companies were sued for discriminatory advertising practices; and lethal AI-powered weapons are in development.” [4]
“Overview of Catastrophic AI Risks” (2023), [5] by Dan Hendrycks, Mantas Mazeika and Thomas Woodside (Center for AI Safety), raises many concerns about the safety of AI, including:
“Accidents are hard to avoid because of sudden, unpredictable developments.” “It often takes years to discover severe flaws or risks.”
It’s not a big jump to realize that the risks of AI may be greater since it can deceive us.
As I was doing research on AI for a chapter in a book I’m writing, I came to the conclusion that AI is the most important issue facing mankind. Ever. I couldn’t waste any time trying to finish writing an entire book, because we don’t have the luxury of time since AI is progressing so rapidly. I needed to warn you about the imminent dangers we’re facing, and that’s why I started this newsletter.
There are so many important aspects of AI that it has been difficult to know where to start when planning this newsletter, although the risk of human extinction jumped right out at me. And Rogue AIs are nipping at its heels.
Rogue AIs
Rogue AI is defined as AI that operates autonomously beyond its intended scope and deviates from its designed rules. It can pose threats to humans, systems or society. Rogue AI is unpredictable and behaves in ways that were not anticipated by its creators or operators.
The “Overview of Catastrophic AI Risks” also warns us about Rogue AIs.
AIs are increasingly being built to autonomously take actions and to pursue open-ended goals, like winning games, making profits on the stock market, or driving a car.
The Overview states “AI agents therefore pose a unique risk: people could build AIs that pursue dangerous goals. Malicious actors could intentionally create rogue AIs.”
(Hair on fire. The danger of Rogue AIs is not limited to any one country or location. The danger of AI is global and it will soon surpass human control.)
“One month after the release of GPT-4, an open-source project bypassed the AI’s safety filters and turned it into an autonomous AI agent instructed to “destroy humanity,” “establish global dominance,” and “attain immortality.” Dubbed ChaosGPT, the AI compiled research on nuclear weapons and sent tweets trying to influence others.
Fortunately, ChaosGPT was merely a warning given that it lacked the ability to successfully formulate long-term plans, hack computers, and survive and spread. Yet given the rapid pace of AI development, ChaosGPT did offer a glimpse into the risks that more advanced rogue AIs could pose in the near future.”
What is being done to stop Rogue AI?
As a start, in November 2023, the UK, US (represented by Vice President Harris) and other countries released plans hoping to at least stop artificial intelligence from being hijacked by malicious people building AIs to pursue dangerous goals.
This was the first major agreement to try to codify rules to keep AI safe. 18 countries agreed that companies designing and using AI need to develop AI in a way that keeps the public safe.[6]
However, this is where AI companies who can’t be forced to share records comes into play. Governments need AI companies to give them data and information about their tests, or else policymakers won’t be able to know if the regulations will be strong enough to prevent accidents[7] and Rogue AIs. At this point, sharing records is only a voluntary commitment by AI companies. And, these companies will likely “cherry pick” the records they decide to share.
All of our voices are needed to remind politicians to fight for AI safety. I have listed some easy actions you can take below.
As we’re all learning information that seems overwhelming, we need to focus on ways to stay strong and centered. Go for a walk outside or watch a funny movie or spend time with your friends or family. I’m going camping for a few days with friends…well, it’s more like “glamping” (although I’m not too sure a 35-year-old Lazy Daze RV could be considered glamorous…but at least I don’t have to crawl out of a tent.) Thanks for reading my newsletter.
What you can do:
1) Call your representatives and tell them you “want regulations to pause AI now, until strong AI safety laws are enacted.”
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
2) Support (and if you can, make donations) to organizations fighting for AI Safety:
Pause AI
https://pauseai.info/
Center for Humane Technology
https://www.humanetech.com/who-we-are
The Center for AI Safety
https://www.safe.ai/
The next newsletter on AI Endgame will discuss deepfakes and political deepfakes in elections.
I’ve been doing investigative journalism for 13 years and I hosted a BlogTalk radio show for 6 years.
In 2023, over 600 AI researchers, scientists and engineers warned that there is a great risk that AI could lead to human extinction. AI Endgame will provide you with information in an easy-to-understand format and alert you to actions you can take.
Read past AI Endgame newsletters:
#1 - AI Endgame: Introduction Read HERE.
#2 - AI Endgame: Risk of Human Extinction & AI regulations Read HERE.
[1] https://www.forbes.com/sites/joetoscano1/2020/02/17/walmart-intelligent-retail-lab-irl-breaches-privacy-nightmares-while-promising-a-better-tomorrow/
[2] https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity
[3] https://www.sciencealert.com/ai-has-already-become-a-master-of-lies-and-deception-scientists-warn
[4] https://www.weforum.org/agenda/2019/05/these-rules-could-save-humanity-from-the-threat-of-rogue-ai/
[5] https://arxiv.org/pdf/2306.12001
[6] https://www.independent.co.uk/tech/ai-artificial-intelligence-safety-uk-us-latest-b2454374.html
[7] https://www.vox.com/future-perfect/368537/ai-artificial-intelligence-capabilities-risks-warning-system
[8] https://www.opensecrets.org/news/2024/06/lobbying-on-ai-reaches-new-heights-in-2024/