by Debbie Coffey, AI Endgame
September 15, 2024
In 2023, over 600 top Artificial Intelligence (AI) researchers, scientists, engineers and CEOs signed a statement prepared by the Center for AI Safety, warning of the risk that AI could lead to human extinction.
The statement consisted of only 22 words, so that signatories wouldn’t disagree over minor details. The warning was:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” [1]
In 2014, in a BBC interview, the late physicist Stephen Hawking “warned AI could spell the end of the human race.” Hawking said it would take off on its own and redesign itself at an ever-increasing rate. [2]
I’m Debbie Coffey, and I’ve been a researcher and investigative journalist for thirteen years. I’ve filed hundreds of Freedom of Information Act (FOIA) requests with federal agencies to obtain records for non-profit organizations.
I hosted a BlogTalk radio show for six years with guests who were authors, journalists, documentary filmmakers and experts in public lands issues and animal welfare issues.
Throughout the course of that radio show, we discussed topics including the contamination of U.S. waters caused by mining, the USDA Wildlife Services program killing 5 million animals a year, fracking wastewater being used to irrigate crops, “Grand Canyon for Sale” (on special interests controlling public lands), glyphosate (Monsanto’s Roundup, which is found in the air, water, soil, and our bodies), public lands issues, and welfare issues regarding horses and burros around the world.
I’ve researched and written articles about topics including abandoned mines in our nation (more than 500,000 of them), a Chinese company using a $745 million loan from a bank fully owned by the Chinese government to mine in Nevada, and the corruption involved with the mismanagement of America’s public lands.
While writing a book about privacy and surveillance, I started to do research for a chapter about Artificial Intelligence (AI), and I was shocked by what I discovered. I’m a “news junkie” but I had absolutely no idea, and you probably don’t either.
AI is not only the most important issue in our lifetime, it is the most important issue in human history.
Since I’ve researched and written about a broad range of topics, I’m just the person to untangle the technical jargon and put together the pieces of the AI “puzzle” for you.
AI Endgame will make complex issues easy for you to understand.
In upcoming newsletters, we’ll dig into the billionaires recklessly pushing AI, the lobbyists behind them, current regulations, politicians voting for or against AI regulations, the race for AI around the world, the vast amounts of energy and water that data centers will use, and the people and organizations fighting for you, and for the future of humanity, by advocating for AI regulation.
Additionally, AI Endgame will soon launch a podcast with guests including AI experts from around the world who will give you up-to-date and detailed information.
Most importantly, AI Endgame will let you know what actions you can take to save your family and humanity. Your voices are needed.
Even though projections are dire, there is still time if we all work together now.
Please subscribe to AI Endgame for future newsletters and share links to AI Endgame with everyone you know. Let’s do this!
“Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity…
With this coming ‘AI explosion,’ we will probably have just one chance to get this right.
If we get this wrong, we may not live to tell the tale. This is not hyperbole.”
Tamlyn Hunt, environmental lawyer
Scientific American [3]
To learn more and take action:
PauseAI
https://pauseai.info/
Actions you can take: https://pauseai.info/action
Center for AI Safety
https://www.safe.ai/
[1] https://www.safe.ai/work/statement-on-ai-risk
[2] https://www.bbc.com/news/technology-30290540
[3] https://www.scientificamerican.com/article/heres-why-ai-may-be-extremely-dangerous-whether-its-conscious-or-not/