AI Endgame: AI warning from Center for Humane Technology and an action alert from PauseAI US
Newsletter #32
https://debbiecoffey.substack.com/p/ai-endgame-ai-warning-from-center
May 2, 2025
by Debbie Coffey, AI Endgame
I want to share new information from two of my favorite AI safety organizations. First, I think the most important 15 minutes you could ever spend would be to watch a recent TED talk by Tristan Harris, a co-founder of the Center for Humane Technology.
Watch the Tristan Harris TED Talk HERE.
Harris warns “…we're currently releasing the most powerful, inscrutable, uncontrollable technology we've ever invented that's already demonstrating behaviors of self-preservation and deception that we only saw in science fiction movies. We're releasing it faster than we've released any other technology in history, and under the maximum incentive to cut corners on safety.”
Please take 15 minutes to watch this TED talk. Your life, and the lives of your friends and family, may depend on it.
Tristan Harris was featured in The Social Dilemma, the award-winning documentary seen by over 100 million people in 190 countries. The film detailed how social media is dangerously reprogramming our brains and human civilization.
In 2023, the Center for Humane Technology produced The AI Dilemma. Co-founders Tristan Harris and Aza Raskin discuss how existing AI capabilities already pose catastrophic risks to a functional society, and how AI companies are caught in a race to deploy as quickly as possible without adequate safety measures.
After watching the Tristan Harris TED talk, you’ll want to take action, so I’m sharing a message below from Felix De Simone, Organizing Director, PauseAI US:
Nationwide Action Workshop
Tell your state government to curb dangerous AI development!
Join our Action Workshop Friday, May 30th @ 6pm ET / 3pm PT
AI policy is like the Wild West, with companies racing to build smarter-than-human Artificial Intelligence and lawmakers failing to keep up. Meanwhile, leading researchers have warned that powerful AI could cause global catastrophe if left unchecked.
We desperately need guardrails to prevent development of the most dangerous AI models. While the federal government’s attitude toward AI regulation remains uncertain, state governments could play a critical role in preventing AI danger. Bills in New York, Illinois, and other states point the way forward for the rest of the country to follow.
Join our online action workshop to learn more about state-level AI policy – and how you can convince your state lawmakers to lead on this crucial issue.
Find links to all past AI Endgame newsletters HERE.
What else you can do:
Please support (and, if you can, donate to) organizations fighting for AI safety:
PauseAI
Center for Humane Technology
https://www.humanetech.com/who-we-are
Contact your congressional representatives and urge them to focus on AI safety regulation.
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1