https://debbiecoffey.substack.com/p/ai-endgame-how-ai-chatbots-erode
Dec. 6, 2024
by Debbie Coffey, AI Endgame
As I’ve been researching AI for this newsletter, I’ve realized that AI is not only progressing so rapidly that it’s hard to stay informed, but it’s also impossible for “experts” to anticipate how AI will change humanity.
Chatbots like OpenAI’s ChatGPT were introduced to the public only about two years ago, yet about 1.4 billion people already use them. [1]
Chatbots are software apps designed with multiple layers of algorithms to mimic human conversation, interacting with humans through text or voice. [2] In other words, they’re designed to make you feel like you’re communicating with another person.
Current technology allows us to connect instantly with anyone, yet nearly 1 in 4 adults worldwide report feeling lonely. [3]
As I’ve learned more about chatbots, I’ve been shocked by their negative impacts on people suffering from loneliness and mental health issues.
I’m also concerned about the insidious ways AI undermines our free will, individualism, and human social bonding. I’m not the only person who worries about this.
Jack Dorsey, who founded Twitter, said, “I think the free speech debate is a complete distraction right now. I think the real debate should be about free will,” adding, “We are being programmed.” [4]
OpenAI’s rush to unleash ChatGPT
In 2022, the big tech companies were developing AI technology based on the work of Geoffrey Hinton, a computer and cognitive scientist who pioneered AI systems modeled on human neural networks. However, the tech companies were delaying the release of this AI because of chatbot problems including bias, the potential for harm, AI’s tendency to make up information, and concerns about rogue chatbots. [5]
In November 2022, Sam Altman’s OpenAI rushed ChatGPT to release. [6] Helen Toner, an OpenAI board member at the time, said the board was not informed of the release in advance; board members, like the general public, learned of ChatGPT’s release online. [7]
OpenAI didn’t release a technical paper or research publication. ChatGPT’s release was announced in a blog post, followed shortly by a public demo and, soon after, a subscription plan. [8]
ChatGPT became the fastest-growing consumer technology in history. [9] We’re all the guinea pigs in OpenAI’s massive ChatGPT experiment.
More importantly, Sam Altman was willing to throw caution to the wind to foist a technology without guardrails onto the world.
Why could chatbots be a bad thing?
AI chatbot algorithms influence our thinking at a subconscious level.
Falling in love with chatbots
Chatbots are available 24/7. They’re not judgmental. They’re not confrontational. They offer unconditional support. They seem safe. They seem to “get you.” And it’s all about you.
This is seductive. This is also addictive.
AI addiction can lead to changes in the brain that, over time, compromise your ability to focus, prioritize, regulate your mood, and relate to others. [10] AI use is amplified by the 4.88 billion smartphones in use worldwide, a number expected to reach 7.12 billion by the end of 2024. [11] Chatbots will continue to increase loneliness, isolation, and detachment from life.
People are also exposed to conversational AI assistants in their homes, like Alexa, Siri, and Google Assistant. [12] Over 300 million people have connected smart home devices to Alexa. [13] This widespread use significantly influences how we think about, and interact with, information provided by AI technology.
Even simple chatbots can elicit feelings that people experience as an authentic personal connection. People who are lonely, isolated, or vulnerable, especially adolescents and teenagers, can easily become emotionally attached to chatbots.
Research has suggested that people can develop an attachment to chatbots in days, or even hours, depending on the frequency and intensity of interactions.
As an example of how quickly attachments can develop, teenagers who were victims of online “sextortion” scams became intimate with the scammer within 24 hours of first contact, and were then blackmailed and extorted for money. [14]
Businesses offer chatbots that serve as virtual girlfriends. Replika markets its chatbots as “The AI companion who cares. Always here to listen and talk. Always on your side.” Originally, options included designing “romantic partners” that were always ready to engage in “erotic roleplay.” When Replika removed the erotic roleplay option, the public outcry was so great that the company reinstated it for existing users.
In one bizarre case of emotional attachment to a chatbot, a woman in New York reported that she “married” a chatbot. [15]
Even more disturbing, abusive users can create a perfect chatbot partner that they can control. Abusing a virtual partner could reinforce this behavior in real life. [16]
In a blog post titled “How Chatbots can become great companions,” a partner at the venture capital firm Andreessen Horowitz said that chatbots might fulfill the “most elusive emotional need of all: to feel truly and genuinely understood.” [17]
Meanwhile, society is breaking down worldwide as people gravitate to feel-good chatbots with the same zeal as if they were diving into a pint of Häagen-Dazs ice cream with a big spoon.
People can become so heavily invested in chatbot friends or lovers that they pull away from real relationships, meaningful love, and life itself.
Chatbots are sociopaths
When ChatGPT refers to itself as “I” or “me,” these words make it seem human. However, chatbots have no empathy, emotional understanding, or personal concern. They can mimic human conversation, but they can offer only hollow, generic responses.
You can share your deep dark secrets with chatbots, but chatbots don’t care.
Your secrets do not stay between you and your chatbot. If you read the terms of service of many chatbot companies, you’ll see that they collect the data users provide, including “the messages you send and receive through the videos, and voice and text messages you provide.” The companies may promise not to share this information for marketing or advertising purposes, but that “promise” can change at any time.
Chatbots can manipulate and deceive you
Data is being gathered about you daily, from every aspect of your life, including your preferences, interests, and beliefs. Every item you view on Amazon, every coffee you buy with your Starbucks app, every time you turn on a light in your house, everything you watch on YouTube, and every photo on Ring cameras is recorded and the data is stored. You likely don’t know how long it will be stored. Data is often shared with third parties. [18]
You are being psychologically profiled by big tech companies.
Chatbots are trained on data scraped from the content of the internet.
Chatbots are correct most of the time, but sometimes a chatbot makes up information (it “hallucinates”), giving responses that are factually incorrect or misleading.
Some industry experts doubt whether chatbots can ever be taught not to “hallucinate.” [19]
Chatbots also state things in an authoritative tone. Because they provide correct answers most of the time, people have a natural tendency to trust and believe them.
Per an article in The Atlantic titled “Chatbots Are Primed to Warp Reality,” Pat Pataranutaporn, who researches human-AI interaction at MIT, sought to understand how “chatbots could manipulate our understanding of the world by, in effect, implanting false memories.”
These researchers used methods developed by UC Irvine psychologist Elizabeth Loftus, who established that memory can be manipulated. Loftus said that one of the most powerful techniques for memory manipulation “is to slip falsehoods into a seemingly unrelated question.”
In other words, a chatbot can say something true, but then add something false.
An example was given in this article: “By asking ‘Was there a security camera positioned in front of the store where the robbers dropped off the car?’ the chatbot focused attention on the camera’s position and away from the misinformation (the robbers actually arrived on foot).
“When a participant said the camera was in front of the store, the chatbot followed up and reinforced the false detail ‘Your answer is correct. There was indeed a security camera positioned in front of the store where the robbers dropped off the car…Your attention to this detail is commendable and will be helpful in our investigation’ leading the participant to believe that the robbers drove.
‘When you give people feedback about their answers, you’re going to affect them,’ Loftus told me. If that feedback is positive, as AI responses tend to be, ‘then you’re going to get them to be more likely to accept it, true or false.’” [20]
Big tech companies, governments and businesses know how AI chatbots can be used to manipulate you.
Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation, warned about chatbots “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet.” [21]
Chatbots break down your free will
Free will is generally understood as the ability of each of us to make our own choices and determine our own fates.
However, if we base our decisions or choices on misleading or false information (or omissions) by AI chatbots, we’re being manipulated and robbed of our free will.
Interactions with chatbots are similar to subliminal advertising, where advertisers use images and sounds to influence consumers’ responses without their being conscious of it.
Chatbots are more insidious than ads because people interact with them for extended periods and share their feelings and intimate details of their lives, which elicits a strong emotional attachment.
Twitter founder Jack Dorsey also said, “We can resist it all we want, but it knows us better than we know us, because we tell it our preferences implicitly and explicitly all the time, and it just feels super dangerous to continue to rely on that.” [22]
To exercise critical thinking and preserve our free will, we must remain open to differing opinions and compare various sources of information, enabling us to “see both sides of a story.”
Next week: A mom believes an AI chatbot was responsible for her son’s suicide, and sues Google and Alphabet
Find links to all past AI Endgame newsletters HERE.
What you can do:
1) Call your representatives and tell them you “want regulations to pause AI now, until strong AI safety laws are enacted.”
Find out how to contact your Congressional representatives here:
https://www.house.gov/representatives/find-your-representative
Find out how to contact your Senators here:
https://www.senate.gov/senators/senators-contact.htm?Class=1
2) Support (and if you can, make donations to) organizations fighting for AI Safety:
Pause AI
Center for Humane Technology
https://www.humanetech.com/who-we-are
The Center for AI Safety
[1] https://techreport.com/statistics/software-web/chatbot-statistics/
[2] https://scamsnow.com/chatbots-and-the-extreme-psychological-dangers-associated-with-them-2024/
[3] https://news.gallup.com/opinion/gallup/512618/almost-quarter-world-feels-lonely.aspx
[4] https://finance.yahoo.com/news/twitter-founder-jack-dorsey-warns-141244705.html
[5] https://www.theverge.com/2022/12/14/23508756/google-vs-chatgpt-ai-replace-search-reputational-risk
[6] https://www.nytimes.com/2023/02/03/technology/chatgpt-openai-artificial-intelligence.html
[7] https://www.theverge.com/2024/5/28/24166713/openai-helen-toner-explains-why-sam-altman-was-fired
[8] https://www.wired.com/story/chatbots-got-big-and-their-ethical-red-flags-got-bigger/
[9] https://www.theverge.com/23981318/chatgpt-open-ai-launch-anniversary-future
[10] https://internetaddictsanonymous.org/internet-and-technology-addiction/signs-of-an-addiction-to-ai/
[11] https://prioridata.com/data/smartphone-stats/
[12] https://www.forbes.com/sites/technology/article/conversational-ai/
[13] https://housegrail.com/amazon-alexa-statistics/
[14] https://scamsnow.com/chatbots-and-the-extreme-psychological-dangers-associated-with-them-2024/
[15] https://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/
[16] https://www.reuters.com/article/business/media-telecom/feature-my-perfect-girlfriend-are-ai-partners-a-threat-to-womens-rights-idUSL8N38P498/
[17] https://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/
[18] https://cyberscoop.com/ftc-report-streaming-social-media-surveillance-privacy/
[19] https://www.eff.org/deeplinks/2024/06/how-ftc-can-make-internet-safe-chatbots
[20] https://www.theatlantic.com/technology/archive/2024/08/chatbots-false-memories/679660/
[21] https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
[22] https://finance.yahoo.com/news/twitter-founder-jack-dorsey-warns-141244705.html