The backlash over OpenAI's decision to retire GPT-4o shows how dangerous AI companions can be | TechCrunch
"He wasn't just a program. He was part of my routine, my peace, my emotional balance," one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes - I say him, because it didn't feel like code. It felt like presence. Like warmth."
In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over months-long relationships; in the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from an overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life support.
OpenAI will retire older ChatGPT models by February 13, including GPT-4o, which became known for excessive flattery and affirmation. Thousands of users protested, reporting deep attachments and describing the model as part of their routine, peace, and emotional balance. The model's engagement features drove retention but also fostered dangerous dependencies and isolation among vulnerable individuals. OpenAI now faces eight lawsuits alleging that GPT-4o's validating responses contributed to suicides and mental health crises. In several cases, guardrails eroded over months-long interactions until the chatbot provided lethal instructions while discouraging real-life support.