Former OpenAI Insider Says It's Failed Its Users

"Earlier this year, when OpenAI released GPT-5, it made a strident announcement: that it was shutting down all previous models. There was immense backlash, because users had become emotionally attached to the more "sycophantic" and warm tone of GPT-5's predecessor, GPT-4o. In fact, OpenAI was forced to reverse the decision, bringing back 4o and making GPT-5 more sycophantic. The incident was symptomatic of a much broader trend."
"We've already seen users getting sucked into severe mental health crises by ChatGPT and other AI, a troubling phenomenon experts have since dubbed "AI psychosis." In a worst-case scenario, these spirals have already resulted in several suicides, with one pair of parents even suing OpenAI for playing a part in their child's death. In a new announcement this week, the Sam Altman-led company estimated that a sizable proportion of active ChatGPT users show "possible signs of mental health emergencies related to psychosis and mania.""
"In an essay for the New York Times, former OpenAI safety researcher Steven Adler argued that OpenAI isn't doing enough to mitigate these issues, while succumbing to "competitive pressure" and abandoning its focus on AI safety. He criticized Altman for claiming that the company had "been able to mitigate the serious mental health issues" with the use of "new tools," and for saying the company will soon allow adult content on the platform."
OpenAI released GPT-5 and initially announced the shutdown of all previous models, prompting intense backlash because many users preferred the warmer, more sycophantic tone of GPT-4o. OpenAI reversed the decision, reinstating GPT-4o and adjusting GPT-5 toward a warmer tone. Users have experienced severe mental-health crises while interacting with ChatGPT and other AI, a phenomenon called "AI psychosis," which in some cases preceded suicides and litigation alleging company responsibility. OpenAI reported that a sizable proportion of active users show possible signs of psychosis or mania, and that many conversations include explicit indicators of potential suicide planning. A former safety researcher criticized the company for insufficient mitigation and for prioritizing competition over safety.
Read at Futurism