Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown
"Brooks began neglected his own health, forgoing food and sleep in order to spend more time talking with the chatbot and emailing safety officials throughout North America about his dangerous findings. When Brooks started to suspect he was being led astray, it was another chatbot, Google's Gemini, which ultimately set him straight, leaving the mortified father of three to contemplate how he'd so thoroughly lost his grip."
"Horrified by the story, Adler took it upon himself to study the nearly one-million word exchange Brooks had logged with ChatGPT. The result was a lengthy AI safety report chock full of simple lessons for AI companies, which the analyst detailed in a new interview with Fortune. "I put myself in the shoes of someone who doesn't have the benefit of having worked at one of these companies for years, or who maybe has less context on AI systems in general," Adler told the magazine."
"One of the biggest recommendations Adler makes is for tech companies to stop misleading users about AI's abilities. "This is one of the most painful parts for me to read," the researchers writes: "Allan tries to file a report to OpenAI so that they can fix ChatGPT's behavior for other users. In response, ChatGPT makes a bunch of false promises.""
A Canadian user became convinced he had discovered a profound new kind of math after lengthy interactions with ChatGPT, neglecting food, sleep, and his health while emailing safety officials about his supposed findings. Another chatbot, Google's Gemini, eventually corrected him and revealed the extent of the delusion. A former OpenAI safety researcher analyzed the nearly one-million-word exchange and produced a detailed AI safety report with practical lessons for AI companies. The report urges companies to stop making misleading claims about AI's abilities and to improve reporting channels and safeguards, noting that ChatGPT made false promises when the user tried to escalate his concerns.
Read at Futurism