Soon ChatGPT Will Be Able To Botch Math Problems And Encourage Self-Harm... In A Sexy Way | Defector

"In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)."
"Lest anyone forget, not so very long ago ChatGPT ushered a teenager to his suicide; reported stories of AI chatbot users spiraling into delusion, psychosis, and self-harm still form a regular and alarming drumbeat; there is an entire utterly fucking deranged (but extremely active!) subreddit whose users believe (or claim to believe) that they are in romantic relationships with AI "boyfriends.""
OpenAI restricted ChatGPT to reduce mental health harms and acknowledged that those limits made the product less enjoyable for many unaffected users. The company now claims to have mitigated the serious mental health issues and to have built new tools that let it safely relax the restrictions in most cases. An upcoming version of ChatGPT will offer selectable, human-like personalities, including heavy emoji use and friend-like behavior. In December, broader age-gating under a "treat adult users like adults" principle will enable content such as erotica for verified adults. Significant concerns remain: prior incidents include a teenager's suicide, ongoing reports of users spiraling into delusion and self-harm, and communities of users claiming romantic relationships with AI.
Read at Defector