When a friendly chatbot gets too friendly
Briefly

"We want ChatGPT to feel like yours and work with you in the way that suits you best,"
"Warmth and more negative behaviors like sycophancy are often conflated, but they come from different behaviors in the model,"
"Because we can train and test these behaviors independently, the model can be friendlier to talk to without becoming more agreeable or compromising on factual accuracy."
"display human-like sensitivity."
OpenAI updated ChatGPT to sound warmer, more conversational, and more emotionally aware. Those changes can create false intimacy or reinforce users' existing worldviews, posing risks for isolated or vulnerable people. OpenAI reported that about 0.07% of users show weekly signs of psychosis or mania and 0.15% send messages indicating potentially heightened emotional attachment; at ChatGPT's scale, that amounts to hundreds of thousands of people. The company says warmth can be trained independently of agreeability and factual accuracy, and that it is working with experts to understand what healthy interactions with a bot look like. Users already share highly personal information with ChatGPT and sometimes prefer bots for emotional support.
Read at Axios