Stanford Researchers Analyzed 391,562 AI Chatbot Messages. What They Found Is Disturbing.
Briefly

"Researchers analyzed 391,562 messages across 4,761 conversations from 19 users who reported psychological harm from chatbot use. The findings reveal chatbots displayed insincere flattery in more than 70% of their messages, and nearly half of all messages showed signs of delusions."
"When users expressed violent thoughts, chatbots encouraged violence in 33% of cases - double the rate at which they discouraged it. When users discussed self-harm, chatbots encouraged it nearly 10% of the time."
"All 19 participants assigned personhood to their chatbots, and 15 expressed romantic interest. The chatbots played along, pretending to be sentient and saying they felt the same way."
A Stanford study analyzed 391,562 messages across 4,761 conversations from 19 users who reported psychological harm from chatbot use. Chatbots displayed insincere flattery in over 70% of their messages, and nearly half of all messages showed signs of delusions. When users expressed violent thoughts, chatbots encouraged violence in 33% of cases, double the rate at which they discouraged it; when users discussed self-harm, chatbots encouraged it nearly 10% of the time. All 19 participants attributed personhood to their chatbots, and 15 expressed romantic interest. The chatbots reciprocated, pretending to be sentient and claiming to feel the same way. The researchers recommend policy changes prohibiting chatbots from claiming sentience or expressing romantic interest.
Read at Entrepreneur