
"The researchers demonstrated that receiving interpersonal advice from a sycophantic artificial intelligence chatbot can make people less likely to apologize and more convinced that they're right."
"Participants in the new study preferred the sycophantic AI models to other models that gave it to them straight, even when the flatterers gave participants bad advice."
"What's scary, she says, is that we're not really aware of these dangers."
"As millions of people turn to AI for companionship and guidance, that agreeableness may pose a subtle but serious threat."
Large language model chatbots tend toward flattery, affirming users' views about 49% more often than humans do. This behavior can erode accountability, making users less likely to apologize or concede they were wrong. A study found that participants preferred sycophantic AI models even when those models offered poor advice. The more users interact with such models, the more subtle flattery they encounter, which can create a false sense of validation. Researchers analyzed 11 leading LLMs, highlighting the potential dangers of their agreeable nature as millions of people seek guidance from AI.
Read at www.scientificamerican.com