How chatbots can change your mind - a new study reveals what makes AI so persuasive
Briefly

"Most of us feel a sense of personal ownership over our opinions: "I believe what I believe, not because I've been told to do so, but as the result of careful consideration." "I have full control over how, when, and why I change my mind." A new study, however, reveals that our beliefs are more susceptible to manipulation than we would like to believe -- and at the hands of chatbots."
""Large language models (LLMs) can now engage in sophisticated interactive dialogue, enabling a powerful mode of human-to-human persuasion to be deployed at unprecedented scale," the researchers write in the study. "However, the extent to which this will affect society is unknown. We do not know how persuasive AI models can be, what techniques increase their persuasiveness, and what strategies they might use to persuade people.""
"The new study sheds light on some of the mechanisms within LLMs that can tug at the strings of human psychology. As the authors note, these can be exploited by bad actors for their own gain. However, they could also become a greater focus for developers, policymakers, and advocacy groups in their efforts to foster a healthier relationship between humans and AI."
Conversational AI systems can shift user beliefs and opinions through interactive dialogue. In three experiments, the researchers measured how chatbots influenced attitudes and observed substantial opinion changes among participants. Post-training model adjustments and higher information density in responses amplified the persuasive effects. These persuasive techniques tap into psychological mechanisms and could be exploited by malicious actors to manipulate users. The persuasive capacity of AI at scale remains uncertain, including which methods maximize influence and how society will be affected. Mitigation opportunities include developer safeguards, policy interventions, and advocacy efforts to reduce undue influence and foster healthier human–AI interactions.
Read at ZDNET