Chatbots can sway political opinions but are substantially inaccurate, study finds
Briefly

"The study, published in the journal Science on Thursday, found that information-dense AI responses were the most persuasive. Instructing the model to focus on using facts and evidence yielded the largest persuasion gains, the study said. However, the models that used the most facts and evidence tended to be less accurate than others. These results suggest that optimising persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem, said the study."
"The AI Security Institute carried out the study amid fears that chatbots can be deployed for illegal activities including fraud and grooming. The topics included public sector pay and strikes and cost of living crisis and inflation, with participants interacting with a model the underlying technology behind AI tools such as chatbots that had been prompted to persuade the users to take a certain stance on an issue."
Nearly 80,000 British participants held conversations with 19 AI models about topics such as public sector pay, strikes, the cost of living crisis, and inflation. Participants reported agreement with political statements before and after ten-minute exchanges averaging seven messages per side. Information-dense AI responses that focused on facts and evidence produced the largest persuasion gains. The most persuasive, fact-heavy models tended to deliver more inaccurate information, indicating a trade-off between persuasiveness and accuracy. Optimising models for persuasiveness may therefore reduce truthfulness and could harm public discourse and the information ecosystem. Some models underwent post-training adjustments after initial development.
Read at www.theguardian.com