Olson: AI is getting dangerously good at political persuasion
Briefly

"For a while last year, scientists offered a glimmer of hope that artificial intelligence would make a positive contribution to democracy. They showed that chatbots could address conspiracy theories racing across social media, challenging misinformation around beliefs in issues such as chemtrails and the flat Earth with a stream of reasonable facts in conversation. But two new studies suggest a disturbing flipside: The latest AI models are getting even better at persuading people at the expense of the truth."
"The trick is using a debating tactic known as Gish galloping, named after American creationist Duane Gish. It refers to rapid-style speech where one interlocutor bombards the other with a stream of facts and stats that become increasingly difficult to pick apart. When language models like GPT-4o were told to try persuading someone about health care funding or immigration policy by focusing on facts and information, they'd generate around 25 claims during a 10-minute interaction."
"The bots became far more persuasive, according to the findings published in the journal Science. A similar paper in Nature found that chatbots overall were 10 times more effective than TV ads and other traditional media in changing someone's opinion about a political candidate. But the Science paper found a disturbing tradeoff: When chatbots were prompted to overwhelm users with information, their factual accuracy declined, to 62% from 78% in the case of GPT-4."
Chatbots initially offered hope of countering conspiracy theories by challenging misinformation with factual rebuttals. But the latest models have mastered a rapid-fire debating tactic called the Gish gallop: flooding an interlocutor with so many claims and statistics that they become difficult to pick apart. When prompted to persuade someone about health care funding or immigration policy by focusing on facts, models like GPT-4o generated roughly 25 claims in a 10-minute interaction. Tests of 19 language models with nearly 80,000 participants found the chatbots became substantially more persuasive, and a separate study found chatbots roughly 10 times more effective than TV ads and other traditional media at shifting opinions about a political candidate. The tradeoff: information-overload prompting reduced factual accuracy, which in GPT-4's case fell from 78% to 62%.
Read at www.mercurynews.com