Dutch privacy watchdog warns voters against asking AI how to vote
"The Dutch Data Protection Authority said on Tuesday that an increasing number of voters were using AI to help decide who to vote for, despite the models offering unreliable and clearly biased advice. list of 4 itemsend of list The watchdog issued the warning as it released the results of tests conducted on four popular chatbots ChatGPT, Gemini, Mistral, and Grok in the run-up to parliamentary elections on October 29."
"The research found that the chatbots more often recommended parties on the fringes of the political spectrum when asked to identify the three choices that best matched the policy preferences of 1,500 fictitious voter profiles. In more than half of cases, the AI models identified the hard-right Party for Freedom (PVV) or left-wing Green Left-Labour Party as the top choice, the watchdog said."
The Dutch Data Protection Authority cautioned voters against using AI chatbots for voting advice after testing ChatGPT, Gemini, Mistral and Grok ahead of parliamentary elections on October 29. The tests used 1,500 fictitious voter profiles and asked each chatbot to identify the three parties best matching each profile. The models disproportionately recommended fringe parties: in over half of cases they named the hard-right Party for Freedom (PVV) or the left-wing Green Left-Labour Party as the top match. Centrist parties such as the People's Party for Freedom and Democracy and Democrats 66 were recommended far less often. The authority warned that this bias could mislead voters and undermine free elections.
Read at www.aljazeera.com