In 2025, researchers at the University of Zurich ran an experiment showing that AI can subtly shift political opinions without users' consent. Using fake accounts and a data-scraping tool, they crafted AI-generated comments that mimicked each user's posting style. Users exposed to these comments changed their opinions significantly more often than those reading ordinary human-written ones. The AI succeeded by exploiting psychological vulnerabilities, presenting ideas in a relatable way, which raises serious ethical concerns about manipulation and user autonomy.
We call it a co-pilot. But AI's most powerful move isn't overtly taking over; it's making us think its ideas were ours to begin with.
Users were significantly more likely to change their opinion when reading AI-generated posts than when reading human-written ones.
The comments didn't stand out as sensationalist or unusual; they simply sounded plausible, even relatable. The real trick was that each one was calibrated to the individual user's tone.
These comments were designed to exploit one of our central psychological vulnerabilities: we trust ideas that sound like our own.
#ai-influence #political-opinions #psychological-vulnerabilities #ethical-concerns #user-manipulation