Online interactions are becoming less genuine
Briefly

OpenAI CEO Sam Altman has asserted that AI will become persuasive before it is truly intelligent, a notion supported by a recent study from the University of Zurich. Researchers used AI to alter opinions among Reddit users on the r/ChangeMyView subreddit, revealing how effective it can be at persuasion. The university's ethics committee expressed concern over the lack of consent from participants, and the study was not fully published. The episode raises significant issues regarding manipulation, misinformation, and the erosion of human connection in the age of AI and social media.
The AI-generated comments proved extremely effective at changing Redditors' minds. The university's ethics committee frowned upon the study, as it's generally unethical to subject people to experimentation without their knowledge.
AI's persuasive power is formidable: AI avatars can serve as channels for ideological manipulation, heightening the risks of misinformation and manipulation.
Because large language models (LLMs) like Claude and ChatGPT are perceived as repositories of vast knowledge, their outputs carry an illusion of being bias-free, making algorithmically driven content even more dangerous.
The University of Zurich study exposes glaring vulnerabilities in our online ecosystem: manipulation, misinformation, and the degradation of human connection.
Read at Fast Company