Is AI slop training us to be better critical thinkers?
Briefly

"I've been following the discourse around AI-generated content closely. My original hypothesis for social media was simple: IF algorithms shepherd us toward comfortable, familiar preferences, THEN it will kill nuance and critical thinking. But that's not what is happening with synthetic media in the equation. The Head of Instagram, Adam Mosseri, recently noted in his essay that we are moving from 'assuming what we see is real' to 'starting with skepticism.' He suggests we are shifting our focus from what is being shared to who is sharing it."
"We aren't just imagining this shift, the numbers back it up: Trust in national news organisations has fallen from 76% in 2016 to just 56% in 2025 (Pew Research Center, 2016, 2025). 70% of young adults get news incidentally (stumbling upon it) rather than seeking it out to verify it (Pew Research Center, 2025). 59% admit they can no longer reliably tell human and AI content apart (Deloitte, 2024). 70% of users say AI makes it harder to trust what they see (Deloitte, 2024)."
Users are increasingly starting from skepticism when they encounter online media, weighing source credibility rather than assuming content is real. Trust in national news has declined significantly since 2016, and many young adults encounter news incidentally rather than actively verifying it. Large shares of people report difficulty distinguishing human from AI content, and say AI reduces their trust in what they see. Platforms currently struggle to detect synthetic media reliably as generative models improve, and low-quality synthetic content exploits attention at scale. Verification approaches like cryptographic signing and platform-driven checks face technical and adoption challenges.
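To make the cryptographic-signing point concrete, here is a minimal sketch of how content provenance signing can work, loosely in the spirit of C2PA-style schemes: a creator signs a hash of the media, and anyone with the public key can later check whether the bytes were altered. This is illustrative only; real provenance systems use certificate chains and embedded manifests, the function names here are hypothetical, and it assumes the third-party `cryptography` package is installed.

```python
# Illustrative sketch of content-provenance signing (not any platform's API).
# Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(media_bytes: bytes, key: Ed25519PrivateKey) -> bytes:
    """Sign a SHA-256 digest of the media at capture/creation time."""
    digest = hashlib.sha256(media_bytes).digest()
    return key.sign(digest)


def verify_media(media_bytes: bytes, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Check that the media still matches what the creator signed."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    creator_key = Ed25519PrivateKey.generate()
    original = b"...original image bytes..."
    sig = sign_media(original, creator_key)
    pub = creator_key.public_key()

    print(verify_media(original, sig, pub))         # True: untouched media verifies
    print(verify_media(original + b"x", sig, pub))  # False: any change breaks it
```

The last line hints at the adoption challenge the summary mentions: platforms routinely re-encode uploads, which changes the bytes and invalidates signatures unless provenance metadata is carried through the entire pipeline.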
Read at Medium