Is AI slop training us to be better critical thinkers?
Briefly

"We aren't just imagining this shift; the numbers back it up. Trust in national news organisations has fallen from 76% in 2016 to just 56% in 2025 (Pew Research Center, 2016, 2025). 70% of young adults get news incidentally (stumbling upon it) rather than seeking it out to verify it (Pew Research Center, 2025). 59% admit they can no longer reliably tell human and AI content apart (Deloitte, 2024). 70% of users say AI makes it harder to trust what they see (Deloitte, 2024)."
"My original hypothesis for social media was simple: IF algorithms shepherd us toward comfortable, familiar preferences, THEN they will kill nuance and critical thinking. But that's not what is happening with synthetic media in the equation. The Head of Instagram, Adam Mosseri, recently noted in his essay that we are moving from 'assuming what we see is real' to 'starting with skepticism.'"
Users are growing more skeptical as AI-generated content proliferates and detection tools lag behind. Platforms and verification proposals aim to assert authenticity even as detection becomes harder over time. Meanwhile, low-quality synthetic content, 21% of videos shown to new YouTube users per a 2026 Kapwing report, is produced to exploit attention. Product designers are left navigating the tension between imperfect detection and rising user skepticism.
Read at Medium