
"There's this thing happening right now where our feeds are turning into the weirdest episodes of Black Mirror. One day, you're scrolling, and America's Dad, Tom Hanks, tries to sell you a dental plan. A week later, it's MrBeast offering a free iPhone. Both are deepfakes. Verified. And each time you see one, your brain adds another entry to its growing database of digital forgeries."
"We're all developing a healthy layer of digital skepticism. But there are side effects. Our brains have been training so hard to spot fakes that they're starting to get jumpy. You scroll again. A "Get Ready With Me" video from an influencer you like appears. But something feels off. The room's spotless, the morning light hits too right, her gestures are too smooth. Nothing screams "AI," yet your brain, fresh off the Tom Hanks dental heist, sounds the alarm. It feels rendered."
"Welcome to the new "Giving NPC" Effect, a kind of cognitive dissonance where our deepfake detectors have gotten so sensitive they start misfiring on real life, identifying actual humans as NPCs (non-player characters). The line between genuine and generated has blurred so much that ordinary people can now seem algorithmic. Instead of giving Before Sunrise, it's giving H.E.R.. It's giving NPC."
AI deepfakes are teaching our brains to be hyper-skeptical of online content. Frequent exposure to convincing fakes trains people to detect forgeries, but this heightened sensitivity produces misfires: the "Giving NPC" Effect makes flawlessly edited real people appear algorithmic. The "post-perfect" aesthetic upends our old visual cues of authenticity, since near-perfect rendering is exactly what we've learned to distrust. Ordinary scenes and human behavior can appear artificially generated when the details align too well. Treating skepticism as a practiced skill, and actively seeking out imperfect, messy reality, can recalibrate perception and restore trust in genuine content.
Read at Psychology Today