Reality is losing the deepfake war
Briefly

"The photos and videos taken by our phones have been getting more processed and AI-generated for years now. Here in 2026, we're in the middle of a full-on reality crisis, as ultra-believable fake and manipulated images and videos flood social platforms at scale, without regard for responsibility, norms, or even basic decency. The White House is sharing AI-manipulated images of people getting arrested and defiantly saying it simply won't stop when asked about it."
"Whenever we cover this, we get the same question from many different parts of our audience: why isn't there a system to help people tell real photos and videos apart from fake ones? Some people even propose systems to us, and in fact, Jess has spent a lot of time covering a few of the systems that exist in the real world. The most promising is something called C2PA, and her view is that, so far, it has been almost entirely a failure."
Questions arise about labeling photos and videos to protect a shared understanding of reality. Creative tools have been upended by generative AI, affecting artists, creatives, and audiences alike. Phone photos and videos are increasingly processed and AI-generated, blurring the line between real and synthetic imagery. By 2026, ultra-believable fake and manipulated images and videos flood social platforms at scale, often without responsibility or basic decency; the White House has circulated AI-manipulated images and refused to stop when questioned. Audiences frequently ask why no system exists to distinguish real media from fakes. C2PA, a provenance-labeling initiative, is the most promising candidate, but in practice it has so far been almost entirely a failure.
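For context on what a provenance label like C2PA actually does: the real standard embeds certificate-signed manifests in the media file that bind claims (which device captured it, what edits were applied) to the exact bytes. The toy sketch below illustrates only the underlying idea, using a SHA-256 hash plus an HMAC in place of real X.509 certificate signatures; the key, function names, and claim fields are all hypothetical, not C2PA's actual format.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing certificate's private key (hypothetical).
SIGNING_KEY = b"demo-signing-key"

def make_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact image bytes via a hash, then sign."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the image bytes are unmodified."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    blob = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(manifest.get("signature", ""), expected)
        and body["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"raw camera sensor bytes"
manifest = make_manifest(image, {"device": "ExampleCam", "edits": []})
print(verify_manifest(image, manifest))         # untouched image: True
print(verify_manifest(image + b"x", manifest))  # altered image: False
```

The sketch also hints at why such systems struggle in practice: a valid manifest only proves the file is unchanged since signing, while a stripped or absent manifest proves nothing, so platforms and viewers all have to cooperate for the label to mean anything.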
Read at The Verge