
"It used to be that artificial intelligence would leave behind helpful clues that an image it produced was not, in fact, real. Previous generations of the technology might give a person an extra finger or even an additional limb. Teeth could look odd and out of place, and skin could render overly blushed, like something out of Pixar. Multiple dimensions could befuddle our models, which struggled to represent the physical world in a sensical way:"
"Sure, we were in the uncanny valley. But at least we knew we were there. That's no longer the case. While there are still some analog ways to detect that the content we see was created with the help of AI, the implicit visual tip-offs are, increasingly, disappearing. The limited release of Sora 2, OpenAI's latest video-generation model, has only hastened this development."
Earlier AI models produced visual artifacts, such as extra fingers, misplaced teeth, inconsistent skin tones, and impossible object placements, that signaled an image was synthetic. Newer systems generate shapes, colors, and human forms convincing enough that unaided visual detection is nearly impossible for the average viewer. The limited release of Sora 2, OpenAI's high-quality video-generation model, has accelerated the fading of these implicit visual cues and increased reliance on digital detection tools. Researchers continue to search for residual visual telltales, but experts warn that as technical verification becomes the primary defense, institutions and individuals face a greater risk of likeness theft and misappropriation.
Read at Fast Company