
"Mosseri argued that major platforms will initially succeed at spotting and labeling AI content, but that they'll begin to falter as AI imitates reality with more precision. "There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media," Mosseri wrote. That "fingerprint" could be created from within cameras themselves, if their manufacturers "cryptographically sign images at capture, creating a chain of custody.""
""We need to label AI-generated content clearly, and work with manufacturers to verify authenticity at capture - fingerprinting real media, not just chasing fake," Mosseri added. Such labeling could help people navigate the AI slop that's flooding the internet. ( Mashable's Tim Marcin has explained how we got to this moment.) Mosseri also wrote that identifying the authenticity of creator content will shape the way people relate to that media: "We need to surface credibility signals about who's posting so people can decide who to trust.""