
"Rather than training people to spot visual glitches in fake images or audio, Bores said policymakers and the tech industry should lean on a well-established cryptographic approach similar to what made online banking possible in the 1990s. Back then, skeptics doubted consumers would ever trust financial transactions over the internet. The widespread adoption of HTTPS-using digital certificates to verify that a website is authentic-changed that."
"Bores pointed to a "free open-source metadata standard" known as C2PA, short for the Coalition for Content Provenance and Authenticity, which allows creators and platforms to attach tamper-evident credentials to files. The standard can cryptographically record whether a piece of content was captured on a real device, generated by AI, and how it has been edited over time. "The challenge is the creator has to attach it and so you need to get to a place where that is the default option," Bores said."
Highly realistic deepfakes can be mitigated by attaching cryptographic provenance to media files, recording each file's origin and edit history. C2PA, a free open-source metadata standard, lets creators and platforms add tamper-evident credentials that indicate whether content was captured on a real device or generated by AI, and how it has been edited. Widespread default adoption of such provenance would make unauthenticated media suspect, mirroring the trust shift produced by HTTPS and digital certificates for websites. Relying on people to spot visual artifacts is less effective than building cryptographic verification into platforms and devices by default.
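To make the provenance idea concrete, here is a minimal sketch of a tamper-evident content credential in Python, using an Ed25519 signature from the `cryptography` package. This is a conceptual illustration only, not the actual C2PA format (which embeds signed claims inside the asset and chains them to certificates); the field names and helper functions here are assumptions chosen for clarity.

```python
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_credential(media_bytes: bytes, source: str, key: Ed25519PrivateKey) -> dict:
    """Build a signed provenance record for a media file (illustrative, not real C2PA)."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the credential to these exact bytes
        "source": source,                                           # e.g. "camera-capture" or "ai-generated"
        "edits": [],                                                 # later tools would append an entry per edit
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_credential(media_bytes: bytes, credential: dict, public_key) -> bool:
    """Check that the credential matches the file and was signed by the claimed key."""
    claim = credential["claim"]
    if claim["content_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # file was altered after signing: tampering is evident
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage: a capture device signs at creation time; anyone can verify later.
device_key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
cred = make_credential(photo, "camera-capture", device_key)
print(verify_credential(photo, cred, device_key.public_key()))                 # True
print(verify_credential(photo + b"tampered", cred, device_key.public_key()))   # False
```

The point Bores makes maps onto the last step: verification is cheap and automatic, but only if the signing step happens by default at capture or generation time, the way HTTPS certificates are issued and checked without users having to think about them.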
Read at Fortune