Why deepfake detection isn't as simple as OpenAI's true-false binary
Briefly

Observant humans are still the best means of detecting deepfakes: they can pick up on telltale flaws such as AI-generated people with extra fingers or AI voices that speak without pausing for breath.
OpenAI introduced an AI image detection classifier that identifies about 98 percent of outputs from its own image generator, DALL-E 3, but flags only roughly 5 to 10 percent of images from other AI models.
The classifier provides a binary 'true/false' response indicating whether an image is likely AI-generated, along with a content summary confirming AI generation and fields identifying the tool and device used.
OpenAI spent months adding tamper-resistant metadata to images created and edited by DALL-E 3 so the content's origin can be verified. The metadata follows the C2PA (Coalition for Content Provenance and Authenticity) standard, a widely used specification for certifying digital content.
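
For the curious, C2PA works by embedding a cryptographically signed provenance manifest directly in the image file; in JPEGs it is carried in APP11 marker segments as JUMBF boxes. The following Python sketch (not OpenAI's implementation, just an illustration of the file format) checks whether a JPEG contains such a manifest segment; actually validating the signature requires a full C2PA implementation such as the open-source c2patool.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check whether a JPEG embeds a C2PA provenance manifest.

    C2PA manifests are stored in JPEG APP11 (0xFFEB) marker segments as
    JUMBF boxes labeled "c2pa". This only detects that such a segment is
    present; it does not validate the cryptographic signature, which
    requires a full C2PA implementation (e.g. the open-source c2patool).
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):  # SOI marker: otherwise not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost marker sync; stop rather than misparse
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, stop here
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]  # includes itself
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

A missing manifest proves nothing on its own: metadata like this can be stripped from a file, which is why provenance labeling is one signal among several rather than proof of authenticity.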
Read at Ars Technica