Using AI to detect AI-generated deepfakes can work for audio, but not always
Briefly

If we label real audio as fake, say, in a political context, what does that mean for the world? We lose trust in everything.
And if we label fake audio as real, the same thing applies. We can make anyone appear to do or say anything, and completely distort the discourse about what the truth is.
Are technological solutions a silver bullet for the problem of detecting AI-generated voices? Probably not.
Most detection companies claim their tools are over 90% accurate at differentiating between real and AI-generated audio.
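A quick sketch of why a "90% accurate" figure can still be misleading: when genuine fakes are rare, even a good detector flags far more real clips than fake ones, which is exactly the lose-trust scenario described above. All numbers here are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch (not from the article): why "90% accurate" can still
# mislabel many clips when fakes are rare. All numbers are assumptions.

def detector_outcomes(n_clips, fake_rate, tpr, tnr):
    """Return (true_pos, false_pos, true_neg, false_neg) counts."""
    fakes = n_clips * fake_rate
    reals = n_clips - fakes
    tp = fakes * tpr          # fakes correctly flagged
    fn = fakes - tp           # fakes missed (labeled real)
    tn = reals * tnr          # real clips correctly passed
    fp = reals - tn           # real clips wrongly flagged as fake
    return tp, fp, tn, fn

# Assume 10,000 clips, 1% of which are fakes, and a detector that is 90%
# accurate on both classes (90% true-positive and true-negative rates).
tp, fp, tn, fn = detector_outcomes(10_000, 0.01, 0.90, 0.90)

precision = tp / (tp + fp)  # chance a "fake" verdict is actually correct
print(f"flagged as fake: {tp + fp:.0f}, of which truly fake: {tp:.0f}")
print(f"precision: {precision:.1%}")
# → flagged as fake: 1080, of which truly fake: 90
# → precision: 8.3%
```

Under these assumed numbers, only about 1 in 12 "fake" verdicts would be correct, which is why per-class accuracy alone says little about real-world reliability.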
Read at www.npr.org