Defending your voice against deepfakes
Briefly

"AntiFake makes sure that when we put voice data out there, it's hard for criminals to use that information to synthesize our voices and impersonate us," Zhang said.
"We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it's completely different to AI," Zhang explained.
Zhang and first author Zhiyuan Yu built AntiFake to be adaptive and withstand an ever-changing landscape of potential attackers and unknown synthesis models.
Read at ScienceDaily