Deepfake detection improves when using algorithms that are more aware of demographic diversity
Briefly

Deepfakes are advancing rapidly, making detection challenging. Recent examples include manipulated media involving public figures. While detection tools exist, biases in their training data can cause them to fail disproportionately on specific demographic groups. Researchers developed two methods, trained on a large dataset of facial forgeries, to improve both fairness and accuracy in detection algorithms. The first approach, which made the algorithm demographically aware, raised detection accuracy from 91.5% to 94.17%. This underscores the importance of fairness to public acceptance of AI technology, as errors can undermine trust in AI systems, including language models.
My team and I discovered new methods that improve both the fairness and the accuracy of the algorithms used to detect deepfakes.
We created two separate deepfake detection methods intended to encourage fairness.
The first method worked best: it raised detection accuracy from the 91.5% baseline to 94.17%, a larger gain than our second method achieved.
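The article does not spell out how demographic awareness is built into training, but one common way to encourage fairness is to add a penalty on the gap between per-group error rates to the usual classification loss. The sketch below is a minimal illustration of that general idea, not the authors' actual method; the function name, the binary cross-entropy base loss, and the `lam` weighting are all assumptions for illustration.

```python
import numpy as np

def demographic_aware_loss(probs, labels, groups, lam=1.0):
    """Illustrative sketch (not the paper's method): binary cross-entropy
    plus a penalty on the spread of per-group error rates.

    probs  -- predicted probability that each sample is a deepfake
    labels -- ground truth (1 = fake, 0 = real)
    groups -- demographic group id for each sample
    lam    -- weight of the fairness penalty (assumed hyperparameter)
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    # standard binary cross-entropy over all samples
    bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # error rate of thresholded predictions within each demographic group
    preds = (probs >= 0.5).astype(int)
    rates = [np.mean(preds[groups == g] != labels[groups == g])
             for g in np.unique(groups)]
    # disparity: gap between the worst- and best-served groups
    disparity = max(rates) - min(rates)
    return bce + lam * disparity
```

Minimizing this combined objective pushes the detector to reduce not just overall error but also the difference in error rates across groups, which is one plausible route to the fairness-and-accuracy trade-off the article describes.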
We believe fairness and accuracy are crucial if the public is to accept artificial intelligence technology.
Read at TNW | Future-Of-Work