Adversarial perturbations do not protect artists from style mimicry. Our work is not intended as an exhaustive search for the best robust mimicry method, but as a demonstration of the brittleness of existing protections.
Just like defenses against adversarial examples, mimicry protections should be evaluated adaptively: it is necessary to consider 'adaptive attacks' specifically designed to evade the defense.
Even after adaptive attacks were introduced, many evaluations remained flawed and defenses were broken by stronger adaptive attacks. The same holds for mimicry protections.
Surprisingly, most protections we study claim robustness against input transformations, yet we find that minor input modifications suffice to circumvent them.
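As a minimal, illustrative sketch of the kind of "minor modification" meant here (not the specific pipeline used in our work), one can add small Gaussian noise to a protected image and re-encode it as JPEG before fine-tuning; the filenames and parameters below are hypothetical.

```python
# Illustrative input transformation: small Gaussian noise + lossy JPEG re-encoding.
# Parameters (sigma, quality) and filenames are placeholders, not tuned values.
import numpy as np
from PIL import Image

def noisy_jpeg(path_in: str, path_out: str, sigma: float = 8.0, quality: int = 75) -> None:
    # Load the (possibly perturbation-protected) image as a float array.
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    # Add small pixel-space Gaussian noise.
    noisy = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0, 255).astype(np.uint8)
    # Re-encode with lossy JPEG compression before any downstream fine-tuning.
    Image.fromarray(noisy).save(path_out, format="JPEG", quality=quality)

noisy_jpeg("protected_artwork.png", "preprocessed.jpg")
```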
#style-mimicry #adversarial-machine-learning #artist-protections #robustness-evaluation #adaptive-attacks