Tech to protect images against AI scrapers can be beaten
Briefly

Researchers have developed LightShed, a method designed to circumvent image protections like Glaze and Nightshade, which artists use to shield their work from unwanted AI training. LightShed's creators emphasize that their aim is to raise awareness of vulnerabilities in existing protection schemes, not to help AI companies bypass artists' defenses. Glaze and Nightshade employ adversarial perturbations to frustrate AI recognition of artistic styles and features. The work was carried out under strict responsible disclosure protocols, with the goal of prompting improvements to image protection techniques amid a legal vacuum surrounding AI training practices.
LightShed is intended as an antidote to image-based data poisoning schemes such as Glaze and Nightshade, both developed by University of Chicago computer scientists to discourage the non-consensual use of artists' work for AI training.
"We view LightShed primarily as an awareness effort to highlight weaknesses in existing protection schemes, and we have conducted our work under strict responsible disclosure protocols," the researchers said.
Glaze is software that can be applied to an image to make it difficult for a machine learning model to mimic the image's artistic style.
Nightshade poisons an image with adversarial data (perturbations) that makes AI models misrecognize image features.
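To make the idea of adversarial perturbations concrete, here is a minimal, hypothetical sketch of a generic FGSM-style perturbation in PyTorch. It is not the actual Glaze or Nightshade algorithm; the file names, model choice, and perturbation budget are illustrative assumptions, and the example only shows the general principle of nudging pixels so a vision model's interpretation of the image shifts.

```python
# Illustrative sketch only: a generic FGSM-style adversarial perturbation.
# This is NOT the Glaze or Nightshade algorithm; it demonstrates the broad
# idea of adding small pixel changes that mislead a vision model.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained feature extractor works for the demonstration (assumption).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("artwork.png").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Push the model's prediction away from its current top class.
logits = model(image)
current_class = logits.argmax(dim=1)
loss = torch.nn.functional.cross_entropy(logits, current_class)
loss.backward()

epsilon = 4 / 255  # small budget keeps the change nearly invisible (assumption)
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

T.ToPILImage()(perturbed.squeeze(0)).save("artwork_perturbed.png")
```

The perturbed image looks essentially unchanged to a human viewer, but the added gradient-sign noise degrades how a model interprets it; countermeasures like LightShed aim to detect and strip exactly this kind of added signal.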
Read at The Register