#model-robustness

[ follow ]
#data-poisoning
fromFuturism
2 weeks ago
Artificial intelligence

Researchers Find It's Shockingly Easy to Cause AI to Lose Its Mind by Posting Poisoned Documents Online

Posting as few as 250 poisoned documents online can backdoor AI models, enabling trigger-phrase manipulation and creating serious security risks.
fromTechzine Global
2 weeks ago
Artificial intelligence

Small amount of poisoned data can influence AI models

Approximately 250 poisoned documents can create effective backdoors in LLMs regardless of model size or total training data volume.
fromFuturism
2 weeks ago
Artificial intelligence

Researchers Find It's Shockingly Easy to Cause AI to Lose Its Mind by Posting Poisoned Documents Online

fromTheregister
3 weeks ago

Data quantity doesn't matter when poisoning an LLM

Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data.
Artificial intelligence
[ Load more ]