Data quantity doesn't matter when poisoning an LLM
Injecting as few as 250 crafted documents containing a trigger and gibberish can cause generative AI models to output gibberish when that trigger appears.
Why AI is vulnerable to data poisoning-and how to stop it
Attackers can intentionally feed misleading data into a system, causing AI to learn incorrect patterns. This can lead to dangerous consequences for operations and public safety.