Artificial intelligencefromTechzine Global1 hour agoSmall amount of poisoned data can influence AI modelsApproximately 250 poisoned documents can create effective backdoors in LLMs regardless of model size or total training data volume.
Artificial intelligencefromTheregister15 hours agoData quantity doesn't matter when poisoning an LLMInjecting as few as 250 crafted documents containing a trigger and gibberish can cause generative AI models to output gibberish when that trigger appears.