#model-security

Artificial intelligence
from Fortune
1 week ago

A handful of bad data can 'poison' even the largest AI models, researchers warn

Just 250 malicious documents can create backdoor vulnerabilities in large language models regardless of model size.
Artificial intelligence
from CSO Online
1 month ago

LLMs easily exploited using run-on sentences, bad grammar, image scaling

Large language models remain easily manipulated into revealing sensitive data via prompt-formatting and hidden-image attacks, owing to gaps in alignment training and brittle prompt security.