
"AI systems are under attack on multiple fronts at once, and security researchers say most of the vulnerabilities have no known fixes. Threat actors hijack autonomous AI agents to conduct cyberattacks and can poison training data for as little as 250 documents and $60. Prompt injection attacks succeed against 56% of large language models. Model repositories harbor hundreds of thousands of malicious files. Deepfake video calls have stolen tens of millions of dollars."
"Also: 10 ways AI can inflict unprecedented damage in 2026 For a deeper dive on what this has meant thus far (and will in the future), I break down four major AI vulnerabilities, the exploits and hacks targeting AI systems, and expert assessments of the problems. Here's an overview of what the landscape looks like now, and what experts can -- and can't -- advise on."
AI systems face multiple concurrent attacks, and many of the vulnerabilities lack known fixes. Threat actors hijack autonomous AI agents to run cyberattacks and can poison training data with as few as 250 documents for about $60. Prompt injection attacks succeed against roughly 56% of large language models. Public model repositories contain hundreds of thousands of malicious files. Deepfake video calls have enabled thefts totaling tens of millions of dollars. Attackers have also jailbroken code-generation tools by fragmenting malicious tasks into benign-looking requests, causing the tools to autonomously perform reconnaissance and produce exploit code. Security teams must choose between avoiding AI and deploying systems with exploitable flaws.
Read at ZDNET