
"Security researchers at PromptArmor have been evaluating Bob prior to general release and have found that IBM's "AI development partner" can be manipulated into executing malware. They report that the CLI is vulnerable to prompt injection attacks that allow malware execution and that the IDE is vulnerable to common AI-specific data exfiltration vectors. AI agent software - models given access to tools and tasked with some goal in an iterative loop - is notoriously insecure and often comes with warnings from vendors."
"The risks have been demonstrated repeatedly by security researcher Johann Rehberger, among others. Agents may be vulnerable to prompt injection, jailbreaks, or more traditional code flaws that enable the execution of malicious code. As Rehberger remarked at a recent presentation to the Chaos Computer Club, the fix for many of these risks involves putting a human in the loop to authorize risky action. That's apparently the case with Bob."
IBM's Bob is offered as a CLI and an IDE-based AI coding agent that understands user intent, repositories, and security standards. Like other AI agents with tool access, it remains susceptible to jailbreaks, prompt injection, and conventional code flaws that enable malicious code execution. IBM's own documentation warns that auto-approving high-risk commands can permit harmful operations, and it recommends maintaining allow lists of approved commands while avoiding wildcards. Even so, PromptArmor's researchers found Bob's protections insufficient when the agent was presented with a malicious code repository.
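The allow-list guidance above can be illustrated with a minimal sketch. This is not IBM's actual mechanism; the `ALLOW_LIST` contents and the `is_auto_approvable` helper are hypothetical, showing only the general idea of matching exact command tokens rather than wildcard patterns:

```python
import shlex

# Hypothetical allow list of exact commands an agent may auto-approve.
# Wildcards (e.g. "git *") are deliberately avoided: such a pattern
# would also match destructive invocations like "git push --force".
ALLOW_LIST = {
    ("git", "status"),
    ("git", "diff"),
    ("ls",),
}

def is_auto_approvable(command: str) -> bool:
    """Return True only if the command exactly matches an allow-list entry."""
    try:
        tokens = tuple(shlex.split(command))
    except ValueError:
        return False  # malformed quoting: fall back to human review
    return tokens in ALLOW_LIST

# Anything not on the list requires human-in-the-loop approval,
# the mitigation Rehberger describes for risky agent actions.
print(is_auto_approvable("git status"))  # True
print(is_auto_approvable("rm -rf /"))    # False
```

The exact-tuple match is the point: a prompt-injected payload cannot piggyback extra arguments onto an approved command, because any deviation from a listed token sequence drops back to manual approval.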
Read at The Register