
"As with any shiny new technology, AI-assisted coding introduces many security vulnerabilities, including the potential for enterprise development teams to accelerate their work to a speed that security teams can't match. There are also risks from AI agents and systems accessing and exfiltrating data they shouldn't be allowed to touch, plus prompt and memory injection attacks. And then there's the risk of criminals or government-backed hacking teams using LLMs to write malware"
"To help companies better manage these risks, Palo Alto Networks developed what it calls the "SHIELD" framework for vibe coding, which is all about placing security controls throughout the coding process. Middagh, who leads Unit 42's AI security services engagements biz, co-authored a Thursday blog about the SHIELD framework and shared it in advance with The Register. "Only about half of the organizations that we work with have any limits on AI," she said."
Criminals and nation-state actors are using AI-assisted or 'vibe' coding to create malware and orchestrate attacks, although AI models can introduce errors, so some of those attacks will fail, and human operators currently remain necessary. The vulnerabilities cut both ways: development teams can accelerate faster than security teams can respond, AI agents can access or exfiltrate sensitive data, and coding assistants are exposed to prompt and memory injection attacks. Roughly half of the organizations Unit 42 works with place no limits on AI use. Palo Alto Networks' SHIELD framework recommends placing security controls across the coding process, including Separation of Duties and Human-in-the-Loop code review.
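To make the Human-in-the-Loop and Separation of Duties controls above concrete, here is a minimal sketch of a pre-merge gate that blocks AI-generated changes until an independent human signs off. The `AI-Assisted: true` commit trailer, the `Commit` type, and the `may_merge` check are all hypothetical illustrations, not part of Palo Alto Networks' SHIELD framework or any particular tool.

```python
# Hypothetical Human-in-the-Loop merge gate: commits whose message carries
# an (assumed) "AI-Assisted: true" trailer are blocked unless a human
# reviewer other than the author has approved them.
from dataclasses import dataclass, field


@dataclass
class Commit:
    author: str
    message: str
    approvals: set[str] = field(default_factory=set)


def is_ai_assisted(commit: Commit) -> bool:
    # Assumption: the team tags AI-generated changes with a commit trailer.
    return "AI-Assisted: true" in commit.message


def may_merge(commit: Commit) -> bool:
    if not is_ai_assisted(commit):
        return True  # normal review policy applies to human-written code
    # Separation of Duties: the author cannot approve their own AI output,
    # so at least one approval must come from someone else.
    independent_reviewers = commit.approvals - {commit.author}
    return len(independent_reviewers) >= 1


if __name__ == "__main__":
    c = Commit(author="dev1", message="Add parser\n\nAI-Assisted: true")
    print(may_merge(c))  # False: no independent human sign-off yet
    c.approvals.add("reviewer2")
    print(may_merge(c))  # True: Human-in-the-Loop requirement satisfied
```

In a real pipeline this check would run as a CI or branch-protection step; the point of the sketch is simply that AI-generated changes get a mandatory, independent human review rather than flowing straight to merge.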
Read at The Register