Block red-teamed its own AI agent to run an infostealer
Briefly

"They have to be safer and better than humans - and provably so. We need that with our agentic use, too."
"We are balancing risk constantly, and having to make trade-offs - in the AI space in particular. Like: What is a bigger risk right now? Not taking advantage of the technology enough? Or the security downsides of it? LLMs and agents are introducing a new, very rapidly evolving space."
"Users do that regularly. We write bugs in our code to where it doesn't execute. So we really just have to apply a lot of the principles we already have about making sure these agents are executing with least privilege, just like I want my software engineers to be doing."
Block co-designed the Model Context Protocol (MCP) with Anthropic and built Goose, an open-source AI agent used by almost all of Block's 12,000 employees and connected to company systems including Google accounts and Square payments. Goose was open-sourced a year ago. The CISO role requires balancing rapidly evolving AI risks against benefits, tolerating ambiguity, and making trade-offs between adoption and security downsides. AI agents must be provably safer and better than humans, who introduce similar risks themselves: engineers and users download unsafe code and write bugs. Agents should therefore operate with least-privilege access under enterprise-scale security controls.
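The least-privilege idea above can be sketched as a per-session tool allowlist that an agent runtime consults before dispatching any tool call. This is a minimal illustration, not Goose's or MCP's actual API; the names `ToolPolicy` and `invoke_tool` are hypothetical.

```python
# Hypothetical sketch of least-privilege tool gating for an AI agent session.
# Not the Goose or MCP implementation; names here are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Allowlist of tool names a single agent session may invoke."""
    allowed: set[str] = field(default_factory=set)

    def permits(self, tool_name: str) -> bool:
        return tool_name in self.allowed

def invoke_tool(policy: ToolPolicy, tool_name: str, payload: dict) -> dict:
    """Refuse any tool call outside the session's allowlist."""
    if not policy.permits(tool_name):
        raise PermissionError(f"tool '{tool_name}' not permitted for this session")
    # ... dispatch to the real tool/connector here ...
    return {"tool": tool_name, "status": "dispatched", "payload": payload}

# A read-only session: calendar lookups allowed, payment actions are not.
readonly = ToolPolicy(allowed={"calendar.read", "drive.read"})
invoke_tool(readonly, "calendar.read", {"date": "2025-01-01"})
try:
    invoke_tool(readonly, "payments.charge", {"amount": 100})
except PermissionError as err:
    print("blocked:", err)
```

The same pattern an engineering org already applies to service accounts (scoped credentials, deny by default) carries over to agents: the session gets only the tools its task needs.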
Read at The Register