
Michael Bargury, CTO of AI security company Zenity, welcomed attendees to the company's AI Agent Security Summit on Wednesday with an unexpected admission. "This is a new space and we - frankly - don't really know what we're doing," he said at San Francisco's Commonwealth Club. "But we're trying ... We need to face things as they are. And the only way to do it is together."
Security, particularly when an AI agent can control your computer, is different, he said. Ryan Ray, regional director of Slalom's cybersecurity and privacy consulting practice, defined AI agents in a presentation as "systems that pursue complex goals with limited supervision." You may also know them by developer Simon Willison's formulation, "AI models using tools in a loop." They are, by any definition, a security risk.
Senior leadership at an AI security company admitted uncertainty about AI agent security and urged collective action. A conference marketing graphic blended Marvel and DC motifs, projecting aspirational heroism. Presentations emphasized managing risk and limiting damage rather than eliminating threats; security often remains an afterthought, with many AI labs prioritizing content safety over system control. AI agents, defined as systems that pursue complex goals with limited supervision, or as AI models using tools in a loop, pose distinct security risks, particularly when they gain control of user computers, and require new defensive approaches.
Read at The Register