
"The practical consequence of being behind on agentic AI goes beyond technical exposure. Security teams that cannot speak the language of AI engineering - that cannot challenge design decisions, propose workable controls, or ask informed questions - get bypassed. Business units move forward without them, not out of bad faith, but because a security team that cannot engage substantively with the technology is not a useful partner for decisions about it."
Agentic AI is already operating in production, executing tasks, consuming data, and taking actions with limited security-team involvement. The industry's focus on policy (allow, restrict, or monitor) does not address whether security professionals understand the technology they must secure. Security cannot be applied effectively without genuine fluency, as past shifts such as firewalls and cloud computing showed: where teams skipped foundational knowledge, they ended up with environments they could not reason about. When security teams cannot speak the language of AI engineering, they get bypassed, because they cannot challenge designs, propose workable controls, or ask informed questions. Closing the gap starts with hands-on engagement: building agents and experimenting with developer tools.
Read at The Hacker News