Google's threat intel chief explains why AI is now both the weapon and the target
Briefly

"Generative AI has rapidly become core infrastructure, embedded across enterprise software, cloud platforms, and internal workflows. But that shift is also forcing a structural rethink of cybersecurity. The same systems driving productivity and growth are emerging as points of vulnerability. Google Cloud's latest AI Threat Tracker report suggests the tech industry has entered a new phase of cyber risk, one in which AI systems themselves are high-value targets."
"In some cases, attackers flood models with carefully designed prompts to force them to reveal how they think and make decisions. Unlike traditional cyberattacks that involve breaching networks, many of these efforts rely on legitimate access, making them harder to detect and shifting cybersecurity toward protecting intellectual property rather than perimeter defenses. Researchers say model extraction could allow competitors, state actors, or academic groups to replicate valuable AI capabilities without triggering breach alerts."
"The report also found that state-backed and financially motivated actors from China, Iran, North Korea, and Russia are using AI across the attack cycle. Threat groups are deploying generative models to improve malware, research targets, mimic internal communications, and craft more convincing phishing messages. Some are experimenting with AI agents to assist with vulnerability discovery, code review, and multi-step attacks."
Generative AI now underpins enterprise software, cloud platforms, and internal workflows while introducing new cybersecurity vulnerabilities. Model-extraction or distillation attacks repeatedly probe models to copy proprietary capabilities, often using legitimate access and crafted prompts to reveal model logic. Detection is difficult because these attacks do not necessarily involve network breaches, shifting security priorities from perimeter defenses to intellectual property protection. State-backed and financially motivated actors leverage generative models to enhance malware, reconnaissance, phishing, and vulnerability discovery, and they experiment with AI agents for automated, multi-step attacks. Protecting model internals and trained logic is critical to maintaining competitive advantage and security.
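To make the extraction-by-query idea concrete, here is a minimal sketch of how an attacker with only legitimate query access can approximately copy a proprietary model. The report does not publish attack code; this illustration uses scikit-learn stand-ins, and the names (`teacher`, `student`, `query_api`) are hypothetical placeholders rather than any real vendor API.

```python
# Minimal sketch of a model-extraction (distillation) attack.
# Assumption: the "teacher" stands in for a proprietary hosted model
# that the attacker can query but never inspect or download.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# 1) The victim's proprietary model, trained on private data the attacker never sees.
X_private, y_private = make_classification(n_samples=2000, n_features=10, random_state=0)
teacher = GradientBoostingClassifier(random_state=0).fit(X_private, y_private)

def query_api(x):
    """Hypothetical stand-in for legitimate query access to the hosted model."""
    return teacher.predict(x)

# 2) The attacker floods the endpoint with crafted inputs and records every response.
X_probe = rng.normal(size=(5000, 10))   # attacker-chosen queries
y_probe = query_api(X_probe)            # harvested outputs

# 3) A "student" trained only on those query/response pairs approximates the teacher.
student = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_probe, y_probe)

# 4) Agreement on fresh inputs shows how much behavior was copied without any
#    network breach or access to the private training data.
X_test = rng.normal(size=(1000, 10))
agreement = (student.predict(X_test) == query_api(X_test)).mean()
print(f"Student matches teacher on {agreement:.0%} of unseen queries")
```

Because every step looks like ordinary API usage, this kind of copying leaves none of the signals that perimeter-focused monitoring is built to catch, which is why the report frames it as an intellectual-property problem rather than a breach problem.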
Read at Fast Company