When it comes to using LLMs, "threat actors are learning and understanding and gaining the lay of the land just the same as we are," Morin told The Register. "We're in a footrace right now. It's machine against machine."
Sysdig, along with other researchers, documented in 2024 an uptick in criminals using stolen cloud credentials to access LLMs. In May, the container security firm documented attackers targeting Anthropic's Claude LLM.
The researchers discovered that the broader script used in the attack could check credentials for 10 different AI services: AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
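On the defensive side, this kind of abuse leaves a trail: invocations of a cloud-hosted model by a stolen credential show up in the provider's audit logs. The sketch below is illustrative, not Sysdig's tooling; it scans AWS CloudTrail-style records for Amazon Bedrock model-invocation events (which CloudTrail can log as data events) made by identities outside a hypothetical allowlist. The `ALLOWED_ARNS` set and the sample records are assumptions for the example.

```python
# Hedged sketch: flag Bedrock model invocations in CloudTrail records made by
# identities that are not on an expected allowlist. Field names follow
# CloudTrail's documented JSON layout; the allowlist itself is hypothetical.

ALLOWED_ARNS = {"arn:aws:iam::111122223333:role/ml-inference"}  # example only

def suspicious_bedrock_calls(records: list[dict]) -> list[dict]:
    """Return records of Bedrock model invocations by non-allowlisted identities."""
    hits = []
    for rec in records:
        if rec.get("eventSource") != "bedrock.amazonaws.com":
            continue  # not a Bedrock API call
        if rec.get("eventName") not in {"InvokeModel", "InvokeModelWithResponseStream"}:
            continue  # not a model invocation
        arn = rec.get("userIdentity", {}).get("arn", "")
        if arn not in ALLOWED_ARNS:
            hits.append(rec)  # unexpected identity invoking a model
    return hits

# Hypothetical sample records: one Bedrock invocation by a compromised user,
# one unrelated S3 call that should be ignored.
sample = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/compromised-dev"}},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/compromised-dev"}},
]
print(len(suspicious_bedrock_calls(sample)))  # prints 1
```

The same pattern extends to the other providers in the list above, each of which exposes its own usage or audit logging.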
When it comes to LLMs, "the threat of a large-scale supply chain attack becomes more real ... highly successful supply chain attacks in 2025 that originated with an LLM-generated spear phish."