
"The attack is made possible because of agent discovery and agent-to-agent collaboration capabilities within ServiceNow's Now Assist. With Now Assist offering the ability to automate functions such as help-desk operations, the scenario opens the door to possible security risks. For instance, a benign agent can parse specially crafted prompts embedded into content it's allowed access to and recruit a more potent agent to read or change records, copy sensitive data, or send emails, even when built-in prompt injection protections are enabled."
"The most significant aspect of this attack is that the actions unfold behind the scenes, unbeknownst to the victim organization. At its core, the cross-agent communication is enabled by controllable configuration settings, including the default LLM to use, tool setup options, and channel-specific defaults where the agents are deployed - The underlying large language model (LLM) must support agent discovery (both Azure OpenAI LLM and Now LLM, which is the default choice, support the feature) Now Assist agents are automatically grouped"
Malicious actors can exploit default configurations in ServiceNow Now Assist to perform second-order prompt injection attacks through its agent discovery and agent-to-agent collaboration capabilities. An attacker plants a crafted prompt in content a benign agent is allowed to read; when that agent later parses the content, the payload recruits more capable agents to read or modify records, copy sensitive corporate data, send emails, or escalate privileges. The behavior stems from out-of-the-box defaults, such as the chosen LLM (Now LLM or Azure OpenAI), tool setup options, and channel-specific settings, that permit agents to discover and recruit one another. The resulting actions execute covertly, bypassing built-in prompt injection protections and going unnoticed by the victim organization.
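The defaults the summary refers to can be pictured as a settings map. The key names below are illustrative stand-ins rather than actual ServiceNow property names; the point is that each setting looks harmless on its own, and it is their combination that makes recruitment possible.

```python
# Illustrative stand-ins for the defaults described above; these keys are
# hypothetical, not real ServiceNow property names.
now_assist_defaults = {
    "default_llm": "Now LLM",     # Now LLM and Azure OpenAI both support discovery
    "agent_discovery": True,      # required for agent-to-agent collaboration
    "auto_team_grouping": True,   # agents deployed to the same channel share a team
    "agents_discoverable": True,  # channel-specific default for published agents
}


def recruitment_possible(cfg: dict) -> bool:
    """Cross-agent recruitment requires all of these conditions at once."""
    return (cfg["default_llm"] in {"Now LLM", "Azure OpenAI"}
            and cfg["agent_discovery"]
            and cfg["auto_team_grouping"]
            and cfg["agents_discoverable"])


print(recruitment_possible(now_assist_defaults))  # True with out-of-the-box settings
```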
Read at The Hacker News