One click is all it takes: How 'Reprompt' turned Microsoft Copilot into a data exfiltration tool
Briefly

AI copilots are incredibly intelligent and useful, but they can also be naive, gullible, and even dumb at times. A new one-click attack flow discovered by Varonis Threat Labs researchers underscores this fact. 'Reprompt,' as they've dubbed it, is a three-step attack chain that completely bypasses security controls after an initial LLM prompt, giving attackers invisible, undetectable, unlimited access.
"AI assistants have become trusted companions where we share sensitive information, seek guidance, and rely on them without hesitation," Varonis Threat Labs security researcher Dolev Taler wrote in a blog post. "But ... trust can be easily exploited, and an AI assistant can turn into a data exfiltration weapon with a single click."
P2P embeds a prompt directly in a URL, exploiting Copilot's default 'q' URL parameter functionality, which is intended to streamline and improve the user experience. The URL can include specific questions or instructions that automatically populate the input field when the page loads. Using this loophole, attackers then employ a double-request technique to circumvent safeguards: Copilot checks for malicious content in the 'q' parameter only for the first prompt, not for subsequent requests.
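To make the mechanism concrete, here is a minimal sketch of how a prompt rides in a 'q' query parameter and auto-populates an assistant's input field, as the article describes. The base URL and the harmless placeholder prompt are assumptions for illustration; this is not the actual attack payload.

```python
# Illustrative sketch: embedding a prompt in a 'q' URL parameter.
# The base URL below is an assumption, not confirmed by the article;
# the prompt text is a harmless placeholder.
from urllib.parse import urlencode, urlparse, parse_qs

BASE = "https://copilot.microsoft.com/"  # hypothetical landing page

def build_prompt_url(prompt: str) -> str:
    """URL-encode a prompt into the 'q' query parameter, so that the
    page auto-fills the assistant's input field on load."""
    return BASE + "?" + urlencode({"q": prompt})

url = build_prompt_url("Summarize this page")

# The receiving page decodes the parameter back into a prompt string.
decoded = parse_qs(urlparse(url).query)["q"][0]
```

The point of the sketch is that the attacker controls everything after the `?`: whatever lands in `q` is treated as user input, which is why validating only the first prompt (the double-request gap the researchers describe) leaves later requests unchecked.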
Reprompt is a three-step attack chain that bypasses Copilot's security controls after an initial prompt, enabling invisible, undetectable, and persistent data exfiltration. The technique combines P2P injection to place a prompt in the 'q' URL parameter, double-request behavior in which Copilot validates only the first prompt, and chain-request sequences that follow up to extract data. The vulnerability was observed in Microsoft Copilot Personal, and a patch has been released. Enterprise risk depends on Copilot configuration and user awareness, since the attack can be executed with a single click and evades standard prompt checks.
Read at Computerworld