Sneaky Mermaid attack in Microsoft 365 Copilot steals data
Briefly

"As a proof of concept, Logue asked M365 Copilot to summarize a specially crafted financial report document with an indirect prompt injection payload hidden in the seemingly innocuous "summarize this document" prompt. The payload uses M365 Copilot's search_enterprise_emails tool to fetch the user's recent emails, and instructs the AI assistant to generate a bulleted list of the fetched contents, hex encode the output, and split up the string of hex-encoded output into multiple lines containing up to 30 characters per line."
"Researcher Adam Logue discovered the data-stealing exploit, which abuses M365 Copilot's built-in support for Mermaid diagrams, a JavaScript-based tool that allows users to generate diagrams using text prompts. In addition to integrating with M365 Copilot, Mermaid diagrams also support CSS. "This opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a mermaid diagram on the fly and can include data retrieved from other tools in the diagram," Logue wrote.
An indirect prompt-injection vulnerability in Microsoft 365 Copilot allowed attackers to exfiltrate tenant data such as emails by abusing Copilot's Mermaid diagram rendering and Mermaid's CSS support. The exploit embeds malicious instructions inside a seemingly innocuous document, then uses the search_enterprise_emails tool to fetch email contents, hex-encode them, and split the encoded output into short lines for covert transfer. Because Mermaid diagrams can include CSS and attacker-controlled hyperlinks, the rendered diagram can present a social-engineered element such as a fake login button that carries the encoded data when clicked. Microsoft fixed the vulnerability but determined that M365 Copilot is out of scope for the vulnerability reward program, so no bounty was paid.
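The encoding step described above (hex-encode the fetched text, then wrap it into lines of at most 30 characters) can be sketched as follows. This is a minimal illustration of the transformation the payload instructed Copilot to perform, not Logue's actual payload; the function name and sample string are hypothetical.

```python
def hex_encode_and_wrap(text: str, width: int = 30) -> list[str]:
    """Hex-encode text and split the result into lines of at most `width` chars,
    mirroring the wrapping step described in the write-up."""
    encoded = text.encode("utf-8").hex()
    return [encoded[i:i + width] for i in range(0, len(encoded), width)]

# Hypothetical sample standing in for fetched email content.
lines = hex_encode_and_wrap("Subject: Q3 forecast")

# An attacker who receives the lines can trivially reverse the encoding.
recovered = bytes.fromhex("".join(lines)).decode("utf-8")
```

The wrapping matters because it keeps each chunk short enough to survive rendering limits in the diagram, while the receiver simply concatenates the lines and decodes.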
Read at The Register