Researchers Uncover New Phishing Risk Hidden Inside Microsoft Copilot
Briefly

Researchers Uncover New Phishing Risk Hidden Inside Microsoft Copilot
"The most interesting finding was not that Copilot followed [the] attacker instructions. It was how much more convincing the output became once it appeared inside the assistant's UI. Users have spent years learning to distrust suspicious emails, but that skepticism does not transfer to AI-generated summaries. The attacker just needs the assistant to speak with authority."
"AI assistants such as Microsoft Copilot are becoming deeply integrated into everyday productivity workflows across Outlook, Microsoft Teams, and other Microsoft 365 services. Features like email summarization allow employees to quickly understand long threads, prioritize responses, and gather context from related documents or conversations. For organizations managing large volumes of communication, these tools can improve efficiency by reducing the time spent reviewing messages and coordinating across teams."
"AI systems are often asked to interpret and summarize untrusted external content, including emails sent by unknown or potentially malicious actors. Research examining Copilot's behavior shows that attacker-controlled instructions embedded in an email can sometimes influence how the assistant generates its summary. In certain cases, these instructions can steer the output to introduce misleading or malicious content."
Microsoft Copilot and similar AI assistants are increasingly integrated into workplace productivity tools like Outlook and Microsoft Teams, offering efficiency benefits through email summarization and context gathering. However, this integration creates a security vulnerability: AI systems process untrusted external content, including emails from unknown or malicious actors. Researchers at Permiso discovered that attacker-controlled instructions embedded in emails can manipulate Copilot's summaries through cross-prompt injection attacks. The critical finding is that malicious output appears more convincing within the AI assistant's interface than in traditional emails, because users have learned to distrust suspicious emails but extend trust to AI-generated summaries. Attackers only need the assistant to speak with authority to succeed.
Read at TechRepublic
Unable to calculate read time
[
|
]