Microsoft's AI Can Be Turned Into an Automated Phishing Machine
Briefly

Among the attacks created by Bargury is a demonstration of a hacker gaining access to sensitive information by hijacking an email account without triggering Microsoft's protections, showing security vulnerabilities in AI systems like Copilot.
Bargury also showcased how attackers could manipulate AI's responses by poisoning the database with a malicious email, illustrating the risks associated with external data access in AI systems like Copilot.
The risks of post-compromise abuse of AI, as demonstrated by Bargury's attacks, underscore the importance of security prevention and monitoring to mitigate behaviors that exploit vulnerabilities in generative AI systems like Copilot.
Security researchers warn that integrating external data sources into AI systems presents security risks through prompt injection, emphasizing the need for robust measures to mitigate the potential threats.
Read at WIRED
[
|
]