Microsoft's AI Can Be Turned Into an Automated Phishing Machine

Among the other attacks created by Bargury is a demonstration of how a hacker, who, again, must already have hijacked an email account, can gain access to sensitive information, such as people's salaries, without triggering Microsoft's protections for sensitive files. "A bit of bullying does help," Bargury says.
In other instances, he shows how an attacker who doesn't have access to email accounts, but who poisons the AI's database by sending it a malicious email, can manipulate answers about banking information so that the AI provides the attacker's own bank details. "Every time you give AI access to data, that is a way for an attacker to get in," Bargury says.
Another demo shows how an external hacker could get some limited information about whether an upcoming company earnings call will be good or bad, while the final instance, Bargury says, turns Copilot into a malicious insider by providing users with links to phishing websites.
Phillip Misner, head of AI incident detection and response at Microsoft, says the company appreciates Bargury identifying the vulnerability and has been working with him to assess the findings. "The risks of post-compromise abuse of AI are similar to other post-compromise techniques," Misner says. "Security prevention and monitoring across environments and identities help mitigate or stop such behaviors."
Read at WIRED