Rhyne's attack involved unauthorized remote desktop sessions, the deletion of network administrator accounts, and password changes, exposing significant security vulnerabilities.
AI agents and other systems can't yet conduct cyberattacks fully on their own, but they can assist criminals at many stages of the attack chain, according to the International AI Safety Report. The second annual report, chaired by Canadian computer scientist Yoshua Bengio and authored by more than 100 experts across 30 countries, found that over the past year developers have vastly improved AI systems' ability to help automate and perpetrate cyberattacks.
In June 2025, researchers uncovered a vulnerability that exposed sensitive Microsoft 365 Copilot data without any user interaction. Unlike conventional breaches that hinge on phishing or user error, this exploit, now known as EchoLeak, required no human action at all, silently extracting confidential information by manipulating how Copilot retrieves and processes user data. The incident highlights a sobering reality: today's security models, designed for predictable software systems and application-layer defenses, are ill-equipped to handle the dynamic, interconnected nature of AI infrastructure.
Researchers have developed a tool that they say can render stolen high-value proprietary data used in AI systems useless, a defense that CSOs may have to adopt to protect their sophisticated large language models (LLMs). The technique, created by researchers from universities in China and Singapore, injects plausible but false data into what's known as a knowledge graph (KG) created by an AI operator. The knowledge graph holds the proprietary data the LLM draws on, so a thief who exfiltrates it cannot tell genuine entries from decoys.
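The idea can be sketched in a few lines. The following is a minimal illustration, not the researchers' actual method: a knowledge graph is modeled as a set of (subject, predicate, object) triples, and for each genuine triple a plausible but false variant is added that reuses an existing predicate. All entity names here are hypothetical.

```python
import random

# A toy knowledge graph stored as (subject, predicate, object) triples.
# These names are invented for illustration only.
real_kg = {
    ("DrugA", "treats", "ConditionX"),
    ("DrugA", "interacts_with", "DrugB"),
    ("ConditionX", "symptom", "Fatigue"),
}

def inject_decoys(kg, decoy_objects, ratio=1.0, seed=42):
    """Return a poisoned copy of the KG.

    For each real triple, add (with probability `ratio`) a plausible but
    false triple that keeps the same subject and predicate but swaps in
    a decoy object, so stolen copies mix truth with fabrications.
    """
    rng = random.Random(seed)
    poisoned = set(kg)
    for subj, pred, _ in kg:
        if rng.random() <= ratio:
            poisoned.add((subj, pred, rng.choice(decoy_objects)))
    return poisoned

poisoned = inject_decoys(real_kg, ["ConditionY", "DrugC", "Headache"])
```

In this sketch the legitimate operator would keep a record of the genuine triples and filter decoys out before serving the LLM, while an attacker who exfiltrates the store sees real and false facts intermixed, degrading the stolen data's value.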