Rhyne's attack involved unauthorized remote desktop sessions, deletion of network administrator accounts, and changing of passwords, showcasing significant security vulnerabilities.
On a clear night I set up my telescope in the yard and let the mount hum along while the camera gathers light from something distant and patient. The workflow is a ritual. Focus by eye until the airy disk tightens. Shoot test frames and watch the histogram. Capture darks, flats, and bias frames so the quirks of the sensor can be cleaned away later. That discipline is not fussy.
In June 2025, researchers uncovered a vulnerability that exposed sensitive Microsoft 365 Copilot data without any user interaction. Unlike conventional breaches that hinge on phishing or user error, this exploit, now known as EchoLeak, bypassed human behavior entirely, silently extracting confidential information by manipulating how Copilot interacts with user data. The incident highlights a sobering reality: Today's security models, which are designed for predictable software systems and application-layer defenses, are ill-equipped to handle the dynamic, interconnected nature of AI infrastructure.
Researchers have developed a tool that they say can make stolen high-value proprietary data used in AI systems useless, a solution that CSOs may have to adopt to protect their sophisticated large language models (LLMs). The technique, created by researchers from universities in China and Singapore, is to inject plausible but false data into what's known as a knowledge graph (KG) created by an AI operator. A knowledge graph holds the proprietary data used by the LLM.