Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers' confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, allowed Copilot Chat to read and outline the contents of emails since January, even if customers had data loss prevention policies to prevent ingesting their sensitive information into Microsoft's large language model. Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in its Office software products, including Word, Excel, and PowerPoint.
Within months of its launch in November 2022, ChatGPT had started making its mark as a formidable tool for writing and optimizing code. Invariably, some engineers at Samsung thought it was a good idea to use AI to optimize a specific piece of code that they had been struggling with for a while. However, they forgot to note the nature of the beast. AI simply does not forget; it learns from the data it works on, quietly making it a part of its knowledge base.
AI systems are becoming part of everyday life in business, healthcare, finance, and many other areas. As these systems handle more important tasks, the security risks they face grow larger. AI red teaming tools help organizations test their AI systems by simulating attacks and finding weaknesses before real threats can exploit them. These tools work by challenging AI models in different ways to see how they respond under pressure.
Varonis has announced its acquisition of AllTrue.ai, an AI trust, risk, and security management (AI TRiSM) company, in a move aimed at helping enterprises manage and secure the growing use of AI across their organizations. The deal underscores a broader industry shift as security vendors race to address the risks introduced by large language models, copilots, and autonomous AI agents operating at scale.
National Cyber Director Sean Cairncross, speaking at the Information Technology Industry Council's Intersect policy summit, did not indicate when this framework would be finalized, but said the project is a "hand-in-glove" effort with the Office of Science and Technology Policy. President Donald Trump "is very forward leaning on the innovation side of AI," Cairncross said. "We are working to ensure that security is not viewed as a friction point for innovation" but is built into AI systems foundationally, he added.
"Phone theft is more than just losing a device; it's a form of financial fraud that can leave you suddenly vulnerable to personal data and financial theft. That's why we're committed to providing multi-layered defenses that help protect you before, during, and after a theft attempt," said Google in the announcement. Your phone now fights back when stolen The most impressive upgrade targets the moment of theft itself. Android 's enhanced Failed Authentication Lock now includes stronger penalties for wrong password attempts, extending lockout periods to frustrate thieves trying to crack your device.
While this is a good start, traditional red-and-blue teaming cannot match the speed and complexity of modern adoption and AI-driven systems. Instead, agencies should look to combine continuous attack simulations with automated defense adjustments, enabling an automated purple teaming approach. Purple teaming shifts the paradigm from one-off testing to continuous, autonomous GenAI security by allowing agents to simulate AI-specific attacks and initiate immediate remediation within the same platform.
But like everything else in life, there will always be a more powerful AI waiting in the wings to take out both protagonists and open a new chapter in the fight. Acclaimed author and enthusiastic Mac user Douglas Adams once posited that Deep Thought, the computer, told us the answer to the ultimate question of life, the universe, and everything was 42, which only made sense once the question was redefined. But in today's era, we cannot be certain the computer did not hallucinate.
The organization puts on the prominent annual gathering of cybersecurity experts, vendors, and researchers that started in 1991 as a small cryptography event hosted by the corporate security giant RSA. RSAC is now a separate company with events and initiatives throughout the year, but its conference in San Francisco is still its flagship offering with tens of thousands of attendees each spring.
Critical workloads from the security company are migrating to Google's cloud service, and customers will have access to broad protection for their AI deployments. The combination should provide end-to-end security, "from code to cloud" as Palo Alto Networks describes it. Customers can protect their AI workloads and data on Google Cloud with both Prisma AIRS and built-in security options from the hyperscalers.
Founded in 2011, Chatterbox Labs focuses on AI security, transparency about AI activity, and quantitative risk analysis. The company's technology provides automated security and safety tests that generate risk metrics for enterprise implementations. This is an important piece of the puzzle in providing the necessary stability for the advance of AI. IDC predicts AI spending of $227 billion in the enterprise market by 2025, but scaling up pilots to production remains costly and complex.
Katharine Jarmul challenged five common AI security and privacy myths in her keynote at InfoQ Dev Summit Munich 2025: that guardrails will protect us, better model performance improves security, risk taxonomies solve problems, one-time red teaming suffices, and the next model version will fix current issues. Jarmul argued that current approaches to AI safety rely too heavily on technical solutions while ignoring fundamental risks, calling for interdisciplinary collaboration and continuous testing rather than one-time fixes.
Allie Miller, for example, recently ranked her go-to LLMs for a variety of tasks but noted, "I'm sure it'll change next week." Why? Because one will get faster or come up with enhanced training in a particular area. What won't change, however, is the grounding these LLMs need in high-value enterprise data, which means, of course, that the real trick isn't keeping up with LLM advances, but figuring out how to put memory to use for AI.
The AI upstart didn't use the attack it found, which would have been an illegal act that would also undermine the company's we-try-harder image. Anthropic can probably also do without $4.6 million, a sum that would vanish as a rounding error amid the billions it's spending. But it could have done so, as described by the company's security scholars. And that's intended to be a warning to anyone who remains blasé about the security implications of increasingly capable AI models.