#adversarial-testing

Information security
from DevOps.com
1 week ago

Is Your AI Agent Secure? The DevOps Case for Adversarial QA Testing

Validating autonomous testing agents requires a different approach than deterministic applications due to their non-linear behavior.
#ai-security
Information security
from SecurityWeek
1 month ago

OpenAI to Acquire AI Security Startup Promptfoo

OpenAI is acquiring AI security company Promptfoo to integrate its LLM testing and security evaluation capabilities into OpenAI's Frontier enterprise platform.
from London Business News | Londonlovesbusiness.com
2 months ago

The 10 best AI red teaming tools of 2026

AI systems are becoming part of everyday life in business, healthcare, finance, and many other areas. As these systems handle more important tasks, the security risks they face grow larger. AI red teaming tools help organizations test their AI systems by simulating attacks and finding weaknesses before real threats can exploit them. These tools work by challenging AI models in different ways to see how they respond under pressure.
Artificial intelligence
from Fortune
3 months ago

I oversee a lab where engineers try to destroy my life's work. It's the only way to prepare for quantum threats | Fortune

This happened in the early 1990s, when I was a young engineer starting an internship at one of the companies that helped create the smart card industry. I believed my card was secure. I believed the system worked. But watching strangers casually extract something that was supposed to be secret and protected was a shock. It was also the moment I realized how insecure security actually is, and the devastating impact security breaches could have on individuals, global enterprises, and governments.
Information security
Science
from The Washington Post
6 months ago

How AI is making it easier to design new toxins without being detected

AI-designed proteins can bypass current biosecurity screening, requiring ongoing patches, adversarial testing, and continuous monitoring to prevent misuse.
from The Register
6 months ago

AI trained for treachery becomes the perfect agent

The problem in brief: LLM training produces a black box that can only be tested through prompts and analysis of output tokens. If a model is trained to switch from good to evil behavior on a particular trigger prompt, there is no way to tell without knowing that prompt. Similar problems arise when an LLM learns to recognize a test regime and optimizes for that rather than for the real task it is intended to perform ("Volkswagening"), or when it simply decides to be deceptive.
Artificial intelligence