OpenAI unleashes Aardvark security agent in private beta
"That potentially toxic relationship has helped spawn dozens of AI security startups and too many research papers about the security risks posed by large language models. Aardvark might just undo some of the harm that has arisen from vibe coding with the likes of GPT-5, not to mention the general defect rate of human-authored software. It can scan source code repositories on an ongoing basis to flag vulnerabilities, test the exploitability of code, prioritize bugs by severity, and propose fixes."
"The maker of ChatGPT on Thursday announced that it is privately testing Aardvark, an agentic security system based on GPT‑5. "Aardvark represents a breakthrough in AI and security research: an autonomous agent that can help developers and security teams discover and fix security vulnerabilities at scale," the company said in its post. "Aardvark is now available in private beta to validate and refine its capabilities in the field.""
OpenAI is privately beta testing Aardvark, an agentic security system built on GPT‑5 that autonomously discovers and helps fix software vulnerabilities at scale. It continuously scans source code repositories to flag vulnerabilities, test their exploitability, prioritize bugs by severity, and propose fixes. Rather than relying on traditional program-analysis techniques such as fuzzing or software composition analysis, Aardvark uses LLM-powered reasoning and tool use, emulating how a human security researcher works: reading code, analyzing it, and writing and running tests. Because it operates continuously and autonomously, it can consume significant API budget, so spending limits or billing controls are needed to cap its activity.
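The budget concern above applies to any continuously running agent, not just Aardvark. As a minimal sketch, assuming a hypothetical fixed cost per scan and an operator-set cap (none of these names or numbers come from OpenAI), a hard spend guard around an autonomous scan loop might look like:

```python
# Hedged sketch of a spend cap for an autonomous scanning agent.
# scan targets, COST_PER_SCAN_USD, and BudgetGuard are illustrative
# assumptions, not part of any real Aardvark or OpenAI API.

COST_PER_SCAN_USD = 0.25  # assumed average API cost of one repository scan


class BudgetExceeded(RuntimeError):
    """Raised when the agent would spend past its cap."""


class BudgetGuard:
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, amount_usd: float) -> None:
        # Refuse the charge *before* spending, so the cap is never exceeded.
        if self.spent_usd + amount_usd > self.cap_usd:
            raise BudgetExceeded(
                f"cap ${self.cap_usd:.2f} would be exceeded "
                f"(already spent ${self.spent_usd:.2f})"
            )
        self.spent_usd += amount_usd


def run_scans(guard: BudgetGuard, targets: list) -> list:
    findings = []
    for target in targets:
        guard.charge(COST_PER_SCAN_USD)  # stops the loop once the cap is hit
        findings.append(f"scanned {target}")  # stand-in for real scan output
    return findings


guard = BudgetGuard(cap_usd=1.00)
print(run_scans(guard, ["repo-a", "repo-b", "repo-c"]))
```

The design choice here is to charge before acting rather than after: an agent that reconciles spend only after each call can overshoot its limit by one expensive operation, whereas a pre-charge guard fails closed.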
Read at The Register