OpenAI's new model leaps ahead in coding capabilities, but raises unprecedented cybersecurity risks | Fortune

"OpenAI believes it has finally pulled ahead in one of the most closely watched races in artificial intelligence: AI-powered coding. Its newest model, GPT-5.3-Codex, represents a solid advance over rival systems, showing markedly higher performance on coding benchmarks and reported results than earlier generations of both OpenAI's and Anthropic's models-suggesting a long-sought edge in a category that could reshape how software is built."
"But the company is rolling out the model with unusually tight controls and delaying full developer access as it confronts a harder reality: the same capabilities that make GPT-5.3-Codex so effective at writing, testing, and reasoning about code also raise serious cybersecurity concerns. In the race to build the most powerful coding model, OpenAI has run headlong into the risks of releasing it."
"GPT-5.3-Codex is available to paid ChatGPT users, who can use the model for everyday software development tasks such as writing, debugging, and testing code through OpenAI's Codex tools and ChatGPT interface. But for now, the company is not opening unrestricted access for high-risk cybersecurity uses, and OpenAI is not immediately enabling full API access that would allow the model to be automated at scale."
"The company's blog post accompanying the model release on Thursday said that while it does not have "definitive evidence" the new model can fully automate cyber attacks, "we're taking a precautionary approach and deploying our most comprehensive cybersecurity safety stack to date. Our mitigations include safety training, automated monitoring, trusted access for advanced capabilities, and enforcement pipelines including threat intelligence.""
GPT-5.3-Codex delivers notably higher coding performance than prior models, improving writing, debugging, testing, and code reasoning. Paid ChatGPT users can access the model through Codex tools and the ChatGPT interface for everyday development tasks, while unrestricted API automation and high-risk cybersecurity uses remain restricted. OpenAI has implemented layered safeguards, including safety training, automated monitoring, enforcement pipelines, and a trusted-access program for vetted security professionals. The company acknowledges no definitive evidence that the model can fully automate cyberattacks but treats the capabilities as crossing a new cybersecurity risk threshold and is delaying broader developer and API access.