OpenAI appears to have violated California's AI safety law with GPT-5.3-Codex release, watchdog group says
"OpenAI may have violated California's new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group.A violation would potentially expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the new law's provisions."
"The controversy centers on GPT-5.3-Codex, OpenAI's newest coding model, which was released last week. The model is part of an effort by OpenAI to reclaim its lead in AI-powered coding and, according to benchmark data OpenAI released, shows markedly higher performance on coding tasks than earlier model versions from both OpenAI and competitors like Anthropic. However, the model has also raised unprecedented cybersecurity concerns."
"CEO Sam Altman said the model was the first to hit the "high" risk category for cybersecurity on the company's Preparedness Framework, an internal risk classification system OpenAI uses for model releases. This means OpenAI is essentially classifying the model as capable enough at coding to potentially facilitate significant cyber harm, especially if automated or used at scale."
"California's SB 53, which went into effect in January, requires major AI companies to publish and stick to their own safety frameworks, detailing how they'll prevent catastrophic risks-defined as incidents causing more than 50 deaths or $1 billion in property damage-from their models. It also prohibits these companies from making misleading statements about compliance."
OpenAI released GPT-5.3-Codex, a coding model that shows markedly higher performance on coding tasks than earlier models and some competitors. The company classified the model as "high" cybersecurity risk under its Preparedness Framework, indicating potential capability to facilitate significant cyber harm if automated or used at scale. The Midas Project alleges OpenAI failed to adhere to its legally binding safety commitments under California's SB 53. SB 53 requires major AI companies to publish and follow safety frameworks to prevent catastrophic risks and prohibits misleading statements about compliance. The launch raises questions about whether the safeguards OpenAI's own framework requires for high-risk models were in place.
Read at Fortune