OpenAI just gave itself wiggle room on safety if rivals release 'high-risk' models

OpenAI is contemplating a shift in its safety protocols under competitive pressure, saying it might relax its standards if a rival lab releases a high-risk AI system without comparable safeguards. The company also launched GPT-4.1 without a safety report, a departure from its usual transparency practices that has drawn criticism. A former employee pointed out that the updated Preparedness Framework drops the requirement to safety-test fine-tuned models, raising concerns about OpenAI's commitment to safety and accountability, even though the company says it would assess risks before making any adjustment.
OpenAI is quietly reducing its safety commitments. Omitted from OpenAI's list of Preparedness Framework changes: no longer requiring safety tests of fine-tuned models.
The company stated it might adjust its safety requirements if another AI developer releases a high-risk model without comparable safeguards, and that it would do so only after confirming the risk landscape had actually changed.
Read at Business Insider