OpenAI says it may 'adjust' its safety requirements if a rival lab releases 'high-risk' AI | TechCrunch
Briefly

OpenAI's updated Preparedness Framework signals that the company may change its safety requirements in response to competitors' actions, framing the policy as a balance between rapid deployment and safety. The company says it may adjust its standards if a rival lab releases a "high-risk" system without comparable safeguards, though it promises to rigorously confirm that the risk landscape has actually changed before doing so. The framework also leans more heavily on automated evaluations to speed up model development, raising concerns that compressed testing timelines will make safety checks less thorough, a sign of growing competitive pressure in the AI market.
In a blog post, OpenAI stated, "If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements."
OpenAI added that even if it adjusted its requirements in response to a competitor's release, it would still keep safeguards at "a level more protective."
Read at TechCrunch