OpenAI is launching an 'independent' safety board that can stop its model releases

OpenAI's Safety and Security Committee is becoming an independent board oversight committee with the authority to delay model launches when safety concerns arise, signaling a heightened focus on safety.
The committee will be briefed on safety evaluations for major model releases before public rollout, ensuring safety reviews happen ahead of launch.
The structure resembles Meta's Oversight Board and is meant to bolster accountability in OpenAI's safety processes, yet questions linger about its true independence, since the committee's members also sit on OpenAI's board of directors.
The committee's review also recommends that OpenAI pursue industry-wide collaboration on security and greater transparency about its safety work.
Read at The Verge