OpenAI co-founder calls for AI labs to safety test rival models | TechCrunch
Briefly

"OpenAI and Anthropic, two of the world's leading AI labs, briefly opened up their closely guarded AI models to allow for joint safety testing - a rare cross-lab collaboration at a time of fierce competition. The effort aimed to surface blind spots in each company's internal evaluations, and demonstrate how leading AI companies can work together on safety and alignment work in the future."
"The joint safety research, published Wednesday by both companies, arrives amid an arms race among leading AI labs like OpenAI and Anthropic, where billion-dollar data center bets and $100 million compensation packages for top researchers have become table stakes. Some experts warn that the intensity of product competition could pressure companies to cut corners on safety in the rush to build more powerful systems."
OpenAI and Anthropic temporarily granted each other special API access to less-restricted versions of their models so they could run joint safety evaluations aimed at exposing vulnerabilities and alignment gaps; GPT-5 was not tested because it had not yet been released. The collaboration was meant to show how major AI companies can cooperate on safety despite fierce market competition. The research was published jointly amid an industry arms race marked by billion-dollar infrastructure bets and outsized researcher compensation. Anthropic later revoked some of OpenAI's API access, citing terms-of-service concerns; OpenAI said the events were unrelated and that it expects competition to remain fierce.
Read at TechCrunch