Ex-OpenAI researchers claim Sam Altman's public support for AI regulation is a facade: "When actual regulation is on the table, he opposes it"
Briefly

In a letter addressing OpenAI's opposition to the proposed AI bill, the researchers state: "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems." The letter underscores deep concern about the company's commitment to safety during AI development.
Former OpenAI researchers William Saunders and Daniel Kokotajlo expressed alarm over the pace at which OpenAI is launching new models, arguing that the ChatGPT maker develops sophisticated and advanced AI models without elaborate safety measures to prevent them from spiraling out of control. Their resignations have sharpened the debate over how AI development should be regulated.
In opposing the proposed AI bill (SB 1047), OpenAI acknowledged some of its provisions but argued that AI regulation should be set at the federal rather than the state level, citing concerns about fragmented governance and the broader implications for AI technology.
Despite the backlash over the rushed GPT-4o launch, OpenAI maintained: "we didn't cut any corners while shipping the product." The statement points to an ongoing dispute over whether the company prioritizes rapid innovation over necessary safety measures.
Read at Windows Central