Former OpenAI researchers warn of 'catastrophic harm' after the company opposes AI safety bill
Briefly

The former researchers call OpenAI's opposition to SB 1047 disappointing but not surprising, saying it demonstrates a lack of commitment to AI safety despite its CEO's public support for regulation.
William Saunders and Daniel Kokotajlo said they resigned from OpenAI after losing trust in the company's ability to develop AI systems safely and responsibly.
The former OpenAI researchers emphasized the need for adequate safety measures, warning that developing frontier AI models without them could lead to "catastrophic harm to the public."
Despite CEO Sam Altman's previous calls for regulation, the researchers highlighted a contradiction in OpenAI's stance: the company opposes actual regulatory efforts such as SB 1047.
Read at Business Insider