Biden-era AI safety promises aren't holding up, and Apple's the weakest link
Briefly

The Biden administration secured voluntary commitments from AI companies in 2023 to drive safe, secure, and trustworthy AI development. Those eight commitments were translated into 30 yes-no indicators, and 16 signatory companies were scored on them using public disclosures through December 31, 2024. In the absence of clear White House guidance, the test was simple: did any public evidence show a company acting to fulfill a commitment? By that measure, roughly half of the signatories showed no public evidence of compliance by the end of 2024. OpenAI, Anthropic, Google, and Microsoft ranked highest, while Apple scored lowest at 13 percent, a figure that partly reflects its later joining date.
"drive safe, secure, and trustworthy development of AI technology."
"This came at a time politically when we expected things might be changing."
"It's difficult to discern what exactly constitutes a company satisfying their commitment adequately versus not."
"Is there any evidence publicly that the companies are doing something in furtherance of these commitments?"
Read at Fast Company