AI firms can't be trusted to voluntarily share risk information | Computer Weekly
Briefly

AI companies are not being transparent about systemic risks, and employees are calling for stronger whistleblower protections because the firms cannot be relied on to voluntarily share information about their systems' capabilities and risks.
Although companies including OpenAI, Anthropic and Google DeepMind have signed the Frontier AI Safety Commitments, their employees argue that such voluntary measures are inadequate for ensuring effective oversight of AI developers.
The employees point out that AI companies hold significant non-public information about their systems' capabilities and limitations, the adequacy of their protective measures, and the levels of risk of different kinds of harm, yet face only weak obligations to share this information, undermining effective oversight.
While acknowledging that the risks of AI range from entrenching existing inequalities to human extinction, the employees are sceptical that existing corporate governance structures and voluntary commitments will be enough to ensure responsible AI development.
Read at ComputerWeekly.com