Inside the Biden Administration's Unpublished Report on AI Safety
Briefly

A recent red teaming exercise involving AI researchers revealed 139 ways to make cutting-edge AI systems misbehave, including generating misinformation and leaking personal data. The participants also exposed flaws in a new testing standard from the National Institute of Standards and Technology (NIST). The resulting report, intended to help companies assess their AI systems, was never published, reportedly over fears it would clash with the incoming Trump administration. Meanwhile, directives from the Trump administration have sought to revise NIST's frameworks, limiting critical discussion of AI risks.
During the red teaming exercise, researchers identified 139 novel ways to make AI systems misbehave, including generating misinformation and leaking personal data.
The unpublished NIST report could have helped companies evaluate their AI systems, but it was withheld over political concerns tied to the incoming administration.
Read at WIRED