The AI Risk Repository, developed by MIT researchers, catalogs more than 700 AI risks organized into categories and aims to provide comprehensive guidance on AI safety across industries.
The researchers say they created it now because they needed it for their own project and realized that many others needed it too, highlighting the urgent need for consensus on AI risks.
Slattery emphasizes that existing risk frameworks cover only a fraction of the risks identified in the repository, a gap that could have significant consequences for AI development. The wide variation among those frameworks reveals a lack of consensus in the industry, underscoring the importance of the new AI Risk Repository for policymakers.
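To make the idea of a categorized risk collection concrete, the sketch below shows one way such entries could be represented and filtered in code. The field names, sample entries, and helper function are hypothetical illustrations for this article, not the repository's actual schema or categories.

```python
# A minimal sketch of a categorized risk collection.
# Field names and sample entries are hypothetical, not the repository's schema.
from dataclasses import dataclass


@dataclass
class RiskEntry:
    domain: str        # broad category, e.g. "Misinformation" (hypothetical)
    subdomain: str     # finer-grained category (hypothetical)
    description: str   # short summary of the risk (hypothetical)


# Hypothetical sample entries standing in for the 700+ catalogued risks.
collection = [
    RiskEntry("Misinformation", "False or misleading information",
              "AI systems generate convincing but inaccurate content."),
    RiskEntry("AI system safety", "Lack of robustness",
              "Models fail unpredictably outside their training distribution."),
    RiskEntry("Misinformation", "Pollution of the information ecosystem",
              "Synthetic content crowds out reliable sources."),
]


def risks_in_domain(entries, domain):
    """Return all entries whose domain matches (case-insensitive)."""
    return [e for e in entries if e.domain.lower() == domain.lower()]


if __name__ == "__main__":
    # Example: list every catalogued risk in one category.
    for entry in risks_in_domain(collection, "Misinformation"):
        print(f"{entry.subdomain}: {entry.description}")
```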