Meta promises it won't release dangerous AI systems

Meta's new Frontier AI Framework outlines a strategic shift in how the company will handle AI development, placing a strong emphasis on risk assessment. The framework classifies AI systems into 'high risk' and 'critical risk' categories based on their potential to facilitate harmful actions, such as cyber attacks or biological misuse, and it suggests that Meta may refrain from releasing AI systems that could pose significant dangers to society.
The framework introduces a cautious approach to AI development, indicating that certain systems may not be released if they present 'high' or 'critical' risks.
Systems are sorted into the 'high risk' and 'critical risk' categories according to the potential for the technology to be misused in dangerous scenarios.
Read at Computerworld