Meta says it may stop development of AI systems it deems too risky
Briefly

Meta, under CEO Mark Zuckerberg, has pledged to keep its AI development open while acknowledging serious risks from artificial general intelligence (AGI). In a policy document called the Frontier AI Framework, Meta classifies potentially dangerous AI systems as 'high-risk' or 'critical-risk.' Critical-risk systems are those that could lead to catastrophic outcomes, such as severe cybersecurity breaches or the proliferation of biological weapons; high-risk systems could make such attacks easier to carry out, though less reliably. Meta says it evaluates these risks qualitatively, drawing on input from a range of researchers rather than relying on definitive quantitative metrics.
Meta's Frontier AI Framework classifies AI systems as 'high-risk' or 'critical-risk,' and limits the release of potentially dangerous AI to prevent catastrophic outcomes.
'Critical-risk' AI systems could cause catastrophic outcomes and are deemed too risky to deploy, while 'high-risk' systems could make attacks easier without reliably enabling them.
Read at TechCrunch