Meta will not release high-risk and critical-risk AI models
Briefly

Meta has revised its approach to internally developed AI with the Frontier AI Framework, under which it will not release AI models it classifies as too dangerous. These include models that could facilitate severe cyberattacks or even biological warfare. The models are divided into two tiers: high-risk, which could assist such attacks, and critical-risk, which could produce catastrophic, uncontainable outcomes. Assessment relies on reviews by internal and external experts rather than empirical testing, underlining Meta's stated commitment to safety as it navigates AI's complexities.
Meta's new Frontier AI Framework guidelines state that it will not release high-risk AI models, marking a shift from its historically open approach to AI development.
Meta emphasizes that high-risk models could make attacks easier to carry out but are not as dependable in doing so as critical-risk models, which could lead to catastrophic outcomes.
Read at Techzine Global