Meta is automating up to 90% of its risk assessments for new features on Instagram, WhatsApp, and Facebook, moving away from evaluations by human reviewers. The shift is intended to accelerate product launches, but it has raised concerns among current and former employees about reduced scrutiny and a greater risk of harm to users. While Meta emphasizes its investment in user privacy, former staff worry that AI-driven decisions could produce unforeseen consequences and weaken protections against misuse of its platforms.
Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks.
Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.
Meta has invested billions of dollars to support user privacy; however, the new automation push raises concerns about the potential for real-world harm.
Up to 90% of all risk assessments for new features are soon to be automated, diminishing human oversight of crucial decisions.