The Facebook insider building content moderation for the AI era | TechCrunch
Briefly

"Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. Then they had about 30 seconds per piece of flagged content to decide not just whether that content violated the rules, but what to do about it: block it, ban the user, limit the spread."
"That sort of delayed, reactive approach is not sustainable in a world of nimble and well-funded adversarial actors. The rise of AI chatbots has only compounded the problem, as content moderation failures have resulted in a string of high-profile incidents."
"Levenson's frustration led to the idea of 'policy as code' - a way to turn static policy documents into executable, updatable logic tightly coupled to enforcement."
"Moonbounce works with companies to provide an additional safety layer wherever content is generated, whether by a user or by AI."
Brett Levenson, after joining Facebook in 2019, discovered that content moderation issues were more complex than technology alone could solve. Human reviewers struggled with a lengthy policy document and had limited time to make decisions, resulting in accuracy only slightly better than 50%. This reactive approach to moderation was unsustainable, especially as the rise of AI chatbots produced a string of high-profile moderation failures. Levenson's answer was "policy as code": transforming static policy documents into executable, updatable enforcement logic. The idea became Moonbounce, which has raised $12 million to build a safety layer for generated content.
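To make the "policy as code" idea concrete, here is a minimal sketch of what executable moderation policy could look like: each rule is a function that maps flagged content directly to an enforcement action (block, ban, limit spread), so updating policy means updating code rather than a static document. Every name, rule, and threshold below is hypothetical and illustrative; none of it reflects Moonbounce's actual system or API.

```python
# Hypothetical illustration of "policy as code": rules are executable
# logic tightly coupled to enforcement, not prose in a PDF.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List, Optional

class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"            # remove the content
    BAN_USER = "ban_user"      # suspend the author
    LIMIT_SPREAD = "limit"     # reduce distribution instead of removing

@dataclass
class Content:
    text: str
    author_strikes: int        # prior violations by this author

# A policy rule is just a function; deploying a policy change is a code change.
Rule = Callable[[Content], Optional[Action]]

def spam_rule(c: Content) -> Optional[Action]:
    # Illustrative keyword check; escalate repeat offenders to a ban.
    if "buy now" in c.text.lower():
        return Action.BAN_USER if c.author_strikes >= 3 else Action.BLOCK
    return None

def borderline_rule(c: Content) -> Optional[Action]:
    # Borderline content gets reduced distribution rather than removal.
    if "unverified claim" in c.text.lower():
        return Action.LIMIT_SPREAD
    return None

def enforce(content: Content, rules: List[Rule]) -> Action:
    """First matching rule wins; the default is to allow."""
    for rule in rules:
        action = rule(content)
        if action is not None:
            return action
    return Action.ALLOW

rules = [spam_rule, borderline_rule]
print(enforce(Content("BUY NOW!!!", author_strikes=0), rules))  # Action.BLOCK
```

The point of the design is that the same artifact humans review is the one machines execute, closing the gap between a 40-page document and a 30-second decision.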
Read at TechCrunch