Human content moderators outperform AI at identifying policy-violating content, but at almost 40 times the cost of AI-based solutions. Marketers who want to keep ads away from toxic content must therefore weigh the higher cost of human moderation against the risk of their ads appearing alongside inappropriate material. A study by Zefr assessed the performance and cost-effectiveness of multimodal large language models for brand safety, a task that calls for human-AI collaboration to catch content that could harm a brand's reputation.
Human moderators detect policy-violating material more reliably than AI, but at roughly 40 times the cost of machine learning methods.
The researchers evaluated the effectiveness and cost of multimodal large language models (MLLMs) for brand safety tasks, quantifying the large cost gap between human and AI moderation.
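The cost trade-off described above can be sketched with a simple calculation. The ~40x human-to-AI cost ratio is the figure reported in the summary; the per-item dollar amounts and the `moderation_cost` helper below are illustrative assumptions, not numbers from the study.

```python
# Hypothetical cost comparison for human vs. AI content moderation.
# The ~40x ratio reflects the study's reported cost gap; the absolute
# per-item costs below are assumed for illustration only.

AI_COST_PER_ITEM = 0.01                       # assumed AI review cost (USD)
HUMAN_COST_PER_ITEM = AI_COST_PER_ITEM * 40   # ~40x more expensive, per the study


def moderation_cost(items: int, human_fraction: float) -> float:
    """Total cost when a fraction of items is escalated to human review."""
    human_items = items * human_fraction
    ai_items = items - human_items
    return human_items * HUMAN_COST_PER_ITEM + ai_items * AI_COST_PER_ITEM


if __name__ == "__main__":
    n = 1_000_000
    for frac in (0.0, 0.05, 1.0):
        print(f"human fraction {frac:4.0%}: ${moderation_cost(n, frac):>12,.2f}")
```

Even escalating only a small fraction of items to humans dominates the total cost, which is why hybrid human-AI pipelines route most content through the cheaper automated pass first.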