
"Meta should do more to address the "proliferation" of fake content made with artificial intelligence (AI) tools on its platforms, the social media giant's own advisors have said. The 21-person Oversight Board raised the concerns as it rebuked the company for leaving up, without a label, an AI-generated video that claimed to show extensive damage caused by Iranian forces in Haifa, Israel."
"It called on the company to overhaul its AI rules, warning that an increase in fake AI videos related to global military conflicts had "challenged the public's ability to distinguish fabrication from fact ... risking a general distrust of all information.""
"The board said the firm's current methods were "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform"."
Meta's Oversight Board criticized the company for failing to label an AI-generated video falsely depicting Iranian damage to Haifa, Israel. The board raised concerns about proliferating fake AI content undermining public trust in information, particularly during military conflicts. Meta currently relies on user self-disclosure or complaints to identify AI-generated content, rather than proactively detecting and labeling it. The board determined Meta's approach is insufficient to handle the scale and velocity of AI-generated content, especially during crises when platform engagement peaks. The board called for comprehensive policy overhauls and more frequent labeling of fake AI content. Meta committed to labeling the specific video within seven days, though questions remain about the Oversight Board's actual enforcement power.
Read at www.bbc.com