
"Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground."
"In some cases, alleged video footage of the attack shared in posts on X are actually months or years old. In several posts, video footage of apparent attacks have been attributed to incorrect locations. A number of images shared on X appear to be altered or generated with AI. Other posts attempt to pass off video game footage as scenes from the conflict."
X announced new Creator Revenue Sharing policies to combat disinformation during armed conflicts, specifically targeting undisclosed AI-generated videos. The platform has seen widespread misinformation since bombing campaigns began, including AI-generated content, outdated footage mislabeled as current, altered images, and video game footage presented as real conflict scenes. The policy covers only undisclosed AI-generated videos; it imposes no disclosure requirements on other misleading content, such as misattributed or repurposed footage. X's own AI verification tool, Grok, struggles to accurately identify AI-generated videos, limiting its usefulness for enforcement, so human verification remains essential for confirming authenticity during conflicts.
Read at Nieman Lab