AI Briefing: Tech giants adopt AI content standards, but will it be enough to curb fakes?
Briefly

AI providers and government entities are working to strengthen the internet's defenses against AI-generated misinformation. Major players like Meta, OpenAI, and Google have announced new transparency and detection tools, including labeling AI-generated images, embedding provenance metadata in generated images, and supporting content credentials for AI content. Adobe, founder of the Content Authenticity Initiative (CAI), has also debuted a major update to Content Credentials. The involvement of major distribution platforms and AI model providers in these standardization efforts is expected to drive mainstream adoption of AI standards and improve the internet's information ecosystem.
Platform-level participation could also help people better understand how to tell whether a given piece of content is real or fake.
The participation of major AI model providers in the Coalition for Content Provenance and Authenticity (C2PA) helps drive uniform adoption of standards across content creation and distribution platforms. Model providers want to be able to disclose which model generated a piece of content and to verify whether their own model produced it, particularly when the content is newsworthy. Andy Parsons, senior director of the Content Authenticity Initiative, highlights the importance of alignment across companies, researchers, and government entities in improving the internet's information ecosystem. These collective efforts aim to combat AI-generated misinformation and ensure that users can differentiate between real and fake content.
"Model providers want to disclose what model was used and ensure that in cases where they need to determine whether their model produced something - whether it's newsworthy or a celebrity or something else - they want to be able to do that," Parsons told Digiday.
Read at Digiday