Labels and watermarks become weapons of choice to identify AI images
Briefly

The rise of generative AI tools has led to a surge of fake images and videos being passed off as real, including explicit deepfakes of celebrities like Taylor Swift and Drake. Social media platforms have struggled to control the spread of these fakes, exposing the limits of their systems for curbing virality. For example, X (formerly Twitter) had to block searches for the phrase 'Taylor Swift' entirely to contain the spread of her fake images.
The biggest hazard to emerge in this period is generated images and videos being passed off as real.
In response, Meta (the company formerly known as Facebook) and AI firm OpenAI have announced plans to include labels or watermarks on images generated using AI, and are working with other companies in the industry to develop common standards for identifying AI-generated content.
As Meta put it: "Since AI-generated content appears across the internet, we've been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI)."
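As a rough illustration of the labeling approach described above (not Meta's or OpenAI's actual implementation, and not an industry standard), an image file's metadata can carry a simple provenance tag. This sketch uses Python's Pillow library; the key name "ai_generated" is purely illustrative:

```python
from io import BytesIO
from PIL import Image, PngImagePlugin

# Create a stand-in for an AI-generated image.
img = Image.new("RGB", (64, 64), color="white")

# Attach a provenance label as a PNG text chunk.
# (The key "ai_generated" is a hypothetical tag, not a standard.)
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")

buf = BytesIO()
img.save(buf, format="PNG", pnginfo=meta)

# Anyone reopening the file can read the label back.
buf.seek(0)
reopened = Image.open(buf)
print(reopened.text.get("ai_generated"))  # expect "true"
```

Real standards efforts (such as those Meta references) go further, using cryptographically signed provenance data and invisible watermarks rather than plain metadata, which can be stripped trivially.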
Read at Hindustan Times