Read at www.nytimes.com
Meta, formerly known as Facebook, is proposing technological standards that would allow social media companies to quickly identify and label content generated using artificial intelligence. The goal is to address the urgent task of detecting and combating the spread of fake content, an effort that comes as industry watchers predict an increase in the use of AI tools to post fake content during the upcoming US presidential election.
Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content the most urgent task facing the tech industry today.
The proposed standards could be adopted by companies like Google, OpenAI, Microsoft, Adobe, and others that provide tools for generating artificial content. While the standards are not perfect, they aim to create a common framework that makes it easier for social media platforms to identify AI-generated content, and the hope is that companies across the industry will adopt similar standards.
"While this is not a perfect answer, we did not want to let perfect be the enemy of the good," Mr. Clegg said in an interview. He added that he hoped this effort would be "a rallying cry" for companies across the industry to adopt standards for detecting and signaling that content was artificial so that it would be simpler for all of them to recognize it.