"I've spent a lot of time looking at the comment sections on these videos actually, and it does not seem like bots. I clicked on people's profiles, these are real profiles, thousands of followers, no signs of inorganic activity. People just like it."
Public officials and journalists will soon be able to monitor AI-generated deepfakes of themselves on YouTube through the platform's likeness detection feature. The tool is already available to millions of content creators on YouTube, but beginning Tuesday it will expand to a pilot group of journalists, government officials, and political candidates.
This expansion is really about the integrity of the public conversation. We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we're also being careful about how we use it.
Tools to create tailored, even personalised scams, leveraging for example deepfake videos of Swedish journalists or the president of Cyprus, are no longer niche but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database. It catalogued more than a dozen recent examples of impersonation for profit, including a deepfake video of Western Australia's premier, Roger Cook, hawking an investment scheme, and deepfake doctors promoting skin creams.