"The people who push these kinds of ads are persistent, they are well funded, and they are constantly evolving their deceptive tactics to get around our systems," Leathern told Reuters at the time.
The abilities of AI are rapidly evolving. Its skillset is far beyond anything we could have imagined a decade ago. With the advancements of AI, people now have the ability to create and design images and videos at their fingertips. But this same power can also be misused to produce content that doesn't actually exist in reality: what's known as a deepfake. A deepfake is an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something they never did.
Deepfakes are like someone putting on a perfect Halloween mask of your face, not just to trick your friends, but to walk into your bank, say 'it's me,' and get handed your money. The scary part? Those masks are now cheap, realistic, and anyone can buy one. Deepfake technology has entered a dangerous new era that is no longer confined to internet jokes or social media stunts - or Halloween mask analogies.
Once a fringe curiosity, deepfakes have grown into a $7.5 billion market, with some predictions projecting that it will hit $38.5 billion by 2032. Deepfakes are now everywhere, and the stock market is not the only part of the economy that is vulnerable to their impact. Those responsible for creating deepfakes are also targeting individual businesses, sometimes with the goal of extracting money and sometimes simply to cause damage.
We found 410,592 total mentions of the keywords between 9 June 2020 and 3 July 2025, and used Brandwatch's ability to separate mentions by source to identify which sources hosted the highest volumes of mentions.
OpenAI is working with actor Bryan Cranston and other Hollywood groups to limit deepfakes made with its Sora 2 video app, the company said Monday in a joint statement. The "Breaking Bad" actor voiced concerns to SAG-AFTRA after his voice and likeness were replicated in the video generator, following its invite-only launch this fall. "I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way," Cranston said in the statement.
Two weeks ago in this space, I wrote about Sora, OpenAI's new social network devoted wholly to generating and remixing 10-second synthetic videos. At the time of launch, the company said its guardrails prohibited the inclusion of living celebrities, but also declared that it didn't plan to police copyright violations unless owners explicitly opted out of granting permission. Consequently, the clips people shared were rife with familiar faces such as Pikachu and SpongeBob.
We live in an era where the difference between real and artificial no longer startles us. Every day, it's there buzzing behind our screens and selfies. From avatars to synthetic voices and AI-generated images, the fake has become familiar and is an accepted part of our techno diet. But the more interesting question to me isn't how these illusions are made, it's why we all so easily believe them.
To take President Donald Trump's word for it, America's cities are in ruins, forcing him to deploy federal troops to Portland, Washington DC, and Memphis. There's just one wrinkle: no one seems to be able to find any real evidence of the mass riots and anarchist violence that Trump and his supporters insist are destroying the nation. Luckily for them, that problem is easily solved with a little help from AI.
Videos made with OpenAI's Sora app are flooding TikTok, Instagram Reels and other platforms, making people increasingly familiar and fed up with nearly unavoidable synthetic footage being pumped out by what amounts to an artificial intelligence slop machine. Digital safety experts say something else that is happening may be less obvious but more consequential to the future of the internet: OpenAI has essentially rebranded deepfakes as a light-hearted plaything and recommendation engines are loving it.
According to YouTube posts, the celebrity tributes were plentiful. They came from Ed Sheeran, Eminem, Taylor Swift, Celine Dion, Lady Gaga, Rihanna, Post Malone, Dax, Lil Wayne, Jelly Roll, Selena Gomez, Justin Bieber and Imagine Dragons. But none of them were real. They were all generated using artificial intelligence. And they often featured fake thumbnail images that showed the artists in tears or with mournful expressions.
Many of these videos feature recognizable characters like SpongeBob cooking meth, raising the obvious question of whether the AI company was flagrantly ignoring copyright law. And as tons of Sora-made videos parodying Altman hit the web, including some that fake CCTV footage showing him committing crimes, the implication that the tech could easily be used to fabricate damaging videos of people without their permission couldn't be ignored.