Rather than training people to spot visual glitches in fake images or audio, Bores said policymakers and the tech industry should lean on a well-established cryptographic approach similar to what made online banking possible in the 1990s. Back then, skeptics doubted consumers would ever trust financial transactions over the internet. The widespread adoption of HTTPS (using digital certificates to verify that a website is authentic) changed that.
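The trust model the article alludes to can be seen directly in a modern TLS stack. A minimal sketch using Python's standard `ssl` module, showing the two checks that make the browser padlock meaningful (the hostname in the commented connection example is hypothetical):

```python
"""Sketch of the HTTPS trust model: a TLS client only proceeds if the
server presents a certificate signed by a trusted authority and matching
the requested hostname."""
import ssl

# create_default_context() loads the operating system's trusted root
# certificate authorities and turns on strict verification by default.
context = ssl.create_default_context()

# Two settings make certificate-based authenticity work:
verify_mode = context.verify_mode        # CERT_REQUIRED: unsigned/untrusted certs are rejected
check_hostname = context.check_hostname  # the cert must name the site you asked for

print(verify_mode == ssl.CERT_REQUIRED)  # True
print(check_hostname)                    # True

# An actual connection would look like this (needs network access):
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           cert = tls.getpeercert()  # raises ssl.SSLCertVerificationError on failure
```

A content-provenance scheme for media would rest on the same primitive: a signature chained to a trusted authority, checked automatically rather than by eye.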
Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also offering advice to others on how to use the generative AI tools to strip the clothes off of women in photos and make them appear to be wearing bikinis.
Sora 2 is the latest video AI model from OpenAI. The system generates completely artificial short videos from text, images, or brief voice input. Since October 2025, there has also been API access that developers can use to automatically create and publish AI videos. As a result, the number of artificial clips continues to grow every day. Many of them look astonishingly real, and users find them almost indistinguishable from genuine footage.
As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren't sure exactly what to do with it. For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.
The AI transparency law mandates that advertisements clearly identify when they feature synthetic performers, digitally created media designed to appear as real people. The law aims to prevent consumers from being misled by content that blurs the line between reality and artificial creation. The second law updates New York's rights of publicity by requiring companies to obtain consent from heirs or executors before using a deceased individual's name, image, or likeness for commercial purposes.
In an announcement, Elon Musk's AI company xAI unveiled a new tool called "Halftime" which "dynamically weaves AI-generated ads into the scenes you're watching." Instead of cutting to an ad break, Halftime manipulates the characters onscreen into deviating from the script and prominently brandishing a product of a marketer's choice. The tool is meant to make ad "breaks feel like part of the story instead of interruptions," the company said.
It has been noted that some individuals who engage with AI tools report symptoms of psychosis (especially delusions) resulting from their interactions with AI. 1,2 It may not only be the AI interaction, but exposure to its products, such as deepfakes, that are implicated. In fact, any AI-generated doubt concerning the bedrock or background belief system that stops a cascade of delusional thinking from starting 3 may be implicated.
As AI continues lowering the barrier to malicious identity spoofing and fraud, Oscar Rodriguez, LinkedIn's vice president of product for Trust, told ZDNET that the program is designed to drive more trustworthy internet experiences and user-to-user engagement. "It is becoming increasingly difficult to tell the difference between what is real and what's fake," Rodriguez noted. "That, for us, was the driver because LinkedIn is about trust and authentic connections."
Across Europe we are witnessing an escalation in hybrid threats, from physical through to cyber, designed to weaken critical national infrastructure, undermine our interests and interfere in our democracies, all for the advantage of malign foreign states.
"It worries me that it's so normalised. He obviously wasn't hiding it. He didn't feel this was something he shouldn't be doing. It was in the open and people saw it. That's what was quite shocking." A headteacher is describing how a teenage boy, sitting on a bus on his way home from school, casually pulled out his phone, selected a picture from social media of a girl at a neighbouring school and used a nudifying app to doctor her image.
The latest object of Musk's obsession? According to new reporting by the Wall Street Journal, he's been personally overseeing the development of xAI's chatbot Ani, which, tellingly, comes in the form of a super-sexualized pigtail-wearing woman that removes her clothing in response to flirtation. Since his very public spat with President Donald Trump and his subsequent departure from DOGE and government in May, Musk has reportedly developed a fixation on xAI's chatbot efforts generally, and Ani in particular.
"The people who push these kinds of ads are persistent, they are well funded, and they are constantly evolving their deceptive tactics to get around our systems," Leathern told Reuters at the time.
Deepfakes are like someone putting on a perfect Halloween mask of your face, not just to trick your friends, but to walk into your bank, say 'it's me,' and get handed your money. The scary part? Those masks are now cheap, realistic, and anyone can buy one. Deepfake technology has entered a dangerous new era that is no longer confined to internet jokes or social media stunts, or Halloween mask analogies.
Once a fringe curiosity, the deepfake economy has grown into a $7.5 billion market, with some forecasts projecting it will hit $38.5 billion by 2032. Deepfakes are now everywhere, and the stock market is not the only part of the economy that is vulnerable to their impact. Those responsible for creating deepfakes are also targeting individual businesses, sometimes with the goal of extracting money and sometimes simply to cause damage.