
"Nomani was first documented by ESET in December 2024 as leveraging social media malvertising, company-branded posts, and artificial intelligence (AI)-powered video testimonials to deceive users into investing their funds in non-existent investment products that falsely claim significant returns. When victims request payout of the promised profits, they are asked to pay additional fees or provide additional personal information, such as ID and credit card information. As is typical of investment scams of this kind, the end goal is financial loss."
"ESET said the scam has since received some notable upgrades, including making their AI-generated videos more realistic in an effort to make it harder for prospective targets to spot the deception. "Deepfakes of popular personalities, used as initial hooks for phishing forms or websites, now use higher resolution, have significantly reduced unnatural movements and breathing, and have also improved their A/V sync," the company noted."
"The fraudulent investment scheme known as Nomani has witnessed an increase by 62%, according to data from ESET, as campaigns distributing the threat have also expanded beyond Facebook to include other social media platforms, such as YouTube. The Slovak cybersecurity company said it blocked over 64,000 unique URLs associated with the threat this year. A majority of the detections originated from Czechia, Japan, Slovakia, Spain, and Poland."
Nomani is a fraudulent investment scheme that has grown by 62% as campaigns expanded beyond Facebook to other platforms, including YouTube. ESET blocked over 64,000 unique URLs tied to the threat, with most detections in Czechia, Japan, Slovakia, Spain, and Poland. The operation uses social media malvertising, company-branded posts, and AI-powered video testimonials to convince users to invest in non-existent products promising large returns. Victims seeking payout are asked to pay extra fees or supply personal data such as ID and credit card numbers, resulting in financial loss. Fraudsters also use Europol- and INTERPOL-themed lures that purport to help victims reclaim their stolen funds, leading to further theft. AI deepfakes now feature higher resolution, improved A/V sync, and fewer unnatural movements, and fabricated content often leverages topical events or personalities to appear credible.
Read at The Hacker News