
"Spam content is uploaded to compromised, high-authority websites, including government and university sites and reputable WordPress domains. Public services that allow user-generated content, such as YouTube and Yelp, are also abused to plant GEO/AEO-optimized text and reviews, sometimes via bot comments. When possible, scam artists also upload or inject scam information, including phone numbers and fake Q&A answers, into these domains. This information is structured in a way that makes it easy for LLMs to scrape and distribute."
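The "easy for LLMs to scrape" point can be made concrete with a sketch. The snippet below uses a hypothetical schema.org FAQPage JSON-LD fragment (the airline name and phone number are invented for illustration) to show how trivially a naive scraper feeding an LLM pipeline can harvest a planted number from structured Q&A markup:

```python
import json
import re

# Hypothetical example of the kind of structured FAQ markup (schema.org
# FAQPage JSON-LD) that attackers inject into compromised pages. The
# airline name and phone number below are made up for illustration.
INJECTED_JSONLD = json.dumps({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is the Example Air customer support number?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Call Example Air support at 1-800-555-0199.",
        },
    }],
})

# Matches North American toll-free-style numbers such as 1-800-555-0199.
PHONE_RE = re.compile(r"\b1-8\d{2}-\d{3}-\d{4}\b")

def extract_support_numbers(jsonld: str) -> list[str]:
    """Pull phone numbers out of FAQPage answers the way a naive
    scraper feeding an LLM pipeline might."""
    data = json.loads(jsonld)
    numbers = []
    for item in data.get("mainEntity", []):
        answer = item.get("acceptedAnswer", {}).get("text", "")
        numbers.extend(PHONE_RE.findall(answer))
    return numbers
```

Because the markup labels the answer text explicitly, no page rendering or context is needed; the injected number comes out in one pass.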
"According to new research published by Aurascape's Aura Labs on Dec. 8, threat actors are "systematically manipulating public web content" in what the team has dubbed large language model (LLM) phone number poisoning. In a campaign being tracked by the cybersecurity firm, this technique is used to make LLM-based systems, including Google's AI Overview and Perplexity's Comet browser, recommend scam airline customer support and reservation phone numbers as if they were official -- and trusted -- contact details."
Threat actors manipulate public web content to plant scam phone numbers and structured fake Q&A so that large language models and AI chatbots scrape and redistribute fraudulent contact details. Compromised high-authority sites, including government, university, and reputable WordPress domains, are used to host the spam content. User-generated platforms such as YouTube and Yelp are abused with GEO/AEO-optimized (generative and answer engine optimization) text, reviews, and bot comments to elevate malicious entries. Scammers inject phone numbers and fake answers in formats optimized for LLM scraping. LLM-based systems and AI browsers can then surface these poisoned numbers as official, trusted contact details, exposing users to scam call centers.
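One mitigation implied by the summary above is to surface a support number only when it can be tied to the organization's own domain. The sketch below is illustrative, not a real registry: the allowlisted domain and number are hypothetical, and a production system would query a verified contact database instead.

```python
from urllib.parse import urlparse

# Illustrative allowlist mapping official domains to their verified
# support numbers; both entries below are hypothetical.
OFFICIAL_DOMAINS = {
    "example-air.com": {"1-800-555-0100"},
}

def is_trusted_contact(phone: str, source_url: str) -> bool:
    """Accept a scraped phone number only if it appears in the verified
    contact set for the domain that hosted it."""
    host = urlparse(source_url).hostname or ""
    # Strip a leading "www." so www.example-air.com matches the allowlist.
    if host.startswith("www."):
        host = host[4:]
    return phone in OFFICIAL_DOMAINS.get(host, set())
```

Under this check, a number planted on a compromised university page fails even if the number itself looks plausible, because the hosting domain is not the airline's.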
Read at ZDNET