
"AI can translate smoothly, keep tone consistent over long threads, and remove the "tells" that used to flag many scams, like awkward phrasing or sudden shifts in tone. The result is more volume, more polish, and less friction for a bad actor trying to keep multiple conversations moving at once."
"In one romance-scam example described in the report, actors used AI-generated materials and messaging to support a fake "luxury" dating setup, then pushed targets to Telegram, where "tasks" or "missions" escalated into larger payments. The move off-platform is often the hinge point."
"Inside a major app, there are at least some guardrails: rate limits, fraud detection, reporting tools, and moderation. Once you're in a private chat, the scammer controls the environment and the pace, and the platform's safety features largely stop applying."
Generative AI is amplifying existing scam tactics rather than creating entirely new ones. The technology removes traditional red flags, like awkward phrasing and inconsistent tone, that previously exposed fraudsters, so romance scams and professional impersonation schemes now operate with greater polish and at greater scale. Scammers use AI to generate fluent, personalized messages and build trust quickly before moving conversations to private platforms like Telegram, where safety guardrails disappear. OpenAI's threat intelligence report describes how bad actors misuse ChatGPT to accelerate romance fraud, with fake "luxury" dating setups escalating into payment demands, and how fraudsters create fake law firms and attorney accounts using AI-generated credentials. The shift to private messaging is the critical vulnerability: the scammer gains full control of the environment and pace, and victims lose the platform's protections.
#generative-ai-fraud #romance-scams #professional-impersonation #cybersecurity-threats #ai-enabled-crime
Read at TechRepublic