Your AI clone could target your family, but there's a simple defense
Briefly

Criminals are leveraging AI to create believable profile photos, IDs, and chatbots, automating fraud and obscuring telltale signs of deception like bad grammar.
The FBI advises limiting access to personal recordings and images and suggests making social media accounts private to combat advanced scams and identity theft.
Asara Near proposed a 'proof of humanity' check: trusted contacts agree on a secret word in advance and ask for it to confirm they are communicating with the real person rather than a deepfake.
That such a simple tool, a unique word or phrase, works as verification shows how an age-old security practice, the spoken password, remains effective against modern AI fraud.
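The shared-secret idea could be sketched in code. This is a hypothetical illustration, not anything from the article: the secret and function names are invented, and a constant-time comparison (Python's `hmac.compare_digest`) is used so that, if such a check were ever automated, response timing would not leak how much of a guess matched.

```python
import hmac

# Hypothetical 'proof of humanity' check: a family agrees on a secret
# word offline; when a suspicious call or message arrives, they ask for
# it and compare answers. The word itself is never sent unprompted.
FAMILY_SECRET = "dinosaur"  # example placeholder, agreed in person

def verify_caller(claimed_secret: str) -> bool:
    """Return True only if the caller supplies the pre-agreed secret.

    hmac.compare_digest performs a constant-time comparison, so timing
    does not reveal how many leading characters of a guess matched.
    """
    return hmac.compare_digest(claimed_secret.strip().lower(),
                               FAMILY_SECRET)

print(verify_caller("Dinosaur"))   # correct word, case-insensitive -> True
print(verify_caller("password"))   # wrong word -> False
```

In practice the "protocol" is purely verbal, of course; the point of the sketch is only that verification reduces to comparing a challenge response against a secret shared out of band.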
Read at Ars Technica