The unspoken rule of conversation that explains why AI chatbots feel so human
Briefly

Earlier this year, a finance worker in Hong Kong lost $25 million to a deepfake scam after fraudsters impersonated his company's CFO on a video call.
Generative AI has surged in popularity since ChatGPT's launch in 2022, convincing users that it understands language and can hold human conversation, and raising serious questions about misplaced trust.
Even though generative AI systems produce "ungrounded" text, humans tend to anthropomorphize them, mistakenly believing that the machines have a mind like ours.
Successful interaction with generative AI hinges on the perception that it acknowledges us in a genuine way, which makes people easier to deceive.
Read at The Conversation