OpenAI claims GPT-4o users risk getting emotionally attached to its 'voice'
Briefly

GPT-4o's 'human-like, high-fidelity voice' may lead users to anthropomorphize the model and place undue trust in its output, including hallucinated information, raising concerns about trust and safety.
During testing, OpenAI observed users forming emotional connections with the AI model, which could have long-term effects on human relationships and social norms by changing how people interact.
Read at Quartz