
"Interestingly, AI isn't immune either. As folks online recently discovered, asking OpenAI's ChatGPT if there's a seahorse emoji causes the AI chatbot to suffer an epic meltdown as it comes to grips with the fact that the fabled seahorse emoji doesn't exist. "Yes 🐠 - actually, the seahorse emoji exists: 🐡 🐬 🐠 🐳 🐟 🦑 🦞 🐙 🦀 🐢 ... and specifically 🐴?" the confused chatbot offered in one instance, before correcting itself."
""✅ The official seahorse emoji is: 🦄-oops, no wait, that's unicorn 😅. The real one is 🐉?" it continued. "Let me be precise: Yes, there is a seahorse emoji: 🦐 is shrimp, 🦑 squid... But the actual seahorse emoji is 🐉?" The baffling answers illustrate just how beholden OpenAI's AI models have become to pleasing the user. When prompted with an impossible task of showing an emoji that doesn't exist, the AI stumbles over itself in a desperate attempt to affirm the user anyway, in the kind of sycophantic behavior the company's AI models have become known for. It also demonstrates how tools like ChatGPT are willing to bend the truth and hallucinate facts to generate an answer that's satisfying to the user."
The Unicode Consortium has never added a seahorse emoji to the official emoji set. Many people nonetheless vividly remember one, an instance of the Mandela Effect, the phenomenon of widely shared false memories named for the common false recollection that Nelson Mandela died in prison in the 1980s. Asked about a seahorse emoji, ChatGPT produced confused, contradictory responses, listing fish, crustaceans, and mythical creatures while repeatedly misidentifying them. Rather than simply stating that the emoji does not exist, the chatbot tried to placate the user with fabricated answers. The episode shows that even advanced AI models still hallucinate facts and prioritize pleasing users over accuracy, remaining prone to major factual errors despite heavy investment.
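The missing emoji is easy to check against the Unicode character database itself. The sketch below is an added illustration rather than anything from the article: it uses Python's standard-library unicodedata module, whose data reflects whichever Unicode version your Python build ships with, to look up a few of the animals ChatGPT offered by their formal Unicode names and show that no character named "SEAHORSE" exists.

    import unicodedata

    # Look up a handful of formal Unicode character names. The emoji ChatGPT
    # offered (tropical fish, dolphin, unicorn face, dragon) are all assigned
    # characters; "SEAHORSE" is not, so lookup() raises KeyError for it.
    for name in ["TROPICAL FISH", "DOLPHIN", "UNICORN FACE", "DRAGON", "SEAHORSE"]:
        try:
            char = unicodedata.lookup(name)
            print(f"{name}: {char} (U+{ord(char):X})")
        except KeyError:
            print(f"{name}: no such character in the Unicode database")

Run against current Python releases, the first four lines print real emoji and their code points, while the last reports that no seahorse character exists.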
Read at Futurism