OpenAI's new model, GPT-4.5, hallucinates, that is, fabricates information, 37 percent of the time, raising fresh concerns about AI reliability. Despite this, OpenAI frames GPT-4.5 as an improvement over its earlier models, which hallucinate even more often: 61.8 percent for GPT-4o and 80.3 percent for o3-mini. Experts caution that AI models still cannot be fully trusted; Wenting Zhao notes that even the best models produce hallucination-free output only about 35 percent of the time.
"Yes, you read that right: in tests, the latest AI model from a company that's worth hundreds of billions of dollars is telling lies for more than one out of every three answers it gives."
"At present, even the best models can generate hallucination-free text only about 35 percent of the time... we cannot yet fully trust the outputs of model generations."