OpenAI's new models hallucinate more than the old ones
Briefly

Recent findings reveal that OpenAI's newest AI models, the o3 and o4-mini versions, are experiencing a rise in hallucinations, a phenomenon where the AI fabricates answers when unsure. Despite expectations that improvements would reduce such errors, testing indicates that the o3 model has a hallucination rate of 33%, a significant increase compared to 16% for the o1 model and 14.8% for o3-mini. These results highlight ongoing challenges in AI model reliability and accuracy.
One of the biggest problems with today's AI models is that they tend to simply make up answers when they don't know the facts, a behavior known as hallucination.
According to OpenAI's internal tests, the o3 and o4-mini reasoning AI models produce more hallucinations than their predecessors o1, o1-mini, and o3-mini.
Read at Computerworld