There's a Compelling Theory Why GPT-5 Sucks so Much
Briefly

The launch of OpenAI's GPT-5 model was met with user disappointment over perceived performance regressions. Criticisms included overly short responses and degraded writing quality, with users citing basic errors such as miscounting the letters in 'blueberry'. OpenAI promoted GPT-5 as a step toward AGI, but many fans felt let down in the transition from GPT-4. GPT-5 pairs a lightweight model for basic requests with a heavier one for complex tasks, with a router model allocating requests between them; at launch, that router malfunctioned, further degrading the user experience.
The launch of OpenAI's long-awaited GPT-5 model last week marked an ignominious chapter for the company, as it boasted modest performance upgrades on paper but disappointed many users with its execution.
Common criticisms included that GPT-5's answers were too short and that its writing was noticeably worse and devoid of personality, indicating a decline in quality from previous models.
Despite being touted for its 'PhD level' intelligence, GPT-5 made errors like insisting there are three Bs in the word 'blueberry', frustrating loyal fans.
GPT-5 is designed as a combination of a lightweight model for basic requests and a more robust one for complex tasks, but the router model tasked with overseeing their operation was ineffective at launch.
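The routing pattern described above can be sketched in a few lines. This is a purely illustrative toy, not OpenAI's implementation: the model functions and the keyword heuristic are invented stand-ins, and a production router would be a learned classifier rather than a rule.

```python
# Illustrative sketch of model routing: a router inspects each request
# and dispatches it to a cheap model or an expensive one. All names
# and heuristics here are hypothetical; GPT-5's actual router is not public.

def light_model(prompt: str) -> str:
    # Stand-in for a fast, inexpensive model.
    return f"[light] {prompt[:40]}"

def heavy_model(prompt: str) -> str:
    # Stand-in for a slower, more capable reasoning model.
    return f"[heavy] {prompt[:40]}"

def route(prompt: str) -> str:
    # Toy heuristic: long prompts or reasoning-flavored keywords
    # go to the heavy model; everything else takes the light path.
    keywords = ("prove", "derive", "step by step", "analyze")
    hard = len(prompt) > 200 or any(k in prompt.lower() for k in keywords)
    return heavy_model(prompt) if hard else light_model(prompt)

print(route("What's the capital of France?"))
print(route("Prove that there are infinitely many primes."))
```

If the router misclassifies requests, as reportedly happened at GPT-5's launch, complex questions get handed to the weaker model, which would explain the shallow answers users complained about.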
Read at Futurism