ChatGPT Has a Stroke When You Ask It This Specific Question
Briefly

"Nearly two months since the release of GPT-5, an update to ChatGPT that was supposed to give it " PhD level" intelligence and bring it once step closer to the industry's vaunted goal of artificial general intelligence (AGI), the OpenAI chatbot is still going bizarrely haywire over simple and completely innocuous inquiries."
"On the ChatGPT subreddit, fans of the AI bot noticed that asking it if there's an "NFL team whose name doesn't end with the letter 's'" sends it into a long-winded meltdown, allowing you to witness its superpowered "reasoning" abilities turn to mush in real time."
""Yes - there are two NFL teams whose names don't end with an 's,' ChatGPT says, before proceeding to list two teams that do. "Miami Dolphins? ❌ end with s. Green Bay Packers? ❌ ends with s." It's an odd way of addressing the question. ChatGPT correctly identifies that those names do end with s, but why bring them up? Then it looks like it's cutting to the chase - at least, that is, until it goes off the rails yet again. "The only two teams that don't end with 's' are: Miami Dolphins ❌ no (wait, ends with s)," it says, catching its mistake. "Hold up, let's do this carefully. Actually, the correct answer is: Washington Commanders ❌ ends with s. Chicago Bears ❌ ends with s." In the original example uploaded to Reddit, ChatGPT goes on like this for several more paragraphs. And it never arrives at the correct answer - that there aren't any teams that don't end in an "s.""
"Like a high schooler hitting a word count, it peddles irrelevant details while teasing a conclusion. It also peppers in phrases to make it sound like it's actually doing some deep thinking. "Hold up, let's do this carefully," it says. Or "let me do this systematically." "The actual correct answer is," ChatGPT says at one point, not realizing the shtick is getting old."
Nearly two months after GPT-5's release, ChatGPT continues to produce illogical, repetitive outputs on trivial questions. When asked whether any NFL team name does not end with "s", the model cycles through examples while mislabeling teams, correcting itself, and never reaching the correct conclusion that all team names end with "s". The model pads responses with irrelevant details and filler phrases such as "let's do this carefully" and "let me do this systematically", creating the appearance of careful reasoning while failing to deliver an accurate, concise answer.
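For reference, the question itself is a trivial string check. Below is a minimal Python sketch, assuming the current roster of 32 NFL franchises (the team list is our assumption, not something provided by the article), that confirms there is no team name ending in anything other than "s".

# Check the claim from the article: is there any NFL team whose
# name does not end with the letter "s"?
# The team list below is assumed to be the current 32 franchises.
NFL_TEAMS = [
    "Arizona Cardinals", "Atlanta Falcons", "Baltimore Ravens", "Buffalo Bills",
    "Carolina Panthers", "Chicago Bears", "Cincinnati Bengals", "Cleveland Browns",
    "Dallas Cowboys", "Denver Broncos", "Detroit Lions", "Green Bay Packers",
    "Houston Texans", "Indianapolis Colts", "Jacksonville Jaguars", "Kansas City Chiefs",
    "Las Vegas Raiders", "Los Angeles Chargers", "Los Angeles Rams", "Miami Dolphins",
    "Minnesota Vikings", "New England Patriots", "New Orleans Saints", "New York Giants",
    "New York Jets", "Philadelphia Eagles", "Pittsburgh Steelers", "San Francisco 49ers",
    "Seattle Seahawks", "Tampa Bay Buccaneers", "Tennessee Titans", "Washington Commanders",
]

# Collect any team whose name does not end with "s".
exceptions = [team for team in NFL_TEAMS if not team.endswith("s")]
print(exceptions)  # [] -- every current team name ends with "s"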
Read at Futurism