No, LLMs still can't reason like humans. This simple test reveals why.
Briefly

Large language models (LLMs) often struggle with basic physical reasoning, giving incorrect answers in straightforward scenarios such as counting the items left on a plate.
Despite the hype around AI advances, the practical capabilities of current LLMs such as GPT-4o and Claude 3.5 Sonnet reveal limitations that challenge claims of achieving artificial general intelligence.
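For readers who want to try such a probe themselves, here is a minimal sketch of a plate-counting test, assuming the `openai` Python client and an illustrative prompt (not the article's exact wording or scoring):

```python
# Minimal sketch (not the article's exact test): ask a model a simple
# plate-counting question and check the answer by hand.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative prompt in the spirit of the test described above.
prompt = (
    "I put three cookies on a plate, eat one of them, and then add two apples. "
    "How many cookies are on the plate?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # expected answer: two cookies
```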
Read at Big Think