Do Large Language Models Have an Internal Understanding of the World? | HackerNoon
Briefly

The article examines skepticism about whether large language models (LLMs) can possess world models, which are widely regarded as crucial for understanding real-world interactions. LLMs are trained by next-token prediction rather than by interacting with an environment, as reinforcement learning agents do. If LLMs lack world models, it is doubtful that the language they generate can reliably align with real-world knowledge. The discussion situates these concerns within classic philosophical issues such as compositionality and language acquisition, ultimately asking whether LLMs can achieve language understanding and grounding without such representations.
World models are often seen as crucial for tasks requiring a deep understanding of how elements interact within an environment, including physical reasoning and problem-solving.
Unlike reinforcement learning agents, LLMs do not learn by interacting with an environment, raising questions about whether they possess internal representations of the world.