The article explores the perspective that intelligence, particularly as demonstrated by large language models (LLMs), is structured spatially rather than temporally. Traditional accounts of cognition emphasize sequences and narratives, but LLMs embody a different mode of thought: they operate on 'vector blocks,' dense high-dimensional embeddings that encode the relational meaning of words, rather than on linear timelines. This shift points toward a framework for understanding cognition, in machines and perhaps in humans, that prioritizes structural relationships over sequence.
This spatial turn in thinking suggests that intelligence may be about the structures we occupy rather than the sequences we follow, reshaping our understanding of thought. By operating in high-dimensional vector spaces, large language models invite us to reconceive cognition in terms other than traditional narrative.
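The idea of relational meaning in a high-dimensional space can be made concrete with a small sketch. The toy vectors below are invented for illustration (real embedding models use hundreds or thousands of dimensions and learned values); the point is only that "closeness" between words is a geometric property of the space, not a position in a sequence.

```python
import math

# Toy 3-dimensional "embeddings". The words and numbers are hypothetical,
# chosen so that 'king' and 'queen' point in similar directions while
# 'apple' points elsewhere; no actual model produced these values.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Angle-based closeness: 1.0 means same direction, near 0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Relatedness is read off the geometry of the space, not from word order.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

In this picture, "thinking" about two words means comparing their positions in the space; the sequence in which they were encountered plays no role.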