
"When you walk into a doctor's office, you assume something so basic that it barely needs articulation: your doctor has touched a body before. They have studied anatomy, seen organs and learned the difference between pain that radiates and pain that pulses. They have developed this knowledge, you assume, not only through reading but years of hands-on experience and training. Now imagine discovering that this doctor has never encountered a body at all."
"Instead they have merely read millions of patient reports and learned, in exquisite detail, how a diagnosis typically sounds. Their explanations would still feel persuasive, even comforting. The cadence would be right, the vocabulary impeccable, the formulations reassuringly familiar. And yet the moment you learned what their knowledge was actually made ofpatterns in text rather than contact with the worldsomething essential would dissolve."
"Every day many of us turn to tools such as OpenAI's ChatGPT for medical advice, legal guidance, psychological insight, educational tutoring or judgments about what is true and what is not. And on some level, we know that these large language models (LLMs) are imitating an understanding of the world that they don't actually haveeven if their fluency can make that easy to forget."
Large language models generate highly fluent, persuasive explanations while lacking direct experiential grounding. A compelling analogy compares such models to a doctor who has read millions of reports but never touched a body: the language and cadence appear expert, yet the knowledge lacks contact with the world. Many people consult LLMs for medical, legal, psychological, educational, and factual judgments. The models reproduce linguistic patterns that mimic understanding, and their fluency can obscure the absence of real-world reasoning. Comparative tests evaluate whether LLM reasoning aligns with human judgment on established psychological tasks.
Read at www.scientificamerican.com