
"In 1950, Alan Turing proposed a now-famous experiment that we all know. It was a conversation between a person and a machine, judged by whether the human could tell the difference. Practical and itself being binary (yes or no), it gave early computer science something it needed, a goal post. But it also planted a seed that would grow into a problem we still haven't named."
"We've spent 70 years teaching machines to pass as human. And I believe that we've gotten very good at it. Language models now write essays and code that feel remarkably human-like, perhaps even better. They apologize when they're wrong and they simulate doubt when probabilities get thin. And at the heart of this simulation is that they drive completion over comprehension-and we're letting them get away with it."
"Today's large language model predicts the next word based on vast collections of training data. It gets fluent, then eloquent, then spot on. But remember, it never crosses into understanding. It maps probability distributions, not meaning. It knows that "the cat sat on the" precedes "mat" more often than "couch," but it has no image of a cat, no sense of a mat, no experience of sitting. The sentences it produces are statistically correct and even brilliant, but semantically hollow."
Alan Turing's proposed conversational test set a binary goal that prized indistinguishability from humans. For seven decades, machines have been trained to imitate human speech, producing fluent, eloquent output that mimics apology and doubt. Modern large language models predict next words from vast data without internal understanding, mapping probability distributions rather than meaning, which yields sentences that are statistically impressive yet semantically hollow. Paradoxically, increasing human-likeness can make these systems less interesting and less insightful. Future progress may depend on embracing machine strangeness and letting artificial systems develop different forms of intelligence that teach us new ways of thinking.
Read at Psychology Today