
"Ten years ago, I would have turned my nose up at the idea that we already understood how to get machines to think. In the 2010s, my team at Google Research was working on a wide variety of artificial-intelligence models, including the next-word predictor that powers the keyboard on Android smartphones. Artificial neural networks of the sort we were training were finally solving long-standing challenges in visual perception, speech recognition, game playing and many other domains."
"'Solving' this kind of intelligence would surely require some fundamentally new scientific insight. And that would probably be inspired by neuroscience - the study of the only known embodiment of general intelligence, the brain. My views back then were comfortably within the scientific mainstream, but in retrospect were also tinged with snobbery. My training was in physics and computational neuroscience, and I found the Silicon Valley hype distasteful at times."
In the 2010s, teams at Google Research trained artificial neural networks, including the next-word predictors that power smartphone keyboards. Neural networks began solving long-standing challenges in visual perception, speech recognition, and game playing. The prevailing view held that simple next-word prediction could not achieve genuine understanding, humor, logical reasoning, or code debugging, and that fundamentally new scientific insights, most likely drawn from neuroscience, would be required. Subsequent scaling produced massive next-token models such as Meena and LaMDA, which demonstrated emerging abilities to grasp concepts, generate jokes, and make arguments. By 2025, it had become routine to expect large language models to respond fluently.