"Is the belief really: 'Oh, it's so big, but if you had 100x more, everything would be so different?' It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don't think that's true,"
"So it's back to the age of research again, just with big computers,"
"Nobody's worried about a lack of data because it plays against itself and generates data that way," Hinton said of the early program. "And the equivalent for a language model is when it starts reasoning and saying, 'Look, I believe these things and these things imply that thing, but I don't believe that thing, so I'd better change something somewhere.' And by doing reasoning to check the consiste"
Debate persists over whether scaling large models remains the primary path to breakthroughs or whether research innovations should take precedence. Some leaders question whether simply adding compute and chips will transform capabilities, arguing that multiplying scale alone is insufficient. The availability of high-quality data is a limiting factor, but large models may mitigate it by generating their own training examples through self-play or internal reasoning: a model that can reason about and revise its own beliefs could produce additional training data without external labels. Both continued scaling and renewed research efforts are being weighed as drivers of future AI advances.
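To make the self-generated-data idea concrete, here is a minimal toy sketch, not taken from the article, of how checking the consistency of a model's own "beliefs" could yield new training examples. The belief set, the implication rule, and every name in the code are illustrative assumptions, not anything Hinton or the article specifies.

```python
# Toy illustration (hypothetical): consistency-checking over a set of "beliefs"
# produces self-generated training pairs, with no external labels involved.

from dataclasses import dataclass

@dataclass
class Belief:
    statement: str
    truth: bool
    confidence: float  # how strongly the belief is held

# Illustrative belief set plus one implication rule: A and B together imply C.
beliefs = {
    "A": Belief("it rained overnight", True, 0.9),
    "B": Belief("the ground is uncovered", True, 0.8),
    "C": Belief("the ground is wet", False, 0.3),  # clashes with A and B
}
implications = [(("A", "B"), "C")]  # (premises, consequent)

def generate_consistency_data(beliefs, implications):
    """Check each implication; when the premises hold but the consequent is
    denied, revise the weakest belief and emit the revision as a training pair."""
    new_examples = []
    for premises, consequent in implications:
        if all(beliefs[p].truth for p in premises) and not beliefs[consequent].truth:
            # Inconsistency found: flip whichever involved belief is held least strongly.
            involved = list(premises) + [consequent]
            weakest = min(involved, key=lambda k: beliefs[k].confidence)
            beliefs[weakest].truth = not beliefs[weakest].truth
            # The corrected statement becomes a self-generated training example.
            new_examples.append((beliefs[weakest].statement, beliefs[weakest].truth))
    return new_examples

print(generate_consistency_data(beliefs, implications))
# e.g. [('the ground is wet', True)] -- data produced by reasoning alone
```

The same pattern underlies self-play in games: the system's own activity supplies the labels, which is the loose analogy the quotes above are drawing for language models.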