At a recent talk in Singapore, Yann LeCun, chief AI scientist at Meta, challenged the established principles of 'scaling laws' in AI, which hold that larger models perform better. LeCun argued that simply increasing model size, data, and compute does not guarantee smarter AI. He emphasized that the current approach of relying solely on scale may not be sufficient for complex problems, because many recent AI breakthroughs have come on relatively simple ones. This perspective invites a rethink of how AI models should be developed.
“Most interesting problems scale extremely badly. You cannot just assume that more data and more compute means smarter AI.”
“The mistake is that very simple systems, when they work for simple problems, people extrapolate them to think that they'll work for complex problems.”