AI scaling myths
Briefly

The seeming predictability of scaling is a misunderstanding of the research; there is virtually no chance that scaling alone will lead to AGI.
Emergence is not governed by any law-like behavior; there is no empirical regularity guaranteeing that gains from scaling will continue indefinitely.
LLMs might plateau as they reach the limits of the tasks represented in their training data, so continued scaling may not bring significant additional capabilities.
Obtaining more training data for LLMs is a growing challenge; new data sources are unlikely to significantly expand existing data volumes.
Read at AI Snake Oil