Are large language models the problem, not the solution?
Briefly

"There is an all-out global race for AI dominance. The largest and most powerful companies in the world are investing billions in unprecedented computing power. The most powerful countries are dedicating vast energy resources to assist them. And the race is centered on one idea: transformer-based architecture with large language models are the key to winning the AI race. What if they are wrong?"
"What if aggregating the vast collective so-called wisdom accumulated on the internet and statistically analyzing it with complex algorithms to mindlessly respond to human prompts is really just an unimaginably expensive and resource-intensive exercise in garbage-in-garbage-out? At best, it may be a clever chronicler of common wisdom. At worst, it's an unprecedented and unnecessary waste of resources with potentially harmful consequences."
Intelligence in biological life evolved over hundreds of millions of years, from single-celled organisms to human brains with billions of neurons whose interactions are tuned to bodily needs and environments. Artificially recreating such intelligence likely requires more than generating language from models trained on massive, largely uncurated text corpora. Aggregating internet content and statistically predicting text can produce superficially coherent responses while lacking grounded understanding, context, or adaptive goals; at best, such systems act as chroniclers of common wisdom rather than genuine intelligence. They also demand unprecedented computing power and energy, and risk being wasteful, misleading, or harmful if treated as equivalent to evolved cognition.
Read at Fast Company