
"A months-old but until now overlooked study recently featured in Wired claims to mathematically prove that large language models "are incapable of carrying out computational and agentic tasks beyond a certain complexity" - that level of complexity being, crucially, pretty low. The paper, which has not been peer reviewed, was written by Vishal Sikka, a former CTO at the German software giant SAP, and his son Varin Sikka."
"Ignore the rhetoric that tech CEOs spew onstage and pay attention to what the researchers that work for them are finding, and you'll find that even the AI industry agrees that the tech has some fundamental limitations baked into its architecture. In September, for example, OpenAI scientists admitted that AI hallucinations, in which LLMs confidently make up facts, were still a pervasive problem even in increasingly advanced systems, and that model accuracy would "never" reach 100 percent."
A recent mathematical argument claims that large language models are incapable of executing computational and agentic tasks beyond a low complexity threshold; the argument has not been peer reviewed. Industry research continues to report pervasive hallucinations, in which LLMs confidently invent facts, and acknowledges that model accuracy will never reach 100 percent. This limited reliability undermines the feasibility of autonomous AI agents for many real-world tasks: some companies that deployed agents to replace workers found the agents unable to complete their assigned tasks. Recommended responses focus on systems-level safeguards and external guardrails around model deployment, as in the sketch below.
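To make the "external guardrails" idea concrete, here is a minimal sketch of one common pattern: validating an agent's proposed actions outside the model before anything is executed. All names, tools, and thresholds here are illustrative assumptions, not drawn from the article or the Sikka paper.

```python
from dataclasses import dataclass

# Hypothetical example: every identifier and check below is an
# illustrative assumption, not a real library or the article's method.

@dataclass
class AgentAction:
    """A proposed action emitted by an LLM-based agent."""
    tool: str       # which tool the agent wants to invoke
    argument: str   # the argument it wants to pass

ALLOWED_TOOLS = {"search", "summarize"}   # assumed allowlist
MAX_ARG_LENGTH = 500                      # assumed sanity bound

def guardrail(action: AgentAction) -> bool:
    """Return True only if the proposed action passes external checks.

    The model's output is never trusted directly; each action is
    validated outside the model before it runs.
    """
    if action.tool not in ALLOWED_TOOLS:
        return False
    if len(action.argument) > MAX_ARG_LENGTH:
        return False
    return True

def execute(action: AgentAction) -> None:
    if not guardrail(action):
        # Reject rather than act on unvalidated model output,
        # e.g. a hallucinated or out-of-scope tool call.
        print(f"Blocked: {action.tool!r} failed guardrail checks")
        return
    print(f"Executing {action.tool!r} with {action.argument!r}")

# A hallucinated tool call is blocked; a permitted one goes through.
execute(AgentAction(tool="delete_database", argument="prod"))
execute(AgentAction(tool="search", argument="LLM complexity limits"))
```

The design choice this illustrates is that the safeguard lives entirely outside the model: nothing about the check depends on the model behaving well, which is the point of a guardrail when hallucinations cannot be eliminated.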
Read at Futurism