
"The big AI companies promised us that 2025 would be "the year of the AI agents." It turned out to be the year of talking about AI agents, and kicking the can for that transformational moment to 2026 or maybe later. But what if the answer to the question "When will our lives be fully automated by generative AI robots that perform our tasks for us and basically run the world?" is, like that New Yorker cartoon, "How about never?""
""There is no way they can be reliable," Vishal Sikka, the dad, tells me. After a career that, in addition to SAP, included a stint as Infosys CEO and an Oracle board member, he currently heads an AI services startup called Vianai. "So we should forget about AI agents running nuclear power plants?" I ask. "Exactly," he says. Maybe you can get it to file some papers or something to save time, but you might have to resign yourself to some mistakes."
The massive expectations that 2025 would usher in agentic AI went unmet, and the deployment of autonomous generative agents has slipped. Mathematical analysis indicates that transformer-based language models have inherent computational limits and cannot reliably execute complex computational or agentic tasks beyond a certain threshold. Enhancements that go beyond pure word prediction reportedly do not circumvent these limits. Handing AI agents high-stakes control of critical infrastructure is therefore considered unsafe; some routine productivity tasks may be automated, but occasional errors and reliability failures remain unavoidable despite industry progress on coding and hallucination reduction.
Read at WIRED