"Errors can quickly become a bottleneck if hallucinations multiply when agents interact, said Nicolas Darveau-Garneau, a former Google executive and author of the book " Be a Sequoia, Not a Bonsai." If a single agent has a 5% hallucination rate, then it's hard to daisy-chain multiple agents without a high risk of errors. That's because the risk increases exponentially, he told Business Insider."
"For now, he said, that's why general-purpose, autonomous agentic AI is largely aspirational. Nevertheless, hallucinations are something that Darveau-Garneau expects will be "mostly solved" five years from now. If that happens, he said, the productivity gains of 10% to 30% that some companies are getting from AI could be far greater - perhaps on the order of 10x to 100x."
AI agents can perform discrete tasks across sales, finance, supply chains, and engineering, operating independently to reduce human effort. Agentic systems can be effective on isolated tasks but become fragile when scaled and interconnected, because hallucinations multiply across agents: even a modest hallucination rate in a single agent makes daisy-chaining multiple agents risky, since the error probability compounds with every handoff. General-purpose autonomous agentic AI therefore remains largely aspirational, though Darveau-Garneau expects hallucination issues to be mostly solved within five years. If they are, productivity gains could jump from modest percentages to orders of magnitude, prompting operational reorganization and role changes.