
"The problems to solve (and the cost to solve them) jump exponentially at each level, and we posit that, just like fully autonomous self-driving cars at level five that can operate anywhere, we are a far way off from seeing level five agentic AI operating at the "PhD" level as Sam Altman describes it. And the authors both firmly believe that LLMs are not the architecture to get us to level five, but that is a discussion for a separate post."
"It's worth noting that the functionality gains from one level to the next in the case of self-driving cars are non-linear. In fact, it's been nearly a decade since Elon first promised that fully self-driving cars are around the corner, yet Teslas (no matter how great they operate *most of the time*) still require humans to be able to seize control at any moment. And yes, we have Waymo operating in limited environments"
Business leaders face widespread claims that AGI and "super-agents" will soon replace knowledge workers, generating both excitement and anxiety. Agentic AI denotes systems that act to achieve goals with varying degrees of autonomy; that degree of autonomy is the defining variable. A shared, SAE-style taxonomy of autonomy levels helps organizations plan, scope, and safely deploy agentic systems. Functionality gains between levels are non-linear, and development costs rise exponentially at each step. Fully general, Level 5 agentic AI capable of "PhD"-level autonomy is far from current capabilities, and large language models are not the architecture that will reach Level 5.
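
As a rough illustration of what an SAE-style autonomy taxonomy could look like in practice, here is a minimal Python sketch. The level names, descriptions, and the `requires_human_oversight` helper are hypothetical analogues of the SAE J3016 driving-automation levels, not definitions taken from the article.

```python
from enum import IntEnum

class AgentAutonomyLevel(IntEnum):
    """Hypothetical SAE-style autonomy levels for agentic AI.

    Illustrative analogues of the SAE J3016 driving levels; the
    article's own level definitions may differ.
    """
    L0_NO_AUTONOMY = 0   # human does the work; the system only responds when asked
    L1_ASSISTANCE = 1    # system assists with single steps; human drives the workflow
    L2_PARTIAL = 2       # system chains several steps; human supervises continuously
    L3_CONDITIONAL = 3   # system runs unattended in narrow scopes; human takes over on request
    L4_HIGH = 4          # system handles whole workflows in bounded domains (the "Waymo" stage)
    L5_FULL = 5          # system operates anywhere a human could, with no oversight

def requires_human_oversight(level: AgentAutonomyLevel) -> bool:
    """Levels 0-3 assume a human is ready to seize control at any moment."""
    return level <= AgentAutonomyLevel.L3_CONDITIONAL
```

On this mapping, a supervised Tesla-style system sits around Level 2-3, while Waymo's geofenced service is the Level 4 analogue, which is consistent with the article's framing that Level 5 remains far off.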