Why AI gets stuck in infinite loops - but conscious minds don't
Briefly

An AI-operated jet bridge became stuck in a repeating failure, requiring a human worker to resolve the loop. Artificial systems can become trapped in endless repetition, failing to notice or escape the behavior. Adding internal models and higher-order monitoring increases robustness but introduces an infinite regress: each meta-monitor can also malfunction. Alan Turing's 1936 proof shows that no algorithm can always determine whether another algorithm will halt, so no finite stack of self-checks can guarantee detection of every nonterminating failure. The result is a fundamental and persistent vulnerability of AI systems to infinite-loop failures.
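To make the Turing claim concrete, here is a minimal Python sketch of the 1936 diagonalization argument. Both `halts` and `paradox` are hypothetical constructions that exist only to produce the contradiction; no such oracle can actually be written.

```python
# A minimal sketch of Turing's 1936 diagonalization argument (hypothetical:
# `halts` cannot actually be implemented; it appears here only for the proof).

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually terminates.
    Turing proved no algorithm can compute this for all program/data pairs."""
    raise NotImplementedError("no such total decision procedure exists")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:       # oracle says "halts" -> loop forever
            pass
    return "halted"       # oracle says "loops" -> halt immediately

# Feeding paradox to itself breaks the oracle either way:
# - if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
# - if it is False, then paradox(paradox) halts.
# So `halts` cannot exist, and no finite stack of self-checks is complete.
```

Any monitor that claimed to catch every stuck loop would, in effect, be implementing `halts`.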
We land on time at Madrid-Barajas airport, but there's a delay disembarking. It turns out a new artificial intelligence system operates the jet bridge here. No humans needed - until they are. Through the window, I watch the AI jet bridge approach the plane, flail gently in the morning air, and withdraw, again and again, caught in an infinite loop. Eventually, we're rescued by a human worker, who instantly sees and solves the problem.
Imagine a smarter jet bridge - a system that can monitor its performance and detect when it's flailing and failing. Thanks to this internal model of its environment and its own behavior, it succeeds where today's autonomous jet bridges fall short. Eventually, though, it will glitch too. So you add another higher-order feedback loop - a model within its model, monitoring its own monitoring. If that fails? Add another.
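A toy version of that first feedback loop, in Python, shows why the fix invites the regress. Everything here - `dock_plane`, `watchdog`, `TIMEOUT_S` - is invented for this sketch: the watchdog catches a stalled controller, but it is itself an ordinary loop with no guarantee of its own liveness.

```python
# A toy jet-bridge controller plus a heartbeat watchdog (all names are
# hypothetical illustrations, not any real system's API).

import threading
import time

TIMEOUT_S = 2.0
last_progress = time.monotonic()

def dock_plane():
    """First-order controller: docks, reporting a heartbeat each cycle.
    After a few cycles we simulate the Madrid failure: a retry loop that
    spins without ever updating its heartbeat."""
    global last_progress
    for _ in range(3):
        last_progress = time.monotonic()   # "still making progress"
        time.sleep(0.5)
    while True:                            # stuck: approach, withdraw, repeat
        time.sleep(0.5)

def watchdog():
    """Second-order monitor: notices the stalled heartbeat and intervenes.
    But this loop is itself just another program that can hang or miscount -
    hence the regress of monitors watching monitors."""
    while True:
        if time.monotonic() - last_progress > TIMEOUT_S:
            print("controller stuck: calling a human (or a meta-monitor)")
            return
        time.sleep(0.5)

threading.Thread(target=dock_plane, daemon=True).start()
w = threading.Thread(target=watchdog)
w.start()
w.join()   # the sketch ends once the watchdog fires
```

Run it, and the watchdog fires a couple of seconds after the controller wedges - but nothing in the program watches the watchdog.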
Read at Big Think