The concept of antifragility describes systems that strengthen when exposed to stress, with muscles, bones, and human reasoning as examples. Human reasoning often improves when tested by surprise and uncertainty. Artificial intelligence lacks antifragility and excels only within structured contexts; minor rewordings or distractions can cause sharp failures. Two studies illustrate this fragility: a medical-exam manipulation using a NOTA (None of the other answers) substitution that altered model performance, and an innocuous cat-trivia perturbation. Small changes and distractions can derail AI, while humans filter noise and can grow under stress.
It's this push and pull of uncertainty that, in the end, helps define us. Artificial intelligence, by contrast, is often anything but antifragile. Its brilliance shines within structure, but the moment reality shows up and a question is reworded or a distraction is introduced, that brilliance fades. What emerges is brittle pseudo-cognition: the appearance of intelligence that collapses under the slightest perturbation.
The NOTA Collapse

The first study, published in JAMA Network Open, examined whether large language models genuinely reason or merely recognize patterns. The researchers sampled questions from a standard benchmark for medical board exams. In the original version, each question had one correct answer. In the modified version, the correct choice was removed and replaced with "None of the other answers" (NOTA). A clinician reviewed every substitution to confirm that none of the visible options was correct.
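As a rough sketch, the NOTA manipulation amounts to a simple transformation over a question's answer options: swap out the correct choice and make NOTA the new correct answer. The data format and field names below are illustrative assumptions, not the study's actual dataset.

```python
# Hedged sketch of the NOTA substitution: replace the correct option
# with "None of the other answers", which becomes the new right answer.
# The question structure here is a hypothetical illustration.

NOTA = "None of the other answers"

def apply_nota(question: dict) -> dict:
    """Return a copy of a multiple-choice question with its correct
    option replaced by NOTA; the answer index stays the same because
    NOTA now occupies the formerly correct slot."""
    options = list(question["options"])       # copy, don't mutate input
    idx = question["answer_index"]
    options[idx] = NOTA
    return {**question, "options": options, "answer_index": idx}

# Example usage with a made-up board-style question:
q = {
    "stem": "Which vitamin deficiency causes scurvy?",
    "options": ["Vitamin A", "Vitamin C", "Vitamin D", "Vitamin K"],
    "answer_index": 1,
}
modified = apply_nota(q)
```

A model that truly reasons should notice that the remaining substantive options are all wrong and select NOTA; a pattern-matcher anchored to familiar surface forms is more likely to pick a plausible-looking distractor.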