A few months ago, I asked ChatGPT to recommend books by and about Hermann Joseph Muller, the Nobel Prize-winning geneticist who showed how X-rays can cause mutations. It dutifully gave me three titles. None existed. I asked again. Three more. Still wrong. By the third attempt, I had an epiphany: the system wasn't just mistaken; it was making things up.
At its core (dare I say heart), AI is a machine of probability. Word by word, it predicts what is most likely to come next. This continuation is dressed up as conversation, but it isn't cognition. It is a statistical trick that feels more and more like thought. Training reinforces the trick through what's called a loss function, a score the model learns to drive down. But the loss is not a pursuit of truth. It measures how well a sequence of words matches the patterns of human language.
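To make the mechanism concrete, here is a deliberately tiny sketch, not the architecture of any real chatbot: a bigram model that predicts the next word purely from counts, plus the cross-entropy loss that training minimizes. The corpus and all names here are invented for illustration; the point is that both the prediction and the loss reward matching patterns, with no notion of whether a statement is true.

```python
import math
from collections import Counter, defaultdict

# Toy corpus (made up for this example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each candidate next word, from raw counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The model "continues" with whatever is statistically likely --
# it has no way to check the continuation against reality.
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}

def cross_entropy(sequence):
    """Average negative log-probability assigned to a sequence.
    This is the loss function in miniature: it scores how well the
    words match the training distribution, not whether they are true."""
    total = 0.0
    for prev, nxt in zip(sequence, sequence[1:]):
        p = next_word_probs(prev).get(nxt, 1e-12)  # tiny floor for unseen pairs
        total += -math.log(p)
    return total / (len(sequence) - 1)

print(cross_entropy("the cat sat".split()))  # ln 2, about 0.693
```

A fluent falsehood and a fluent truth can earn the same low loss, which is why a system optimized this way can produce confident, plausible book titles that do not exist.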