The article discusses the challenges of using generative AI, particularly its tendency to produce confident but incorrect, unsourced answers. Evans describes how hard it is to get reliable information from these systems even when pointing them at primary sources, and asks whether generative AI's inaccuracy could ever be a benefit rather than a flaw. He also suggests that programming itself may need to change: rather than expecting deterministic outputs, we may need to accept probabilistic ones, since forcing non-deterministic models into deterministic roles produces unreliable results.
First, I try [the question] cold, and I get an answer that's specific, unsourced, and wrong. Then I try helping it with the primary source, and I get a different wrong answer with a list of sources, that are indeed the U.S. Census, and the first link goes to the correct PDF... but the number is still wrong.
The more interesting question Evans poses is whether there are "places where [generative AI's] error rate is a feature, not a bug." It's hard to think of how being wrong could be an asset, but as an industry (and as humans) we tend to be really bad at predicting the future.
Today we're trying to retrofit genAI's non-deterministic approach onto deterministic systems, and we're getting hallucinating machines in return.
But to find those places, we may need new ways to program, ones that accept probability rather than certainty as a desirable outcome.
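What "accepting probability rather than certainty" might look like in code is left open; one hypothetical sketch (not from the article) is to treat a model's answer as a sample rather than a fact: query it several times and return the majority answer together with an agreement rate, instead of trusting any single response. The `sample_model` stub and its canned answers below are invented stand-ins for a real model call.

```python
import random
from collections import Counter

def sample_model(prompt: str, rng: random.Random) -> str:
    """Stand-in for a non-deterministic model call (hypothetical).

    A real implementation would call an LLM API; here we fake a noisy
    answerer that is usually, but not always, consistent.
    """
    return rng.choices(
        ["331 million", "328 million", "14 million"],  # fabricated answers
        weights=[6, 3, 1],
    )[0]

def answer_with_confidence(prompt: str, n: int = 25, seed: int = 0):
    """Sample the model n times; return (majority answer, agreement rate)."""
    rng = random.Random(seed)
    votes = Counter(sample_model(prompt, rng) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n

answer, confidence = answer_with_confidence("US population per the 2020 census?")
print(f"{answer} (agreement {confidence:.0%})")
```

The point of the sketch is the return type: the caller gets a probability-flavored result and must decide what agreement rate is good enough, rather than being handed one confidently wrong number.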