Understanding AI Hallucinations: Making Sure You Don't End Up At The Wrong Stop - Above the Law
"The group concludes that rather than a random, unpredictable glitch, a physics-based analysis demonstrates that hallucination is a 'foreseeable engineering risk.' This means the circumstances generating its occurrence can be at least a little predictable."
"GenAI systems have 'a deterministic mechanism at its core that can cause output to flip from reliable to fabricated at a calculable step.' This unfortunate step often occurs when the lawyer's need for accuracy is greatest."
"GenAI is 'a probabilistic text generator engineered to predict the next most plausible token in a sequence, without any internal concept of legal truth.' It is not a database of verified legal authorities."
GenAI systems pose significant risks to legal professionals because of their tendency to produce hallucinations and inaccurate citations. A recent paper by scientists and engineers argues that these failures are not random glitches but foreseeable engineering risks. The authors emphasize that GenAI operates as a probabilistic text generator with no internal concept of legal truth: it can produce plausible legal text, but it is not a reliable database of verified legal authorities, and it is most likely to fail precisely when the need for accuracy is highest.
Read at Above the Law