AI models frequently 'hallucinate' on legal queries, study finds
Briefly

When asked direct, verifiable questions about federal court cases, GPT-3.5, the model behind ChatGPT, hallucinated 69 percent of the time, the study found, while Google's PaLM 2 gave incorrect answers 72 percent of the time and Meta's Llama 2 offered false information 88 percent of the time.
Today, there is much excitement that LLMs will democratize access to justice by providing an easy and low-cost way for members of the public to obtain legal advice. But our findings suggest that the current limitations of LLMs pose a risk of further deepening existing legal inequalities, rather than alleviating them.
Read at The Hill