Popular AI Chatbots Found to Give Error-Ridden Legal Answers
Briefly

"The big finding here is that the hallucination rates are not isolated, they're pretty pervasive," said Daniel Ho, a law professor at Stanford and senior fellow at the school's Institute for Human-Centered Artificial Intelligence.
"We should not take these very general purpose foundation models and naively deploy them and put them into all sorts of deployment settings, as a number of lawyers seem to have done," said Daniel Ho.
Read at Bloomberg Law