
"Picture this: A senior partner at a major firm now spends her evenings personally checking every citation in briefs drafted by associates. Or local counsel pouring over the cites in a brief sent by national counsel. Or an overworked judge having to review the work of their clerk for accuracy. Why? Because none of them can trust that someone else has used ChatGPT."
"But there's one more reason for concern: the reality of verification requirement is creating a situation that's not sustainable. Every lawyer simply can't check every citation to ensure the necessary verification. The time and cost burden are too great. So not only will the cost of verifying exceed the AI savings, it will create a systemic breakdown of trust relationships with which we have gotten work done for decades. This creates an impossible situation that threatens the entire AI adoption thesis."
"The way most lawyers and many judges traditionally work has been to rely on others for things like drafting and research. The associate. The law clerk. The national counsel. Indeed, there are reports of hallucinations contained in judicial opinions where the research and drafting was done by law clerks who unbeknownst to the judges used a LLM to assist in their work."
Legal professionals, including senior partners, local counsel, and judges, are spending substantial time manually verifying citations and outputs from ChatGPT. Verification demands are escalating because outputs can contain hallucinations and inaccuracies, especially when used by less experienced staff. The time and cost of checking AI-generated work can exceed the projected AI savings, creating unsustainable workloads. Traditional delegation of drafting and research is breaking down as trust erodes. The resulting systemic breakdown of verification and trust threatens routine workflows, increases the risk of legal errors and sanctions, and jeopardizes broader AI adoption in the legal field.
Read at Above the Law