Legal AI is splitting in two, and most people miss the difference | Fortune
Briefly

"When that GC tested Claude, the system did exactly what it was designed to do: pull from available sources. No legal research database, no authoritative content, no firm precedents. Just the open web, which includes Wikipedia. Most reactions split into predictable camps. One said foundation models can't handle legal work. The other said models will improve. Both miss the real issue."
"Claude and ChatGPT are remarkably capable. The problem isn't intelligence, but whether the surrounding system is designed for the task at hand, combining authoritative sources, expert oversight, and practical safeguards. This is an architecture problem."
Thomson Reuters' CoCounsel reached one million users while Anthropic expanded Claude's enterprise plugins for legal, finance, and HR work. A recent incident in which Claude pulled contract-review information from Wikipedia sparked debate about AI readiness for legal work. The core issue isn't AI intelligence but system design: foundation models like Claude and ChatGPT are capable, yet they require architecture that integrates authoritative legal databases, expert oversight, and practical safeguards. The ability to distinguish a model-capability problem from a system-design failure determines competitive advantage in legal technology. Anthropic's department-specific plugins, including legal tools for document review and risk flagging, show how companies are addressing this architectural challenge.
Read at Fortune