Why 'Helpful' Legal AI Is Often The Least Trustworthy - Above the Law
Briefly

"Users were willing to tolerate difficulty, ambiguity, and even uncertainty. What they did not tolerate was repetition, generic responses, and overstructured interactions that made the system feel inattentive to context."
"During the pilot, users consistently reported lower trust when the AI behaved in overly 'helpful' ways. Repeating the same guidance in slightly different words led to feelings of shallowness and inattentiveness."
"When the AI challenged assumptions, surfaced competing considerations, or forced users to grapple with ambiguity, trust increased. Even when the interaction was harder, users felt the system was paying attention."
"One of the clearest quantitative signals from the pilot was that trust dropped more sharply in response to repetition than to difficulty."
Empirical pilots of an AI legal coach revealed that lawyers value judgment over politeness in AI interactions. Users tolerated difficulty and ambiguity but rejected repetitive, generic responses. Trust declined when the AI offered superficially helpful guidance without engaging with the substance of legal problems; it increased when the AI challenged assumptions and confronted complexity. The findings indicate that attentiveness to context is crucial for building trust in legal AI systems.
Read at Above the Law