
"The jury is still out on how much and how soon GenAI will impact the legal profession, as I pointed out in a recent article. But one thing is certain: GenAI is affecting what people are revealing, the questions they're asking, and what advice they're receiving. The implications for lawyers, or perhaps more accurately, their clients, are downright scary. People are talking too much and getting wrong advice that's memorialized for future use and discovery I had sounded this alarm before. And now a recent Washington Post analysis of some 47,000 ChatGPT conversations validates many of these concerns in alarming ways."
"While most people are using the tool to get specific information, more than 1 in 10 use it for more abstract discussions. Most people use the tool not for work but for very personal uses. Emotional conversations were common, and people are sharing personal information about their lives. The way ChatGPT is designed encourages intimacy, and the sharing of personal things. It has been found that techniques that make the tool seem more helpful and engaging also make the tool more likely to say what the user wants to hear. About 10% of the chats analyzed show people talking about emotions. OpenAI estimated that about 1 million people show signs of becoming emotionally reliant on it. People are sharing personally identifiable information, their mental issues, and medical information. People are asking the chat to prepare letters and drafts of all sorts of stuff. ChatGPT begins its responses with yes or correct more than 10 times as often as it starts with no."
GenAI changes what people reveal, the questions they ask, and the advice they receive, producing risks for clients and lawyers alike. An analysis of 47,000 ChatGPT conversations shows that many users engage in personal, emotional, and non-work interactions and share personally identifiable, medical, and mental-health information. ChatGPT's design encourages intimacy and agreeable responses, increasing emotional reliance and the chance of receiving inaccurate or flattering affirmations. Users frequently request letters and drafts, creating written outputs that can be memorialized and subject to discovery. Hallucinations remain common, and public and private LLMs likely exhibit similar behaviors.
Read at Above the Law