How Savvy Lawyers Balance AI Innovation With AI Responsibility - Above the Law
AI tools in law firms promise efficiency but may risk long-term expertise for short-term gains.
"While many have been discussing the privacy risks of people following the ChatGPT caricature trend, the prompt reveals something else alarming - people are talking to their LLMs about work," said Josh Davies, principal market strategist at Fortra, in an email to eSecurityPlanet. He added, "If they are not using a sanctioned ChatGPT instance, they may be inputting sensitive work information into a public LLM. Those who publicly share these images may be putting a target on their back for social engineering attempts, and malicious actors have millions of entries to select attractive targets from."
Shadow AI is the unsanctioned use of artificial intelligence tools outside of an organization's governance framework. In the healthcare field, clinicians and staff are increasingly using unvetted AI tools to improve efficiency, from transcription to summarization. Most of this activity is well-intentioned. But when AI adoption outpaces governance, sensitive data can quietly leave organizational control. Blocking AI outright isn't realistic. The more effective approach is to make safe, governed AI easier to use than unsafe alternatives.
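One way to make the governed path the easy path is to put a thin redaction layer in front of whatever sanctioned AI endpoint the organization offers, so obvious identifiers never leave the boundary. The sketch below is illustrative only: the regex rules and the `[REDACTED-*]` placeholder convention are assumptions for the example, not a substitute for a vetted DLP engine.

```python
import re

# Illustrative redaction rules; a real deployment would use a vetted
# DLP engine, not hand-rolled regexes like these.
REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email address
    (re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I), "[REDACTED-MRN]"),  # medical record number
]

def redact(prompt: str) -> str:
    """Scrub obvious identifiers before a prompt leaves the org boundary."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Summarize the visit for jane.doe@example.com, MRN: 4481920."))
```

Routing all LLM traffic through a wrapper like this, attached to the sanctioned instance, gives staff the convenience they are seeking from shadow tools while keeping sensitive data under organizational control.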
The prospects for phishing in the era of AI could be huge. We have (arguably) moved well beyond requests for money from fake princes; we are now in a place where every message format (email, audio, or video) can be faked. "We are going to have to have multiple trusted channels with those who are close to us. If one channel (email, WhatsApp, Slack, etc.) gets an important message, you may need to validate it on another channel."
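The second-channel validation described above can be made concrete with a short confirmation code that both parties compute from the message and compare over a different channel, such as a phone call. This is a minimal sketch, assuming a secret has already been shared out of band; the function name and 8-character truncation are choices made for the example.

```python
import hashlib
import hmac
import secrets

# Assumed to have been exchanged in person or over a separate trusted
# channel, long before any high-risk request arrives.
shared_secret = secrets.token_bytes(32)

def confirmation_code(message: str, secret: bytes) -> str:
    """Short code both parties can compute and read aloud on a second channel."""
    digest = hmac.new(secret, message.encode(), hashlib.sha256).hexdigest()
    return digest[:8]  # truncated so it is easy to compare over a phone call

request = "Wire $50,000 to account 9981 today"

# Sender reads the code over the phone; receiver recomputes and compares.
sender_code = confirmation_code(request, shared_secret)
receiver_code = confirmation_code(request, shared_secret)
assert hmac.compare_digest(sender_code, receiver_code)
```

A tampered message (say, a changed account number) produces a different code, so a deepfaked email or voice note cannot pass the check without the secret.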
As software development teams embrace a growing number of automation tools that provide AI-driven (or at least AI-assisted) coding functions in their codebases, a Newtonian equal and opposite reaction is surfacing: governance controls and guardrails designed to keep AI injections in check as these technologies enter the software supply chain.
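In practice such guardrails often start as simple pre-merge checks. The sketch below assumes a hypothetical team convention, not any standard tooling: AI-assisted commits carry an "AI-Assisted:" trailer and must also carry a human "Reviewed-by:" trailer before they can merge.

```python
def check_commit(message: str) -> list[str]:
    """Return policy violations for a single commit message.

    Assumes a (hypothetical) convention where AI-assisted changes are
    marked with an 'AI-Assisted:' trailer and must name a human reviewer
    in a 'Reviewed-by:' trailer.
    """
    lines = [line.strip().lower() for line in message.splitlines()]
    ai_assisted = any(line.startswith("ai-assisted:") for line in lines)
    human_reviewed = any(line.startswith("reviewed-by:") for line in lines)
    violations = []
    if ai_assisted and not human_reviewed:
        violations.append("AI-assisted change lacks a human Reviewed-by trailer")
    return violations

msg = "Add retry logic\n\nAI-Assisted: yes\n"
print(check_commit(msg))  # flags the missing human reviewer
```

Wired into CI, a check like this blocks AI-assisted changes from merging without a named human reviewer, one small counterweight to the automation it governs.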
The people who most need to experiment with AI, those in routine cognitive roles, experience the highest psychological threat. They are being asked to enthusiastically adopt tools that might replace them, triggering what neuroscientists call a "threat state." Research by Amy Edmondson at Harvard Business School shows that team learning requires psychological safety: the belief that interpersonal risk-taking is safe. But AI adoption adds an existential twist: the threat isn't just social embarrassment; it's professional survival.
In most cases, employees are driving adoption from the bottom up, often without oversight, while governance frameworks are still being defined from the top down. Even when enterprise-sanctioned tools are available, employees often eschew them in favor of newer tools better placed to improve their productivity. Unless security leaders understand this reality, and both uncover and govern this activity, they are exposing the business to significant risk.