Assessing the risk of AI in enterprise IT

IT leaders need to address the security implications of AI as it becomes embedded in their organizations. AI presents risks by giving employees easy access to powerful tools and by fostering implicit trust in its outputs. Employees may act on AI-generated content without proper verification, opening up vulnerabilities. Concerns include internal data leakage, external data leakage, and the potential manipulation of AI systems. Access for AI should be managed carefully, much as a new employee is onboarded, and organizations must navigate the evolving risk of compromised AI influencing decision-making.
"Think of AI as an exceptionally confident intern. It's helpful and full of suggestions, but requires oversight and verification," he says.
"There's internal data leakage - oversharing - which occurs when you ask the model a question and it gives an internal user information that it shouldn't share. And then there's external data leakage," says Heinen.
"If you think of an AI model as a new employee who has just come into the company, do you give them access to everything? No, you don't. You trust them gradually over time as they demonstrate the trust and capacity to do tasks," he says.
"It isn't just about data leakage anymore, although that remains a significant concern," he says. "We're now navigating territory where AI systems can be compromised, manipulated, or even 'gamed', to influence business decisions."
Read at ComputerWeekly.com