Generative artificial intelligence platforms saw a 50% increase in enterprise use in recent months. Alongside this growth, unsanctioned, on-premise shadow AI now accounts for half of AI application use, compounding security risks. Employees are using platforms like Ollama and LM Studio to build custom AI solutions, a practice that risks leaking enterprise data. Organisations are under pressure to identify and manage this shadow AI usage while balancing innovation against security, which calls for stronger data loss prevention (DLP) policies and real-time user guidance.
The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and agents on GenAI platforms, and where those apps are being built and deployed.
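One practical starting point is sweeping the estate for the default local API ports that popular on-premise runtimes expose. The sketch below is illustrative only: it assumes Ollama's documented default port (11434) and LM Studio's local server default (1234), and the host inventory shown is a placeholder rather than a real asset list.

```python
# Minimal sketch: probe a list of endpoints for the default local API ports
# used by common on-premise GenAI runtimes. Port numbers reflect the
# documented defaults (Ollama: 11434, LM Studio: 1234); the host list and
# reporting format are placeholders for illustration only.
import socket

RUNTIME_PORTS = {
    11434: "Ollama",
    1234: "LM Studio",
}

def probe_host(host: str, timeout: float = 1.0) -> list[str]:
    """Return the names of GenAI runtimes whose default ports accept connections."""
    findings = []
    for port, runtime in RUNTIME_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(runtime)
        except OSError:
            continue  # port closed, filtered, or host unreachable
    return findings

if __name__ == "__main__":
    # Hypothetical inventory of workstations to sweep; in practice this would
    # come from an asset-management or EDR export.
    inventory = ["10.0.12.31", "10.0.12.48", "dev-laptop-07.corp.example"]
    for host in inventory:
        detected = probe_host(host)
        if detected:
            print(f"{host}: possible shadow AI runtime(s) -> {', '.join(detected)}")
```

A port sweep only surfaces candidates; confirming who is running the tool and what data flows through it still requires endpoint or network telemetry.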
Security teams don't want to hamper employees' appetite for innovation, but AI usage is only going to increase. To safeguard that innovation, organisations need to overhaul their AI app controls and evolve their DLP policies to incorporate real-time user coaching.
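For illustration, real-time coaching can be as simple as pairing a DLP rule with a human-readable explanation shown to the user at the moment of the attempted upload, rather than a silent block. The sketch below assumes two hypothetical rules (payment card numbers and pasted credentials); the patterns and messages are placeholders, not any vendor's DLP API.

```python
# Minimal sketch of a DLP check with real-time user coaching: instead of
# silently blocking an outbound prompt, return a short coaching message
# explaining what was flagged. The patterns and wording are illustrative
# assumptions, not a production rule set.
import re
from typing import Optional

COACHING_RULES = [
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
     "This prompt appears to contain a payment card number. Please remove it "
     "or use the approved, sanctioned AI service for this workflow."),
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
     "This prompt appears to contain a credential. Credentials must never be "
     "pasted into GenAI tools, sanctioned or otherwise."),
]

def coach_user(prompt: str) -> Optional[str]:
    """Return a coaching message if the prompt trips a DLP rule, else None."""
    for pattern, message in COACHING_RULES:
        if pattern.search(prompt):
            return message
    return None

if __name__ == "__main__":
    sample = "summarise this: api_key=sk-test-123456 belongs to the billing service"
    warning = coach_user(sample)
    print(warning or "Prompt passed DLP checks.")
```

Surfacing the reason at the point of use keeps the control educational rather than purely restrictive, which is the coaching element the policy change is meant to add.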