
"Shadow IT has long been a security concern: employees adopt unsanctioned tools to improve their performance, introducing unknown and unmanaged risks."
"The danger in relying on LLMs is that their responses are not guaranteed to be accurate; different models can return different answers to the same question."
"CoChat provides a control layer that examines LLM reasoning, pausing agentic actions that may expose sensitive data or delete important information."
"CoChat enforces a human in the loop, ensuring that even autonomous systems require explicit user approval for potentially dangerous instructions."
Shadow AI emerges as employees adopt AI tools to enhance their performance without IT or security oversight, increasing the risks posed by unknown and unmanaged agentic AI. CoChat, launched in April 2026, aims to provide visibility and governance over these tools by connecting employees to foundational LLMs and preventing the creation of disconnected AI silos. It also introduces a control layer that monitors the instructions LLMs issue to agentic systems and requires human approval for potentially dangerous actions, mitigating the risks of autonomous AI operations.
Read at SecurityWeek