How to avoid the risks of rapidly deploying AI agents
Briefly

Organizations plan substantial increases in AI spending, with many deployments expected to become autonomous. Investments span machine learning, private large language models, AI agents, and agentic capabilities. Making systems agent-ready means connecting language models to enterprise data sources and workflows, including CRM and ERP integrations. Rapid deployment raises concerns about security, technical debt, complexity, and cloud spending. Using public LLMs without controls can cause ethical lapses, biased outputs, data exposure, regulatory breaches, and wasted investment. Common operational failures include deploying unvetted models, exposing sensitive data to third parties, and lacking observability for performance, fairness, and compliance.
Unlike conventional software, AI agents do not have well-defined inputs and deterministic outputs. Their job is to connect language models to enterprise data sources and integrate them into workflows. Many organizations are embedding agents in CRM, ERP, and other employee workflow systems, while some businesses are experimenting with AI-driven customer experiences. The speed of deployment concerns many experts, who fear that haste wastes investment, creates security risks, and leaves mounting technical debt.
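To make the integration point concrete, here is a minimal sketch of what "connecting a language model to enterprise data" typically looks like: the model emits a structured tool request, and a dispatcher routes it to an internal system such as a CRM. Everything here (the tool registry, crm_lookup, the JSON request shape) is a hypothetical illustration, not any specific vendor's agent API.

```python
# Hypothetical sketch: an agent dispatcher that routes a model's tool
# requests to enterprise systems. Names and request format are assumptions.
import json

def crm_lookup(customer_id: str) -> dict:
    """Placeholder for a real CRM query (in practice, a REST call)."""
    return {"customer_id": customer_id, "status": "active", "tier": "gold"}

# Registry of tools the agent is allowed to call.
TOOLS = {"crm_lookup": crm_lookup}

def run_agent_step(model_output: str) -> str:
    """Execute one tool call requested by the model, e.g.
    '{"tool": "crm_lookup", "args": {"customer_id": "C-1001"}}'."""
    request = json.loads(model_output)
    tool = TOOLS.get(request["tool"])
    if tool is None:
        # Refuse anything outside the registry rather than improvising.
        return json.dumps({"error": f"unknown tool {request['tool']!r}"})
    return json.dumps(tool(**request["args"]))

print(run_agent_step('{"tool": "crm_lookup", "args": {"customer_id": "C-1001"}}'))
```

The explicit tool registry is the point: the agent can only touch systems it has been deliberately granted, which is one of the controls rushed deployments tend to skip.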
"Enterprises rushing to deploy AI agents, especially via public LLMs, risk ethical lapses, biased outputs, data exposure, regulatory breaches, and wasted spend with little to no business outcomes or value," says Raj Balasundaram, global VP of AI innovations for customers at Verint. "Common missteps include pushing unvetted models into production, exposing sensitive data to third-party platforms, and lacking observability to track performance, fairness, or compliance."
Read at InfoWorld