AI security blunders have cyber professionals scrambling
Briefly

Generative AI is increasingly integrated into business processes but has simultaneously escalated cybersecurity challenges. A study by Palo Alto Networks reveals an 890% rise in generative AI traffic in 2024, mainly for writing assistance and conversational agents. This surge comes alongside a doubling of data loss prevention incidents in early 2025, causing generative AI to account for 14% of all data security incidents in SaaS traffic. Organizations face difficulties in managing unsanctioned GenAI tools and maintaining visibility over their usage, exposing them to risks from unauthorized data access and compromised AI models.
Organizations are grappling with the unfettered proliferation of GenAI applications within their environments. On average, organizations have about 66 GenAI applications in use.
The widespread use of unsanctioned GenAI tools, coupled with a lack of clear AI policies and the pressure for rapid AI adoption, can expose organizations to new risks.
Researchers said a key problem here lies in a lack of visibility into AI usage, with shadow AI making it hard for security teams to monitor and control how tools are being used across the organization.
Jailbroken or manipulated AI models can respond with malicious links and malware, or be exploited for unintended purposes.
Read at IT Pro