Businesses are finally asking whether their AI is secure

"Nearly two-thirds (64 percent) of all business leaders who participated in the World Economic Forum's (WEF) Global Cybersecurity Outlook 2026 said that they assessed AI tools' security risks before deploying them. The finding represents a steep rise compared with last year's 37 percent figure, and underlines how much of a priority AI security has become for organizations worldwide. Nearly all respondents (94 percent) said that AI will be the most significant driver of cybersecurity change in 2026, and 87 percent believe that the associated vulnerabilities have increased - more than any other type of threat."
"For leaders, the most common fear concerning AI right now is data leaks, the WEF survey noted. Coming in just behind is the advancement of adversarial capabilities, which makes sense given that the report also found that geopolitically motivated attacks were the most common feature of leaders' risk strategies. Sixty-four percent of organizations reported that geopolitical matters played the biggest role in shaping their cyber risk strategies, topping the list for consecutive years."
Sixty-four percent of business leaders now assess AI tools' security risks before deployment, up from 37 percent a year earlier. Ninety-four percent expect AI to be the primary driver of cybersecurity change in 2026, and 87 percent report increasing AI-related vulnerabilities. Common concerns include data leaks and advancing adversarial capabilities. Geopolitically motivated attacks strongly influence risk strategies, with 64 percent of organizations citing geopolitics as the leading factor, especially among larger ones. Real-world incidents such as prompt-injection exploits, problematic AI code assistants, and security fixes for models like Gemini have amplified attention on AI security and risk management.
Read at The Register