
"Cobalt released its State of LLM Security Report 2025, which reveals a widening readiness gap in enterprise security as the rapid adoption of generative AI ( genAI) outpaces defenders' ability to secure it. Thirty-six percent of security leaders and practitioners admit that genAI is moving faster than their teams can manage as organizations continue to embed AI deep into core business operations."
"The report found that 48% of respondents believe a "strategic pause" is needed to recalibrate defenses against genAI-driven threats. In addition, 72% of respondents cite genAI-related attacks as their top IT risk, but 33% are still not conducting regular security assessments, including penetration testing, for their LLM deployments. Half of respondents want more transparency from software suppliers about how they detect and prevent vulnerabilities, signaling a growing trust gap in the AI supply chain."
"Top concerns among all survey respondents include sensitive information disclosure (46%), model poisoning or theft (42%), and training data leakage (37%), all pointing to an urgent need to protect the integrity of data pipelines. Overall, 69% of serious findings across all pentest categories are resolved but this falls to just 21% of the high-severity vulnerabilities found in LLM pentests. This is a concern given that 32% of LLM pentest findings are serious and are the lowest resolution rate across all test types conducted by Cobalt."
Enterprise adoption of generative AI is outpacing security readiness; 36% of security leaders and practitioners say genAI is moving faster than their teams can manage. Forty-eight percent of respondents believe a "strategic pause" is needed to recalibrate defenses. Seventy-two percent cite genAI-related attacks as their top IT risk, yet 33% do not conduct regular security assessments, including penetration testing, for LLM deployments. Half of respondents want greater transparency from software suppliers about vulnerability detection and prevention. Top concerns include sensitive information disclosure (46%), model poisoning or theft (42%), and training data leakage (37%). LLM pentests show low remediation, with only 21% of high-severity LLM findings resolved.
Read at Securitymagazine