
"This isn't hypothetical. In a survey of 450 security leaders, engineers, and developers across the U.S. and Europe, 1 in 5 organizations said they had already suffered a serious cybersecurity incident tied to AI-generated code, and more than two-thirds (69%) had uncovered flaws created by AI. Mistakes made by a machine, rather than by a human, are directly linked to breaches that are already causing real financial, reputational, or operational damage. Yet artificial intelligence isn't going away."
"When asked who should be held responsible for an AI-related breach, there's no clear answer. Just over half (53%) said the security team should take the blame for missing the issues or not implementing specific guidelines to follow. Meanwhile, nearly as many (45%) pointed the finger at the individual who prompted the AI to generate the faulty code. This divide highlights a growing accountability void."
AI-generated code has introduced real security risks and is already linked to breaches. A survey of 450 security leaders, engineers, and developers across the U.S. and Europe found that 1 in 5 organizations had suffered a serious cybersecurity incident tied to AI-generated code, and 69% had uncovered flaws created by AI. Mistakes made by machines are already causing financial, reputational, and operational damage, yet adoption pressure remains strong. Responsibility for AI-related breaches is unclear: 53% blame security teams, 45% blame the individual who prompted the AI, and others point to approvers or external tools. This accountability gap erodes cross-team trust and calls for clearer rules, governance, and shared responsibility.
Read at Fast Company