One of the most pressing issues involves the use of GenAI-powered code assistants, with 84% of security professionals surveyed expressing concerns about potential exposure to unknown or malicious code introduced through these tools.
Liav Caspi, CTO of Legit Security, acknowledged that AI-generated code can be risky, error-prone, or even malicious, stating that threat modeling must account for AI security dangers such as data exposure and biased responses.
Nearly all respondents (98%) agreed that security teams need a clearer understanding of how GenAI is employed in development, with 94% stressing the need for better strategies to govern its use.
Chris Hatter, COO/CISO at Qwiet AI, highlighted that while GenAI significantly enhances productivity in app development, embracing these benefits must be balanced with a keen awareness of associated security risks.