Two-Thirds of Security Leaders Consider Banning AI-Generated Code
Briefly

According to a recent survey, 63% of security leaders are considering banning AI-generated code over concerns about its quality and reliability.
"In general, this is due to insufficient reviews... developers are scrutinising AI-written code less than they would scrutinise their own code," says Tariq Shaukat, CEO of Sonar.
The report indicates that developers feel less accountable for AI-generated code, and therefore less pressure to ensure its quality.
"When asked about buggy AI, a common refrain is 'it is not my code,' meaning they feel less accountable because they didn't write it."
Read at TechRepublic