
"Adding to the reality check, a new report by AI software company CodeRabbit found that code generated by an AI was far more error-prone than the human-written stuff - and by a significant margin. Across the 470 pull requests the company analyzed, AI code produced an average 10.83 issues per request, while human-authored code produced just 6.45. In other words, AI code produced 1.7 times more issues than human code, once again highlighting major weaknesses plaguing generative AI tools."
""The results?" CodeRabbit concluded in its report. "Clear, measurable, and consistent with what many developers have been feeling intuitively: AI accelerates output, but it also amplifies certain categories of mistakes." Worse yet, the company found that AI-generated code produced a higher rate of "critical" and "major" issues, in a "meaningful rise in substantive concerns that demand reviewer attention." AI code was also most likely to contain errors related to logic and correctness."
AI tool adoption among software developers rose from 14 percent to 90 percent within a year. The tools accelerate code generation but often produce unreliable and inaccurate output that introduces mistakes and demands long hours of cleanup. Analysis of 470 pull requests found AI-generated code averaged 10.83 issues per request versus 6.45 for human code, a 1.7× increase. AI code also produced higher rates of critical and major issues, with frequent logic and correctness errors. Code quality and readability suffered most, raising the risk of technical debt, and generated code introduced cybersecurity issues such as improper password handling.
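The report's mention of "improper password handling" refers to a well-known class of flaw. As an illustrative sketch only (the report does not publish its examples, and the function names here are hypothetical), the insecure pattern and a standard remedy using Python's standard library look roughly like this:

```python
import hashlib
import secrets

# Insecure pattern (the kind of flaw attributed to generated code):
# keeping the raw password means anyone who reads the store has the secret.
def store_password_insecure(password: str) -> str:
    return password  # plaintext, never do this

# Safer pattern: salted key-derivation hash via stdlib PBKDF2.
def store_password_hashed(password: str) -> str:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split(":")
    salt = bytes.fromhex(salt_hex)
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # constant-time comparison avoids leaking information via timing
    return secrets.compare_digest(candidate.hex(), digest_hex)
```

The salt ensures identical passwords hash differently, and the iteration count slows brute-force attacks; this is the kind of detail a reviewer would flag as missing in the issues the report describes.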
Read at Futurism