AI coding assistants are creating messy code
Briefly

Multiple studies find that while generative AI improves coding efficiency, it also risks producing flawed code. As developers increasingly rely on AI assistants for coding tasks, software quality and security may suffer.
While AI coding assistants boost productivity, research suggests a downside: more incorrect code generation and weaker software security. Tools like GitHub Copilot, Meta's Llama 3, Google's Gemini Code Assist, and AWS' CodeWhisperer offer support but also introduce risks.
Studies indicate that programmers using AI assistants produce less secure code, and that a significant share of generated code is incorrect or partially incorrect. Adoption of AI coding assistants also correlates with more subsequent code fixes and a rise in code reversion rates.
Even tools from major AI providers, such as Microsoft Copilot, Meta AI, and Meta Code Llama, struggle with coding tasks, as shown by failures on tests like developing a WordPress plugin. The efficiency gains from AI tools may be overshadowed by quality and security concerns.
Read at Axios