Enterprise use of open source AI coding is changing the ROI calculation
Briefly

"Even if the resulting code functions properly, which it often doesn't, it is introducing a wide range of corporate risks, ranging from legal (copyright, trademark, or patent infringements), cybersecurity (backdoors and inadvertently introduced malware) and accuracy (hallucinations, as well as models trained/fine-tuned on inaccurate data). Some of those issues are generated by poorly-worded prompts, and others occur because the model improperly interpreted proper prompts."
""AI slop PRs [pull requests] are becoming increasingly draining and demoralizing for Godot maintainers," he said. "We find ourselves having to second guess every PR from new contributors, multiple times per day." Questions arise about whether the code was written at least in part by a human, and whether the 'author' understands the code they're sending. He asked, "did they test it? Are the test results made up? Is this code wrong because it was written by AI or is it an honest mistake"
Open source adoption has historically balanced benefits against risks, but AI coding tools are now tilting that balance toward disproportionately higher risk. AI-assisted code often contains errors, hallucinations, artifacts of models trained or fine-tuned on inaccurate data, and security vulnerabilities such as backdoors and inadvertently introduced malware. Legal exposure includes potential copyright, trademark, and patent infringement from generated code. Maintainers increasingly must scrutinize pull requests, question authorship and testing, and spend more effort verifying contributions. Both poorly worded prompts and models misinterpreting well-formed prompts contribute to faulty output, undermining developer morale and changing how enterprises calculate the return on investment of AI-assisted coding.
Read at InfoWorld