
""We strive to produce high quality software tools, rather than simply generating more lines of code in less time." And while "LLMs excel at producing code that looks mostly human generated" they pair said these models often have "underlying bugs that can be replicated at scale." "This makes LLM-generated code exhausting to review, especially with smaller, less resourced teams." Or put another way, "well intentioned people" submit code with hallucinations, submissions, exaggeration or misrepresentation."
"So, it is asking contributors to "please refrain from submissions that you haven't thoroughly understood, reviewed, and tested." Baldwin and Hancock want them to disclose when their contributions came courtesy of an LLM and insist that documentation and comments is "human" in origin. "Project leads can determine if submissions aren't reasonably reviewable." OpenUK CEO Amanda Brock said the open source community was only now grasping the impact of AI-generated code on projects and maintainers."
The Electronic Frontier Foundation will accept LLM-generated code contributions while prohibiting non-human-generated comments and documentation. Contributors must thoroughly understand, review, test, and disclose any LLM-assisted submissions. The policy emphasizes producing high-quality software tools over generating more lines of code quickly. LLM output can look plausible but often contains replicable underlying bugs that make review exhausting for smaller, under-resourced teams. Maintainers may need to refactor AI-assisted submissions when contributors do not understand the generated code. Project leads can reject submissions that are not reasonably reviewable. OpenUK warns of widespread low-quality AI contributions overwhelming maintainers.
Read at The Register