AI is creeping into the Linux kernel - and official policy is needed ASAP
Briefly

Open-source and Linux development communities approach AI-generated code with caution, even as major vendors report significant AI contributions to their codebases. Large language models are being used as productivity tools analogous to compilers, with the strongest value on small, well-scoped tasks and routine conversions. Developers have used LLMs to produce complete routines that were then manually reviewed and tested. Current AI usage targets narrow fixes rather than broad components like hardware drivers. Kernel maintainers must now decide on acceptable practices and set policies that balance the practical benefits against the risks of AI-generated patches.
Large language models (LLMs) are just another fancy compiler. Back in the '50s and '60s, everyone was working in assembly, and then C showed up. We didn't stop coding in assembly because C was suddenly perfect; C isn't perfect, but we stopped because it was good enough and it made us more productive. To me, LLMs are a very similar trade-off: they're not perfect yet, but at some point they will be good enough to make us more productive.
This is a great example of what LLMs are doing right now: you give them a small, well-defined task, and they go and do it. And notice that this patch isn't 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this particular hash to use our standard API.
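To make the scale of such a change concrete, here is a minimal, hypothetical sketch of that kind of conversion: a driver-local, open-coded hash replaced with the kernel's standard jhash() helper from <linux/jhash.h>. The function names and the open-coded hash are illustrative assumptions, not the actual patch discussed in the article.

```c
/*
 * Hypothetical before/after sketch, not the actual patch from the article.
 * It shows the shape of the change: a small, well-scoped conversion from a
 * one-off hash to the kernel's standard helper.
 */
#include <linux/jhash.h>
#include <linux/types.h>

/* Before: a driver-local, open-coded hash of a lookup key. */
static u32 example_key_hash_old(const u8 *key, u32 len)
{
	u32 hash = 0;
	u32 i;

	for (i = 0; i < len; i++)
		hash = hash * 31 + key[i];

	return hash;
}

/* After: the same key hashed with the kernel's standard jhash() API. */
static u32 example_key_hash(const u8 *key, u32 len)
{
	return jhash(key, len, 0);
}
```

A change of this size is easy for a maintainer to review and test by hand, which is exactly the kind of small, well-scoped task the quote describes.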
Read at ZDNET