
"L.L.M.s are especially good at writing code, in part because code has more structure than prose, and because you can sometimes verify that code is correct. While the rest of the world was mostly just fooling around with A.I. (or swearing it off), I watched as some of the colleagues I most respect retooled their working lives around it. I got the feeling that if I didn't retool, too, I might fall behind."
"I noticed, in my programming work, that, as I asked L.L.M.s to complete increasingly complex tasks, it became harder to defend the notion that they were blindly stitching words together. They seemed to understand what I was asking them to do; they hoovered up not just the sense but the intricate details of my code. And they did it so quickly."
"Just yesterday, I used an A.I. model at work to help me get unstuck two or three times; on one of these occasions, I had the computer tackle a problem that I found daunting, letting it have a crack while I did something else-lunch, I think, or a meeting-and, when I came back, it had worked the problem out. This kind of experience is empowering, but also unnerving."
Rapid adoption of LLMs in programming workflows reflects their strength at code generation and debugging: code's structure makes outputs easier to verify than prose. As tasks grow more complex, LLM outputs appear to capture not just overall intent but the fine-grained details of a codebase, often resolving difficult problems quickly. Watching respected colleagues restructure their working lives around LLMs signals both practical utility and competitive pressure to adopt them. The tools can deliver real productivity gains, even working through a daunting problem autonomously while the programmer steps away. These capabilities leave users feeling both empowered and uneasy about reliance, accuracy, and what apparent machine understanding implies.
Read at The New Yorker