"If you've worked in software long enough, you've probably lived through the situation where you write a ticket, or explain a feature in a meeting, and then a week later you look at the result and think: this is technically related to what I said, but it is not what I meant at all. Nobody considers that surprising when humans are involved. We shrug, we sigh, we clarify, we fix it."
"The real problem is that we were used to machines being deterministic in a very specific way. You type code into a compiler, it either accepts it or it doesn't. The machine never argues back, never claims it fixed something it didn't fix, never improvises. Now we suddenly have machines that behave much more like people: probabilistic, context hungry and occasionally very confidently wrong."
Large language models behave like probabilistic, context-dependent coworkers rather than deterministic tools. Users often supply vague inputs while expecting precise outputs, then judge the model broken when the result diverges from what they actually meant. Human collaborators also frequently deliver technically related but semantically divergent work and need clarification and iteration, so the same expectations should apply to LLMs. Traditional machines like compilers are deterministic, while modern generative systems can argue, improvise, and be confidently wrong. Effective use means adjusting workflows, clarifying prompts, and treating models as fallible collaborators whose outputs need validation and refinement.
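One lightweight way to put that last point into practice is to wrap each model call in a validate-and-retry loop, feeding the specific complaint back into the next prompt the way you would clarify for a human coworker. The sketch below is a minimal illustration in Python; `ask_with_validation`, `must_be_json`, and the `model` callable are hypothetical names invented for this example, not any particular library's API.

```python
import json
from typing import Callable, Optional

def ask_with_validation(
    model: Callable[[str], str],               # any function that maps a prompt to text
    prompt: str,
    validate: Callable[[str], Optional[str]],  # returns an error message, or None if acceptable
    max_attempts: int = 3,
) -> str:
    """Treat the model like a coworker: check its output, and when it misses
    the mark, restate the request with the specific complaint attached."""
    current_prompt = prompt
    for _ in range(max_attempts):
        output = model(current_prompt)
        problem = validate(output)
        if problem is None:
            return output
        # Clarify rather than retrying blindly, as you would with a human.
        current_prompt = (
            f"{prompt}\n\nYour previous answer:\n{output}\n"
            f"Problem with it: {problem}\nPlease try again."
        )
    raise RuntimeError(f"no acceptable output after {max_attempts} attempts")

# Example validator: insist that the answer is valid JSON.
def must_be_json(text: str) -> Optional[str]:
    try:
        json.loads(text)
        return None
    except ValueError as exc:
        return f"output was not valid JSON ({exc})"
```

Feeding the validator's error message back into the prompt tends to work better than blind retries, for the same reason "this is not what I meant, and here is why" works better with a person than "try again".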