The article critiques large language models (LLMs) for their excessive verbosity, likening their output to that of a typical Reddit commenter. Users express frustration over the lengthy, often meaningless explanations that accompany straightforward queries. Quinten Farmer, co-founder of Portola, suggests this verbosity arises from a lack of genuine knowledge, prompting models to compensate with filler. While many appreciate the efficiency of LLMs for research, the article calls on their creators to acknowledge this shortcoming and make AI outputs more concise.
I think the reason that these models behave this way is that it's essentially the behavior of your typical Reddit commenter, right? What do they do? They say too much to sort of cover up the fact that they don't actually know what they're talking about.
The excess is rooted in ignorance. Answers are padded with needless explanations, obvious caveats, and inane argumentative detours.