Learning from AI's Bullshit
Briefly

The article examines the reliability problems of modern AI systems, particularly large language models (LLMs), which often generate inaccurate outputs. It critiques the term "hallucination," arguing that it misrepresents these failures as mere errors of perception, when they actually reflect a lack of concern for truth. Philosophers liken AI outputs to "bullshit," a term that denotes statements made without regard for truthfulness. This critique invites a deeper understanding of AI behavior and emphasizes the philosophical implications of how we interpret the outputs of such technologies.
The main technology behind LLMs is the "transformer," a complex mathematical structure that represents the statistical relationships between the ordering of words in its training data, as the sketch below illustrates at toy scale.
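The following is a minimal, hypothetical sketch (not from the article, and far simpler than a real transformer): a toy model that chooses its next word purely from word-order statistics in its training text, with no mechanism for caring whether the result is true. The training sentences and word choices here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": predicts the next word from bigram statistics alone.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the moon is made of rock the moon is made of cheese"
).split()

# Count how often each word follows each other word in the training text.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follow_counts[current_word][next_word] += 1

def sample_next(word: str) -> str:
    """Sample a continuation weighted by frequency, with no regard for truth."""
    candidates = follow_counts[word]
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a continuation: statistically plausible, indifferent to facts.
sequence = ["the", "moon", "is", "made", "of"]
sequence.append(sample_next(sequence[-1]))
print(" ".join(sequence))  # may end "... of rock" or "... of cheese"
```

The point of the sketch is that the sampling step optimizes for statistical likelihood, not accuracy, which is why the "bullshit" framing can feel more apt than "hallucination."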
Philosophers argue that AI's outputs are better understood as "bullshit": statements made without concern for their truth, in contrast with lying, which requires an intent to deceive.
Read at Apaonline