Why experts are using the word 'bullshit' to describe AI's flaws
Briefly

The term 'hallucination', used to describe the false outputs of AI language models, would be more accurately called 'bullshit', according to a recent paper by philosophy professors at the University of Glasgow.
The distinction between a 'liar' and a 'bullshitter' lies in the intention behind the false information: a bullshitter does not aim to deceive but simply presents information that sounds true in order to achieve a particular outcome.
Bullshit, as defined by the late moral philosopher Harry Frankfurt, is a greater enemy of truth than lies, because it disregards the authority of truth altogether in favor of presenting information purely for its persuasive impact.
Read at Fast Company