Hallucinations Are A Feature of AI, Humans Are The Bug | HackerNoon
Briefly

"Large language models, like GPT-4, are statistical machines at their core. They don't 'know' facts the way humans do. Instead, they predict the most likely sequence of words based on patterns in the text they've been trained on."
"If we want to use AI better, it's time to stop expecting LLMs to perform roles they were never designed for and start leveraging them for what they're truly good at: aiding human creativity and generating possibilities—not absolute truths."
Read at HackerNoon