
"When I catch up on social media - quite a task these days - floods of examples are presented demonstrating the errors made by artificial intelligence. ChatGPT in particular seems to get the blame so often that one may agree that it is artificial, but it is certainly not intelligent. It makes mistakes in simple mathematical problems. It does not correctly count the letters of words. It confuses the times on a simple clock. It just does not get it right."
"ChatGPT, like most other models, is based on a Large Language Model, an LLM. In the case of ChatGPT, these models are trained on some 26 billion pages of text. That is almost 1.5 million years of newspapers. When these billions of training data are thrown into so-called transformer models, the model breaks down the data into smaller parts, for instance, words."
AI systems, especially large language models such as ChatGPT, often attract blame for errors such as simple arithmetic mistakes, incorrect letter counts, and misread clock times. These models are trained on enormous text corpora (around 26 billion pages in ChatGPT's case) and use transformer architectures that break text into smaller units and recombine them via attention mechanisms. The generative outputs are produced by stitching together learned fragments of the training data. Many model failures resemble mistakes humans commonly make, and that resemblance suggests reconsidering design goals: whether to optimize for raw intelligence or to intentionally produce more humanlike behavior.
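As a rough illustration of the tokenization step described above (a minimal sketch, not the article's code, using a deliberately simplified word-level tokenizer rather than the subword tokenizers production LLMs actually use), the snippet below shows how a sentence becomes a sequence of integer token IDs in which individual letters are no longer directly represented, one plausible reason letter-counting questions go wrong:

```python
# Toy tokenization: the model "sees" token IDs, not individual characters.

def toy_tokenize(text: str) -> list[str]:
    """Split text into word-level tokens (a simplified stand-in for
    the subword tokenizers used by real transformer models)."""
    return text.lower().split()

def toy_vocabulary(tokens: list[str]) -> dict[str, int]:
    """Map each distinct token to an integer ID, as a vocabulary would."""
    return {tok: idx for idx, tok in enumerate(sorted(set(tokens)))}

sentence = "How many r's are in strawberry?"
tokens = toy_tokenize(sentence)
vocab = toy_vocabulary(tokens)
token_ids = [vocab[tok] for tok in tokens]

print(tokens)     # ['how', 'many', "r's", 'are', 'in', 'strawberry?']
print(token_ids)  # [1, 3, 4, 0, 2, 5] -- integer IDs; the letters inside
                  # 'strawberry?' are no longer directly visible to the model
```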
Read at Psychology Today