#ai-hallucination

Artificial intelligence
from Poynter
16 hours ago

AI is everywhere. Editors should be, too. - Poynter

Editors must remain involved to fact-check AI outputs because AI frequently hallucinates, risking accuracy, credibility, and public trust.
Artificial intelligence
from Smashing Magazine
1 week ago

The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence - Smashing Magazine

AI hallucinations collapse trust; measurable, designable trust must be prioritized to make generative AI safe and trustworthy in critical domains.
from Defector
1 week ago

Mark Zuckerberg Demonstrates That His AI Smart Glasses Suck And Don't Work | Defector

"Um ... there we go ... uh-oh," said Meta CEO Mark Zuckerberg on stage as he attempted to answer a video call through a combination of movements between a wristband and a pair of glasses. "Well, I ... let's see what happened there ... that's too bad," he continued, shortly before cutting short the live demo. The video call went unanswered.
Wearables
Artificial intelligence
from Futurism
2 weeks ago

ChatGPT Goes Completely Haywire If You Ask It to Show You a Seahorse Emoji

ChatGPT fabricates and misidentifies a nonexistent seahorse emoji, demonstrating AI tendencies to hallucinate and prioritize pleasing users over factual accuracy.
from Fortune
2 weeks ago

One of the most common reasons that AI products fail? Bad data | Fortune

"What we had noticed was there was an underlying problem with our data," Ahuja said. When her team investigated what had happened, they found that Salesforce had published contradictory "knowledge articles" on its website. "It wasn't actually the agent. It was the agent that helped us identify a problem that always existed," Ahuja said. "We turned it into an auditor agent that actually checked our content across our public site for anomalies. Once we'd cleaned up our underlying data, we pointed it back out, and it's been functional."
Artificial intelligence
from Ars Technica
2 weeks ago

Education report calling for ethical AI use contains over 15 fake sources

Advisory recommendations on AI education contained fabricated or erroneous citations that undermine trust and reveal flawed review processes.
from Psychology Today
2 weeks ago

Why AI Cheats: The Deep Psychology Behind Deep Learning

A few months ago, I asked ChatGPT to recommend books by and about Hermann Joseph Muller, the Nobel Prize-winning geneticist who showed how X-rays can cause mutations. It dutifully gave me three titles. None existed. I asked again. Three more. Still wrong. By the third attempt, I had an epiphany: the system wasn't just mistaken, it was making things up.
Artificial intelligence
from Futurism
2 weeks ago

Elon Musk's Grok AI Spread Ludicrous Misinformation After Charlie Kirk's Shooting, Saying Kirk Survived and Video Was Fake

Popular right-wing influencer Charlie Kirk was killed in a shooting in Utah yesterday, rocking the nation and spurring debate over the role of divisive rhetoric in political violence. As is often the case in breaking news about public massacres, misinformation spread quickly. And fanning the flames this time was Elon Musk's Grok AI chatbot, which is now deeply integrated into X-formerly-Twitter as a fact-checking tool - giving it a position of authority from which it made a series of ludicrously false claims in the wake of the slaying.
Right-wing politics
from ZDNET
2 weeks ago

OpenAI's fix for hallucinations is simpler than you think

"Language models are optimized to be good test-takers, and guessing when uncertain improves test performance," the authors write in the paper. The current evaluation paradigm essentially uses a simple, binary grading metric, rewarding them for accurate responses and penalizing them for inaccurate ones. According to this method, admitting ignorance is judged as an inaccurate response, which pushes models toward generating what OpenAI describes as "overconfident, plausible falsehoods" -- hallucination, in other words.
Artificial intelligence
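The incentive the paper describes can be sketched in a few lines: under a binary metric, a correct answer scores 1 and everything else (including "I don't know") scores 0, so a model with any nonzero chance of being right maximizes its expected score by guessing. A hypothetical penalty for wrong answers, by contrast, makes low-confidence guessing a losing strategy. The function and numbers below are illustrative, not OpenAI's actual scoring rule.

```python
def expected_score(p_correct: float, guess: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score for one question under a simple grading scheme.

    Binary grading (wrong_penalty=0): +1 for a correct answer, 0 otherwise.
    Abstaining ("I don't know") always scores 0, so it can never beat a guess.
    A nonzero wrong_penalty rewards calibrated abstention instead.
    """
    if not guess:
        return 0.0  # the model admits ignorance
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# With no penalty, guessing beats abstaining even at 10% confidence:
assert expected_score(0.1, guess=True) > expected_score(0.1, guess=False)

# With a penalty for wrong answers, the same guess has negative expected value,
# so abstaining becomes the better strategy:
assert expected_score(0.1, guess=True, wrong_penalty=0.5) < 0.0
```

This is why the proposed fix is "simpler than you think": changing the grading, not the model, removes the reward for overconfident, plausible falsehoods.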
#google-ai-overviews
from Futurism
1 month ago
Artificial intelligence

Google's AI Is Committing a Unique Evil: Giving Gamers Tips That Are Actually False

from Futurism
1 month ago
Artificial intelligence

Local Restaurant Exhausted as Google AI Keeps Telling Customers About Daily Specials That Don't Exist

Artificial intelligence
from Hackernoon
2 months ago

AI Hallucinations Are Costing Businesses Millions: What BAML Is Doing to Prevent Them | HackerNoon

AI hallucinations in generative models pose serious risks for businesses, including compliance violations and reputation damage.
Artificial intelligence
from TechCrunch
4 months ago

Anthropic CEO claims AI models hallucinate less than humans | TechCrunch

AI models may hallucinate less frequently than humans, according to Anthropic's CEO Dario Amodei.
Amodei remains optimistic about achieving AGI by 2026, despite challenges posed by hallucinations.
#ai
from Hackernoon
1 year ago
Gadgets

The TechBeat: Evaluating TnT-LLM Text Classification: Human Agreement and Scalable LLM Metrics (4/22/2025) | HackerNoon

Text embeddings are pivotal for AI understanding, converting words into machine-readable numbers.
from Techzine Global
5 months ago
Artificial intelligence

New OpenAI models hallucinate more often than their predecessors

OpenAI's newer reasoning models, o3 and o4-mini, hallucinate more frequently than older models, posing challenges to AI accuracy.