Anthropic CEO claims AI models hallucinate less than humans | TechCrunch
Briefly

Dario Amodei, CEO of Anthropic, said during a press event that AI models may hallucinate less frequently than humans do, albeit in more surprising ways. He remains optimistic about progress toward AGI, predicting it could arrive as soon as 2026, and argues there are no hard barriers to AI development, countering industry leaders who cite hallucination as a major obstacle. Techniques such as giving AI models access to web search are showing promise in reducing inaccuracies, which Amodei sees as supporting Anthropic's path to AGI.
During a press briefing, Anthropic CEO Dario Amodei claimed that AI models may hallucinate less frequently than humans, though when they do err, the mistakes come in more surprising ways.
Amodei expressed optimism that AGI could arrive by 2026, saying 'the water is rising everywhere' to describe the broad, rapid progress toward human-level intelligence.
He pushed back on the idea that hard obstacles stand in the way of AI progress, asserting 'there's no such thing' as firm blocks preventing advances toward general intelligence.
Amodei acknowledged challenges such as hallucination but argued that recent techniques, including giving models access to web search, are helping reduce the rate of such inaccuracies in AI outputs.
Read at TechCrunch