Why we trust AI when it makes things up

"Luis Lastras stated, 'Hallucinations are intentional.' This concept reveals that developers use these outputs to learn how models function, as they do not yet filter their responses effectively."
"Lastras demonstrated how AI can provide extraneous information, such as the distance from Earth when asked about Mars' moons, showcasing the need for validation in AI outputs."
"A study by Elon University found that nearly 70% of AI users believe the AI is always correct, indicating a significant trust in AI outputs despite potential inaccuracies."
At the All Things AI Conference, insights emerged about AI hallucinations, described as intentional outputs that help developers understand model behavior. Luis Lastras of IBM explained that small models are used to validate outputs and minimize hallucinations. These hallucinations can include irrelevant information, as in an example where an AI, asked about Mars' moons, volunteered the planet's distance from Earth. Despite such inaccuracies, users often trust AI outputs, reflecting a tendency to assume the AI is correct.
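The validation step described above can be sketched in a few lines: a second, smaller check runs over the primary model's answer before it is trusted. This is a minimal sketch under assumptions, not IBM's actual method; the generate() and validate() helpers are hypothetical stand-ins for real model calls.

```python
# Sketch: validate a model's answer before trusting it.
# generate() and validate() are hypothetical stand-ins, not a real API.

def generate(prompt: str) -> str:
    # Stand-in for the primary model; real code would call an LLM API here.
    return "Mars has two moons, Phobos and Deimos. Mars is 225M km from Earth."

def validate(question: str, answer: str) -> bool:
    # Stand-in for a small validator model. A naive keyword check
    # illustrates the idea: flag answers that drift off topic.
    text = answer.lower()
    return "moons" in text and "km from earth" not in text

question = "How many moons does Mars have?"
answer = generate(question)
if validate(question, answer):
    print(answer)
else:
    # Flag or regenerate instead of trusting the first output.
    print("Answer failed validation; it contains extraneous detail.")
```

In practice the validator would be a smaller model scoring the answer for relevance or factuality; the point is that the first output is checked, not assumed correct.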
Read at MarTech