Explainable AI in Chat Interfaces
Nielsen Norman Group · 1 day ago
As AI chat interfaces become more popular, users increasingly rely on AI outputs to make decisions. Without explanations, AI systems are black boxes. Explaining how an AI system reached a particular output helps users form accurate mental models, prevents the spread of misinformation, and helps users decide whether to trust that output. However, the explanations currently offered by large language models (LLMs) are often inaccurate, hidden, or confusing.