
"AI chatbots such as ChatGPT-5 and Claude are powerful, convincing, and increasingly used for professional, educational, and personal guidance. Yet they can also generate fake and incorrect information, statements that sound plausible but are entirely false. People often trust AI because it is authoritative, articulate, and seemingly objective. But confident-sounding information can still be completely wrong. The result is an illusion of credibility."
"Gravel and colleagues, for instance, evaluated 59 references provided by ChatGPT and found that almost two-thirds were fabricated, despite appearing legitimate with plausible authors, journal titles, and DOI numbers (Gravel et al., 2023). In medicine, this can be especially dangerous: inaccurate references, poor descriptions of conditions, and fabricated case details have already misled students and professionals. These errors are not intentional deception but the byproduct of how AI generates text, drawing patterns from training data without verifying facts or consulting real databases."
Information now spreads faster than ever through social media, marketing, and AI tools, yet not all of it is accurate. AI chatbots can produce persuasive, authoritative-sounding responses while generating fabricated or incorrect information. Many AI-generated citations and references are invented, sometimes appearing plausible with fabricated authors, journals, and DOIs. In medicine, law, and pharmacology, these fabrications have already misled students, professionals, and courts. The underlying cause is pattern-based text generation without fact verification or database consultation: the output can appear trustworthy while containing serious inaccuracies that carry real-world risks.
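One practical defense against fabricated citations is to check whether a cited DOI actually resolves to a real record. The article doesn't prescribe a method, but as a minimal sketch, CrossRef's public REST API can be queried for any DOI; a fabricated DOI will typically return a 404. The DOI in the usage example is hypothetical.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves to a real record in CrossRef.

    Fabricated DOIs, like those in many AI-generated references,
    typically return a 404 from the CrossRef works endpoint.
    """
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    doi = "10.1234/example.doi"  # hypothetical DOI for illustration
    print("resolves" if doi_exists(doi) else "likely fabricated or mistyped")
```

A 200 response only confirms the DOI exists; it does not confirm that the authors, title, or claims attributed to it match, so the returned metadata should still be compared against the citation.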
Read at Psychology Today