AI chatbots are already biasing research - we must establish guidelines for their use now
Briefly

"And the extraction rate of US-based AI start-up company Anthropic climbed even higher over the same period: from 6,000 pages to 60,000. Even tech giant Google, long considered an asset to publishers because of the referral traffic it generated, tripled its ratio from 6 pages to 18 with the launch of its AI Overviews feature. The current information ecosystem is dominated by 'answer engines' - AI chatbots that synthesize and deliver information directly, with users trusting the answers now more than ever."
"Although these tools can answer questions faster and often more accurately than search engines can, this efficiency has a price. In addition to the decimation of web traffic to publishers, there is a more insidious cost. Not AI's 'hallucinations' - fabrications that can be corrected - but the biases and vulnerabilities in the real information that these systems present to users."
"Consider what happens when researchers ask an AI tool to recommend peer reviewers in their field. One study focused on physics found that AI systems over-represented scholars with names that the scientists classified as belonging to white people and under-represented those with names they classified as Asian ( D. Barolo et al. Preprint at arXiv https://doi.org/p46k; 2025). Algorithms can amplify distinct prejudices depending on the field, the context and the wording of the query."
AI systems are consuming large volumes of online content while directing very few users back to the original publishers. Extraction ratios increased dramatically in 2025: OpenAI's rose from roughly 250 to 1,500 pages crawled per referral, Anthropic's from 6,000 to 60,000, and Google's from 6 to 18 after the launch of AI Overviews. AI chatbots now dominate information retrieval and enjoy strong user trust. That efficiency comes at a price: it cuts web traffic to publishers and, more insidiously, exposes users to the biases and vulnerabilities embedded in the real information these systems present. One study of peer-reviewer recommendations found that AI systems over-represented scholars with names classified as white and under-represented those with names classified as Asian, showing how algorithms can amplify prejudice.
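
The extraction ratios cited above reduce to simple arithmetic: pages an AI crawler fetches from a publisher divided by the visits it refers back. The minimal sketch below illustrates that calculation; the function name and the raw crawl and referral counts are hypothetical, chosen only so the resulting ratios match the figures quoted in the piece.

```python
def pages_per_referral(pages_crawled: int, referrals: int) -> float:
    """Extraction ratio: pages an AI crawler fetched per visit it referred back."""
    if referrals == 0:
        # Content consumed with no traffic returned at all.
        return float("inf")
    return pages_crawled / referrals


# Hypothetical raw crawl/referral totals, chosen only so that the resulting
# ratios match the figures quoted above; the real underlying counts are not
# given in the article.
observations = {
    "OpenAI, start of 2025":       (250_000, 1_000),     # ~250 pages per referral
    "OpenAI, end of 2025":         (1_500_000, 1_000),   # ~1,500
    "Anthropic, start of 2025":    (6_000_000, 1_000),   # ~6,000
    "Anthropic, end of 2025":      (60_000_000, 1_000),  # ~60,000
    "Google, before AI Overviews": (6_000, 1_000),       # ~6
    "Google, after AI Overviews":  (18_000, 1_000),      # ~18
}

for label, (pages, refs) in observations.items():
    print(f"{label}: {pages_per_referral(pages, refs):,.0f} pages per referral")
```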
Read at Nature