"Artificial-intelligence tools can summarize and help to write papers as well as generate policy text. But they can also omit critical perspectives, produce falsehoods and, as your News article notes, hallucinate references (see Nature 645, 569-570; 2025). With speed being prioritized over time to reflect and assess, these flaws are rippling across science. Universities are embedding AI detection tools with high error rates into assessments ( D. Weber-Wulff et al. Int. J. Educ. Integr. 19, 26; 2023)."
"Researchers should resist accelerated roll-out of AI tools. Just as robust results require replication and peer review, responsible use of AI requires pause and scrutiny. This should involve creating audit trails to indicate when outputs are machine generated; setting boundaries on tasks for which humans should always make a final decision, such as clinical diagnoses; and prompting users to ask tools for information that might be uncertain or missing from an answer."
Artificial-intelligence tools can assist with summarizing, writing papers and generating policy text, but they also risk omitting critical perspectives, producing falsehoods and hallucinating references. Prioritizing speed over reflection and assessment is allowing these flaws to spread through scientific practice. Examples include flawed AI-detection tools used in university assessments and triage chatbots that may overprescribe compared with physicians. Responsible deployment requires resisting accelerated roll-out, establishing audit trails to label machine-generated outputs, setting clear boundaries for decisions that humans should always make (such as clinical diagnoses), and prompting users to probe answers for uncertainty or missing information. Embedding these habits fosters a culture of slow AI that values judgement, transparency and collective scrutiny.
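The audit-trail recommendation can translate into a simple habit at the code level. Below is a minimal sketch, in Python, of how a research group might wrap calls to a text generator so that every output is logged as machine generated and flagged for human sign-off; the function name `audited_generation`, the log format and the record fields are illustrative assumptions, not part of the correspondence or of any particular tool's API.

```python
# Minimal sketch of an audit-trail wrapper for machine-generated text.
# Hypothetical illustration only: the record fields, the `generate` callable
# and the log-file layout are assumptions, not a published standard.
import hashlib
import json
from datetime import datetime, timezone
from typing import Callable


def audited_generation(generate: Callable[[str], str], prompt: str,
                       model_name: str, log_path: str = "ai_audit_log.jsonl") -> str:
    """Call an AI text generator and append a provenance record to a log file."""
    output = generate(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "machine_generated": True,                      # explicit label for downstream readers
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_final_decision_required": True,          # boundary: a person signs off before use
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return output
```

A group adopting something like this could require that any text traced to the log be checked by a named person before it enters a manuscript, assessment or clinical workflow, which is one way to make the "human final decision" boundary auditable rather than aspirational.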