
"But that's not stopping some of the world's most popular artificial intelligence models from sending users looking for records such as these, according to a new International Committee of the Red Cross (ICRC) statement. OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot and other models are befuddling students, researchers and archivists by generating incorrect or fabricated archival references, according to the ICRC, which runs some of the world's used research archives."
"AI models not only point some users to false sources but also cause problems for researchers and librarians, who end up wasting their time looking for requested nonexistent records, says Library of Virginia chief of researcher engagement Sarah Falls. Her library estimates that 15 percent of emailed reference questions it receives are now ChatGPT-generated, and some include hallucinated citations for both published works and unique primary source documents. For our staff, it is much harder to prove that a unique record doesn't exist, she says."
Artificial intelligence models including OpenAI's ChatGPT, Google's Gemini and Microsoft's Copilot generate incorrect or fabricated references to archival records, journals and research papers. The International Committee of the Red Cross reports that these models point users to nonexistent sources such as fabricated journals and archives. Librarians and researchers face increased workloads confirming or disproving AI-generated citations; the Library of Virginia estimates that 15 percent of its emailed reference questions are now ChatGPT-generated. Some AI outputs include hallucinated citations for published works and unique primary-source documents, making it especially difficult for staff to prove that a requested unique record does not exist. The ICRC recommends consulting online catalogs and established references instead.
Read at www.scientificamerican.com