In a recent filing in a legal case over Minnesota's political deepfake ban, Jeff Hancock admitted to including two fabricated citations and an additional citation error. The renowned researcher acknowledged that the mistakes stemmed from careless use of ChatGPT, specifically the GPT-4o model. Though known for his expertise in misinformation, Hancock relied on the AI tool to help draft the document, which led to the unintentional insertion of nonexistent sources and illustrates the risks of trusting AI with scholarly accuracy.
... I use tools like GPT-4o to enhance the quality and efficiency of my workflow, including search, analysis, formatting, and drafting. But this time, the quality suffered. I fed bullet points into GPT-4o, asking the tool to draft a short paragraph based on what I'd written, including '[cite]' as a reminder to add the appropriate academic citations. Unfortunately, the AI replaced '[cite]' with fabricated citations to non-existent journal articles.
I overlooked the two hallucinated citations and did not remember to include the correct ones. I am sorry for my oversight in both instances and for the additional work it has taken to rectify these errors. This incident serves as a crucial reminder of the importance of vigilance when using AI tools in scholarly work, as the technology can inadvertently propagate misinformation.