"Although we've been told that A.I. is poised to "revolutionize" work, at the moment it seems to be doing something else entirely: spreading chaos. All throughout American offices, A.I. platforms like ChatGPT are delivering answers that sound right even when they aren't, transcription tools that turn meetings into works of fiction, and documents that look polished on the surface but are riddled with factual errors and missing nuance."
"If you've read anything about A.I., you know that it sometimes "hallucinates" facts that simply aren't true, yet asserts them with so much confidence that its lies don't get caught. (Witness the Chicago Sun-Times' summer reading list that included nonexistent books by famous authors or the multiple lawyers who have been sanctioned for filing documents based on A.I.-fabricated legal citations.) Clearly, there's more work to do on this emerging technology, but in the meantime, it's ravaging some workplaces."
AI platforms often produce answers that sound plausible but are factually incorrect, a phenomenon known as hallucination. Transcription tools can turn meetings into inaccurate or outright fictionalized records, and generated documents may look polished while containing factual errors and lacking nuance. These hallucinations have caused tangible harm, including fabricated book listings, lawyers sanctioned for filing AI-fabricated legal citations, and employees publishing false claims about company collaborations. The resulting errors sow chaos in offices, erode trust, waste time on verification, and create legal and ethical exposure. Organizations need stronger verification processes, human oversight, and clear policies before relying on or publishing AI-generated content.
Read at Slate Magazine