Institutions are drowning in AI-generated text and they can't keep up
Briefly

"In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. Near as the editors could tell, many submitters pasted the magazine's detailed story guidelines into an AI and sent in the results. And they weren't alone. Other fiction magazines have also reported a high number of AI-generated submissions. This is only one example of a ubiquitous trend."
"This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism, and hiring, it's the same story."
"Like Clarkesworld's initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI."
Generative AI produces synthetic content at volumes that overwhelm systems which once relied on the effort of writing to limit submissions. Fiction magazines, newspapers, academic journals, lawmakers, courts, conferences, social media, music, open source projects, education, investigative journalism, and hiring all face surges of AI-generated inputs. Some institutions close their submission channels; others deploy AI defensively, as reviewers, moderators, triage systems, and automated applicant screeners. These responses create adversarial arms races between offensive and defensive uses of AI. The resulting volume-driven strain can cause systemic harms: clogged courts, degraded editorial processes, and distorted public comment and review systems.
Read at Fast Company