AI slop pushes data governance towards zero-trust models | Computer Weekly
Briefly

"Unverified and low quality data generated by artificial intelligence (AI) models - often known as AI slop - is forcing more security leaders to look to zero-trust models for data governance, with 50% of organisations likely to start adopting such policies by 2028, according to Gartner's seers. Currently, large language models (LLMs) are typically trained on data scraped - with or without permission - from the world wide web and other sources including books, research papers, and code repositories."
"A Gartner study of CIOs and tech execs published in October 2025 found 84% of respondents expected to increase their generative AI (GenAI) funding in 2026 and as this trend accelerates, so will the volume of AI-generated data, meaning that future LLMs are trained more and more with outputs from current ones. This, said the analyst house, will heighten the risk of models collapsing entirely under the accumulated weight of their own hallucinations and inaccurate realities."
Unverified and low-quality data generated by AI models, often called AI slop, is pushing security leaders toward zero-trust data governance, with about 50% of organisations expected to adopt such policies by 2028. Large language models are typically trained on data scraped from the web, books, research papers and code repositories, a corpus that already contains a growing share of AI-generated content. As generative AI investment rises, so will the volume of AI-generated data, meaning future models will be trained ever more heavily on the outputs of current ones. That feedback loop raises the risk of model collapse under the accumulated weight of hallucinations and inaccuracies. Zero-trust authentication and verification of data, together with tighter regulation around 'AI-free' data, will therefore be needed to keep AI outputs trustworthy.
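The article describes zero-trust data governance as policy, not implementation, but a default-deny provenance check conveys the core idea at ingestion time. The sketch below is purely illustrative and not from the source: CURATOR_KEY, sign_record and admit_for_training are hypothetical names, and the HMAC stands in for a real signature scheme such as C2PA content credentials or Sigstore.

```python
import hashlib
import hmac

# Hypothetical shared secret held by a trusted data curator. A real
# deployment would use asymmetric signatures; HMAC is a stand-in here.
CURATOR_KEY = b"example-curator-key"

def sign_record(payload: bytes) -> dict:
    """Build the provenance manifest a trusted curator attaches to data."""
    digest = hashlib.sha256(payload).hexdigest()
    tag = hmac.new(CURATOR_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def admit_for_training(payload: bytes, manifest: dict) -> bool:
    """Zero-trust gate: reject data unless its provenance verifies.

    Default-deny - missing, tampered or unsigned records never reach
    the training corpus.
    """
    digest = hashlib.sha256(payload).hexdigest()
    if manifest.get("sha256") != digest:
        return False  # content changed since it was signed
    expected = hmac.new(CURATOR_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest.get("signature", ""))

# Usage: the verified record is admitted; the substituted one is dropped.
record = b"A human-written paragraph with known provenance."
manifest = sign_record(record)
print(admit_for_training(record, manifest))      # True
print(admit_for_training(b"AI slop", manifest))  # False
```

The design choice that matters is default-deny: anything that cannot prove its provenance is treated as untrusted, inverting today's scrape-first training pipelines.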
Read at ComputerWeekly.com