Why we need to fear the risk of AI model collapse
Briefly

There is little doubt that the potential of generative AI is enormous. It has, rightly, been presented as a capability that could herald a new, tech-led era full of benefits for humanity. It can speed up mundane tasks at work, aid medical breakthroughs and analyse patterns in ways that Alan Turing and the Bletchley Park codebreakers could only have dreamt of.
But all of that potential rests on data, and data is where the risk lies. Model collapse happens when generative AI becomes unstable, wholly unreliable or simply ceases to function. It occurs when generative models are trained on AI-generated content or synthetic data instead of human-generated data. Over successive generations, models lose information about the less common but still important parts of the data, producing ever less diverse outputs.
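The feedback loop described above can be sketched in a toy simulation. This is not from the article: it assumes a deliberately simple stand-in in which each "model" fits a Gaussian to the previous model's output after discarding the rarest tail samples (standing in for a model underweighting uncommon data), then generates the next round of synthetic training data. The spread of the data shrinks generation by generation.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def train_generation(samples, keep=0.90):
    """Hypothetical stand-in for one training round: drop the rarest
    tail samples (the model underweighting uncommon data), fit a
    Gaussian to what remains, and emit fresh synthetic data from it."""
    s = sorted(samples)
    cut = int(len(s) * (1 - keep) / 2)
    core = s[cut:len(s) - cut]  # tails discarded before fitting
    mu, sigma = statistics.fmean(core), statistics.stdev(core)
    return [random.gauss(mu, sigma) for _ in range(len(samples))]

# Generation 0: "human-generated" data
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]
spread = [statistics.stdev(data)]

# Each new model trains only on the previous model's output
for _ in range(5):
    data = train_generation(data)
    spread.append(statistics.stdev(data))

# The standard deviation falls every generation: diversity is lost
print([round(s, 2) for s in spread])
```

Even this crude sketch shows the qualitative effect: once synthetic data replaces human data, the rare "tails" vanish first, and each generation inherits a narrower view of the world than the last.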
Read at www.standard.co.uk