It Turns Out That Google's AI Is Being Trained by an Army of Poorly Treated Human Grunts

"many of these "AI raters," tasked with instructing the model and correcting its many mistakes, are facing poor working conditions and are often exposed to extremely disturbing content. It's yet again a worrying reminder that despite tech companies attempting to paint their AI models as miraculous, autonomous fountains of knowledge and cognition that could eventually replace human workers, the current reality is the exact opposite: AI relies on the labor of huge numbers of hidden humans to give it the illusion of intelligence."
""AI isn't magic; it's a pyramid scheme of human labor," Distributed AI Research Institute's Adio Dinika told the Guardian. "These raters are the middle rung: invisible, essential and expendable." In the realm of large language models like Google's Gemini, raters are being tasked with moderating the output of AI, not only ensuring the accuracy of its responses, but also that it doesn't expose users to inappropriate content."
Thousands of contract workers known as AI raters perform data-labeling, moderation, and correction tasks for models such as Gemini. These raters often face poor working conditions, exposure to disturbing content, and a lack of informed consent, and they are pressured to verify complex domain information without relevant expertise. Corporations rely on this concealed human labor to train, validate, and sanitize AI outputs, producing an appearance of autonomy that masks substantial human involvement. Raters check accuracy, safety, and appropriateness across fields including medicine, architecture, and self-driving systems, yet they remain largely invisible, essential, and expendable.
Read at Futurism