Scale AI and labor platform Outlier have been sued for alleged negligence over the mental health of contractors who label training data for AI models. The lawsuit claims these workers were not adequately protected from harmful content they encountered while performing tasks essential to supervised learning, such as labeling and scoring input prompts to improve AI safety and efficiency. Despite the size of the AI training data market and its reliance on an international contract workforce, Scale AI disputes the lawsuit's allegations.
The lawsuit accuses Scale AI and Outlier of failing to protect their data-labeling workforce from harmful content, shedding light on mental health issues in AI training.
Scale AI disputes the lawsuit's claims but faces serious allegations regarding the treatment of contractors responsible for labeling data that trains AI systems.