Europe politics
Instagram actively helping spread of self-harm among teenagers, study finds (www.theguardian.com)
A month-long study by Danish researchers at the organisation Digitalt Ansvar found Meta's moderation of self-harm content on Instagram to be severely inadequate: not a single piece of the self-harm material shared within the private test network the researchers set up was removed. Despite Meta's claims of advanced removal processes, the findings point to a troubling lack of enforcement.
Digitalt Ansvar demonstrated that even a simple AI tool created for the study could identify 38% of self-harm images and 88% of the most severe types, suggesting that Meta has the technological capability to address the issue but fails to implement it effectively in practice.
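The study does not disclose how its detection tool was built, so the following is a hypothetical sketch only, meant to illustrate how little engineering a baseline image classifier of this kind requires with off-the-shelf libraries. The dataset layout (data/train/harmful, data/train/benign), the choice of a pretrained ResNet-18, and all hyperparameters are assumptions for illustration, not details from Digitalt Ansvar's work.

# Hypothetical illustration; the study's actual tool is not described.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: data/train/harmful/*.jpg, data/train/benign/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained ResNet-18 with a new two-class head (harmful vs benign).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs are enough for a coarse baseline
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()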
The EU's Digital Services Act requires very large online platforms, a category that includes Instagram, to identify and mitigate systemic risks to users' wellbeing. The study's results call into question whether Meta is complying with that obligation in its moderation of harmful content.
Meta spokespersons say that content encouraging self-injury is against the company's policies and that more than 12 million such pieces were removed from Instagram in the first half of 2024. The study's findings, however, suggest that those claims do not match how moderation works on the platform in practice.