Working in groups can help Republicans and Democrats agree on controversial content moderation online
Briefly

"The findings are published in the Journal of Social Computing. In an experiment involving over 600 participants with diverse political views, Centola and Guilbeault found that content moderators who classified controversial social media content in groups reached near-perfect agreement on what should remain online. Those who worked alone showed only 38% agreement by the end of the experiment."
""Morally controversial content, such as offensive and hateful images on social media, is especially challenging to categorize, given widespread disagreement in how people interpret and evaluate this content," Centola says. "Yet, recent large-scale analyses of classification patterns over social media suggest that separate populations, such as Democrats and Republicans, can reach surprising levels of agreement in the categorization of inflammatory content like fake news and hate speech, despite considerable differences in their moral reasoning and worldview. We wanted to know why.""
An experiment with over 600 politically diverse participants tested how moderators classify controversial social media content. Moderators working in groups reached near-perfect agreement about which content should remain online, while those working alone achieved only 38% agreement by the experiment's end. The researchers attribute the group consensus to structural synchronization, a process in which social interaction filters out individual variation and aligns the classifications of separate networks. Team-based moderation reduced partisan disagreement and produced consistent decisions about offensive and hateful images, suggesting that collaborative moderation structures can improve the reliability of content classification across ideological differences.
Read at Phys