Is AI Moderation a Useful Tool or Another Failed Social Media Fix?
Briefly

Social media has the potential to connect people, but it has increasingly become a source of disinformation and mental health harm through hate speech and cyberbullying. Researchers have developed an optimized Support Vector Machine (SVM) model that distinguishes toxic from non-toxic content with 87% accuracy. The model was trained on a diverse dataset of comments in both English and Bangla, illustrating AI's dual role as a contributor to the toxicity problem and as a tool to mitigate it through improved detection.
AI has emerged as both a creator of toxic content and a potential solution to the escalating issues of disinformation and mental health deterioration on social media.
The optimized SVM developed in the study achieved 87% accuracy in classifying toxic and non-toxic content, highlighting the potential for AI to effectively combat online toxicity.
Researchers from Bangladesh and Australia trained their model using over 9,000 comments, successfully addressing the complexities of identifying toxic discourse in diverse languages.
This new algorithm demonstrates how advances in AI can be harnessed to manage the challenges of online spaces while promoting healthier communication.
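The article does not detail the researchers' exact pipeline, but toxic-comment classification with an SVM typically follows a standard recipe: convert each comment into a TF-IDF word-weight vector, then fit a linear SVM to separate the two classes. The sketch below illustrates that setup using scikit-learn; the toy comments and labels are invented for demonstration and are not from the study's 9,000-comment dataset.

```python
# Illustrative sketch only (not the paper's pipeline): a linear SVM
# toxic-comment classifier over TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented dataset; a real system would train on thousands of
# labeled comments (the study used over 9,000 in English and Bangla).
comments = [
    "you are an idiot and everyone hates you",      # toxic
    "go away, nobody wants you here",               # toxic
    "shut up, you worthless troll",                 # toxic
    "thanks for sharing, this was really helpful",  # non-toxic
    "great point, I learned something new today",   # non-toxic
    "congratulations on the new release!",          # non-toxic
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = toxic, 0 = non-toxic

# TF-IDF turns each comment into a sparse weighted word vector;
# LinearSVC then learns a separating hyperplane between the classes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(comments, labels)

print(model.predict(["you are a worthless idiot", "thanks, great point!"]))
```

Because TF-IDF splits on Unicode word boundaries, the same pipeline works for Bangla text without modification, although in practice multilingual systems tune tokenization and features per language.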
Read at ZME Science