#platform-accountability

Artificial intelligence
from Independent
5 days ago

Catherine Prasifka: Internet has evolved from an empathy machine to an echo chamber where weird men use AI to undress women

Grok is flooding social feeds with AI-generated, non-consensual sexualised images of women and children, marking a dangerous point of no return for online spaces.
from The Atlantic
5 days ago

The Problem Is So Much Bigger Than Grok

In this episode of Galaxy Brain, Charlie Warzel confronts the growing crisis around AI-generated sexual abuse and the culture of impunity enabling it. He examines how Elon Musk's chatbot Grok is being used to create and circulate nonconsensual sexualized images, often targeting women. Warzel lays out why this moment represents a red line for the internet: it is a test of whether society will tolerate tools that silence women through humiliation and intimidation under the guise of free speech.
Artificial intelligence
#ai-image-generation
from www.theguardian.com
1 week ago
UK politics

Liz Kendall's response to X 'nudification' is good but not enough to solve the problem | Nana Nwachukwu

AI image-generation tools enable rapid mass creation of nonconsensual intimate images, facilitating sexual harassment and allowing platforms to profit unless the practice is criminalised and tightly regulated.
from www.theguardian.com
1 week ago
UK news

The Guardian view on Ofcom versus Grok: chatbots cannot be allowed to undress children | Editorial

AI image generation tools are producing sexualised and illegal images of women and children, requiring urgent regulatory action and stronger platform safeguards.
from PinkNews
1 week ago

Grok is the 'real threat' to women, not trans people, cisgender women argue

For years, we've watched politicians express unfounded concern about trans people in bathrooms, changing rooms, and sports, claiming to protect women's safety. Yet when a billionaire with enormous political influence creates technology that is actively being used to violate thousands of women and children right now, the response has been empty statements and promises to 'look into it'.
UK politics
#digital-services-act
Digital life
from Fast Company
1 month ago

Parents say online blackmail of kids is rising, and AI is making a bad problem worse

One in five parents supported a child who experienced online blackmail, often involving social media, encrypted messaging, and AI-generated deepfakes.
Right-wing politics
from The Nation
3 months ago

Right-Wing Moguls Dominate Social Media. One Legal Fix Can Help Stem the Tide.

Consolidation of major social platforms under right-wing owners concentrates influence and raises questions about platform accountability amid potential Section 230 reforms.
Social media marketing
from www.mercurynews.com
3 months ago

Opinion: If hate-fueled algorithms cause real-world harm, tech firms should pay

Engagement-focused social media algorithms prioritize anger and outrage to maximize user attention, fueling division, dehumanization and real-world harm.
Media industry
from Staticmade
4 months ago

Turn Off the Internet

Big tech platforms use attention-maximizing algorithms that prioritize engagement and rage, actively shaping political polarization while avoiding publisher responsibility.
Miscellaneous
from Irish Independent
3 months ago

Race to the Áras: Connolly says it is not up to Starmer to decide Hamas' role in Palestinian state

Malicious social media smears about presidential candidate Jim Gavin have caused distress and platforms have been slow or unresponsive in removing them.
Artificial intelligence
from TechCrunch
4 months ago

FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others

The FTC is investigating seven tech companies over safety, monetization, and parental awareness concerns regarding AI chatbot companions for minors following harmful outcomes.
Marketing tech
from Exchangewire
4 months ago

Brand Safety: When Ad Dollars Fear Headlines, Not Harm

Advertisers must redefine brand safety to prioritize platform-level ethics and AI harm prevention over mere content adjacency checks.