Google's new AI model harms victim support
Briefly

A recent update to Google's Gemini language model has made developer-configured safety settings ineffective, causing problems for applications that handle sensitive topics such as sexual violence. Developer Jack Darcy reports that the change breaks his platform, which helps trauma victims document their experiences for legal purposes. Although his configuration explicitly allows sensitive content, the model now blocks such material automatically, with consequences for mental health discussions and support services and ultimately undermining applications built to help vulnerable people.
The updated Gemini model overrides developers' content settings, breaking apps for trauma victims and undermining tools for mental health support and legal reporting.
Despite explicit configuration, the API refuses to process sensitive content related to sexual violence, severely impacting applications meant to assist vulnerable individuals.
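For context, developers typically request permissive handling of sensitive material by setting per-category safety thresholds in the Gemini API. The following is a minimal sketch, assuming the public google-generativeai Python SDK and using a placeholder model name, API key, and prompt; per the report, responses may still come back blocked even with these settings:

```python
# Sketch: requesting the least restrictive safety thresholds via the
# google-generativeai SDK. All identifiers below are standard SDK names;
# the model name, API key, and prompt are illustrative placeholders.
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",  # example model name
    safety_settings={
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

response = model.generate_content(
    "Help structure a survivor's account of an assault for a legal report."  # illustrative prompt
)

try:
    print(response.text)
except ValueError:
    # The SDK raises ValueError when no usable candidate was returned;
    # prompt_feedback shows why the content was blocked despite BLOCK_NONE.
    print("Blocked despite explicit settings:", response.prompt_feedback)
```

The complaint described in the article is that calls like this one are now blocked regardless of the thresholds the developer sets.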
Read at Techzine Global