Google Gemini update breaks content filters

"We've been building a platform for sexual assault survivors, rape victims, and so on to be able to use AI to outpour their experiences, and have it turn it into structured reports for police, and other legal matters, and well as offering a way for victims to simply externalize what happened."
"Google just cut it all off. They just pushed a model update that's cut off its willingness to talk about any of this kind of work despite it having an explicit settings panel to enable this and a warning system to allow it."
"While content filtering is appropriate for many AI-powered applications, software related to healthcare, the law, and news reporting, among other things, may need to describe difficult subjects."
"Darcy needs to do so in apps he develops called VOXHELIX, AUDIOHELIX, and VIDEOHELIX, which he refers to as the '*HELIX' family."
A recent update to Google's Gemini model has broken the safety settings that developers rely on to build applications dealing with sensitive subjects such as sexual assault. The change has hit applications designed to help survivors turn their experiences into structured reports. Software developer Jack Darcy reports that the model now refuses this material even when its explicit settings panel is configured to permit it, cutting off conversations and support that victims depend on. The fallout raises the question of how to balance AI safety filters against the need for open discussion of difficult subjects in specialized applications.
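For context on what that settings panel controls: below is a minimal sketch, assuming the google-generativeai Python SDK, of how a developer might explicitly relax Gemini's content filters in code. The model name, the categories chosen, and the prompt are illustrative, not taken from the article.

# Minimal sketch, assuming the google-generativeai Python SDK.
# Model name and prompt are illustrative, not from the article.
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")

# Explicitly loosen the filters for categories a trauma-reporting tool
# must be able to discuss, the programmatic analogue of the "settings
# panel" Darcy describes.
model = genai.GenerativeModel(
    "gemini-1.5-pro",  # illustrative; the article names no model version
    safety_settings={
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

response = model.generate_content(
    "Turn the following survivor statement into a structured incident report: ..."
)

# Per Darcy's report, responses may now come back blocked despite the
# relaxed thresholds above, so check the feedback before using the text.
if response.prompt_feedback.block_reason:
    print("Blocked:", response.prompt_feedback.block_reason)
else:
    print(response.text)

Darcy's complaint is that, since the update, requests like this reportedly come back blocked even with the thresholds loosened, which is why the final check on prompt_feedback matters.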
Read at The Register