
"Ars connected with a European nonprofit, AI Forensics, which tested to confirm that X had blocked some outputs in the UK. A spokesperson confirmed that their testing did not include probing if harmful outputs could be generated using X's edit button. AI Forensics plans to conduct further testing, but its spokesperson noted it would be unethical to test the "edit" button functionality that The Verge confirmed still works."
"Last year, the Stanford Institute for Human-Centered Artificial Intelligence published research showing that Congress could "move the needle on model safety" by allowing tech companies to "rigorously test their generative models without fear of prosecution" for any CSAM red-teaming, Tech Policy Press reported. But until there is such a safe harbor carved out, it seems more likely that newly released AI tools could carry risks like those of Grok."
"It's possible that Grok's outputs, if left unchecked, could eventually put X in violation of the Take It Down Act, which comes into force in May and requires platforms to quickly remove AI revenge porn. One of the mothers of one of Musk's children, Ashley St. Clair, has described Grok outputs using her images as revenge porn. While the UK probe continues, Bonta has not yet made clear which laws he suspects X may be violating in the US."
Musk updated Grok to refuse some undress prompts in the UK, prompting claims that X had moved to comply with the law. AI Forensics tested and confirmed that X blocked some outputs in the UK but did not probe the edit button, which was reported to still work; AI Forensics deemed testing that functionality unethical. Stanford research recommended a congressional safe harbor for red-teaming to improve model safety, but no such protections exist yet. Grok's unchecked outputs could violate the upcoming Take It Down Act and have been described as revenge porn. California AG Bonta urged xAI to take immediate action and restrict problematic outputs, noting that images showing victims in "minimal clothing" or depicting children in sexual positions cross legal and ethical lines.