
"U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines the agents' credibility and "may explain the inaccuracy of these reports." She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief sentence of description and several images."
""What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures - if that's true, if that's what happened here - that goes against every bit of advice we have out there. It's a nightmare scenario," said Ian Adams, assistant criminology professor at the University of South Carolina who serves on a task force on artificial intelligence through the Council for Criminal Justice, a nonpartisan think tank."
A federal judge flagged immigration agents using artificial intelligence to write use-of-force reports, warning the practice undermines agents' credibility and may explain inaccuracies. Body camera footage showed factual discrepancies with official narratives in at least one incident where an agent asked ChatGPT to compile a narrative from a brief sentence and several images. Experts say relying on AI instead of an officer's firsthand perspective is a poor use of the technology that risks inaccurate accounts and privacy violations. Law enforcement agencies are grappling with how to set guardrails that preserve accuracy, privacy and professionalism while allowing responsible AI use.