
"U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines the agents' credibility and "may explain the inaccuracy of these reports." She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief sentence of description and several images."
""What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures - if that's true, if that's what happened here - that goes against every bit of advice we have out there. It's a nightmare scenario," said Ian Adams, assistant criminology professor at the University of South Carolina who serves on a task force on artificial intelligence through the Council for Criminal Justice, a nonpartisan think tank."
Immigration agents used ChatGPT to generate use-of-force reports, producing narratives that may contain factual inaccuracies and that undermine the agents' credibility. In at least one instance, an agent asked ChatGPT to compile a report after providing a brief sentence and several images, and the resulting narrative conflicted with body camera footage. Generating reports with AI rather than from an officer's direct experience risks both inaccuracy and privacy breaches. Law enforcement agencies face the challenge of establishing guardrails that permit AI use while maintaining accuracy, privacy, and professionalism. Such practices can further erode public confidence in policing and immigration enforcement.
#ai-assisted-reporting #chatgpt #use-of-force-accuracy #law-enforcement-credibility #privacy-concerns
Read at Fast Company