AI Is Mangling Police Radio Chatter, Posting It Online as Ridiculous Misinformation
Briefly

"Law enforcement has embraced artificial intelligence tech to make the lives of officers a little easier. Yet the same tech is already turning into a considerable headache both for its own operations and members of the communities where they work. From kids sending their parents AI-manipulated pictures of them welcoming homeless men into their houses that trigger 911 calls to cops arresting the wrong perpetrators based on the suspicions of dubious AI tools, the tech isn't exactly fostering peace and order."
"Now, police in Oregon are warning that AI apps like CrimeRadar are generating misinformation based on hallucinated police radio chatter, as Central Oregon Daily News reports. CrimeRadar is designed to listen to police frequencies and turn incidents into AI-written blog posts - a disastrous idea that's unsurprisingly turning into a major headache for law enforcement. The AI is woefully misinterpreting what officers are saying on the radio, often reaching alarming - and entirely unfounded - conclusions."
""The officer was at a Shop with a Cop [event] up in Redmond," Bend police communications manager Sheila Miller told the Daily News, referring to a yearly holiday tradition involving deputies and volunteers going toy shopping with young kids. "It doesn't understand what Shop a Cop means. So they say 'shot with a cop,' and now they're suggesting that an officer has been shot in the line of duty in our community.""
Law enforcement agencies are adopting AI tools intended to streamline officers' work, but those tools are producing dangerous misinformation. Some apps manipulate audio and imagery or misread police radio chatter, generating fabricated incident reports and alarming conclusions. Misheard phrases, such as the name of a community event, can be twisted into claims of an officer being shot, provoking fear among residents and officers' families. Automated alerts published without human review have previously bungled facts and exposed sensitive information. The lack of accountability and verification for AI-generated output is creating operational headaches for police and undermining public trust and safety.
Read at Futurism