Who polices the police AI? Perplexity's public safety deal alarms experts - here's why
Briefly

"As anyone who routinely uses LLM-based tools like Perplexity and ChatGPT knows, these tools are fallible, to say the least: they're prone to hallucination, inaccuracy, regurgitation of cultural biases that have seeped into their training data, and as a general rule, they're designed to optimize for engagement rather than human well-being. Protocols around how to use them safely are very much a work in progress."
"In its announcement, the company said the program is intended to help officers make more informed decisions in real time, and to automate routine tasks like generating descriptions of crime scene photos, analyzing news stories and body camera transcripts, and turning collections of investigators' notes into polished, structured reports. Seems innocuous enough. But to Katie Kinsey, chief of staff and AI policy counsel at the Policing Project, that's precisely what makes them a red flag."
Perplexity has launched Perplexity for Public Service Organizations to bring its AI to public safety professionals, offering one free year of Enterprise Pro for up to 200 seats and discounts for larger agencies. The program aims to help officers make better-informed decisions in real time and to automate routine tasks such as generating descriptions of crime scene photos, analyzing news stories and body-camera transcripts, and turning investigators' notes into polished, structured reports. But LLM-based tools remain fallible: they are prone to hallucination, inaccuracy, and regurgitating cultural biases absorbed from their training data, and they are generally designed to optimize for engagement rather than human well-being. Protocols for using them safely are still a work in progress, raising concerns about errors in sensitive law enforcement contexts.
Read at ZDNET