Predictive policing has prejudice built in | Letters
Briefly

The article critiques the use of data-driven systems for crime prediction, arguing that they disproportionately affect racialised and low-income communities. Amnesty International and Statewatch highlight the lack of evidence that these technologies are effective. Because the systems are trained on biased data, they create a feedback loop that repeatedly targets the same marginalised areas. Critics argue these tools are not truly predictive but instead reinforce existing patterns of discrimination. The analogy to dystopian literature underscores growing concern about the implications of AI in policing.
The collection and automation of data have repeatedly led to the targeting of racialised and low-income communities, and must come to an end.
These systems are neither revelatory nor objective. They merely subject already marginalised communities to compounded discrimination.
Successive governments have invested in data-driven systems, yet evaluations have found no compelling evidence that they reduce crime.
As we are finding out with AI tools, these programs have built-in limitations and errors.
Read at www.theguardian.com