The article details how New York City's Administration for Children's Services (ACS) has deployed an algorithmic risk assessment tool that categorizes families as "high risk." The tool draws on 279 variables to flag families for scrutiny, often without sufficient evidence or oversight. Critics point to the algorithm's opacity and its disproportionate impact on Black families, who are already investigated at a rate seven times higher than white families, and raise concerns about the ethical implications and potential discrimination inherent in the tool's deployment.
Drawing on a grab-bag of factors such as neighborhood and the mother's age, the tool can place families under intensified scrutiny without proper justification or oversight.
The absence of transparency, accountability, and due process protections suggests that ACS has learned little from the failures of similar tools in child welfare systems elsewhere.
The algorithm operates in near-total secrecy, and the harms of this opaque "AI theater" are not theoretical. Because Black families are already investigated at a far higher rate, the algorithm likely automates and amplifies that existing discrimination.