Is Bias in AI Quantifiable? | HackerNoon
Briefly

It's unsettling, to say the least. But here's where things get tricky: can we really quantify that bias? Is there some magic metric that definitively tells us how biased an AI is?
Bias doesn't look the same across situations. Take a facial recognition system: the data it's fed might disproportionately represent white men, resulting in a system that works great for them but fails for everyone else.
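To make "quantify" concrete, here's a minimal sketch of the most obvious candidate metric: per-group accuracy and the gap between groups. The outcomes and group labels below are made-up placeholders, not real benchmark data.

```python
import numpy as np

# Hypothetical recognition outcomes: 1 = correctly identified, 0 = missed
outcomes = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
# Hypothetical demographic group for each test subject
groups = np.array(["A"] * 5 + ["B"] * 5)

# Per-group accuracy: the simplest way to expose a disparity like this one
acc_a = outcomes[groups == "A"].mean()  # 0.80
acc_b = outcomes[groups == "B"].mean()  # 0.40
print(f"Group A accuracy: {acc_a:.2f}")
print(f"Group B accuracy: {acc_b:.2f}")
print(f"Accuracy gap: {acc_a - acc_b:.2f}")  # one candidate "bias metric"
```

Useful as far as it goes, but notice how much a single gap number leaves out: which errors occur, for whom, and at what cost.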
This bias isn't just about skin color or gender. Consider crime-prediction algorithms trained on historically biased police data: the AI learns that marginalized communities are crime hotspots.
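A toy simulation, with numbers invented purely for illustration, shows how that skew gets baked into the labels before any model is even trained:

```python
# Two neighborhoods with identical underlying offense rates (by construction),
# but one is patrolled twice as heavily, so it generates twice the arrest records.
true_offense_rate = {"north": 0.05, "south": 0.05}  # equal, by assumption
patrol_intensity = {"north": 1.0, "south": 2.0}     # "south" is over-policed

# Arrests -- the labels a crime-prediction model trains on -- track policing,
# not just crime.
arrest_rate = {
    hood: true_offense_rate[hood] * patrol_intensity[hood]
    for hood in true_offense_rate
}
print(arrest_rate)  # {'north': 0.05, 'south': 0.1}
```

Any model fit to those arrest records will rate "south" as twice the hotspot, even though the underlying offense rates are identical.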
Bias runs deep in the way we label, collect, and interpret data. A single bias metric? That's not going to cut it. Turns out, bias in AI ain't a monolith; it's more like a hydra.
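Here's a quick look at that hydra, again on invented data: two standard fairness metrics, demographic parity difference and equal opportunity difference, evaluated on the exact same predictions, can tell opposite stories.

```python
import numpy as np

# Hypothetical labels and predictions for two groups of five people each
y_true = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0])  # who actually qualifies
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1, 1, 0])  # what the model decides
groups = np.array(["A"] * 5 + ["B"] * 5)

def positive_rate(g):
    """Share of group g receiving a positive prediction."""
    return y_pred[groups == g].mean()

def true_positive_rate(g):
    """Share of qualified members of group g the model approves."""
    mask = (groups == g) & (y_true == 1)
    return y_pred[mask].mean()

# Demographic parity: both groups approved at the same rate -- looks fair
print(positive_rate("A") - positive_rate("B"))            # 0.0
# Equal opportunity: every qualified A approved, no qualified B approved
print(true_positive_rate("A") - true_positive_rate("B"))  # 1.0
```

Optimize the first metric and you can still flunk the second, which is exactly why no single number can settle the question.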