How AI-driven hiring tools are quietly reinforcing the biases they promised to fix - Silicon Canals
Briefly
"Every machine learning system is a mirror of its inputs. When you train a hiring algorithm on a decade of successful hires at a company that historically promoted white men into leadership, the system learns to favour white men. This should surprise no one. And yet, vendor after vendor continues to market these tools as "bias-free" or "objective.""
"Amazon discovered this the hard way in 2018 when its internal recruiting tool was found to systematically downgrade resumes containing the word "women's," as in "women's chess club captain." The system had learned from 10 years of hiring patterns in which men dominated technical roles. Amazon scrapped the tool, but the underlying logic persists across the industry."
"A Bloomberg investigation published in 2024 found that large language models used in resume screening consistently ranked candidates with names perceived as white and male higher than equally qualified candidates with names perceived as Black or female. The models weren't told to discriminate. They absorbed it from the world's text."
AI recruitment systems were marketed as a way to eliminate unconscious bias by having algorithms evaluate candidates objectively. Research instead shows these tools replicating, and in some cases intensifying, existing biases. The core problem is training data: algorithms learn patterns from historical hiring decisions, so a system trained on decades of data favoring certain demographics perpetuates those preferences. Amazon's recruiting tool, scrapped in 2018, systematically downgraded resumes mentioning women's organizations because men had historically dominated the company's technical roles. A 2024 Bloomberg investigation found that large language models used for resume screening consistently ranked candidates with white- and male-perceived names above equally qualified candidates with Black- or female-perceived names. The illusion of objectivity compounds the harm: decision-makers relax oversight, assuming the algorithm is neutral, while it quietly encodes historical discrimination into an automated process.