A recent study reveals that the race- and gender-based biases that affect human reviews of resumes also persist when large language models (LLMs) are used. When resumes carried names with high racial and gender distinctiveness, the models favored those associated with white and male candidates, mirroring previously documented human biases.
The University of Washington researchers tested three Massive Text Embedding models on over three million resume and job description pairings. They found that even when the resume content was held constant and only the names differed, biases still emerged in the relevance scores the models assigned to resumes carrying racially and gender-distinctive names.
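The setup can be pictured as embedding-based matching: a model embeds a job description and a resume, and the cosine similarity between the two vectors serves as the relevance score. The sketch below is a minimal, hypothetical illustration of that probe, not the study's actual pipeline; the embedding model, the sample text, and the names are all assumptions chosen only to show how swapping the name while holding the resume constant can reveal score gaps.

```python
# Hypothetical sketch of an embedding-based resume relevance probe.
# Model choice, resume text, and names are illustrative assumptions,
# not the artifacts used in the University of Washington study.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in embedder

job_description = "Seeking a data analyst with SQL and Python experience."
resume_body = "Five years of experience building SQL pipelines and Python dashboards."

def relevance(name: str) -> float:
    """Embed the job description and a name-prefixed resume; return cosine similarity."""
    job_emb = model.encode(job_description, convert_to_tensor=True)
    resume_emb = model.encode(f"{name}\n{resume_body}", convert_to_tensor=True)
    return util.cos_sim(job_emb, resume_emb).item()

# Identical resume content, only the name differs; a consistent score gap
# across many such pairs would indicate name-based bias in the embedder.
for name in ["Greg Miller", "Lakisha Washington"]:
    print(name, round(relevance(name), 4))
```

In practice, a study of this kind aggregates such comparisons over many resumes, job descriptions, and name sets before drawing conclusions; a single pair like the one above proves nothing on its own.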
These findings underscore the importance of understanding how AI systems can inherit and propagate biases present in human decision-making processes. Such biases can result in systemic disadvantages for marginalized groups in job application contexts.
As AI increasingly plays a role in recruitment, it is essential to address these biases to ensure fair evaluation practices. The researchers call for further scrutiny and adjustment of these models to minimize discriminatory outcomes.