AI's growing role in hiring is raising serious concerns about bias in candidate evaluation. AI-generated résumés have become widespread, and LinkedIn has reported a rise in application volume. A study that evaluated several advanced language models found that while some achieved gender parity, they exhibited racial bias, and none produced fair outcomes when gender and race were assessed together. Impact ratios, a standard measure of disparate impact, fell well below the thresholds typically considered impartial, alarming workplace-dynamics experts about the potential harm AI screening tools could do to job seekers.
"The jobs market is chilly enough at the moment, so inflicting too much inhuman AI on job seekers seems like a cruel blow," says Stefan Stern.
"The models' impact ratios fell as low as 0.809 for race and 0.773 for intersectional groups. These figures are at least 20% below the threshold typically considered impartial."
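The impact ratios quoted above compare each group's selection rate against the most-selected group's rate; under the commonly used "four-fifths" rule of thumb, a ratio below 0.80 is often treated as evidence of adverse impact. A minimal sketch of that calculation, using hypothetical group names and selection rates rather than figures from the study:

```python
def impact_ratio(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    Under the "four-fifths" rule of thumb, a ratio below 0.80 is often
    treated as evidence of disparate (adverse) impact.
    """
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Hypothetical selection rates: 30% of group_a candidates advanced,
# but only 24% of group_b candidates did.
rates = {"group_a": 0.30, "group_b": 0.24}
print(impact_ratio(rates))  # group_b's ratio is 0.24 / 0.30, i.e. 0.8 exactly at the threshold
```

A ratio like the 0.773 reported for intersectional groups would, on this measure, fall clearly below the 0.80 line.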