Workers challenge 'hidden' AI hiring tools in class action with major regulatory stakes.
Briefly

"AI algorithms then rank a candidate's "suitability" on a numerical scale of 0 to 5, based on "conclusions, inferences, and assumptions" about their culture fit, projected future career trajectory, and other factors. This method is intended to create a profile of the candidate's "behavior, attitudes, intelligence, aptitudes, and other characteristics," according to the lawsuit."
"However, these reports are "unreviewable" and "largely invisible" to candidates, who have no opportunity to dispute their contents before they are passed on to hiring managers, the plaintiffs argue. "Lower-ranked candidates are often discarded before a human being ever looks at their application." This method of report creation violates longstanding FCRA requirements, and there is no stipulated exemption for AI use, according to the suit."
AI systems assign each candidate a numerical suitability score from 0 to 5 based on conclusions, inferences, and assumptions about culture fit, projected career trajectory, and other factors. The systems produce profiles describing behavior, attitudes, intelligence, aptitudes, and other characteristics. Those AI-generated reports are unreviewable and largely invisible to candidates, preventing any opportunity to dispute their contents before reports reach hiring managers. Lower-ranked candidates are frequently discarded without human review. The practice is alleged to violate longstanding FCRA requirements, and no exemption for AI use is specified under those rules.
Read at Computerworld