Revealed: bias found in AI system used to detect UK benefits fraud
Briefly

An internal assessment revealed the UK's AI system for assessing welfare claims shows bias against certain groups based on age, disability, marital status, and nationality.
The DWP had previously claimed the AI system posed no immediate concerns of discrimination, asserting that a human ultimately decides on welfare payments.
Campaigners criticized the government for its 'hurt first, fix later' approach, urging more transparency about which groups may be wrongly suspected by the algorithm.
Caroline Selman stated that the DWP failed to assess whether its automated processes disproportionately target marginalized groups, highlighting the absence of a fairness analysis.
Read at www.theguardian.com