How to Reduce Majority Bias in AI Models | HackerNoon
Briefly

The article analyzes the inductive biases of in-processing fair supervised learning algorithms that pursue demographic parity (DP). It introduces a distributionally robust optimization (DRO) approach to mitigate the inductive bias toward the majority sensitive-attribute group. The authors call for further research into analogous biases in pre-processing and post-processing fair learning methods, a theoretical comparison of different dependence measures, and a sharper understanding of the trade-off between accuracy and fairness violation in DP-based fair learning, flagging these as directions for future work.
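For context, demographic parity asks that the positive-prediction rate be equal across sensitive-attribute groups. The snippet below is a minimal Python sketch (not taken from the article) that measures the DP gap on synthetic data; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two
    sensitive-attribute groups (binary attribute assumed)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

# Toy example: a majority group (s=0) and a minority group (s=1)
rng = np.random.default_rng(0)
s = (rng.random(1000) < 0.2).astype(int)                      # ~20% minority
y_hat = (rng.random(1000) < np.where(s == 0, 0.6, 0.4)).astype(int)
print(f"DP gap: {demographic_parity_gap(y_hat, s):.3f}")
```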
In this work, we attempted to demonstrate the inductive biases of in-processing fair learning algorithms aiming to achieve demographic parity (DP).
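As a rough illustration of the in-processing and DRO ideas summarized above (a sketch under stated assumptions, not the authors' actual method), the code below trains a logistic model on the worst-off sensitive group's loss while penalizing the squared gap in mean predicted scores between groups; all names and hyperparameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_dro(X, y, s, lam=1.0, lr=0.1, steps=500):
    """Illustrative in-processing objective: worst-group logistic loss
    (a DRO-style surrogate) plus a demographic-parity penalty on the
    gap in mean predicted scores between sensitive groups."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Per-group average logistic losses and their gradients
        losses, grads = [], []
        for g in (0, 1):
            m = s == g
            losses.append(-np.mean(y[m] * np.log(p[m] + 1e-12)
                                   + (1 - y[m]) * np.log(1 - p[m] + 1e-12)))
            grads.append(X[m].T @ (p[m] - y[m]) / m.sum())
        worst = int(np.argmax(losses))           # descend on the worst-off group
        # DP penalty: lam * gap^2, where gap is the mean-score difference
        gap = p[s == 0].mean() - p[s == 1].mean()
        dpen = p * (1 - p)                       # derivative of sigmoid
        grad_gap = (X[s == 0].T @ dpen[s == 0] / (s == 0).sum()
                    - X[s == 1].T @ dpen[s == 1] / (s == 1).sum())
        w -= lr * (grads[worst] + 2 * lam * gap * grad_gap)
    return w
```

Increasing `lam` trades accuracy for a smaller DP gap, while the worst-group term keeps the majority group from dominating the loss, which is the flavor of trade-off the article discusses.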
Read at HackerNoon