Achieving Fair AI Without Sacrificing Accuracy | HackerNoon
Briefly

The article discusses the application of Sensitive Attribute-based Distributionally Robust Optimization (SA-DRO) within demographic parity (DP)-based fair learning algorithms. Experiments highlight SA-DRO's effectiveness in reducing bias toward the majority sensitive attribute without a significant loss of prediction accuracy. Keeping the fairness regularization penalty coefficient at λ = 0.9 proved critical in balancing fair outcomes with predictor reliability. Overall, SA-DRO advances the field of fair supervised learning, particularly in heterogeneous federated contexts where sensitive attributes influence model predictions.
In our experiments, we applied the SA-DRO algorithm to the DDP-based KDE fair learning algorithm proposed by [11] and to the RFI algorithm proposed by [13]. We set the fairness regularization penalty coefficient to λ = 0.9. The DRO regularization coefficient ϵ can take values in the range [0, 1]; in this table, we set ϵ = 0.9 for the SA-DRO case.
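The exact objective used by the authors is not reproduced in the article, but a minimal sketch of how an SA-DRO-style objective might combine the two coefficients is shown below: ϵ mixes the average group loss with the worst-case sensitive-attribute group loss, and λ weights a DDP fairness penalty. The function names, the hard-threshold DDP estimate, and the mixture form of the DRO term are illustrative assumptions, not the authors' implementation (the KDE-based method of [11] uses a smoothed, differentiable DDP estimate instead).

```python
import numpy as np

def group_losses(y_prob, y_true, s):
    """Average binary cross-entropy within each sensitive-attribute group."""
    tiny = 1e-12
    bce = -(y_true * np.log(y_prob + tiny) + (1 - y_true) * np.log(1 - y_prob + tiny))
    return np.array([bce[s == g].mean() for g in np.unique(s)])

def ddp_penalty(y_prob, s, threshold=0.5):
    """Difference of Demographic Parity: gap in positive-prediction rates across groups."""
    y_hat = (y_prob >= threshold).astype(float)
    rates = [y_hat[s == g].mean() for g in np.unique(s)]
    return max(rates) - min(rates)

def sa_dro_objective(y_prob, y_true, s, lam=0.9, eps_dro=0.9):
    """Illustrative SA-DRO-style objective: a mixture of the average and the
    worst sensitive-attribute group loss (mixing weight eps_dro), plus a DDP
    penalty scaled by lam."""
    g_losses = group_losses(y_prob, y_true, s)
    robust_loss = (1 - eps_dro) * g_losses.mean() + eps_dro * g_losses.max()
    return robust_loss + lam * ddp_penalty(y_prob, s)
```

Under these assumptions, increasing ϵ shifts the training signal toward the worst-off sensitive-attribute group, which is one plausible mechanism for counteracting the bias toward the majority group discussed below.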
We observed that the proposed SA-DRO reduces the tendency of the fair learning algorithm toward the majority sensitive attribute, and the resulting negative prediction rates conditioned on the sensitive attribute outcomes moved closer to the midpoint between the majority and minority conditional accuracies.
The SA-DRO-based algorithms still achieve a low DDP value while the accuracy drop remains below 1%. Moreover, Figure 4 visualizes how applying the SA-DRO algorithm shifts the predictions.
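To make the quantity behind these results concrete, the sketch below computes the negative prediction rate conditioned on each sensitive-attribute group and the midpoint between the group rates. The variable names, the 0.5 threshold, and the synthetic scores are assumptions for illustration only, not the paper's data or evaluation code.

```python
import numpy as np

def negative_prediction_rates(y_prob, s, threshold=0.5):
    """P(Yhat = 0 | S = g): negative-prediction rate conditioned on each sensitive group."""
    y_hat = (y_prob >= threshold).astype(int)
    return {int(g): float((y_hat[s == g] == 0).mean()) for g in np.unique(s)}

# Toy illustration with synthetic scores (not the paper's data):
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)                        # sensitive attribute: 0 = minority, 1 = majority
y_prob = np.clip(rng.normal(0.5 + 0.1 * s, 0.2), 0, 1)   # scores mildly skewed toward the majority group
rates = negative_prediction_rates(y_prob, s)
midpoint = (max(rates.values()) + min(rates.values())) / 2
print("conditional negative prediction rates:", rates, "midpoint:", round(midpoint, 3))
```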
The fairness regularization penalty coefficient, maintained at λ = 0.9, plays a crucial role in balancing accuracy and fairness outcomes across sensitive attributes.