This article explores the intersection of fair supervised learning and differential privacy (DP), proposing a distributionally robust optimization framework to strengthen the satisfaction of fairness criteria. The authors investigate the inductive biases of models trained under DP-based fair learning and present numerical results demonstrating the effectiveness of their approach, particularly in heterogeneous federated learning environments. The research includes theoretical proofs and empirical evaluations, with a focus on applications such as image classification on datasets like CelebA, and highlights key differences in classifier performance across fairness methodologies.
The study develops a distributionally robust optimization approach to fair supervised learning that improves the satisfaction of fairness criteria while remaining robust to variations in the data distribution.
By analyzing the inductive biases of differentially private fair learning, the authors show how models can achieve fair outcomes without sacrificing overall predictive performance, especially in federated settings.
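At a high level, this kind of approach can be pictured as a minimax problem: the learner minimizes a loss that an adversary re-weights toward the worst-off demographic group, while gradient updates are privatized. The sketch below (PyTorch) combines a group-DRO-style loss with a clipped, noise-perturbed gradient step in the spirit of DP-SGD. The function names, hyperparameters, and the simplified batch-level clipping are illustrative assumptions for exposition, not the paper's exact algorithm or privacy accounting.

```python
# Illustrative sketch only: a group-DRO-style fair objective with a noisy,
# clipped update. Not the authors' exact method or a rigorous DP guarantee.
import torch
import torch.nn as nn
import torch.nn.functional as F

def group_dro_loss(logits, labels, groups, num_groups, group_weights, eta=0.1):
    """Distributionally robust loss over sensitive groups: the adversarial
    group distribution up-weights groups with higher current loss."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(num_groups):
        mask = groups == g
        group_losses.append(per_example[mask].mean() if mask.any()
                            else per_example.new_zeros(()))
    group_losses = torch.stack(group_losses)
    # Exponentiated-gradient ascent on the group weights (the "adversary").
    group_weights = group_weights * torch.exp(eta * group_losses.detach())
    group_weights = group_weights / group_weights.sum()
    return (group_weights * group_losses).sum(), group_weights

def noisy_clipped_step(model, loss, optimizer, clip_norm=1.0, noise_mult=1.0):
    """Simplified DP-SGD-flavoured update: clip the batch gradient and add
    Gaussian noise. Rigorous DP-SGD clips per-example gradients before
    averaging; this only sketches the mechanism."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(noise_mult * clip_norm * torch.randn_like(p.grad))
    optimizer.step()

# Illustrative usage on random data (binary task, two sensitive groups).
if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    weights = torch.ones(2) / 2          # initial adversarial group distribution
    for _ in range(100):
        x = torch.randn(64, 10)
        y = torch.randint(0, 2, (64,))
        a = torch.randint(0, 2, (64,))   # sensitive-group labels
        loss, weights = group_dro_loss(model(x), y, a, 2, weights)
        noisy_clipped_step(model, loss, opt)
```

The design choice worth noting is that the robustness and the privacy mechanisms act at different points: the adversarial re-weighting shapes the objective toward worst-group performance, while the clipping and noise perturb the update itself, which is what introduces the inductive biases the study analyzes.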