
How Robust is Your Fairness? Evaluating and Sustaining Fairness Under Unseen Distribution Shifts

Overview
Date 2023 Apr 14
PMID 37056515
Abstract

Increasing concerns have been raised about deep learning fairness in recent years. Existing fairness-aware machine learning methods focus mainly on the fairness of in-distribution data. In real-world applications, however, distribution shift between the training and test data is common. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which achieves robust fairness generalizable to unseen domains with unknown distributional shifts. Specifically, CUMA enforces the model to have similar generalization ability on the majority and minority groups by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts without sacrificing either overall accuracy or in-distribution fairness.
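To make the curvature-matching idea concrete, here is a minimal sketch, not the authors' implementation: it uses a toy logistic-regression loss, estimates each sample's loss curvature with a finite-difference proxy (the change in the per-sample gradient along a random direction), and penalizes the gap between the two groups' mean curvatures. All function names, the mean-gap penalty (a stand-in for matching the full curvature distributions), and the toy data are assumptions for illustration.

```python
import numpy as np

def per_sample_loss_grad(w, X, y):
    """Gradient of the logistic loss for each sample (rows of X)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X  # shape (n, d)

def curvature_proxy(w, X, y, eps=1e-3, rng=None):
    """Finite-difference curvature estimate per sample along a random direction."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = rng.standard_normal(w.shape)
    d /= np.linalg.norm(d)
    g0 = per_sample_loss_grad(w, X, y)
    g1 = per_sample_loss_grad(w + eps * d, X, y)
    return np.linalg.norm(g1 - g0, axis=1) / eps  # shape (n,)

def curvature_matching_penalty(w, X, y, group):
    """Gap between the mean curvature of the two groups; CUMA-style training
    would add a term like this to the usual training loss."""
    c = curvature_proxy(w, X, y)
    return abs(c[group == 0].mean() - c[group == 1].mean())

# Hypothetical toy data: two groups with different feature scales.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(0, 3, (50, 3))])
y = (X[:, 0] > 0).astype(float)
group = np.repeat([0, 1], 50)
w = rng.standard_normal(3)

penalty = curvature_matching_penalty(w, X, y, group)
print(penalty)  # non-negative scalar regularizer
```

Minimizing such a penalty alongside the task loss pushes the model toward similarly flat loss regions for both groups, which is the mechanism the abstract describes for sustaining fairness under distribution shift.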

Citing Articles

Adapting Machine Learning Diagnostic Models to New Populations Using a Small Amount of Data: Results from Clinical Neuroscience.

Wang R, Erus G, Chaudhari P, Davatzikos C. ArXiv. 2024. PMID: 39314511; PMC: 11419182.
