Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? (arXiv:1912.01094v1 [cs.LG])

Multiple fairness constraints have been proposed in the literature, motivated
by a range of concerns about how demographic groups might be treated unfairly
by machine learning classifiers. In this work we consider a different
motivation: learning from biased training data. We posit several ways in which
training data may be biased, including a noisier or negatively biased labeling
process for members of a disadvantaged group, a decreased prevalence of
positive or negative examples from that group, or both.
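The two bias models can be made concrete with a small simulation. The sketch below is illustrative only: the ground-truth distribution, flip rate, and retention rate are assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clean(n):
    # Hypothetical true distribution: one feature x, label y = 1[x > 0],
    # and a group attribute a (0 = advantaged, 1 = disadvantaged).
    a = rng.integers(0, 2, size=n)
    x = rng.normal(size=n)
    y = (x > 0).astype(int)
    return x, a, y

def bias_labels(y, a, flip_prob=0.3):
    # Labeling-bias model: positive labels in the disadvantaged group are
    # flipped to negative with some probability (assumed rate).
    y = y.copy()
    flip = (a == 1) & (y == 1) & (rng.random(len(y)) < flip_prob)
    y[flip] = 0
    return y

def under_represent(x, a, y, keep_prob=0.4):
    # Under-representation model: drop a fraction of the disadvantaged
    # group's positive examples from the training sample.
    keep = ~((a == 1) & (y == 1) & (rng.random(len(y)) > keep_prob))
    return x[keep], a[keep], y[keep]

x, a, y = sample_clean(10_000)
y_biased = bias_labels(y, a)
x_b, a_b, y_b = under_represent(x, a, y_biased)

# The biased sample shows a depressed positive rate for the
# disadvantaged group, even though the true rates are equal.
print(y_b[a_b == 0].mean(), y_b[a_b == 1].mean())
```

ERM run on `(x_b, a_b, y_b)` would fit this distorted distribution, which is the starting point for the recovery question the abstract poses.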

Given such biased training data, Empirical Risk Minimization (ERM) may
produce a classifier that not only is biased but also has suboptimal accuracy
on the true data distribution. We examine the ability of fairness-constrained
ERM to correct this problem. In particular, we find that the Equal Opportunity
fairness constraint (Hardt, Price, and Srebro 2016) combined with ERM will
provably recover the Bayes Optimal Classifier under a range of bias models. We
also consider other recovery methods including reweighting the training data,
Equalized Odds, and Demographic Parity. These theoretical results provide
additional motivation for considering fairness interventions even if an actor
cares primarily about accuracy.
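The Equal Opportunity criterion of Hardt, Price, and Srebro (2016) requires equal true-positive rates across groups. A minimal way to see the constraint in action is threshold adjustment on classifier scores; the score distribution and target rate below are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classifier scores, group membership, and true labels.
n = 20_000
a = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
# Group 1's positives get systematically lower scores (illustrative bias).
score = y + rng.normal(scale=0.8, size=n) - 0.4 * (a == 1)

def tpr(pred, y, a, g):
    # True-positive rate within group g.
    mask = (a == g) & (y == 1)
    return pred[mask].mean()

def equal_opportunity_thresholds(score, y, a, target_tpr=0.8):
    # Choose a per-group threshold so each group attains (approximately)
    # the same true-positive rate, via a quantile of positives' scores.
    thresholds = {}
    for g in (0, 1):
        pos_scores = score[(a == g) & (y == 1)]
        thresholds[g] = np.quantile(pos_scores, 1 - target_tpr)
    return thresholds

th = equal_opportunity_thresholds(score, y, a)
pred = np.where(a == 0, score >= th[0], score >= th[1]).astype(int)

# Both groups now sit near the same true-positive rate.
print(tpr(pred, y, a, 0), tpr(pred, y, a, 1))
```

A single shared threshold on these scores would give group 1 a lower TPR; equalizing TPR is precisely what the abstract's result leverages to recover the Bayes optimal classifier under its bias models.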
