Interventions for Ranking in the Presence of Implicit Bias. (arXiv:2001.08767v1 [cs.CY])

Implicit bias is the unconscious attribution of particular qualities (or lack
thereof) to a member of a particular social group (e.g., defined by gender or
race). Studies on implicit bias have shown that these unconscious stereotypes
can have adverse outcomes in various social contexts, such as job screening,
teaching, or policing. Recently, Kleinberg and Raghavan (2018) considered a
mathematical model for implicit bias and showed the effectiveness of the Rooney
Rule as a constraint to improve the utility of the outcome for certain cases of
the subset selection problem. Here we study the problem of designing
interventions for ranking, the generalization of subset selection that
requires outputting an ordered list and is a central primitive in various social
and computational contexts. We present a family of simple and interpretable
constraints and show that they can optimally mitigate implicit bias for a
generalization of the model studied in (Kleinberg and Raghavan, 2018).
Subsequently, we prove that under natural distributional assumptions on the
utilities of items, simple Rooney Rule-like constraints can, perhaps
surprisingly, also recover almost all of the utility lost to implicit bias.
Finally, we augment
our theoretical results with empirical findings on real-world distributions
from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
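To make the setup concrete, here is a minimal sketch of the kind of implicit-bias model and Rooney Rule-like prefix constraint the abstract describes. The bias factor, the prefix floors, and the position-discounted utility are illustrative assumptions, not the paper's exact formulation: an evaluator sees group-B utilities scaled down by a factor, and a constraint requires a minimum number of group-B items in each prefix of the ranking.

```python
# Illustrative sketch (not the paper's exact model): group-B candidates'
# latent utilities appear scaled by BIAS to the evaluator, and a prefix
# constraint forces group-B representation in each top-k.

BIAS = 0.5  # assumed factor by which group-B utilities are perceived

def observed(candidate):
    """Biased utility as seen by the evaluator."""
    utility, group = candidate
    return utility * BIAS if group == "B" else utility

def rank(candidates, floors=None):
    """Greedily fill positions by observed utility; floors[k] is the
    minimum number of group-B items required among the top k+1."""
    pool = sorted(candidates, key=observed, reverse=True)
    ranking, b_count = [], 0
    for k in range(len(candidates)):
        need_b = floors is not None and b_count < floors[k]
        # take the best available candidate, restricted to group B if the
        # prefix floor is not yet met (fall back to the best overall if
        # no group-B candidates remain)
        pick = next((c for c in pool if c[1] == "B"), pool[0]) if need_b else pool[0]
        pool.remove(pick)
        ranking.append(pick)
        b_count += pick[1] == "B"
    return ranking

def true_utility(ranking):
    """Position-discounted latent (true) utility of a ranking."""
    return sum(u / (pos + 1) for pos, (u, _) in enumerate(ranking))

candidates = [(0.9, "A"), (0.8, "A"), (0.85, "B"), (0.7, "B")]
unconstrained = rank(candidates)
constrained = rank(candidates, floors=[0, 1, 1, 2])  # >=1 B in top 2, >=2 in top 4
```

In this toy instance the unconstrained ranking places both group-A candidates first because the bias halves the observed group-B scores, while the constrained ranking promotes the (truly stronger) group-B candidate into the top 2 and achieves higher latent utility, mirroring the paper's claim that such constraints can recover utility lost to bias.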
