
Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation. (arXiv:2005.10430v1 [cs.CV])


Automated computer vision systems have been applied in many domains, including security, law enforcement, and personal devices, but recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups. Diagnosing and understanding the true underlying causes of model bias, however, is challenging, because modern computer vision systems rely on complex black-box models whose behavior is hard to decode. We propose to use an encoder-decoder network developed for image attribute manipulation to synthesize facial images that vary along the dimensions of gender and race while keeping other signals intact. We use these synthesized images to measure the counterfactual fairness of commercial computer vision classifiers by examining the degree to which the classifiers are affected by the gender and racial cues controlled in the images; for example, feminine faces may elicit higher scores for the concept "nurse" and lower scores for STEM-related concepts. We also report skewed gender representation in an online search service's results for profession-related keywords, which may explain the origin of the biases encoded in the models.
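
To make the measurement concrete, here is a minimal sketch of how such a "gender slope" could be estimated once counterfactual images are available. The helpers `manipulate_gender` and `classifier_score` are hypothetical stand-ins, since the paper's encoder-decoder network and the commercial classifier APIs are not reproduced here; the slope itself is an ordinary least-squares fit of classifier score against the manipulated attribute intensity.

```python
import numpy as np

# Hypothetical stand-ins: the paper's encoder-decoder and the commercial
# classifiers are not public, so these two functions are assumptions.
def manipulate_gender(image, alpha):
    """Return a counterfactual version of `image` with the gender cue
    shifted by `alpha` (e.g., alpha < 0 more masculine, alpha > 0 more
    feminine), holding other attributes fixed. Stand-in for the
    attribute-manipulation encoder-decoder described in the abstract."""
    raise NotImplementedError

def classifier_score(image, concept):
    """Return a classifier's confidence that `image` depicts `concept`
    (e.g., 'nurse'). Stand-in for a commercial vision API call."""
    raise NotImplementedError

def gender_slope(image, concept, alphas=np.linspace(-1.0, 1.0, 9)):
    """Estimate how the score for `concept` changes as the gender cue in
    `image` is varied: the slope of a least-squares line fit to
    (alpha, score) pairs. A slope near zero for a concept is consistent
    with counterfactual fairness along the manipulated attribute."""
    scores = [classifier_score(manipulate_gender(image, a), concept)
              for a in alphas]
    slope, _intercept = np.polyfit(alphas, scores, deg=1)
    return slope
```

Averaging such slopes over a set of faces would then give a per-concept summary of how strongly the classifier's output tracks the manipulated cue.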
