Bias in Industry Leading Facial Recognition Services: A Regional Analysis Across South Asian Regions

Published Paper

Published in the International Journal for Innovative Research in Multidisciplinary Field (IJIRMF):
https://www.ijirmf.com/wp-content/uploads/IJIRMF202108001.pdf

Author

Armeet Singh Jatyani
Independent Researcher – San Jose, California, United States.
Email – [email protected]

Abstract

The objective of this study is to assess the extent to which bias, if any, is present in the facial recognition services offered by FacePlusPlus, Google Cloud, and Microsoft Azure. The study assesses the selected services across eight South Asian regions: Kashmir (North), Ladakh (North), Punjab (North), Rajasthan (Northwest), Jharkhand (East), Telangana (South), Tamil Nadu (Deep South), and Gujarat (West). Our results reveal interesting and concerning patterns between regional characteristics and final accuracy scores. FacePlusPlus produced unevenly distributed beauty scores (a range of approximately 9.49), was more likely to correctly identify the gender of males, and severely struggled to accurately detect the faces and gender of groups with heavy facial hair, such as males in the Punjab (North) region. Microsoft Azure was more likely to accurately detect the face and gender of females, and struggled the most of the three services with groups that have heavy facial hair, with a gender detection accuracy of just 63% for the Punjab (North) region. Finally, Google Cloud performed phenomenally, with facial detection accuracy above 90% across all eight regions and both genders. These results reveal disturbing biases in the FacePlusPlus and Microsoft Azure facial recognition/detection services that should be addressed to maintain ethical integrity.
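The abstract reports per-region face-detection and gender-classification accuracies (for example, Azure's 63% gender accuracy for Punjab). Below is a minimal sketch of how such per-region scores could be tallied from labeled service responses; the record format and field names are assumptions made for illustration, not the paper's actual code.

```python
from collections import defaultdict

# Each record describes one (image, service) response, in a hypothetical format:
#   region           - one of the 8 South Asian regions
#   true_gender      - ground-truth label from the dataset ("male" / "female")
#   face_detected    - whether the service returned a face for this image
#   predicted_gender - the service's gender prediction (None if no face detected)
def per_region_accuracy(results):
    """Return {region: (detection_accuracy, gender_accuracy)} for one service.

    Gender accuracy is computed over all images, so an undetected face counts
    as an incorrect gender prediction; the paper's exact convention may differ.
    """
    totals = defaultdict(lambda: [0, 0, 0])  # [images, faces detected, genders correct]
    for r in results:
        t = totals[r["region"]]
        t[0] += 1
        if r["face_detected"]:
            t[1] += 1
            if r["predicted_gender"] == r["true_gender"]:
                t[2] += 1
    return {
        region: (detected / total, correct / total)
        for region, (total, detected, correct) in totals.items()
    }
```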

Keywords

bias, ethnic bias, regional bias, gender bias, computer vision, faceplusplus, google cloud, microsoft azure, gender classification, detection accuracy

Dataset

  • Total: 1600 images
  • 8 regions, 200 images each (100 male, 100 female per region); see the loading sketch below
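The README does not document the on-disk layout of the dataset, but assuming a simple region/gender folder structure (a hypothetical layout, not confirmed by the repository), it could be enumerated and sanity-checked like this:

```python
from pathlib import Path
from collections import Counter

# Hypothetical layout: dataset/<region>/<gender>/<image>.jpg
# (folder names here are assumptions for illustration, not the repo's actual structure)
REGIONS = [
    "kashmir", "ladakh", "punjab", "rajasthan",
    "jharkhand", "telangana", "tamil_nadu", "gujarat",
]
GENDERS = ["male", "female"]

def enumerate_dataset(root: str = "dataset"):
    """Yield (region, gender, image_path) for every image in the dataset."""
    root_path = Path(root)
    for region in REGIONS:
        for gender in GENDERS:
            for img in sorted((root_path / region / gender).glob("*.jpg")):
                yield region, gender, img

if __name__ == "__main__":
    counts = Counter((region, gender) for region, gender, _ in enumerate_dataset())
    for (region, gender), n in sorted(counts.items()):
        print(f"{region:>10} {gender:>6}: {n} images")  # expect 100 per cell, 1600 total
```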

Graphs

Only a few graphs are shown here. To view all results and conclusions, please read the full paper: https://www.ijirmf.com/wp-content/uploads/IJIRMF202108001.pdf

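As a rough illustration of how a per-region accuracy bar chart like those in the paper could be reproduced, here is a minimal matplotlib sketch. The accuracy values passed in would come from an analysis step such as per_region_accuracy() above; no numbers from the paper are hard-coded here.

```python
import matplotlib.pyplot as plt

def plot_detection_accuracy(accuracy_by_region, service_name):
    """Bar chart of face-detection accuracy per region for one service.

    `accuracy_by_region` maps region name -> accuracy in [0, 1]; the values
    are supplied by the caller rather than embedded in this sketch.
    """
    regions = list(accuracy_by_region)
    values = [accuracy_by_region[r] * 100 for r in regions]

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.bar(regions, values)
    ax.set_ylabel("Face detection accuracy (%)")
    ax.set_ylim(0, 100)
    ax.set_title(f"{service_name}: face detection accuracy by region")
    plt.xticks(rotation=45, ha="right")
    fig.tight_layout()
    plt.show()
```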

About

My research studying and assessing the extent to which latent bias is present in commercial facial recognition systems offered by Google Cloud, Face++, and Microsoft Azure. I created a 1600-image dataset of faces from 8 regions in South Asia and used it to assess each cloud service.
