Aug. 11, 2023, 6:51 a.m. | Hao Liang, Pietro Perona, Guha Balakrishnan

cs.CV updates on arXiv.org

We propose an experimental method for measuring bias in face recognition
systems. Existing methods to measure bias depend on benchmark datasets that are
collected in the wild and annotated for protected (e.g., race, gender) and
non-protected (e.g., pose, lighting) attributes. Such observational datasets
only permit correlational conclusions, e.g., "Algorithm A's accuracy is
different on female and male faces in dataset X.". By contrast, experimental
methods manipulate attributes individually and thus permit causal conclusions,
e.g., "Algorithm A's accuracy is affected by …

algorithmic bias, arxiv, benchmark, benchmarking, bias, datasets, evaluation, experimental, face, face recognition, gender, human, lighting, race, recognition, synthetic, systems
