Oct. 26, 2022, 1:14 a.m. | Jean-Rémy Conti, Nathan Noiry, Vincent Despiegel, Stéphane Gentric, Stéphan Clémençon

cs.CV updates on arXiv.org

Despite the high performance and reliability of deep learning algorithms in a
wide range of everyday applications, many investigations show that numerous
models exhibit biases, discriminating against specific subgroups of the
population (e.g. gender, ethnicity). This urges practitioners to develop fair
systems with uniform/comparable performance across sensitive groups. In this
work, we investigate the gender bias of deep Face Recognition networks. To
measure this bias, we introduce two new metrics, …
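The paper's two metrics are elided in the excerpt above, so as a hypothetical illustration of what "uniform/comparable performance across sensitive groups" means in practice, here is a minimal sketch that compares per-group accuracy and reports the largest gap (all function names and data below are invented for this example, not taken from the paper):

```python
# Hypothetical sketch (NOT the paper's metrics): quantify how far a model
# is from performance parity across sensitive groups by comparing
# per-group accuracies.

def group_accuracies(correct, groups):
    """Accuracy per group, from parallel lists of booleans and group labels."""
    totals, hits = {}, {}
    for ok, g in zip(correct, groups):
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if ok else 0)
    return {g: hits[g] / totals[g] for g in totals}

def max_accuracy_gap(correct, groups):
    """Largest pairwise accuracy difference across groups; 0 means parity."""
    accs = group_accuracies(correct, groups)
    return max(accs.values()) - min(accs.values())

# Toy recognition outcomes for two groups.
correct = [True, True, False, True, True, False, False, True]
groups  = ["F", "F", "F", "F", "M", "M", "M", "M"]
print(max_accuracy_gap(correct, groups))  # F: 0.75, M: 0.50 -> gap 0.25
```

A fair system in the sense described above would drive this gap toward zero; real face-recognition fairness metrics are typically defined over error rates (e.g. false match / false non-match rates) rather than raw accuracy.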

