Feb. 21, 2024, 5:46 a.m. | Jean-Rémy Conti, Nathan Noiry, Vincent Despiegel, Stéphane Gentric, Stéphan Clémençon

cs.CV updates on arXiv.org

arXiv:2210.13664v2 Announce Type: replace
Abstract: Despite the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g., gender, ethnicity). This urges practitioners to develop fair systems with uniform/comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. In order to measure this bias, …
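
The abstract frames bias measurement as comparing performance across sensitive groups, but its definition of the metric is cut off above. As a rough, hedged illustration (not the paper's own protocol), the sketch below compares the false non-match rate of a face verification system between female-female and male-male image pairs; all inputs (similarity scores, pair labels, genders, threshold) are hypothetical.

```python
# Illustrative sketch of a per-group gender bias gap for face verification.
# Not the paper's metric: inputs and the threshold are hypothetical placeholders.
import numpy as np

def false_non_match_rate(scores: np.ndarray, same_id: np.ndarray, thr: float) -> float:
    """Fraction of genuine (same-identity) pairs rejected at threshold thr."""
    genuine = scores[same_id]
    return float(np.mean(genuine < thr)) if genuine.size else float("nan")

def gender_bias_gap(scores, same_id, pair_gender, thr):
    """Absolute FNMR difference between female-female and male-male pairs."""
    fnmr = {}
    for g in ("female", "male"):
        mask = pair_gender == g
        fnmr[g] = false_non_match_rate(scores[mask], same_id[mask], thr)
    return abs(fnmr["female"] - fnmr["male"]), fnmr

# Toy usage with random data standing in for cosine similarities of face pairs.
rng = np.random.default_rng(0)
scores = rng.uniform(-1, 1, size=1000)          # cosine similarity per pair
same_id = rng.random(1000) < 0.5                # genuine vs. impostor pairs
pair_gender = rng.choice(["female", "male"], 1000)
gap, per_group = gender_bias_gap(scores, same_id, pair_gender, thr=0.3)
print(per_group, gap)
```

A system with uniform performance across groups would drive this gap toward zero; the choice of error rate and operating threshold is a design decision, not something fixed by the truncated abstract.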
