Aug. 10, 2023, 4:48 a.m. | Alex Fan, Xingshuo Xiao, Peter Washington

cs.CV updates on arXiv.org

Fairness in deep learning models trained on high-dimensional inputs with
subjective labels remains a complex and understudied area. In facial emotion
recognition, a domain where datasets are often racially imbalanced, models can
yield disparate outcomes across racial groups. This study analyzes racial bias
by sub-sampling training sets with varied racial distributions and assessing
test performance across these simulations. Our findings indicate that smaller
datasets with posed faces improve on both fairness and performance metrics
as the …
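The core experimental step described above, drawing training subsets whose group proportions are fixed in advance, can be sketched as below. This is a minimal illustration, not the authors' code; the function name `subsample_by_group` and the `(features, label, group)` record layout are assumptions for the example.

```python
import random
from collections import defaultdict

def subsample_by_group(samples, target_props, n_total, seed=0):
    """Draw a training subset whose group proportions match target_props.

    samples:      list of (features, label, group) records
    target_props: dict mapping group -> desired fraction (should sum to 1)
    n_total:      desired size of the subsample
    """
    rng = random.Random(seed)

    # Bucket the records by group label.
    by_group = defaultdict(list)
    for record in samples:
        by_group[record[2]].append(record)

    subset = []
    for group, prop in target_props.items():
        k = round(n_total * prop)
        pool = by_group[group]
        # Sample without replacement; cap at the pool size if the
        # requested quota exceeds what is available for this group.
        subset.extend(rng.sample(pool, min(k, len(pool))))

    rng.shuffle(subset)
    return subset

# Example: a 60/40 imbalanced pool resampled to a 50/50 training set.
pool = [("img", 0, "A")] * 60 + [("img", 0, "B")] * 40
balanced = subsample_by_group(pool, {"A": 0.5, "B": 0.5}, n_total=20)
```

Training a model on each such subset and comparing per-group test metrics is then what surfaces the fairness/performance trade-offs the study reports.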
