Addressing Racial Bias in Facial Emotion Recognition. (arXiv:2308.04674v1 [cs.CV])
cs.CV updates on arXiv.org
Fairness in deep learning models trained on high-dimensional inputs and
subjective labels remains a complex and understudied area. In facial emotion
recognition, where datasets are often racially imbalanced, models can yield
disparate outcomes across racial groups. This study analyzes racial bias by
sub-sampling training sets with varied racial distributions and assessing test
performance across these simulations. Our findings indicate that smaller
datasets with posed faces improve on both fairness and performance metrics as
the …
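The sub-sampling protocol described in the abstract can be sketched roughly as follows. This is an illustrative assumption of how such an experiment might be set up, not the paper's actual code: the function name `subsample_by_race`, the `(features, label, race)` sample layout, and the target distributions are all hypothetical.

```python
import random
from collections import Counter

def subsample_by_race(samples, target_dist, n_total, seed=0):
    """Draw a training subset whose racial distribution matches target_dist.

    samples: list of (features, label, race) tuples (hypothetical layout)
    target_dist: dict mapping race -> fraction (should sum to 1)
    n_total: desired subset size
    """
    rng = random.Random(seed)
    # Group the pool by the race annotation.
    by_race = {}
    for s in samples:
        by_race.setdefault(s[2], []).append(s)
    subset = []
    for race, frac in target_dist.items():
        k = round(frac * n_total)
        pool = by_race.get(race, [])
        # Sample without replacement, capped at the group's pool size.
        subset.extend(rng.sample(pool, min(k, len(pool))))
    rng.shuffle(subset)
    return subset

# Toy imbalanced pool: 600/300/100 samples across three groups.
pool = [("x", 0, r) for r in ["A"] * 600 + ["B"] * 300 + ["C"] * 100]
# Simulate one balanced training distribution of 240 samples.
balanced = subsample_by_race(pool, {"A": 1/3, "B": 1/3, "C": 1/3}, 240)
print(Counter(s[2] for s in balanced))
```

A study like this would repeat the draw for several target distributions (e.g. skewed toward each group in turn), train a model per subset, and compare per-group test metrics across the simulations.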