Fast Model Debias with Machine Unlearning. (arXiv:2310.12560v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
Recent discoveries have revealed that deep neural networks can behave in a
biased manner in many real-world scenarios. For instance, deep networks trained
on CelebA, a large-scale face recognition dataset, tend to predict blonde hair
for females and black hair for males. Such biases not only jeopardize the
robustness of models but also perpetuate and amplify social biases, which is
especially concerning for automated decision-making processes in healthcare,
recruitment, etc., as they could exacerbate unfair economic and social
inequalities among …
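The CelebA example above describes a spurious correlation between a protected attribute (gender) and a prediction (hair color). A minimal sketch of how such a bias can be quantified, using the demographic parity gap on synthetic data (this metric and all data below are illustrative, not taken from the paper):

```python
# Sketch: measure a spurious gender -> hair-color correlation like the
# CelebA example above. All predictions/groups here are synthetic.

def demographic_parity_gap(preds, groups, positive_label="blond"):
    """Return (gap, per-group rates), where gap is the difference in
    positive-prediction rate between the highest- and lowest-rate groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] == positive_label for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0], rates

# Outputs of a hypothetically biased classifier:
preds  = ["blond", "blond", "black", "blond", "black", "black", "black", "black"]
groups = ["female", "female", "female", "female", "male", "male", "male", "male"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'female': 0.75, 'male': 0.0}
print(gap)    # 0.75 -- a large gap signals the kind of bias described above
```

A fair model would show similar positive-prediction rates across groups (a gap near zero); debiasing methods such as the machine unlearning approach of the paper aim to shrink this kind of disparity without retraining from scratch.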