Optimising Equal Opportunity Fairness in Model Training. (arXiv:2205.02393v1 [cs.LG])
May 6, 2022, 1:11 a.m. | Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
cs.LG updates on arXiv.org arxiv.org
Real-world datasets often encode stereotypes and societal biases. Such biases
can be implicitly captured by trained models, leading to biased predictions and
exacerbating existing societal preconceptions. Existing debiasing methods, such
as adversarial training and removing protected information from
representations, have been shown to reduce bias. However, a disconnect between
fairness criteria and training objectives makes it difficult to reason
theoretically about the effectiveness of different techniques. In this work, we
propose two novel training objectives which directly optimise for the …
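The equal opportunity criterion the title refers to compares true positive rates across demographic groups. As a rough illustration only (not the paper's proposed training objective, whose details are truncated above), the gap can be measured as the absolute TPR difference between two groups; `equal_opportunity_gap` and its binary-group encoding here are hypothetical names for this sketch:

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates between two groups.

    Illustrative fairness metric only -- not the objective from the paper.
    `group` is a binary array marking each example's protected group.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)  # ground-truth positives in group g
        tprs.append(y_pred[positives].mean())     # fraction predicted positive = TPR
    return abs(tprs[0] - tprs[1])

# Group 0: both positives predicted correctly (TPR 1.0).
# Group 1: one of two positives predicted correctly (TPR 0.5).
y_true = [1, 1, 1, 1]
y_pred = [1, 1, 1, 0]
group  = [0, 0, 1, 1]
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

A debiasing objective along the paper's lines would add a penalty of this form to the task loss, so that training trades predictive accuracy against the TPR gap directly rather than indirectly via adversarial training.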