How to be fair? A study of label and selection bias
March 22, 2024, 4:42 a.m. | Marco Favier, Toon Calders, Sam Pinxteren, Jonathan Meyer
cs.LG updates on arXiv.org
Abstract: It is widely accepted that biased data leads to biased, and thus potentially unfair, models. Several measures of bias in data and in model predictions have therefore been proposed, along with bias mitigation techniques that aim to learn models that are fair by design. Despite the myriad of mitigation techniques developed over the past decade, however, it is still poorly understood which methods work under what circumstances. Recently, Wick et al. showed, with experiments …
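The abstract refers only generically to "measures for bias in data and model predictions". As a hedged illustration (not necessarily a measure used in this particular paper), one standard such measure is demographic parity difference: the gap in positive-prediction rates between two groups. All names below (`y_pred`, `group`) are illustrative.

```python
# Illustrative sketch of demographic parity difference, a common group-fairness
# measure: the absolute gap in positive-prediction rates between two groups.
# This is a generic example; the paper's own measures may differ.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive rates between group 0 and group 1."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

# Example: group 0 receives 3/4 positive predictions, group 1 receives 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A model is considered fair under demographic parity when this gap is (close to) zero; bias mitigation techniques of the kind the abstract describes typically constrain or penalize such a gap during training.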