Feb. 2, 2024, 9:46 p.m. | Hsiang Hsu, Guihong Li, Shaohan Hu, Chun-Fu (Richard) Chen

cs.LG updates on arXiv.org

Predictive multiplicity refers to the phenomenon in which a classification task admits multiple competing models that achieve almost-equally-optimal performance yet produce conflicting predictions for individual samples. This raises significant concerns, as it can lead to systemic exclusion, inexplicable discrimination, and unfairness in practical applications. Measuring and mitigating predictive multiplicity, however, is computationally challenging, because it requires exploring all such almost-equally-optimal models, known as the Rashomon set, within a potentially huge hypothesis space. To address this challenge, we propose …
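To make the core quantities concrete, here is a minimal, hypothetical sketch (not the paper's method) that approximates an epsilon-Rashomon set by sampling random linear classifiers on toy data, then measures disagreement (ambiguity) across the near-optimal models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D binary classification data (purely illustrative).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def accuracy(w, X, y):
    """Accuracy of the linear classifier sign(X @ w)."""
    return np.mean((X @ w > 0).astype(int) == y)

# Sample many candidate linear models; an epsilon-Rashomon set keeps
# those whose accuracy is within epsilon of the best one found.
ws = rng.normal(size=(500, 2))
accs = np.array([accuracy(w, X, y) for w in ws])
eps = 0.02
rashomon = ws[accs >= accs.max() - eps]

# Ambiguity: fraction of samples on which Rashomon-set models disagree.
preds = (X @ rashomon.T > 0).astype(int)   # shape (n_samples, n_models)
ambiguity = np.mean(preds.min(axis=1) != preds.max(axis=1))
print(f"{len(rashomon)} models in Rashomon set, ambiguity = {ambiguity:.2f}")
```

Even this brute-force sketch shows why the problem is hard: enumerating candidates scales poorly with model complexity, which is why efficient exploration of the Rashomon set is the focus of the paper.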

