Discover and Mitigate Multiple Biased Subgroups in Image Classifiers
March 20, 2024, 4:46 a.m. | Zeliang Zhang, Mingqian Feng, Zhiheng Li, Chenliang Xu
cs.CV updates on arXiv.org arxiv.org
Abstract: Machine learning models can perform well on in-distribution data but often fail on biased subgroups that are underrepresented in the training data, hindering the robustness of models for reliable applications. Such subgroups are typically unknown due to the absence of subgroup labels. Discovering biased subgroups is key to understanding models' failure modes and further improving their robustness. Most previous work on subgroup discovery implicitly assumes that models underperform on only a single …
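To make the problem concrete: one common recipe for surfacing biased subgroups without subgroup labels is to cluster the feature embeddings of a model's misclassified validation examples, treating each cluster as a candidate underperforming subgroup. The sketch below is illustrative only, not the paper's method; the function name, the farthest-point k-means initialization, and the synthetic two-failure-mode data are all assumptions for demonstration.

```python
import numpy as np

def discover_candidate_subgroups(embeddings, correct, n_subgroups=2, n_iters=20):
    """Cluster embeddings of misclassified samples into candidate subgroups.

    embeddings: (N, D) array of per-sample features from the classifier.
    correct:    (N,) boolean array, True where the model's prediction was right.
    Returns (labels, centers): a cluster id per misclassified sample, and the
    cluster centers. Hypothetical helper, not from the paper.
    """
    failures = embeddings[~np.asarray(correct)]

    # Farthest-point initialization: pick centers that are mutually distant,
    # so well-separated failure modes each get a center.
    centers = [failures[0]]
    for _ in range(n_subgroups - 1):
        dists = np.min(
            [np.linalg.norm(failures - c, axis=1) for c in centers], axis=0
        )
        centers.append(failures[dists.argmax()])
    centers = np.stack(centers)

    # Plain k-means iterations on the failure set only.
    for _ in range(n_iters):
        d = np.linalg.norm(failures[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(n_subgroups):
            if (labels == k).any():
                centers[k] = failures[labels == k].mean(axis=0)
    return labels, centers

# Toy demo: 50 correctly classified samples plus two synthetic failure modes.
rng = np.random.default_rng(0)
emb = np.vstack([
    rng.normal(0.0, 0.1, size=(50, 8)),   # correctly classified
    rng.normal(3.0, 0.1, size=(20, 8)),   # failure mode A (e.g. one bias)
    rng.normal(-3.0, 0.1, size=(20, 8)),  # failure mode B (a second bias)
])
correct = np.array([True] * 50 + [False] * 40)
labels, centers = discover_candidate_subgroups(emb, correct, n_subgroups=2)
```

Each recovered cluster can then be inspected (e.g. by viewing its nearest images) to name the bias it represents; note that this sketch assumes multiple failure modes from the start, in line with the paper's critique of single-subgroup assumptions.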