March 19, 2024, 4:45 a.m. | Rui Qiao, Bryan Kian Hsiang Low

cs.LG updates on arXiv.org

arXiv:2401.14846v2 Announce Type: replace
Abstract: Despite the rapid development of machine learning algorithms for domain generalization (DG), there is no clear empirical evidence that the existing DG algorithms outperform the classic empirical risk minimization (ERM) across standard benchmarks. To better understand this phenomenon, we investigate whether there are benefits of DG algorithms over ERM through the lens of label noise. Specifically, our finite-sample analysis reveals that label noise exacerbates the effect of spurious correlations for ERM, undermining generalization. Conversely, we …
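To make the abstract's setting concrete, here is a minimal, hypothetical sketch of ERM under label noise with a spurious feature (not the paper's actual benchmarks or method): a logistic model is fit by minimizing the average loss on noisily labeled data, where one feature is invariant and the other is only spuriously correlated with the label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task (illustrative assumption, not the paper's data):
# x1 is an invariant feature; x2 is spuriously correlated with the label.
n = 2000
y = rng.integers(0, 2, n)
x1 = y + 0.5 * rng.standard_normal(n)                    # invariant signal
x2 = np.where(rng.random(n) < 0.9, y, 1 - y).astype(float)  # 90% spurious correlation
X = np.column_stack([x1, x2])

# Inject symmetric label noise: flip 25% of the training labels.
flip = rng.random(n) < 0.25
y_noisy = np.where(flip, 1 - y, y)

# ERM: minimize the average logistic loss by gradient descent.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y_noisy) / n)
    b -= lr * np.mean(p - y_noisy)

# Evaluate against the clean labels; the learned weight on x2 shows
# how much the model leans on the spurious feature.
acc = np.mean(((X @ w + b) > 0) == y)
print(f"weights: {w}, clean accuracy: {acc:.2f}")
```

In this toy setup the ERM solution places nonzero weight on the spurious feature x2, which illustrates the failure mode the abstract describes: under distribution shift (where the x2-label correlation breaks), that reliance would hurt generalization.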

