May 13, 2024, 4:42 a.m. | Kuo-Wei Lai, Vidya Muthukumar

cs.LG updates on arXiv.org

arXiv:2405.06546v1 Announce Type: cross
Abstract: Overparameterized models that achieve zero training error are observed to generalize well on average, but degrade in performance when faced with data that is under-represented in the training sample. In this work, we study an overparameterized Gaussian mixture model imbued with a spurious feature, and sharply analyze the in-distribution and out-of-distribution test error of a cost-sensitive interpolating solution that incorporates "importance weights". Compared to recent work (Wang et al., 2021; Behnia et al., 2022), our …
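To make the setup concrete, here is a minimal numpy sketch of the kind of model the abstract describes: an overparameterized (d >> n) two-class Gaussian mixture with a spurious feature that correlates with the label on the majority group but flips on an under-represented minority group, fit by gradient descent on an importance-weighted exponential loss so that the classifier interpolates (zero training error) while the weights upweight the minority group. All parameters (n, d, group fraction, feature directions, learning rate) are illustrative assumptions, not the paper's exact analysis or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy overparameterized Gaussian mixture with a spurious feature ---
# (Hypothetical parameters for illustration; the paper's exact setup differs.)
n, d = 50, 500                    # n samples, d >> n features (overparameterized)
p_minority = 0.1                  # fraction of the under-represented group

y = rng.choice([-1, 1], size=n)   # class labels
group = np.zeros(n, dtype=bool)
group[: int(p_minority * n)] = True          # minority-group indicator
mu = np.zeros(d); mu[0] = 2.0                # core (label-aligned) feature direction
spur = np.zeros(d); spur[1] = 2.0            # spurious feature direction
# Spurious feature aligns with the label on the majority group, flips on the minority.
a = np.where(group, -1.0, 1.0)
X = y[:, None] * mu + (a * y)[:, None] * spur + rng.normal(size=(n, d))

# --- Importance weights: upweight the minority group (inverse frequency) ---
w_imp = np.where(group, 1.0 / p_minority, 1.0 / (1.0 - p_minority))

# --- Cost-sensitive interpolation via GD on the weighted exponential loss ---
# With linearly separable data (likely here since d >> n), gradient descent on an
# exponential-type loss drives training error to zero; the importance weights
# tilt the resulting interpolating direction toward the minority group.
theta = np.zeros(d)
lr = 0.01
for _ in range(5000):
    margins = y * (X @ theta)
    grad = -(w_imp * np.exp(-margins) * y) @ X / n
    theta -= lr * grad

print("train error:", np.mean(np.sign(X @ theta) != y))   # expect 0.0 (interpolation)
print("smallest minority-group margin:", (y * (X @ theta))[group].min())
```

Comparing the minority-group margins with and without the importance weights (set w_imp to all-ones for the unweighted baseline) illustrates the trade-off the paper analyzes: the weighted interpolator sacrifices some majority-group margin to improve worst-group behavior.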
