Universal Generalization Guarantees for Wasserstein Distributionally Robust Models
Feb. 20, 2024, 5:46 a.m. | Tam Le (UGA, LJK), Jérôme Malick (UGA, CNRS, Grenoble INP, LJK)
stat.ML updates on arXiv.org arxiv.org
Abstract: Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models built from Wasserstein ambiguity sets have nice generalization guarantees, breaking the curse of dimensionality. However, these results are obtained in specific cases, at the cost of approximations, or under assumptions difficult to verify in practice. In contrast, we establish, in this article, exact generalization guarantees that …
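For context, the Wasserstein distributionally robust training problem the abstract refers to is commonly written as follows (this is the standard textbook formulation, not an equation taken from the paper itself; the radius $\rho$ and loss $\ell$ are generic placeholders):

$$
\min_{\theta \in \Theta} \; \sup_{Q \,:\, W(Q, \widehat{P}_n) \le \rho} \; \mathbb{E}_{\xi \sim Q}\!\left[\ell(\theta, \xi)\right],
$$

where $\widehat{P}_n$ is the empirical distribution of the $n$ training samples, $W$ is a Wasserstein distance, and the ambiguity set $\{Q : W(Q, \widehat{P}_n) \le \rho\}$ contains all distributions within radius $\rho$ of the data. The generalization guarantees discussed in the abstract concern how well the resulting robust objective bounds the loss under the true, unseen distribution.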