March 26, 2024, 4:44 a.m. | Shixiong Wang, Haowei Wang

cs.LG updates on arXiv.org

arXiv:2212.09962v3 Announce Type: replace
Abstract: Bayesian methods, distributionally robust optimization methods, and regularization methods are three pillars of trustworthy machine learning combating distributional uncertainty, e.g., the uncertainty of an empirical distribution compared to the true underlying distribution. This paper investigates the connections among the three frameworks and, in particular, explores why these frameworks tend to have smaller generalization errors. Specifically, first, we suggest a quantitative definition for "distributional robustness", propose the concept of "robustness measure", and formalize several philosophical concepts …
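The connection the abstract alludes to, between distributionally robust optimization and regularization, can be illustrated with a well-known equivalence that is not specific to this paper: for a loss that is Lipschitz in the features, the worst-case risk over a 1-Wasserstein ball of radius eps around the empirical distribution equals the empirical risk plus eps times a norm penalty on the parameters. The Python sketch below is a minimal illustration under those assumptions; the toy dataset, the eps value, and the gradient-descent optimizer are hypothetical choices for illustration, not taken from the paper.

# Minimal sketch (assumptions labeled): robust logistic regression over a
# 1-Wasserstein ball around the empirical distribution P_n reduces to
# empirical risk + eps * ||theta||_2, i.e., a regularization method,
# because the logistic loss is 1-Lipschitz in the score theta^T x.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5                      # hypothetical toy problem size
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = np.sign(X @ theta_true + 0.5 * rng.normal(size=n))   # labels in {-1, +1}

def empirical_risk(theta):
    # Logistic loss under the empirical distribution P_n.
    return np.mean(np.log1p(np.exp(-y * (X @ theta))))

def robust_risk(theta, eps):
    # Wasserstein-DRO objective under the Lipschitz assumption:
    # sup over the eps-ball equals empirical risk + eps * ||theta||_2.
    return empirical_risk(theta) + eps * np.linalg.norm(theta)

def gradient(theta, eps):
    # d/dz log(1 + exp(-z)) = -1 / (1 + exp(z)), with z = y * (x^T theta).
    s = -y / (1.0 + np.exp(y * (X @ theta)))
    g = X.T @ s / n
    norm = np.linalg.norm(theta)
    # Subgradient of the norm penalty (0 is a valid subgradient at theta = 0).
    return g + eps * (theta / norm if norm > 0 else 0.0)

theta = np.zeros(d)
for _ in range(500):               # plain gradient descent, step size 0.1
    theta -= 0.1 * gradient(theta, eps=0.1)

print("empirical risk      :", empirical_risk(theta))
print("robust upper bound  :", robust_risk(theta, eps=0.1))

Minimizing the robust objective is therefore identical, in this regime, to fitting a norm-regularized model, which is one concrete instance of the kind of bridge among the three frameworks that the paper studies.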

