Sept. 29, 2022, 1:12 a.m. | Yiping Lu, Wenlong Ji, Zachary Izzo, Lexing Ying

cs.LG updates on arXiv.org

Although overparameterized models have proven successful on many machine learning tasks, their accuracy can drop when the test distribution differs from the training distribution. This accuracy drop still limits applying machine learning in the wild. At the same time, importance weighting, a traditional technique for handling distribution shift, has been shown both empirically and theoretically to have little or no effect on overparameterized models. In this paper, we propose importance tempering to improve the decision boundary …
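For context, here is a minimal sketch of the classical importance-weighting baseline the abstract refers to, written in PyTorch. The function name and the density-ratio weights are illustrative assumptions; this shows the traditional technique, not the paper's proposed importance tempering, whose details are truncated above.

```python
import torch
import torch.nn.functional as F

def importance_weighted_loss(logits, targets, weights):
    # Per-example cross-entropy, reweighted so the empirical training
    # objective approximates the expected loss under the test
    # distribution: E_train[w(x) * loss] = E_test[loss],
    # where w(x) = p_test(x) / p_train(x).
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()

# Toy usage: 4 examples, 3 classes, with hypothetical density ratios.
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])
weights = torch.tensor([0.5, 2.0, 1.0, 1.5])  # assumed w(x) values
print(importance_weighted_loss(logits, targets, weights))
```

The paper's claim is that for overparameterized models, reweighting the loss this way has little effect on the learned decision boundary, which motivates the alternative it proposes.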

