Web: http://arxiv.org/abs/2209.08745

Sept. 29, 2022, 1:13 a.m. | Yiping Lu, Wenlong Ji, Zachary Izzo, Lexing Ying

stat.ML updates on arXiv.org

Although overparameterized models have shown success on many machine
learning tasks, their accuracy can drop when the test distribution differs
from the training distribution. This accuracy drop still limits the
application of machine learning in the wild. At the same time, importance
weighting, a traditional technique for handling distribution shift, has been
shown both empirically and theoretically to have little or no effect on
overparameterized models. In this paper, we propose importance tempering to improve the
decision boundary …
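For context, here is a minimal sketch of the classical importance-weighting baseline the abstract refers to (not the paper's proposed importance tempering): each training example is reweighted in the loss by the density ratio w(x) = p_test(x) / p_train(x). The synthetic data, the `fit_importance_weighted` helper, and the stand-in density ratio below are illustrative assumptions, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_importance_weighted(X_train, y_train, weights):
    """Fit a classifier whose training loss reweights each example by w(x).

    `weights` is assumed to approximate the density ratio
    p_test(x) / p_train(x); estimating it is a separate problem.
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train, sample_weight=weights)
    return clf

# Toy usage with a synthetic covariate shift: the (hypothetical) test
# distribution up-weights points with large first coordinate.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

weights = np.exp(X_train[:, 0])   # stand-in density ratio, for illustration only
weights /= weights.mean()         # normalize to mean 1 for numerical stability

clf = fit_importance_weighted(X_train, y_train, weights)
```

As the abstract notes, for heavily overparameterized models that interpolate the training data, such loss reweighting can have little effect on the learned decision boundary, which is the motivation for the paper's alternative.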

arxiv importance robustness
