Improving Out-of-Distribution Robustness via Selective Augmentation. (arXiv:2201.00299v1 [cs.LG])
Jan. 4, 2022, 2:10 a.m. | Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn
cs.LG updates on arXiv.org arxiv.org
Machine learning algorithms typically assume that training and test examples
are drawn from the same distribution. However, distribution shift is a common
problem in real-world applications and can cause models to perform dramatically
worse at test time. In this paper, we specifically consider the problems of
domain shifts and subpopulation shifts (e.g., imbalanced data). While prior works
often seek to explicitly regularize internal representations and predictors of
the model to be domain invariant, we instead aim to regularize the whole …
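The abstract is truncated before the method details, but the "selective augmentation" in the title refers to mixup-style interpolation applied selectively. A minimal sketch of one such operation, assuming the common intra-label variant (interpolating two examples that share a label but come from different domains; the function name, shapes, and `alpha` value are illustrative, not from the paper):

```python
import numpy as np

def intra_label_mixup(x_a, x_b, alpha=2.0, rng=None):
    """Interpolate two same-label examples (e.g., from different
    domains) with a Beta(alpha, alpha) mixing coefficient, in the
    style of mixup augmentation. The label is unchanged because
    both inputs share it."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # lam in [0, 1]
    return lam * x_a + (1.0 - lam) * x_b

# Hypothetical example: two feature vectors with the same label,
# drawn from different domains.
x_domain1 = np.array([1.0, 0.0, 0.5])
x_domain2 = np.array([0.0, 1.0, 0.5])
x_aug = intra_label_mixup(x_domain1, x_domain2)
```

Because the output is a convex combination of the two inputs, every component of `x_aug` lies between the corresponding components of the originals, so the augmented example stays on the segment connecting the two domains in feature space.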