May 26, 2022, 1:13 a.m. | Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Xiangyang Ji, Antoni B. Chan

cs.CV updates on arXiv.org

The performance of machine learning models under distribution shift has been
a focus of the community in recent years. Most current methods improve
robustness to distribution shift from the algorithmic perspective, i.e., by
designing better training algorithms that generalize to shifted test
distributions. This paper instead studies the distribution shift problem from
the perspective of pre-training and data augmentation, two important factors
in deep learning practice that have not been systematically investigated …
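
As a rough illustration of the two factors the abstract highlights, a minimal
fine-tuning sketch in PyTorch/torchvision might look like the following. This
is not the paper's experimental protocol: the dataset path ("data/train"), the
10-class head, and all hyperparameters are illustrative assumptions.

```python
# Sketch of the two factors named in the abstract: (1) initializing from a
# pre-trained checkpoint instead of random weights, and (2) training with
# data augmentation. Assumes torchvision >= 0.13 for the string weights API.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Factor 1: pre-training -- start from ImageNet weights (downloads on first use).
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 10)  # hypothetical 10-class task

# Factor 2: data augmentation -- random crops, flips, and color jitter at train time.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical ImageFolder-style dataset layout under data/train/<class>/.
train_set = datasets.ImageFolder("data/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Standard fine-tuning loop; one pass shown for brevity.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The design point the abstract gestures at is that both factors sit outside the
training algorithm itself: the pre-trained initialization and the augmentation
pipeline can be varied independently of the loss and optimizer.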

