Oct. 25, 2022, 1:12 a.m. | Yuexi Wang, Nicholas G. Polson, Vadim O. Sokolov

cs.LG updates on arXiv.org

Deep Learning (DL) methods have emerged as one of the most powerful tools for
functional approximation and prediction. While the representation properties of
DL have been well studied, uncertainty quantification remains challenging and
largely unexplored. Data augmentation techniques are a natural approach to
provide uncertainty quantification and to incorporate stochastic Monte Carlo
search into stochastic gradient descent (SGD) methods. The purpose of our paper
is to show that training DL architectures with data augmentation leads to
efficiency gains. We use …

Tags: arXiv, data augmentation, Bayesian deep learning
