Aug. 25, 2022, 1:12 a.m. | Theodore Papamarkou

stat.ML updates on arXiv.org

This paper identifies several characteristics of approximate MCMC in Bayesian
deep learning and proposes an approximate sampling algorithm for neural
networks. By analogy to sampling minibatches of data from big datasets, it
proposes sampling subgroups of parameters from high-dimensional neural network
parameter spaces. While the advantages of minibatch MCMC have been discussed in
the literature, blocked Gibbs sampling has received less research attention in
Bayesian deep learning.
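
The paper's algorithm is not spelled out in this summary, but the core idea of combining minibatching over data with blocked Gibbs updates over parameters can be illustrated with a minimal Metropolis-within-Gibbs sketch. Everything below (the toy logistic-regression target standing in for a neural network, and the block_size, batch_size, and step parameters) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post_minibatch(theta, X, y, batch_idx):
    """Unnormalized log-posterior estimated on a minibatch: a N(0, I)
    prior plus the rescaled minibatch log-likelihood of a toy logistic
    regression (a stand-in for a neural network likelihood)."""
    logits = X[batch_idx] @ theta
    loglik = np.sum(y[batch_idx] * logits - np.log1p(np.exp(logits)))
    scale = len(y) / len(batch_idx)       # rescale minibatch to full-data size
    logprior = -0.5 * np.sum(theta ** 2)
    return scale * loglik + logprior

def blocked_gibbs_mh(X, y, n_iter=2000, block_size=2, batch_size=32, step=0.05):
    """Metropolis-within-Gibbs over parameter blocks: each iteration
    perturbs one randomly chosen block of parameters while holding the
    rest fixed, and accepts or rejects using a minibatch estimate of
    the log-posterior (so the acceptance step is approximate)."""
    theta = np.zeros(X.shape[1])
    blocks = np.array_split(rng.permutation(len(theta)),
                            max(1, len(theta) // block_size))
    samples = []
    for _ in range(n_iter):
        batch_idx = rng.choice(len(y), size=batch_size, replace=False)
        block = blocks[rng.integers(len(blocks))]
        prop = theta.copy()
        prop[block] += step * rng.standard_normal(len(block))
        # Reusing the same minibatch for both terms keeps the ratio
        # internally consistent, but the chain still targets the
        # posterior only approximately.
        log_ratio = (log_post_minibatch(prop, X, y, batch_idx)
                     - log_post_minibatch(theta, X, y, batch_idx))
        if np.log(rng.uniform()) < log_ratio:
            theta = prop
        samples.append(theta.copy())
    return np.array(samples)

# Toy data: 10-dimensional logistic regression.
X = rng.standard_normal((500, 10))
true_theta = rng.standard_normal(10)
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-X @ true_theta))).astype(float)
draws = blocked_gibbs_mh(X, y)
print("posterior mean estimate:", draws[1000:].mean(axis=0).round(2))
```

The analogy in the abstract maps directly onto the two sampling steps above: batch_idx subsamples the data, as in minibatch MCMC, while block subsamples the parameter vector, as in blocked Gibbs sampling.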

arxiv, bayesian, bayesian deep learning, deep learning, learning, mcmc, ml
