The premise of approximate MCMC in Bayesian deep learning. (arXiv:2208.11389v1 [stat.ML])
Aug. 25, 2022, 1:12 a.m. | Theodore Papamarkou
stat.ML updates on arXiv.org
This paper identifies several characteristics of approximate MCMC in Bayesian
deep learning and proposes an approximate sampling algorithm for neural
networks. By analogy to sampling data batches from big datasets, it proposes
sampling parameter subgroups from the high-dimensional parameter spaces of
neural networks. While the advantages of minibatch MCMC have been discussed in
the literature, blocked Gibbs sampling has received less research attention in
Bayesian deep learning.
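The idea of updating only a subgroup of parameters per iteration can be sketched as a Metropolis-within-Gibbs sampler over random parameter blocks. This is an illustrative sketch of blocked sampling in general, not the paper's specific algorithm; all function and variable names are assumptions.

```python
import numpy as np

def blocked_mh_sampler(log_post, theta0, n_iters=1000, block_size=10,
                       step=0.05, seed=None):
    """Metropolis-within-Gibbs over random parameter blocks.

    Each iteration proposes an update for only `block_size` parameters,
    holding the rest fixed -- the parameter-space analogue of drawing a
    data minibatch. Illustrative sketch, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    samples = np.empty((n_iters, theta.size))
    lp = log_post(theta)
    for i in range(n_iters):
        # Pick a random subgroup (block) of parameter indices.
        block = rng.choice(theta.size, size=block_size, replace=False)
        prop = theta.copy()
        prop[block] += step * rng.standard_normal(block_size)
        lp_prop = log_post(prop)
        # Standard Metropolis accept/reject on the blocked proposal.
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

# Toy target: a standard Gaussian "posterior" in 50 dimensions.
draws = blocked_mh_sampler(lambda t: -0.5 * np.dot(t, t),
                           theta0=np.zeros(50), n_iters=2000, seed=0)
```

In a real Bayesian neural network, `log_post` would evaluate the log posterior of the network weights, and blocks could correspond to layers or other structured parameter subgroups rather than random index sets.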