Web: http://arxiv.org/abs/2206.07673

June 16, 2022, 1:11 a.m. | Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein

cs.LG updates on arXiv.org

We introduce repriorisation, a data-dependent reparameterisation which
transforms a Bayesian neural network (BNN) posterior to a distribution whose KL
divergence to the BNN prior vanishes as layer widths grow. The repriorisation
map acts directly on parameters, and its analytic simplicity complements the
known neural network Gaussian process (NNGP) behaviour of wide BNNs in function
space. Exploiting the repriorisation, we develop a Markov chain Monte Carlo
(MCMC) posterior sampling algorithm which mixes faster the wider the BNN. This
contrasts with the …
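To make the idea of repriorisation concrete, here is a minimal illustrative sketch (not the paper's exact construction): assuming a Gaussian likelihood and a linear readout layer with a Gaussian prior, the conditional posterior over the readout weights given the penultimate-layer features is Gaussian, so a data-dependent affine map can send those weights to a variable whose conditional law is the standard-normal prior. The function names and hyperparameters below are hypothetical.

```python
# Illustrative sketch of a repriorisation-style reparameterisation for the
# conditionally Gaussian readout layer of a BNN. Assumed model:
#   y = features @ w + eps,  w ~ N(0, prior_var * I),  eps ~ N(0, noise_var * I).
import numpy as np

def readout_conditional_posterior(features, y, prior_var, noise_var):
    """Mean and Cholesky factor of the Gaussian posterior over readout weights w,
    conditional on the penultimate-layer features."""
    n, d = features.shape
    precision = features.T @ features / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ features.T @ y / noise_var
    return mean, np.linalg.cholesky(cov)

def repriorise(w, features, y, prior_var=1.0, noise_var=0.1):
    """Whiten w against its conditional posterior: the result z has law N(0, I),
    i.e. it is distributed like the (standard-normal) prior."""
    mean, chol = readout_conditional_posterior(features, y, prior_var, noise_var)
    return np.linalg.solve(chol, w - mean)

def deprioritise(z, features, y, prior_var=1.0, noise_var=0.1):
    """Inverse map: recover the readout weights from the repriorised variable z."""
    mean, chol = readout_conditional_posterior(features, y, prior_var, noise_var)
    return mean + chol @ z
```

Running MCMC in the transformed coordinates and mapping back with the inverse targets the same posterior, but the transformed variable is (approximately, in wide networks) standard normal, which is the mechanism behind the improved mixing described in the abstract.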

