Web: http://arxiv.org/abs/2209.07157

Sept. 16, 2022, 1:11 a.m. | Richard Kurle, Ralf Herbrich, Tim Januschowski, Yuyang Wang, Jan Gasthaus

cs.LG updates on arXiv.org

Variational Bayesian posterior inference often requires simplifying
approximations such as mean-field parametrisation to ensure tractability.
However, prior work has associated the variational mean-field approximation for
Bayesian neural networks with underfitting in the case of small datasets or
large model sizes. In this work, we show that invariances in the likelihood
function of over-parametrised models contribute to this phenomenon because
these invariances complicate the structure of the posterior by introducing
discrete and/or continuous modes which cannot be well approximated by Gaussian …
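To illustrate the kind of likelihood invariance the abstract refers to, here is a minimal sketch (not from the paper, purely illustrative) of two well-known symmetries in a one-hidden-layer tanh network: permuting hidden units, and flipping the sign of a unit's incoming and outgoing weights. Both transformations change the parameters but leave the network function, and hence the likelihood, unchanged, which is exactly how discrete modes multiply in the posterior.

```python
import numpy as np

# Toy 1-hidden-layer network: f(x) = W2 @ tanh(W1 @ x)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden x input weights
W2 = rng.normal(size=(1, 4))   # output x hidden weights
x = rng.normal(size=(3,))

def forward(W1, W2, x):
    return W2 @ np.tanh(W1 @ x)

# Permutation symmetry: reordering hidden units (rows of W1,
# columns of W2 in the same order) leaves f(x) unchanged.
perm = [2, 0, 3, 1]
W1_p, W2_p = W1[perm, :], W2[:, perm]

# Sign-flip symmetry: tanh is odd, so negating one unit's
# incoming and outgoing weights cancels out.
W1_s, W2_s = W1.copy(), W2.copy()
W1_s[0] *= -1
W2_s[:, 0] *= -1

assert np.allclose(forward(W1, W2, x), forward(W1_p, W2_p, x))
assert np.allclose(forward(W1, W2, x), forward(W1_s, W2_s, x))
```

Each such symmetry maps one mode of the posterior to another equally good one; a unimodal Gaussian mean-field approximation can cover at most one of them.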
