Nov. 18, 2022, 2:13 a.m. | Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez

stat.ML updates on arXiv.org

Comparing Bayesian neural networks (BNNs) with different widths is
challenging because, as the width increases, multiple model properties change
simultaneously, and inference in the finite-width case is intractable. In this
work, we empirically compare finite- and infinite-width BNNs, and provide
quantitative and qualitative explanations for their performance difference. We
find that when the model is mis-specified, increasing width can hurt BNN
performance. In these cases, we provide evidence that finite-width BNNs
generalize better partially due to the properties of their …
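To make the comparison in the abstract concrete, below is a minimal NumPy sketch (not from the paper) contrasting prior draws from a one-hidden-layer ReLU BNN at small and large widths with the corresponding infinite-width NNGP prior, whose covariance is the order-1 arc-cosine kernel. The hyperparameters (sigma_w, sigma_b, the widths, and the input grid) are illustrative assumptions, not values used by the authors.

```python
# Sketch: finite-width BNN prior vs. its infinite-width NNGP limit.
# Assumed hyperparameters; illustrative only, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
sigma_w, sigma_b = 1.5, 0.1              # prior scales for weights / biases (assumed)
x = np.linspace(-3, 3, 50)[:, None]      # 1-D inputs on a grid

def bnn_prior_sample(x, width):
    """One function drawn from the prior of a 1-hidden-layer ReLU BNN."""
    w = rng.normal(0, sigma_w, (1, width))                    # input-to-hidden weights
    b = rng.normal(0, sigma_b, width)                         # hidden biases
    v = rng.normal(0, sigma_w / np.sqrt(width), (width, 1))   # readout weights, 1/sqrt(width) scaling
    return np.maximum(x @ w + b, 0.0) @ v                     # shape (len(x), 1)

def nngp_kernel(x):
    """Infinite-width limit of the same prior: order-1 arc-cosine (ReLU) kernel."""
    k = sigma_w**2 * (x @ x.T) + sigma_b**2                   # pre-activation covariance
    d = np.sqrt(np.diag(k))
    cos_t = np.clip(k / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return sigma_w**2 / (2 * np.pi) * np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * cos_t)

def excess_kurtosis(f):
    """Average excess kurtosis of prior function values; 0 for an exact GP."""
    m2 = (f**2).mean(axis=1)
    m4 = (f**4).mean(axis=1)
    return (m4 / m2**2 - 3.0).mean()

narrow = np.hstack([bnn_prior_sample(x, 5) for _ in range(2000)])     # width 5
wide = np.hstack([bnn_prior_sample(x, 2000) for _ in range(2000)])    # width 2000
K = nngp_kernel(x)

print("prior variance  (width 5 / width 2000 / NNGP):",
      narrow.var(axis=1).mean(), wide.var(axis=1).mean(), np.diag(K).mean())
print("excess kurtosis (width 5 / width 2000):",
      excess_kurtosis(narrow), excess_kurtosis(wide))
```

Under these assumptions, the prior variance roughly agrees with the NNGP diagonal at both widths, while the excess kurtosis is clearly positive at width 5 and shrinks toward zero at width 2000, illustrating why width changes more than one model property at once when comparing BNNs.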
