April 2, 2024, 7:44 p.m. | Hanlin Yu, Marcelo Hartmann, Bernardo Williams, Arto Klami

cs.LG updates on arXiv.org arxiv.org

arXiv:2303.05101v4 Announce Type: replace
Abstract: Stochastic-gradient sampling methods are often used to perform Bayesian inference on neural networks. It has been observed that methods incorporating notions of differential geometry tend to perform better, with the Riemannian metric improving posterior exploration by accounting for local curvature. However, existing methods often resort to simple diagonal metrics to remain computationally efficient, which sacrifices some of these gains. We propose two non-diagonal metrics that can be used …
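To make the setting concrete, the sketch below shows one step of the diagonal-metric baseline the abstract refers to: stochastic-gradient Langevin dynamics preconditioned by a simple diagonal metric built from a running second-moment estimate of the gradient (in the spirit of pSGLD). This is a minimal illustration, not the authors' proposed non-diagonal method; the function name, hyperparameters, and the omission of the curvature-correction (Gamma) term are assumptions made for brevity.

```python
import numpy as np

def diag_preconditioned_sgld_step(theta, grad_log_post, v, step_size=1e-4,
                                  alpha=0.99, eps=1e-5,
                                  rng=np.random.default_rng()):
    """One step of diagonal-metric preconditioned SGLD (illustrative sketch).

    theta         : current parameter vector
    grad_log_post : stochastic gradient of the log posterior at theta
    v             : running second-moment estimate defining the diagonal metric
    """
    # RMSprop-style update of the running second moment of the gradient.
    v = alpha * v + (1.0 - alpha) * grad_log_post ** 2
    # Diagonal inverse metric G^{-1}: a cheap proxy for local curvature.
    g_inv = 1.0 / (np.sqrt(v) + eps)
    # Langevin update: preconditioned drift plus preconditioned Gaussian noise.
    # (The curvature-correction term of the full sampler is omitted here.)
    noise = rng.normal(size=theta.shape) * np.sqrt(step_size * g_inv)
    theta = theta + 0.5 * step_size * g_inv * grad_log_post + noise
    return theta, v
```

A non-diagonal metric, as proposed in the paper, would replace the elementwise `g_inv` with a full (or structured) matrix inverse and matrix square root, which is what typically makes such samplers expensive at neural-network scale.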
