Jan. 27, 2022, 2:11 a.m. | Francisco Vargas, Andrius Ovsianas, David Fernandes, Mark Girolami, Neil D. Lawrence, Nikolas Nüsken

cs.LG updates on arXiv.org

In this work we explore a new framework for approximate Bayesian inference on
large datasets based on stochastic control. We advocate stochastic control as a
finite-time, low-variance alternative to popular steady-state methods such as
stochastic gradient Langevin dynamics (SGLD). Furthermore, we discuss and adapt
the existing theoretical guarantees of this framework and establish connections
to existing variational inference (VI) routines in SDE-based models.

arxiv bayesian learning ml
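
To make the contrast concrete, below is a minimal sketch (not the authors' code) of the two sampler families on a toy one-dimensional Gaussian posterior: a Langevin chain, whose samples are correct only in the steady-state limit, versus a controlled SDE integrated over the fixed horizon [0, 1]. The closed-form drift used for the controlled SDE is the known Föllmer drift for this Gaussian toy target; in the stochastic-control framework the drift would instead be parameterized and learned, which is not shown here. All function and variable names are illustrative assumptions.

```python
# Toy comparison: steady-state Langevin sampling vs. a finite-time
# controlled SDE, both targeting the 1-D posterior N(mu, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5  # toy posterior N(1, 0.25)

def grad_log_post(x):
    # d/dx log N(x; mu, sigma^2)
    return -(x - mu) / sigma**2

# Steady-state baseline: Langevin dynamics. With the full gradient used
# here this is the unadjusted Langevin algorithm; SGLD would replace
# grad_log_post with a minibatch estimate. Samples are only correct
# asymptotically, as the chain approaches its stationary law, and the
# step size trades discretization bias against mixing speed.
def langevin(x0, step=1e-2, n_iters=20_000):
    x, samples = x0, []
    for _ in range(n_iters):
        x = x + 0.5 * step * grad_log_post(x) + np.sqrt(step) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

# Finite-time alternative (schematic): integrate a controlled SDE
#     dX_t = u(X_t, t) dt + dW_t,   t in [0, 1],   X_0 = 0,
# whose drift u is chosen so that X_1 follows the posterior. In the
# stochastic-control framework the drift is learned; here we plug in the
# closed-form Föllmer drift for this Gaussian toy target purely to
# illustrate the fixed-horizon simulation loop.
def controlled_sde(n_steps=200, n_paths=5_000):
    dt = 1.0 / n_steps
    x = np.zeros(n_paths)  # all paths start at the origin
    for k in range(n_steps):
        t = k * dt
        u = (mu + (sigma**2 - 1.0) * x) / (1.0 + (sigma**2 - 1.0) * t)
        x = x + u * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return x  # draws at t = 1, exact up to Euler-Maruyama error

chain = langevin(0.0)[5_000:]  # discard burn-in
final = controlled_sde()
print(f"Langevin    mean={chain.mean():.3f}  std={chain.std():.3f}")
print(f"controlled  mean={final.mean():.3f}  std={final.std():.3f}")
```

Both runs should recover mean 1.0 and standard deviation 0.5; the point of the contrast is that the controlled SDE produces draws after a fixed horizon, while the Langevin chain needs burn-in and is biased by its step size.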
