July 20, 2022, 1:10 a.m. | Lei Li, Yuliang Wang

cs.LG updates on arXiv.org

We establish a sharp uniform-in-time error estimate for Stochastic Gradient
Langevin Dynamics (SGLD), a popular sampling algorithm. Under mild
assumptions, we obtain a uniform-in-time $O(\eta^2)$ bound on the
KL-divergence between the SGLD iteration and the Langevin diffusion, where
$\eta$ is the step size (or learning rate). Our analysis also remains valid
for varying step sizes. Based on this, we obtain an $O(\eta)$ bound on the
distance between the SGLD iteration and the invariant distribution …
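For context, SGLD discretizes the Langevin diffusion using a stochastic (e.g. mini-batch) estimate of the gradient of the potential. Below is a minimal sketch of that iteration, not the authors' code: the quadratic potential, the Gaussian gradient noise, and the helper names (`stochastic_grad_U`, `sgld`) are illustrative assumptions chosen only to show the update rule $x_{k+1} = x_k - \eta\,\widehat{\nabla U}(x_k) + \sqrt{2\eta}\,\xi_k$.

```python
# Minimal SGLD sketch (hypothetical setup, not from the paper).
# Target potential U(x) = |x|^2 / 2, so the invariant law is standard Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad_U(x):
    """Unbiased noisy estimate of grad U(x) = x (toy stand-in for a mini-batch gradient)."""
    return x + 0.1 * rng.standard_normal(x.shape)

def sgld(x0, eta, n_steps):
    """Iterate x_{k+1} = x_k - eta * g(x_k) + sqrt(2 * eta) * xi_k, xi_k ~ N(0, I)."""
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        x = x - eta * stochastic_grad_U(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Smaller eta shrinks the bias relative to the invariant distribution
# (an O(eta) effect, in line with the bound discussed in the abstract).
samples = sgld(x0=np.zeros(2), eta=1e-2, n_steps=10_000)
print(samples[-1000:].mean(axis=0), samples[-1000:].var(axis=0))
```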
