March 26, 2024, 4:41 a.m. | Lorenz Vaitl, Ludwig Winkler, Lorenz Richter, Pan Kessel

cs.LG updates on arXiv.org

arXiv:2403.15881v1 Announce Type: new
Abstract: Recent work shows that path gradient estimators for normalizing flows have lower variance than standard estimators for variational inference, resulting in improved training. However, they are often computationally far more expensive and cannot be applied to maximum likelihood training in a scalable manner, which severely hinders their widespread adoption. In this work, we overcome these crucial limitations. Specifically, we propose a fast path gradient estimator which improves computational efficiency …
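For intuition on why path gradients reduce variance: in the "sticking the landing" formulation, the reverse-KL gradient splits into a pathwise term (through the reparameterized sample) and a score term that has zero expectation; the path gradient drops the score term by detaching the flow's parameters wherever they enter the density evaluation. The sketch below illustrates this with a single affine flow layer in PyTorch. It is a minimal illustration of the general technique, not the paper's proposed fast estimator, and the names `mu`, `log_sigma`, and `target_log_prob` are assumptions for the example.

```python
import torch

torch.manual_seed(0)
mu = torch.zeros(2, requires_grad=True)         # flow shift parameter (assumed)
log_sigma = torch.zeros(2, requires_grad=True)  # flow log-scale parameter (assumed)

def target_log_prob(x):
    # Toy unnormalized target: standard normal shifted by 1 (assumption).
    return -0.5 * ((x - 1.0) ** 2).sum(-1)

def sample_flow(n):
    # Reparameterized sample x = f_theta(z), z ~ N(0, I).
    z = torch.randn(n, 2)
    return mu + log_sigma.exp() * z

def log_q(x, detach_params):
    # Flow density via change of variables; detaching the parameters here
    # removes the score term and leaves only the pathwise dependence via x.
    m = mu.detach() if detach_params else mu
    ls = log_sigma.detach() if detach_params else log_sigma
    z = (x - m) * torch.exp(-ls)                          # inverse flow
    base = -0.5 * (z ** 2).sum(-1) - z.shape[-1] * 0.9189385332046727
    return base - ls.sum()                                # minus log|det Jacobian|

x = sample_flow(1024)

# Standard (total) gradient: log q is differentiated both through the sample
# path and through the density's own parameters.
std_loss = (log_q(x, detach_params=False) - target_log_prob(x)).mean()

# Path gradient: the dropped score term has zero mean under q, so the
# estimator stays unbiased while typically having lower variance.
path_loss = (log_q(x, detach_params=True) - target_log_prob(x)).mean()

g_std = torch.autograd.grad(std_loss, (mu, log_sigma), retain_graph=True)
g_path = torch.autograd.grad(path_loss, (mu, log_sigma))
print("standard gradient norms:", [g.norm().item() for g in g_std])
print("path gradient norms:    ", [g.norm().item() for g in g_path])
```

Both estimators target the same reverse-KL gradient in expectation; the difference is only which terms of the chain rule are kept, which is why the change amounts to a single detach in the density evaluation.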
