Guaranteed Bounds for Posterior Inference in Universal Probabilistic Programming. (arXiv:2204.02948v1 [cs.PL])
April 7, 2022, 1:11 a.m. | Raven Beutner, Luke Ong, Fabian Zaiser
cs.LG updates on arXiv.org
We propose a new method to approximate the posterior distribution of
probabilistic programs by means of computing guaranteed bounds. The starting
point of our work is an interval-based trace semantics for a recursive,
higher-order probabilistic programming language with continuous distributions.
Taking the form of (super-/subadditive) measures, these lower/upper bounds are
non-stochastic and provably correct: using the semantics, we prove that the
actual posterior of a given program is sandwiched between the lower and upper
bounds (soundness); moreover, the bounds converge …
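The abstract's core idea, sandwiching a posterior between non-stochastic lower and upper bounds, can be illustrated with a toy sketch. The code below is an assumption-laden illustration, not the paper's interval-based trace semantics: it uses a hypothetical one-dimensional model (Uniform(0, 1) prior, likelihood w(x) = x, so the true posterior probability P(x > 0.5) is 0.75) and bounds the unnormalized posterior mass on each bin of a partition using the monotonicity of w, then combines those into sound bounds on the normalized posterior probability.

```python
# Illustrative sketch only: NOT the paper's method, just the same flavor of
# guaranteed-bounds reasoning on a toy model (all names here are assumptions).
# Model: x ~ Uniform(0, 1), likelihood w(x) = x (one observed Bernoulli(x)
# success), so the true posterior P(x > 0.5 | data) = 0.75.

def posterior_bounds(query_lo, n_bins=100):
    """Guaranteed lower/upper bounds on P(x > query_lo | data)."""
    width = 1.0 / n_bins
    l_reg = u_reg = l_comp = u_comp = 0.0
    for i in range(n_bins):
        a, b = i * width, (i + 1) * width
        # w(x) = x is increasing, so its min/max on [a, b] are a and b;
        # these bound the bin's unnormalized posterior mass from both sides.
        lo_mass, hi_mass = a * width, b * width
        if a >= query_lo:             # bin lies inside the query region
            l_reg += lo_mass
            u_reg += hi_mass
        else:                         # bin lies in the complement
            l_comp += lo_mass
            u_comp += hi_mass
    # Sound bounds on the *normalized* probability: the lower bound assumes
    # the least mass in the region and the most in the complement, and
    # vice versa for the upper bound.
    lower = l_reg / (l_reg + u_comp)
    upper = u_reg / (u_reg + l_comp)
    return lower, upper

lo, hi = posterior_bounds(0.5, n_bins=100)
print(lo, hi)  # the true value 0.75 is guaranteed to lie in [lo, hi]
```

Refining the partition (larger `n_bins`) tightens the interval around the true posterior probability, mirroring the convergence property the abstract claims for the actual semantics.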