Towards Model-Agnostic Posterior Approximation for Fast and Accurate Variational Autoencoders
March 15, 2024, 4:42 a.m. | Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
cs.LG updates on arXiv.org arxiv.org
Abstract: Inference for Variational Autoencoders (VAEs) consists of learning two models: (1) a generative model, which transforms a simple distribution over a latent space into the distribution over observed data, and (2) an inference model, which approximates the posterior of the latent codes given data. The two components are learned jointly via a lower bound to the generative model's log marginal likelihood. In early phases of joint training, the inference model poorly approximates the latent code …
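The two-component setup the abstract describes — a generative model p(x|z) with a standard-normal prior over latent codes, and an inference model q(z|x) trained jointly by maximizing a lower bound (the ELBO) on the log marginal likelihood — can be sketched in a minimal, self-contained form. This is an illustrative sketch of the standard VAE objective, not the paper's proposed method; the Gaussian encoder parameters `mu`, `sigma` and the `decode` function are hypothetical stand-ins for learned networks.

```python
import math
import random

def kl_gaussian(mu, sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ): the regularizer that
    # pulls the approximate posterior toward the standard-normal prior.
    return 0.5 * (mu**2 + sigma**2 - math.log(sigma**2) - 1.0)

def elbo_estimate(x, mu, sigma, decode, n_samples=1000, seed=0):
    # Monte Carlo estimate of the ELBO with the reparameterization trick:
    #   z = mu + sigma * eps,  eps ~ N(0, 1)
    # Likelihood is taken as p(x|z) = N(x | decode(z), 1) for illustration.
    rng = random.Random(seed)
    recon = 0.0
    for _ in range(n_samples):
        eps = rng.gauss(0.0, 1.0)
        z = mu + sigma * eps
        x_mean = decode(z)
        recon += -0.5 * ((x - x_mean) ** 2 + math.log(2 * math.pi))
    recon /= n_samples
    # ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))  <=  log p(x)
    return recon - kl_gaussian(mu, sigma)
```

With an identity decoder and q(z|x) fixed to the prior, the estimate stays below the exact log marginal likelihood (here x ~ N(0, 2), so log p(0) = -0.5·log(4π)), illustrating that the bound is loose when the inference model poorly matches the true posterior — the failure mode of early joint training that the abstract highlights.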