Nov. 2, 2022, 1:12 a.m. | James Langley, Miguel Monteiro, Charles Jones, Nick Pawlowski, Ben Glocker

cs.LG updates on arXiv.org

Variational autoencoders (VAEs) are a popular class of deep generative models
with many variants and a wide range of applications. Improvements upon the
standard VAE mostly focus on the modelling of the posterior distribution over
the latent space and the properties of the neural network decoder. In contrast,
improving the model for the observational distribution is rarely considered and
typically defaults to a pixel-wise independent categorical or normal
distribution. In image synthesis, sampling from such distributions produces
spatially-incoherent results with …
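
To make the default observation model concrete, below is a minimal PyTorch sketch of a VAE whose decoder parameterises a pixel-wise independent normal distribution over the image. All class names, layer sizes, and the image dimension are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class PixelIndependentVAE(nn.Module):
    """Hypothetical minimal VAE with the default observation model the
    abstract describes: an independent Normal per pixel, no cross-pixel
    covariance. Sizes are placeholders for a flattened 28x28 image."""

    def __init__(self, image_dim=28 * 28, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU())
        # The decoder outputs one mean and one log-variance per pixel;
        # there are no terms coupling different pixels.
        self.obs_mu = nn.Linear(256, image_dim)
        self.obs_logvar = nn.Linear(256, image_dim)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick for the latent posterior sample.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        d = self.decoder(z)
        return self.obs_mu(d), self.obs_logvar(d), mu, logvar

model = PixelIndependentVAE()
x = torch.rand(4, 28 * 28)  # dummy batch of flattened images
obs_mu, obs_logvar, mu, logvar = model(x)

# Drawing from the observation distribution adds noise independently to
# every pixel, so samples carry uncorrelated pixel noise.
obs_dist = torch.distributions.Normal(obs_mu, (0.5 * obs_logvar).exp())
sample = obs_dist.sample()
```

Because the per-pixel noise in `sample` is uncorrelated across pixels, such draws look spatially incoherent, and in practice only the predicted mean `obs_mu` resembles a usable image output.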

Tags: arxiv, observation space, uncertainty, variational autoencoders
