Nov. 2, 2022, 1:15 a.m. | James Langley, Miguel Monteiro, Charles Jones, Nick Pawlowski, Ben Glocker

cs.CV updates on arXiv.org

Variational autoencoders (VAEs) are a popular class of deep generative models
with many variants and a wide range of applications. Improvements upon the
standard VAE mostly focus on the modelling of the posterior distribution over
the latent space and the properties of the neural network decoder. In contrast,
improving the model for the observational distribution is rarely considered and
typically defaults to a pixel-wise independent categorical or normal
distribution. In image synthesis, sampling from such distributions produces
spatially-incoherent results with …
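The "pixel-wise independent" default the abstract refers to is easy to make concrete. Below is a minimal sketch, assuming a PyTorch implementation; the class name, architecture, and fixed noise scale are hypothetical illustrations, not the authors' model. It shows why sampling from such a likelihood gives spatially incoherent images: each pixel is drawn from its own independent Normal, with no cross-pixel covariance.

```python
# Minimal sketch (hypothetical, not the paper's code) of the default
# pixel-wise independent Gaussian observation model p(x|z).
import torch
import torch.nn as nn


class PixelwiseGaussianDecoder(nn.Module):
    """Likelihood factorises over pixels: p(x|z) = prod_i N(x_i | mu_i(z), sigma^2)."""

    def __init__(self, latent_dim=16, out_dim=28 * 28, sigma=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
        )
        self.sigma = sigma  # fixed per-pixel noise scale (assumption)

    def log_likelihood(self, x, z):
        mu = self.net(z)
        # Independence across pixels: the joint log-likelihood is a plain
        # sum of per-pixel Gaussian terms; no covariance is modelled.
        dist = torch.distributions.Normal(mu, self.sigma)
        return dist.log_prob(x).sum(dim=-1)


# Sampling draws every pixel independently, which is what produces the
# spatially incoherent, per-pixel-noise look the abstract describes.
decoder = PixelwiseGaussianDecoder()
z = torch.randn(4, 16)
samples = torch.distributions.Normal(decoder.net(z), decoder.sigma).sample()
```

Modelling structured uncertainty in the observation space, as the paper proposes, would replace this factorised Normal with a distribution that captures correlations between pixels.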

Tags: arXiv, observation space, uncertainty, variational autoencoders
