Disentangling Variational Autoencoders. (arXiv:2211.07700v1 [cs.LG])
Nov. 16, 2022, 2:11 a.m. | Rafael Pastrana
cs.LG updates on arXiv.org
A variational autoencoder (VAE) is a probabilistic machine learning framework
for posterior inference that projects an input set of high-dimensional data to
a lower-dimensional latent space. The latent space learned with a VAE offers
exciting opportunities to develop new data-driven design processes in creative
disciplines, in particular, to automate the generation of multiple novel
designs that are aesthetically reminiscent of the input data but that were
unseen during training. However, the learned latent space is typically
disorganized and entangled: traversing …
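The posterior inference the abstract describes rests on two standard VAE ingredients: the reparameterization trick, which makes sampling from the approximate posterior differentiable, and a closed-form KL term that pulls that posterior toward a standard-normal prior. A minimal pure-Python sketch of both (function names are illustrative, not from the paper):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    # Sample z = mu + sigma * eps with eps ~ N(0, 1); in a full VAE this
    # keeps the sample differentiable w.r.t. mu and log_var.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    # Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)) for a diagonal
    # Gaussian: 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2).
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# A posterior that already matches the standard-normal prior costs nothing:
print(kl_divergence([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

The entanglement problem the paper targets arises because nothing in this objective forces individual latent dimensions to align with distinct factors of variation in the data.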