Disentangling Variational Autoencoders. (arXiv:2211.07700v1 [cs.LG])
Nov. 16, 2022, 2:15 a.m. | Rafael Pastrana
cs.CV updates on arXiv.org arxiv.org
A variational autoencoder (VAE) is a probabilistic machine learning framework
for posterior inference that projects an input set of high-dimensional data to
a lower-dimensional, latent space. The latent space learned with a VAE offers
exciting opportunities to develop new data-driven design processes in creative
disciplines, in particular, to automate the generation of multiple novel
designs that are aesthetically reminiscent of the input data but that were
unseen during training. However, the learned latent space is typically
disorganized and entangled: traversing …
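The projection the abstract describes can be sketched in a few lines. Below is a minimal, hypothetical illustration (NumPy, random untrained weights, a linear encoder) of the two VAE ingredients mentioned: an encoder that maps high-dimensional inputs to the parameters of a Gaussian posterior over a lower-dimensional latent space, and the reparameterization trick used to sample from it. It is a sketch of the general VAE framework, not the method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear encoder: maps input x to the parameters (mean and
    # log-variance) of a diagonal Gaussian posterior q(z|x) over latent codes.
    mu = x @ W_mu
    logvar = x @ W_logvar
    return mu, logvar

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps the sampling step differentiable during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Project 8-dimensional inputs down to a 2-dimensional latent space.
x = rng.standard_normal((4, 8))            # batch of 4 high-dimensional points
W_mu = rng.standard_normal((8, 2)) * 0.1   # untrained weights, illustration only
W_logvar = rng.standard_normal((8, 2)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar)
print(z.shape)  # each input now has a low-dimensional latent code
```

In a trained VAE, nearby points in this latent space decode to similar designs, which is what makes latent-space traversal attractive for generating novel variations; the entanglement problem the abstract raises is that, without extra constraints, individual latent dimensions do not correspond to interpretable factors of variation.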