all AI news
A Geometric Perspective on Variational Autoencoders. (arXiv:2209.07370v2 [stat.ML] UPDATED)
Nov. 4, 2022, 1:13 a.m. | Clément Chadebec, Stéphanie Allassonnière
stat.ML updates on arXiv.org arxiv.org
This paper introduces a new interpretation of the Variational Autoencoder
framework from a fully geometric point of view. We argue that vanilla VAE
models naturally unveil a Riemannian structure in their latent space, and that
taking these geometrical aspects into account can lead to better
interpolations and an improved generation procedure. The proposed sampling
method consists of sampling from the uniform distribution derived
intrinsically from the learned Riemannian latent space, and we show that using
this scheme can …
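The abstract only sketches the sampler, so as a rough illustration (not the paper's actual algorithm), the core idea of "uniform with respect to the learned Riemannian latent space" can be mimicked with rejection sampling: accept latent points with probability proportional to the Riemannian volume element sqrt(det G(z)). The conformal metric `conformal_factor` below (a sum of RBF bumps around hypothetical centroids) is an assumption for the sketch; the paper learns its metric from the trained VAE.

```python
import math
import random

def conformal_factor(z, centroids, lbd=0.01, tau=1.0):
    # Hypothetical conformal metric G(z) = s(z) * I: s(z) is a sum of
    # RBF bumps around the given centroids, plus a small floor lbd.
    # This stands in for the metric a trained VAE would induce.
    s = sum(math.exp(-sum((zi - ci) ** 2 for zi, ci in zip(z, c)) / tau ** 2)
            for c in centroids)
    return s + lbd

def riemannian_uniform_sample(n, centroids, bounds=(-3.0, 3.0), seed=0):
    """Rejection-sample n latent points with density proportional to
    sqrt(det G(z)) = s(z)**(d/2), i.e. uniform with respect to the
    Riemannian volume measure rather than the Lebesgue measure."""
    rng = random.Random(seed)
    d = len(centroids[0])
    # Each RBF bump is at most 1, so s(z) <= len(centroids) + lbd everywhere;
    # this gives a valid envelope constant for rejection sampling.
    m = (len(centroids) + 0.01) ** (d / 2)
    out = []
    while len(out) < n:
        z = [rng.uniform(*bounds) for _ in range(d)]
        vol = conformal_factor(z, centroids) ** (d / 2)
        if rng.uniform(0.0, 1.0) * m <= vol:
            out.append(z)
    return out
```

Because sqrt(det G(z)) is largest near the centroids, accepted samples concentrate where the (hypothetical) metric assigns more volume, which is the qualitative behavior the abstract describes.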
More from arxiv.org / stat.ML updates on arXiv.org
Mixture of partially linear experts
3 hours ago |
arxiv.org
Adaptive deep learning for nonlinear time series models
1 day, 3 hours ago |
arxiv.org
A Full Adagrad algorithm with O(Nd) operations
1 day, 3 hours ago |
arxiv.org
Minimax Regret Learning for Data with Heterogeneous Subgroups
1 day, 3 hours ago |
arxiv.org
Jobs in AI, ML, Big Data
Founding AI Engineer, Agents
@ Occam AI | New York
AI Engineer Intern, Agents
@ Occam AI | US
AI Research Scientist
@ Vara | Berlin, Germany and Remote
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Sr. BI Analyst
@ AkzoNobel | Pune, IN