$t^3$-Variational Autoencoder: Learning Heavy-tailed Data with Student's t and Power Divergence
March 5, 2024, 2:45 p.m. | Juno Kim, Jaehyuk Kwon, Mincheol Cho, Hyunjong Lee, Joong-Ho Won
cs.LG updates on arXiv.org
Abstract: The variational autoencoder (VAE) typically employs a standard normal prior as a regularizer for the probabilistic latent encoder. However, the Gaussian tail often decays too quickly to effectively accommodate the encoded points, failing to preserve crucial structures hidden in the data. In this paper, we explore the use of heavy-tailed models to combat over-regularization. Drawing upon insights from information geometry, we propose $t^3$VAE, a modified VAE framework that incorporates Student's t-distributions for the prior, encoder, …
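The abstract's core claim is that a Gaussian prior decays too fast in the tails, so encoded points far from the origin are over-penalized, while a Student's t prior is more forgiving. A minimal sketch of that tail comparison (using only standard log-density formulas, not the paper's actual $t^3$VAE objective; the degrees-of-freedom value `nu=5.0` is an illustrative choice):

```python
import math

def normal_logpdf(x):
    # Standard normal log-density: the usual VAE prior.
    return -0.5 * math.log(2 * math.pi) - 0.5 * x * x

def student_t_logpdf(x, nu=5.0):
    # Student's t log-density with nu degrees of freedom;
    # its tails decay polynomially rather than like exp(-x^2/2).
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi)
            - (nu + 1) / 2 * math.log1p(x * x / nu))

# Near the mode the two priors assign similar density...
print(normal_logpdf(0.0), student_t_logpdf(0.0))
# ...but far in the tail the Gaussian penalizes an encoded
# point much more heavily than the heavy-tailed t prior.
print(normal_logpdf(6.0), student_t_logpdf(6.0))
```

Under a Gaussian prior, a latent code at distance 6 from the origin pays roughly 18 nats more than at the mode, versus only a few nats under the t prior, which is the over-regularization effect the paper targets.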