Oct. 23, 2023, 8:48 p.m. | /u/Chromobacterium

r/MachineLearning (www.reddit.com)

A while back, I came across the paper "[From Variational to Deterministic Autoencoders](https://arxiv.org/abs/1903.12436)", which offers a novel insight into the generative properties of autoencoders by framing the objective through the lens of regularization. However, I couldn't help but notice that the deterministic models it studies feel incomplete, mainly because they lack a built-in sampling mechanism (something the authors themselves acknowledge).
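For anyone who wants a concrete picture of what "generation via regularization" looks like, here is a minimal sketch of a regularized-autoencoder objective in the spirit of the paper (PyTorch-style; the coefficients, function name, and the choice of plain weight decay as the decoder regularizer are my own illustration, not the authors' code):

```python
import torch
import torch.nn.functional as F

def rae_loss(encoder, decoder, x, beta=1e-2, weight_decay=1e-4):
    """Sketch of a regularized (deterministic) autoencoder objective."""
    # Deterministic encoding: no reparameterized sampling as in a VAE.
    z = encoder(x)
    x_hat = decoder(z)
    # Reconstruction term.
    recon = F.mse_loss(x_hat, x)
    # Explicit penalty on the latent codes stands in for the KL term.
    z_reg = beta * z.pow(2).sum(dim=1).mean()
    # One of the paper's decoder-smoothness options: L2 weight decay.
    dec_reg = weight_decay * sum(p.pow(2).sum() for p in decoder.parameters())
    return recon + z_reg + dec_reg
```

Since the encoder is deterministic, the paper recovers sampling only ex post, e.g. by fitting a density (such as a Gaussian mixture) to the training latents and drawing from that, which is exactly the part that felt bolted-on to me.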

As a short recap of the paper: the authors surgically decompose the variational autoencoder objective into …
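For reference, the starting point of that decomposition is the standard VAE evidence lower bound, written in conventional notation (this is the textbook form, not a quote from the paper):

```latex
\mathcal{L}(\theta, \phi; x)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}}
  \;-\; \underbrace{\mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)}_{\text{regularization}}
```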
