Feb. 9, 2024, 5:44 a.m. | Beatrix M. G. Nielsen, Anders Christensen, Andrea Dittadi, Ole Winther

cs.LG updates on arXiv.org

Diffusion models may be viewed as hierarchical variational autoencoders (VAEs) with two improvements: parameter sharing for the conditional distributions in the generative process and efficient computation of the loss as independent terms over the hierarchy. We consider two changes to the diffusion model that retain these advantages while adding flexibility to the model. Firstly, we introduce a data- and depth-dependent mean function in the diffusion process, which leads to a modified diffusion loss. Our proposed framework, DiffEnc, achieves a statistically …
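To illustrate the first change, here is a minimal sketch of a forward (noising) process whose mean depends on both the data and the depth/time, as opposed to the standard process whose mean is simply a scaled copy of the data. The names (`alpha_sigma`, `encoder_mean`, `noisy_latent`) and the toy schedule are illustrative assumptions, not the paper's actual encoder or loss.

```python
import numpy as np

def alpha_sigma(t):
    """Toy variance-preserving schedule: alpha^2 + sigma^2 = 1 (assumption)."""
    return np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)

def encoder_mean(x, t):
    """Placeholder data- and depth-dependent mean m(x, t).
    In the paper this would be a learned encoder; here it merely interpolates
    toward a constant (mean) image as t grows, purely for illustration."""
    flat = x.mean(keepdims=True) * np.ones_like(x)
    return (1.0 - t) * x + t * flat

def noisy_latent(x, t, rng):
    """Sample z_t ~ N(alpha_t * m(x, t), sigma_t^2 I), replacing the standard
    diffusion latent z_t ~ N(alpha_t * x, sigma_t^2 I)."""
    alpha, sigma = alpha_sigma(t)
    return alpha * encoder_mean(x, t) + sigma * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)              # toy "data" vector
z_mid = noisy_latent(x, t=0.5, rng=rng) # latent at an intermediate depth
```

Because the mean now varies with depth, the per-level diffusion loss terms pick up an extra contribution from how m(x, t) changes with t, which is what the abstract refers to as the modified diffusion loss.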

