Web: http://arxiv.org/abs/2206.08889

June 20, 2022, 1:11 a.m. | Lucas Theis, Tim Salimans, Matthew D. Hoffman, Fabian Mentzer

cs.LG updates on arXiv.org

We describe a novel lossy compression approach called DiffC, which is based on unconditional diffusion generative models. Unlike modern compression schemes that rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise. We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform, outperforming the state-of-the-art generative compression method HiFiC on ImageNet 64x64. DiffC only uses a single …
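
As a rough illustration of the corrupt-then-denoise idea in the abstract, here is a minimal Python sketch. It is not DiffC itself: the "efficient communication" of the noisy pixels requires channel-coding machinery that is elided here, and the closed-form shrinkage denoiser below is a stand-in for a trained diffusion model's denoiser. All function names and parameters are hypothetical.

import numpy as np

def corrupt(x, sigma, rng):
    # Sender side: corrupt the pixels with Gaussian noise at level sigma.
    # In DiffC, a noisy sample like this is what gets communicated to the
    # receiver, rather than quantized transform coefficients.
    return x + sigma * rng.normal(size=x.shape)

def denoise(z, sigma):
    # Receiver side: stand-in for the diffusion model's denoiser.
    # For a standard-normal source, the MMSE estimate given z = x + sigma*n
    # is the linear shrinkage z / (1 + sigma^2); a real receiver would run
    # (part of) the reverse diffusion process instead.
    return z / (1.0 + sigma**2)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64, 3))  # stand-in for a 64x64 RGB image
z = corrupt(x, sigma=0.5, rng=rng)    # "transmitted" noisy pixels
x_hat = denoise(z, sigma=0.5)         # reconstruction at the receiver
print("MSE:", float(np.mean((x - x_hat) ** 2)))

Intuitively, a larger sigma means the noisy pixels carry less information about x, so there is less to communicate but the reconstruction is coarser; in this toy picture, sigma is the rate-distortion knob.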

arxiv compression ml
