Feb. 1, 2024, 12:42 p.m. | Yura Perugachi-Diaz, Arwin Gansekoele, Sandjai Bhulai

cs.CV updates on arXiv.org

Neural image compression has made substantial progress. State-of-the-art models are based on variational autoencoders and outperform classical models. Neural compression models learn to encode an image into a quantized latent representation that can be efficiently sent to the decoder, which decodes the quantized latent into a reconstructed image. While these models have proven successful in practice, they yield sub-optimal results because of imperfect optimization and limited encoder and decoder capacity. Recent work shows …
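The abstract does not include code, but the encode → quantize → decode pipeline it describes can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration (names such as `TinyNeuralCodec` are invented for this example); real state-of-the-art codecs additionally use learned entropy models and hyperpriors, which are omitted here.

```python
import torch
import torch.nn as nn


class TinyNeuralCodec(nn.Module):
    """Minimal autoencoder-style codec: encode -> quantize -> decode."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Encoder maps the image to a lower-resolution latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=5, stride=2, padding=2),
        )
        # Decoder reconstructs the image from the quantized latent.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def quantize(self, latent: torch.Tensor) -> torch.Tensor:
        # Rounding is non-differentiable, so use a straight-through estimator:
        # round in the forward pass, pass gradients through unchanged backward.
        return latent + (torch.round(latent) - latent).detach()

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        latent = self.encoder(image)
        quantized = self.quantize(latent)
        return self.decoder(quantized)


if __name__ == "__main__":
    codec = TinyNeuralCodec()
    x = torch.rand(1, 3, 64, 64)              # dummy 64x64 RGB image
    x_hat = codec(x)                          # reconstructed image
    distortion = nn.functional.mse_loss(x_hat, x)
    print(x_hat.shape, distortion.item())
```

The quantized latent is what would be entropy-coded and transmitted in a full codec; the imperfect optimization the abstract mentions refers to the gap between the latent the encoder produces and the latent that would minimize rate-distortion for a given image.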

Tags: autoencoders, variational autoencoders, neural compression, state-of-the-art models, overfitting, latent representation, encoder, decoder, cs.CV, cs.LG, stat.ML
