Jan. 17, 2022, 2:10 a.m. | Mingtian Zhang, Jamie Townsend, Ning Kang, David Barber

cs.LG updates on arXiv.org

The recently proposed Neural Local Lossless Compression (NeLLoC), which is
based on a local autoregressive model, has achieved state-of-the-art (SOTA)
out-of-distribution (OOD) generalization performance in the image compression
task. In addition to encouraging OOD generalization, the local model also
allows parallel inference in the decoding stage. In this paper, we propose
a parallelization scheme for local autoregressive models. We discuss the
practicalities of implementing this scheme, and provide experimental evidence
of significant gains in compression runtime compared to …
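
The reason locality enables parallel decoding is that a pixel depends only on already decoded pixels inside a small causal window, so pixels whose windows do not overlap can be decoded simultaneously. The sketch below is a minimal illustration of this idea as a wavefront schedule; it assumes a causal window of half-width w (w columns to the left on the current row and to either side on rows above) and is not the exact scheme proposed in the paper.

# Illustrative wavefront schedule for parallel decoding with a local
# autoregressive model (assumed causal window of half-width w; hypothetical
# helper, not the authors' implementation).
from collections import defaultdict

def wavefront_schedule(height, width, w):
    """Group pixel coordinates into steps that can be decoded in parallel.

    Pixel (i, j) is assigned to step t = i * (w + 1) + j, which is strictly
    later than every pixel in its causal local window, so all pixels sharing
    a step are mutually independent given the earlier steps.
    """
    steps = defaultdict(list)
    for i in range(height):
        for j in range(width):
            steps[i * (w + 1) + j].append((i, j))
    return [steps[t] for t in sorted(steps)]

# Example: a 6x8 image with window half-width 2. Sequential decoding takes
# 48 steps; the wavefront needs 23, and each step's pixels can be pushed
# through the model (and the entropy coder) as one batch.
schedule = wavefront_schedule(6, 8, 2)
print(len(schedule), "parallel steps instead of", 6 * 8)
print("pixels decoded together at step 10:", schedule[10])

Under these assumptions the number of decoding steps grows roughly with height * (w + 1) rather than with the total pixel count, which is where the reported runtime gains would come from.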

arxiv compression
