Jan. 24, 2022, 2:10 a.m. | Alberto Delmas Lascorz (1), Mostafa Mahmoud (1), Andreas Moshovos (1 and 2) ((1) University of Toronto (2) Vector Institute)

cs.LG updates on arXiv.org

Data accesses between on- and off-chip memories account for a large fraction
of overall energy consumption during inference with deep learning networks. We
present APack, a simple and effective lossless off-chip memory compression technique for fixed-point quantized models. APack reduces data widths by
exploiting the non-uniform value distribution in deep learning applications.
APack can be used to increase the effective memory capacity, to reduce off-chip
traffic, and/or to achieve the desired performance/energy targets while using
smaller off-chip memories. APack builds …
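
The excerpt truncates before APack's algorithmic details, so the following is only a minimal sketch, in Python, of the general idea the abstract states: because quantized values in deep learning models cluster heavily around zero, grouping values and storing each group at the narrowest bit width that fits it shrinks the average stored width losslessly. The group size, zigzag mapping, and packing layout below are illustrative assumptions, not APack's actual format.

```python
# Minimal sketch of lossless per-group bit-packing for quantized tensors.
# NOTE: this is NOT APack itself (the excerpt ends before the algorithm
# is described); it only illustrates the property the abstract relies on:
# near-zero-heavy value distributions allow narrower encodings.

from typing import List, Tuple

def zigzag(v: int) -> int:
    """Map signed ints to unsigned so small magnitudes stay small."""
    return (v << 1) ^ (v >> 31)

def unzigzag(u: int) -> int:
    """Inverse of zigzag()."""
    return (u >> 1) ^ -(u & 1)

def pack_group(vals: List[int]) -> Tuple[int, int]:
    """Pack a group at the smallest width that fits every value.
    Returns (width, packed bits as a Python int, MSB-first)."""
    us = [zigzag(v) for v in vals]
    width = max(1, max(u.bit_length() for u in us))
    bits = 0
    for u in us:
        bits = (bits << width) | u
    return width, bits

def unpack_group(width: int, bits: int, n: int) -> List[int]:
    """Recover the original signed values from a packed group."""
    mask = (1 << width) - 1
    us = [(bits >> (width * (n - 1 - i))) & mask for i in range(n)]
    return [unzigzag(u) for u in us]

# Round trip: a typical near-zero-heavy group needs far fewer than 8 bits.
group = [0, -1, 2, 0, 1, -3, 0, 0]          # int8 values, 8 bits each raw
w, packed = pack_group(group)
assert unpack_group(w, packed, len(group)) == group
print(f"raw: {8 * len(group)} bits, packed: {w * len(group)} bits (width={w})")
```

A real hardware compressor would additionally need fixed-rate framing, per-group metadata placement, and parallel packers for throughput; this sketch only demonstrates the lossless round trip and the width savings.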

arxiv, compression, data, deep learning, deep learning inference, learning
