Web: http://arxiv.org/abs/2201.08830

Jan. 24, 2022, 2:10 a.m. | Alberto Delmas Lascorz (1), Mostafa Mahmoud (1), Andreas Moshovos (1 and 2) ((1) University of Toronto (2) Vector Institute)


Data accesses between on- and off-chip memories account for a large fraction
of overall energy consumption during inference with deep learning networks. We
present APack, a simple and effective lossless off-chip memory compression
technique for fixed-point quantized models. APack reduces data widths by
exploiting the non-uniform value distribution in deep learning applications.
APack can be used to increase the effective memory capacity, to reduce off-chip
traffic, and/or to achieve the desired performance/energy targets while using
smaller off-chip memories. APack builds …
