Feb. 11, 2022, 2:11 a.m. | Qing Jin, Jian Ren, Richard Zhuang, Sumant Hanumante, Zhengang Li, Zhiyu Chen, Yanzhi Wang, Kaiyuan Yang, Sergey Tulyakov

cs.LG updates on arXiv.org

Neural network quantization is a promising compression technique for reducing
memory footprint and energy consumption, potentially enabling real-time
inference. However, a performance gap remains between quantized and
full-precision models. To narrow it, existing quantization approaches require
high-precision INT32 or full-precision multiplication during inference for
scaling or dequantization, which introduces a noticeable cost in memory,
speed, and energy. To tackle these issues, we present F8Net, a novel
quantization framework consisting of only fixed-point 8-bit
multiplication. …
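To illustrate the idea the abstract alludes to, here is a minimal sketch (not the paper's actual implementation) of fixed-point int8 arithmetic where scales are powers of two, so rescaling back to 8 bits is a bit shift rather than an INT32 or floating-point multiplication. All function names and the choice of fractional bits are hypothetical:

```python
import numpy as np

def quantize_fixed_point(x, frac_bits):
    """Map floats to int8 with scale 2**-frac_bits (hypothetical helper)."""
    q = np.round(x * (1 << frac_bits))
    return np.clip(q, -128, 127).astype(np.int8)

def fixed_point_matmul(qa, qb, frac_a, frac_b, frac_out):
    """int8 x int8 -> int8, rescaled purely by shifting.

    The accumulator is int32 (standard for int8 GEMM), but the rescale
    back to int8 is a rounding right shift, i.e. a fixed-point multiply
    by a power of two -- no float or high-precision multiplication.
    """
    acc = qa.astype(np.int32) @ qb.astype(np.int32)        # int32 accumulator
    shift = frac_a + frac_b - frac_out                     # combine the scales
    out = np.right_shift(acc + (1 << (shift - 1)), shift)  # round to nearest
    return np.clip(out, -128, 127).astype(np.int8)

# Example: 0.5 * 0.5 with 6 fractional bits (scale 1/64).
q = quantize_fixed_point(np.array([[0.5]]), 6)   # -> 32
out = fixed_point_matmul(q, q, 6, 6, 6)          # -> 16, i.e. 16/64 = 0.25
```

The point of the sketch is only that, once scales are constrained to powers of two, the entire inference path stays in integer adds, multiplies, and shifts.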

arxiv cv fixed-point network quantization