F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization. (arXiv:2202.05239v1 [cs.CV])
Feb. 11, 2022, 2:11 a.m. | Qing Jin, Jian Ren, Richard Zhuang, Sumant Hanumante, Zhengang Li, Zhiyu Chen, Yanzhi Wang, Kaiyuan Yang, Sergey Tulyakov
cs.LG updates on arXiv.org
Neural network quantization is a promising compression technique that reduces memory footprint and energy consumption, potentially enabling real-time inference. However, a performance gap remains between quantized and full-precision models. To narrow it, existing quantization approaches require high-precision INT32 or full-precision multiplication during inference for scaling or dequantization, which introduces a noticeable cost in memory, speed, and energy. To tackle these issues, we present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication. …
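The core idea the abstract gestures at — replacing float or INT32 rescaling with pure fixed-point arithmetic — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the choice of fractional lengths, and the truncating right-shift are all illustrative assumptions. The point is that the product of two 8-bit fixed-point values can be brought back to 8 bits with an integer multiply and a bit-shift, with no floating-point scale factor involved.

```python
def to_fixed(x, frac_len, bits=8):
    """Quantize a float to a signed fixed-point integer with `frac_len`
    fractional bits, saturating to the signed `bits`-bit range.
    (Illustrative helper, not from the paper.)"""
    q = round(x * (1 << frac_len))
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))

def fixed_mul(qa, fa, qb, fb, out_frac, bits=8):
    """Multiply two fixed-point values (qa with fa fractional bits,
    qb with fb) and return an 8-bit result with `out_frac` fractional
    bits. Rescaling is a pure bit-shift -- no float, no INT32 scale."""
    prod = qa * qb                    # exact product, frac length fa + fb
    shift = fa + fb - out_frac
    q = prod >> shift if shift >= 0 else prod << -shift
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))

# 0.75 * 0.5 with 7 fractional bits on inputs and output:
a = to_fixed(0.75, 7)          # 96, i.e. 96/128 = 0.75
b = to_fixed(0.5, 7)           # 64, i.e. 64/128 = 0.5
c = fixed_mul(a, 7, b, 7, 7)   # 48, i.e. 48/128 = 0.375
```

The shift amount plays the role a float scale factor plays in conventional dequantization; choosing per-layer fractional lengths so that such shifts suffice is, roughly, what a fixed-point-only framework must get right.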