Feb. 26, 2024, 5:43 a.m. | Andrey Kuzmin, Mart Van Baalen, Yuwei Ren, Markus Nagel, Jorn Peters, Tijmen Blankevoort

cs.LG updates on arXiv.org

arXiv:2208.09225v2 Announce Type: replace
Abstract: When quantizing neural networks for efficient inference, low-bit integers are the go-to format for efficiency. However, low-bit floating point numbers have an extra degree of freedom: some bits are assigned to work on an exponential scale instead. This paper investigates this benefit of the floating point format for neural network inference in depth. We detail the choices that can be made for the FP8 format, including the important choice of the number of bits for the mantissa …
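The exponent/mantissa split mentioned in the abstract is the core design choice of FP8. As a minimal illustration (not the paper's implementation, and with NaN/inf handling omitted), the hypothetical decoder below maps an 8-bit pattern to a real value for a configurable split, showing how more exponent bits buy dynamic range while more mantissa bits buy precision:

```python
# A minimal sketch, assuming a sign | exponent | mantissa layout with an
# IEEE-style bias and subnormals; special codes (NaN/inf) are not handled.
def fp8_to_float(bits: int, exp_bits: int = 4) -> float:
    """Decode an 8-bit pattern under a given exponent/mantissa split (e.g. E4M3, E5M2)."""
    man_bits = 7 - exp_bits
    sign = -1.0 if (bits >> 7) & 1 else 1.0
    exponent = (bits >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = bits & ((1 << man_bits) - 1)
    bias = (1 << (exp_bits - 1)) - 1
    if exponent == 0:  # subnormal range: no implicit leading one
        return sign * (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1.0 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

# Compare the two common splits: wider range (E5M2) vs finer precision (E4M3).
for e in (4, 5):
    vals = sorted({fp8_to_float(b, exp_bits=e) for b in range(256)})
    print(f"E{e}M{7 - e}: max magnitude {max(abs(v) for v in vals):.1f}")
```

Running this prints a far larger maximum magnitude for E5M2 than for E4M3, while E4M3 places more representable values near zero, which is the trade-off the paper analyzes for network inference.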

