Web: http://arxiv.org/abs/2206.07527

June 16, 2022, 1:12 a.m. | Alessandro Pappalardo, Yaman Umuroglu, Michaela Blott, Jovan Mitrevski, Ben Hawks, Nhan Tran, Vladimir Loncar, Sioni Summers, Hendrik Borras, Jules Mu

stat.ML updates on arXiv.org

We present extensions to the Open Neural Network Exchange (ONNX) intermediate
representation format to represent arbitrary-precision quantized neural
networks. We first introduce support for low precision quantization in existing
ONNX-based quantization formats by leveraging integer clipping, resulting in
two new backward-compatible variants: the quantized operator format with
clipping and quantize-clip-dequantize (QCDQ) format. We then introduce a novel
higher-level ONNX format called quantized ONNX (QONNX) that introduces three
new operators -- Quant, BipolarQuant, and Trunc -- in order to represent
uniform …

