Jan. 27, 2022, 2:11 a.m. | Jinjie Zhang, Yixuan Zhou, Rayan Saab

cs.LG updates on arXiv.org

While neural networks have been remarkably successful in a wide array of
applications, implementing them on resource-constrained hardware remains an
area of intense research. Replacing the weights of a neural network with
quantized (e.g., 4-bit or binary) counterparts yields massive savings in
computation cost, memory, and power consumption. We modify a post-training
neural-network quantization method, GPFQ, that is based on a greedy
path-following mechanism, and rigorously analyze its error. We prove that for
quantizing a single-layer network, the …
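To make the greedy path-following idea concrete, here is a minimal sketch of sequential weight quantization in the spirit of GPFQ: weights are quantized one at a time, and a running residual carries forward the error made so far so that later choices can compensate. The function name, the `alphabet` argument, and the demo data are illustrative assumptions; the paper's exact GPFQ algorithm and its error analysis may differ in details.

```python
import numpy as np

def greedy_path_following_quantize(w, X, alphabet):
    """Sequentially quantize one neuron's weights w (shape (N,)).

    X        : (m, N) matrix of input samples; column X[:, t] multiplies w[t].
    alphabet : 1-D array of allowed quantized levels (e.g. {-1, 0, +1}).

    A running residual u tracks the gap between X @ w (so far) and X @ q,
    so each greedy choice of q[t] also absorbs earlier quantization error.
    """
    N = w.shape[0]
    q = np.zeros(N)
    u = np.zeros(X.shape[0])              # accumulated error in sample space
    for t in range(N):
        x_t = X[:, t]
        target = u + w[t] * x_t           # what q[t] * x_t should approximate
        denom = x_t @ x_t
        c = (x_t @ target) / denom if denom > 0 else 0.0
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]  # nearest level
        u = target - q[t] * x_t           # carry the new residual forward
    return q

# Tiny demo on random data: compare the pre-activations X @ w and X @ q.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 32))
w = rng.standard_normal(32)
q = greedy_path_following_quantize(w, X, np.array([-1.0, 0.0, 1.0]))
print(np.linalg.norm(X @ w - X @ q) / np.linalg.norm(X @ w))
```

Under this reading, the "path-following" aspect is the residual update: rather than rounding each weight independently, each step quantizes the weight plus the accumulated error, which is what makes a rigorous per-layer error analysis tractable.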

Tags: arxiv, networks, neural networks, training
