Post-training Quantization for Neural Networks with Provable Guarantees. (arXiv:2201.11113v1 [cs.LG])
Web: http://arxiv.org/abs/2201.11113
Jan. 27, 2022, 2:11 a.m. | Jinjie Zhang, Yixuan Zhou, Rayan Saab
cs.LG updates on arXiv.org
While neural networks have been remarkably successful in a wide array of
applications, implementing them in resource-constrained hardware remains an
area of intense research. By replacing the weights of a neural network with
quantized (e.g., 4-bit or binary) counterparts, massive savings in computation
cost, memory, and power consumption are attained. We modify a post-training
neural-network quantization method, GPFQ, that is based on a greedy
path-following mechanism, and rigorously analyze its error. We prove that for
quantizing a single-layer network, the …
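The greedy path-following mechanism can be illustrated concretely: quantize a layer's weights one coordinate at a time, at each step picking the alphabet element that keeps the quantized layer's running output closest to the full-precision output on some calibration data. Below is a minimal NumPy sketch of that idea under stated assumptions; it is not the paper's exact GPFQ implementation, and the function name `quantize_layer_gpfq_sketch`, the calibration matrix `X`, and the uniform 4-bit alphabet are all illustrative choices.

```python
import numpy as np

def quantize_layer_gpfq_sketch(W, X, alphabet):
    """Greedy path-following quantization of one linear layer (sketch).

    W        : (n_in, n_out) float weights of the layer.
    X        : (m, n_in) calibration inputs seen by the layer.
    alphabet : 1-D array of allowed quantized values.

    Each column of W (one neuron) is quantized coordinate by coordinate,
    greedily choosing the alphabet element that keeps the quantized
    layer's running output closest to the full-precision output.
    """
    n_in, n_out = W.shape
    Q = np.zeros_like(W)
    for j in range(n_out):
        u = np.zeros(X.shape[0])            # accumulated output error
        for t in range(n_in):
            x_t = X[:, t]
            v = u + W[t, j] * x_t           # error if w_t were quantized to 0
            denom = x_t @ x_t
            if denom == 0.0:
                # no calibration signal on this coordinate: round directly
                q = alphabet[np.argmin(np.abs(alphabet - W[t, j]))]
            else:
                # minimize ||v - q * x_t||^2 over q: the unconstrained
                # minimizer is c, so round c to the nearest alphabet element
                c = (x_t @ v) / denom
                q = alphabet[np.argmin(np.abs(alphabet - c))]
            Q[t, j] = q
            u = v - q * x_t                 # carry residual error forward
    return Q

# Toy usage (all data here is synthetic, for illustration only):
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))
X = rng.standard_normal((256, 64))          # assumed calibration inputs
delta = np.max(np.abs(W))
alphabet = np.linspace(-delta, delta, 16)   # a uniform 4-bit grid (assumed)
Q = quantize_layer_gpfq_sketch(W, X, alphabet)
rel_err = np.linalg.norm(X @ W - X @ Q) / np.linalg.norm(X @ W)
```

The key design point, per the abstract's description, is that each coordinate's quantization decision accounts for the error accumulated by all previous decisions (the residual `u`), rather than rounding each weight independently.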