An Empirical Study of Low Precision Quantization for TinyML. (arXiv:2203.05492v1 [cs.LG])
March 11, 2022, 2:11 a.m. | Shaojie Zhuo, Hongyu Chen, Ramchalam Kinattinkara Ramakrishnan, Tommy Chen, Chen Feng, Yicheng Lin, Parker Zhang, Liang Shen
cs.LG updates on arXiv.org
Tiny machine learning (tinyML) has emerged over the past few years, aiming
to deploy machine learning models on embedded AI processors with highly
constrained memory and computation capacity. Low-precision quantization is an
important model compression technique that can greatly reduce both the memory
consumption and the computation cost of model inference. In this study, we focus on
post-training quantization (PTQ) algorithms that quantize a model to low-bit
(less than 8-bit) precision using only a small set of calibration data, and
benchmark …
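The abstract describes PTQ with a small calibration set: a handful of samples is used to estimate the value range, from which a scale and zero point are derived to map floats to low-bit integers. The sketch below is a generic affine (asymmetric) quantizer in NumPy, not the paper's specific algorithm; the 4-bit setting and the calibration helper are illustrative assumptions.

```python
import numpy as np

def calibrate_range(samples):
    """Estimate the value range from a small calibration set (illustrative)."""
    lo = min(float(s.min()) for s in samples)
    hi = max(float(s.max()) for s in samples)
    return lo, hi

def quantize(x, lo, hi, bits=4):
    """Affine quantization of x to `bits`-bit unsigned integers."""
    qmax = (1 << bits) - 1
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

# Calibrate on a few sample tensors, then quantize a weight matrix to 4 bits.
rng = np.random.default_rng(0)
calib = [rng.normal(size=256).astype(np.float32) for _ in range(8)]
lo, hi = calibrate_range(calib)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale, zp = quantize(w, lo, hi, bits=4)
err = np.abs(w - dequantize(q, scale, zp)).max()
```

The round-trip error is bounded by roughly half a quantization step (scale/2) for values inside the calibrated range, which is why the quality of the calibration data matters so much at very low bit widths.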
Jobs in AI, ML, Big Data
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist
@ Lurra Systems | Melbourne
Senior Machine Learning Engineer (MLOps)
@ Promaton | Remote, Europe
Senior Business Intelligence Developer / Analyst
@ Transamerica | Work From Home, USA
Data Analyst (All Levels)
@ Noblis | Bethesda, MD, United States