Adaptive quantization with mixed-precision based on low-cost proxy
Feb. 28, 2024, 5:46 a.m. | Junzhe Chen, Qiao Yang, Senmao Tian, Shunli Zhang
cs.CV updates on arXiv.org
Abstract: Deploying complex neural network models on hardware with limited resources is critical. This paper proposes a novel model quantization method, the Low-Cost Proxy-Based Adaptive Mixed-Precision Model Quantization (LCPAQ), which comprises three key modules. The hardware-aware module is designed around hardware limitations, while an adaptive mixed-precision quantization module evaluates quantization sensitivity using Hessian matrix and Pareto frontier techniques. Integer linear programming is used to fine-tune …
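The bit-allocation step the abstract mentions — assigning per-layer precisions under a size budget, guided by Hessian-based sensitivity — can be sketched as a small integer program. The sketch below is illustrative only (the sensitivities, the error proxy 2^(-2b), and the brute-force solver are assumptions, not the paper's actual LCPAQ formulation, which uses a proper ILP solver):

```python
from itertools import product

# Hypothetical per-layer sensitivities (e.g., Hessian-trace proxies) and
# parameter counts; these numbers are illustrative, not from the paper.
sensitivities = [5.0, 1.0, 3.0]   # higher = more sensitive to quantization
params = [1e6, 4e6, 2e6]          # parameters per layer
bit_choices = (4, 8)              # candidate bit widths per layer
budget_bits = 8 * 4e6             # total model-size budget in bits

def allocate_bits(sens, params, choices, budget):
    """Brute-force the tiny integer program: minimize total
    sensitivity-weighted quantization error subject to a size budget.
    Error at b bits is modeled as 2^(-2b), a common proxy for the
    noise power of uniform quantization."""
    best, best_cost = None, float("inf")
    for assign in product(choices, repeat=len(sens)):
        size = sum(b * p for b, p in zip(assign, params))
        if size > budget:
            continue  # violates the hardware/size constraint
        cost = sum(s * 2.0 ** (-2 * b) for s, b in zip(sens, assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best

print(allocate_bits(sensitivities, params, bit_choices, budget_bits))
# → (8, 4, 4): the most sensitive layer keeps 8 bits within the budget
```

For realistic layer counts, brute force is infeasible; this is where an ILP solver (as in the paper) or a Pareto-frontier search over accuracy/size trade-offs comes in.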