Feb. 28, 2024, 5:46 a.m. | Junzhe Chen, Qiao Yang, Senmao Tian, Shunli Zhang

cs.CV updates on arXiv.org

arXiv:2402.17706v1 Announce Type: new
Abstract: Deploying complicated neural network models on hardware with limited resources is critical. This paper proposes a novel model quantization method, named the Low-Cost Proxy-Based Adaptive Mixed-Precision Model Quantization (LCPAQ), which contains three key modules. The hardware-aware module is designed by considering hardware limitations, while an adaptive mixed-precision quantization module evaluates quantization sensitivity using the Hessian matrix and Pareto frontier techniques. Integer linear programming is used to fine-tune …
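The abstract outlines a pipeline in which per-layer sensitivities (estimated from the Hessian) drive an integer linear program that assigns bit-widths under a hardware budget. The sketch below illustrates only that bit-allocation step under stated assumptions: the sensitivity values, the error proxy sensitivity * 2^(-2*bits), the parameter counts, and the size budget are all illustrative placeholders rather than the paper's formulation, and SciPy's milp solver (SciPy >= 1.9) stands in for whichever ILP solver the authors use.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds  # requires SciPy >= 1.9

# Assumed inputs (hypothetical numbers, not from the paper):
params = np.array([0.5e6, 2.0e6, 4.0e6, 1.0e6])   # parameter count per layer
sensitivity = np.array([8.0, 3.0, 1.5, 5.0])       # e.g. a Hessian-trace proxy per layer
bit_options = np.array([2, 4, 8])                  # candidate bit-widths
budget_bits = 4.0 * params.sum()                   # size budget: 4 bits per weight on average

n_layers, n_bits = len(params), len(bit_options)

# Decision variable x[i, b] = 1 if layer i uses bit_options[b]; flattened layer-major.
# Objective: minimise the summed quantization-error proxy over all layers.
error = np.outer(sensitivity, 2.0 ** (-2.0 * bit_options)).ravel()

# Constraint 1: each layer selects exactly one bit-width.
pick_one = np.zeros((n_layers, n_layers * n_bits))
for i in range(n_layers):
    pick_one[i, i * n_bits:(i + 1) * n_bits] = 1.0

# Constraint 2: total model size (params * bits) stays within the budget.
size_row = np.outer(params, bit_options).ravel()[None, :]

constraints = [
    LinearConstraint(pick_one, lb=1.0, ub=1.0),
    LinearConstraint(size_row, lb=-np.inf, ub=budget_bits),
]

res = milp(
    c=error,
    constraints=constraints,
    integrality=np.ones(n_layers * n_bits),  # binary variables via integrality + [0, 1] bounds
    bounds=Bounds(0, 1),
)

assignment = res.x.reshape(n_layers, n_bits).argmax(axis=1)
print("chosen bit-widths per layer:", bit_options[assignment])

Running this prints one bit-width per layer; more sensitive layers receive higher precision while the average stays within the budget, which is the trade-off the Pareto-frontier analysis in the paper is meant to expose.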
