Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers
Feb. 15, 2024, 5:41 a.m. | Junhan Kim, Kyungphil Park, Chungman Lee, Ho-young Kim, Joonyoung Kim, Yongkweon Jeon
cs.LG updates on arXiv.org (arxiv.org)
Abstract: With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile phones and TVs. Existing PTQ schemes, however, consume considerable time and resources, which can become a bottleneck in practice, where frequent model updates and repeated hyper-parameter tuning are required. As a cost-effective alternative, one-shot PTQ schemes have been proposed. Still, their performance is somewhat limited because they …
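For readers unfamiliar with the setting, below is a minimal sketch of weight-only round-to-nearest post-training quantization for a single linear layer, in plain NumPy. It illustrates the generic PTQ idea only; the function names and the symmetric per-channel scheme are illustrative assumptions, not the method proposed in this paper.

```python
# A minimal sketch of weight-only round-to-nearest (RTN) post-training
# quantization for one linear layer. Illustrative only; this is NOT the
# paper's proposed scheme.
import numpy as np

def quantize_weights_rtn(w: np.ndarray, n_bits: int = 8):
    """Symmetric per-output-channel quantization of a weight matrix.

    w: float32 weights of shape (out_features, in_features).
    Returns integer codes and per-channel scales such that
    w is approximately codes * scales[:, None].
    """
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for 8 bits
    scales = np.abs(w).max(axis=1) / qmax        # one scale per output channel
    scales = np.maximum(scales, 1e-12)           # guard against all-zero rows
    codes = np.clip(np.round(w / scales[:, None]), -qmax - 1, qmax).astype(np.int8)
    return codes, scales.astype(np.float32)

def dequantize(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from codes and scales."""
    return codes.astype(np.float32) * scales[:, None]

# Usage: quantize random weights to 4 bits and check reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
codes, scales = quantize_weights_rtn(w, n_bits=4)
w_hat = dequantize(codes, scales)
print("mean abs error:", np.abs(w - w_hat).mean())
```

Naive rounding of this kind tends to degrade accuracy at very low bit-widths; one-shot PTQ schemes of the sort the abstract discusses instead refine the quantized weights using a small calibration set, trading a little extra compute for better fidelity.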