FrameQuant: Flexible Low-Bit Quantization for Transformers
March 12, 2024, 4:41 a.m. | Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh
cs.LG updates on arXiv.org arxiv.org
Abstract: Transformers are the backbone of powerful foundation models for many Vision and Natural Language Processing tasks. But their compute and memory/storage footprint is large, so serving such models is expensive, often requiring high-end hardware. To mitigate this difficulty, Post-Training Quantization seeks to modify a pre-trained model and quantize it to eight bits or lower, significantly improving compute, memory, and latency efficiency. Such models have been successfully quantized to four bits with some performance loss. In this work, …
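To make the idea concrete, here is a minimal sketch of generic symmetric per-tensor post-training quantization in NumPy. This is an illustration of the basic PTQ recipe the abstract refers to, not the FrameQuant method itself; the function names and the 8-bit setting are assumptions for the example.

```python
# Generic post-training quantization sketch (NOT the FrameQuant algorithm):
# map float weights to low-bit signed integers with a single scale factor.
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 8):
    """Quantize a weight tensor to signed integers with one shared scale."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax        # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Quantization shrinks storage (int8 vs float32) but perturbs weights:
# the round-trip error per weight is at most half the scale step.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric(w, bits=8)
w_hat = dequantize(q, s)
max_err = np.abs(w - w_hat).max()
```

Going below eight bits (as in the four-bit results the abstract mentions) uses the same mechanics with a coarser grid, which is why some accuracy loss appears and why methods like FrameQuant aim to recover it.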