April 17, 2024, 4:43 a.m. | Yilong Zhao, Chien-Yu Lin, Kan Zhu, Zihao Ye, Lequn Chen, Size Zheng, Luis Ceze, Arvind Krishnamurthy, Tianqi Chen, Baris Kasikci

cs.LG updates on arXiv.org arxiv.org

arXiv:2310.19102v3 Announce Type: replace
Abstract: The growing demand for Large Language Models (LLMs) in applications such as content generation, intelligent chatbots, and sentiment analysis poses considerable challenges for LLM service providers. To efficiently use GPU resources and boost throughput, batching multiple requests has emerged as a popular paradigm; to further speed up batching, LLM quantization techniques reduce memory consumption and increase computing capacity. However, prevalent quantization schemes (e.g., 8-bit weight-activation quantization) cannot fully leverage the capabilities of modern GPUs, such …
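The abstract describes how quantization reduces memory consumption by storing model parameters in low-bit integer formats. As an illustration only (this is a generic symmetric int8 weight-quantization sketch, not the paper's specific scheme; the function names are hypothetical):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
# int8 storage uses 4x less memory than float32; the reconstruction
# error is bounded by half the quantization step (scale / 2).
print(np.abs(dequantize(q, s) - w).max() <= s / 2 + 1e-6)
```

Schemes like the 8-bit weight-activation quantization mentioned in the abstract apply this idea to both weights and activations so that the matrix multiplications themselves can run on integer hardware units.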
