March 5, 2024, 2:41 p.m. | Juntao Zhao, Borui Wan, Yanghua Peng, Haibin Lin, Chuan Wu

cs.LG updates on arXiv.org

arXiv:2403.01136v1 Announce Type: new
Abstract: Recent breakthroughs in large language models (LLMs) have demonstrated impressive performance on various tasks. The immense sizes of LLMs lead to very high resource demands and costs for running the models. Although the models are largely served on uniform high-end GPUs today, utilizing a heterogeneous cluster with a mix of available high- and low-capacity GPUs could substantially reduce the serving cost. There is a lack of designs to support efficient LLM serving using …
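
The claimed savings hinge on spreading the serving load across whatever high- and low-capacity GPUs happen to be available. As a rough illustration only, not the paper's actual design, the sketch below greedily fills a target throughput with the best throughput-per-dollar GPU type first and spills over to weaker cards when the better ones run out; all GPU names, throughput numbers, and cost figures are hypothetical placeholders.

```python
from dataclasses import dataclass
import math


@dataclass
class GpuType:
    name: str
    tokens_per_sec: float   # hypothetical serving throughput per GPU
    cost_per_hour: float    # hypothetical cost per GPU-hour
    available: int          # how many GPUs of this type the cluster offers


def plan_fleet(target_tps: float, gpu_types: list[GpuType]) -> dict[str, int]:
    """Greedy sketch: meet the target throughput with the cheapest
    throughput-per-dollar GPU type first, spilling over to other types
    when it runs out. Ignores memory limits, batching effects, and
    parallelization overhead that a real serving planner must model."""
    plan: dict[str, int] = {}
    remaining = target_tps
    for g in sorted(gpu_types, key=lambda g: g.cost_per_hour / g.tokens_per_sec):
        if remaining <= 0:
            break
        count = min(g.available, math.ceil(remaining / g.tokens_per_sec))
        if count > 0:
            plan[g.name] = count
            remaining -= count * g.tokens_per_sec
    return plan


if __name__ == "__main__":
    # All figures below are invented for illustration; they are not
    # measurements or results from the paper.
    cluster = [
        GpuType("high_capacity", tokens_per_sec=4000, cost_per_hour=4.0, available=4),
        GpuType("low_capacity",  tokens_per_sec=1200, cost_per_hour=1.0, available=16),
    ]
    plan = plan_fleet(target_tps=24000, gpu_types=cluster)
    by_name = {g.name: g for g in cluster}
    cost = sum(n * by_name[name].cost_per_hour for name, n in plan.items())
    print(plan, f"-> ${cost:.2f}/hour")
```

A real heterogeneous serving planner must additionally weigh model partitioning, quantization choices, and communication overhead between devices, none of which this toy cost model captures.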

