April 24, 2024, 4:47 a.m. | Chen Zhang, Zhuorui Liu, Dawei Song

cs.CL updates on arXiv.org

arXiv:2404.14897v1 Announce Type: new
Abstract: With the increasingly large scale of (causal) large language models (LLMs), inference efficiency has become one of the core concerns alongside improved performance. In contrast to the memory footprint, the latency bottleneck seems to be of greater importance, as an LLM (e.g., GPT-4) can receive billions of requests per day. The bottleneck stems mainly from the autoregressive nature of LLMs, where tokens can only be generated sequentially during decoding. To …
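
To make the sequential dependency concrete, here is a minimal, illustrative Python sketch of greedy autoregressive decoding. It is not from the paper: `toy_next_token_logits` is a hypothetical stand-in for a real LLM forward pass. The point is that each new token requires a forward pass conditioned on all previously generated tokens, so the decode steps form a serial chain and cannot run in parallel.

```python
# Minimal sketch of autoregressive (greedy) decoding.
# Each step depends on all previously generated tokens, so the T decode
# steps must run one after another; this serial chain is the latency
# bottleneck the abstract refers to.

import random

VOCAB_SIZE = 16

def toy_next_token_logits(prefix: list[int]) -> list[float]:
    """Hypothetical stand-in for a full LLM forward pass over the prefix."""
    random.seed(sum(prefix))  # deterministic toy behavior for illustration
    return [random.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]

def greedy_decode(prompt: list[int], max_new_tokens: int) -> list[int]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        # One full forward pass per generated token.
        logits = toy_next_token_logits(tokens)
        next_token = max(range(VOCAB_SIZE), key=logits.__getitem__)
        tokens.append(next_token)  # step t+1 consumes the output of step t
    return tokens

print(greedy_decode([1, 2, 3], max_new_tokens=5))
```

Because the loop body cannot start step t+1 until step t has produced its token, total decode latency grows linearly with output length regardless of available hardware parallelism.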

