April 16, 2024, 4:51 a.m. | Tian Jin, Wanzin Yazar, Zifei Xu, Sayeh Sharify, Xin Wang

cs.CL updates on arXiv.org

arXiv:2404.09336v1 Announce Type: new
Abstract: Large language models (LLMs) can solve challenging tasks. However, their inference computation on modern GPUs is highly inefficient: the number of tokens a model must attend to grows as it generates new ones. To address this inefficiency, we capitalize on LLMs' problem-solving capabilities to optimize their own inference-time efficiency. We demonstrate this with two specific tasks: (a) evaluating complex arithmetic expressions and (b) summarizing news articles. For both tasks, we create custom datasets to fine-tune …
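To make the stated inefficiency concrete, here is a minimal Python sketch (not from the paper) that estimates the attention cost of autoregressive decoding; the function names and the `d_model` default are illustrative assumptions. Because each decode step attends over the full cached context, the total attention work grows quadratically with sequence length, which is the cost term the paper targets by shrinking the attended context.

```python
# Minimal sketch (illustrative, not from the paper): why per-token
# decoding cost grows with context length in autoregressive generation.

def attention_flops_per_token(context_len: int, d_model: int = 4096) -> int:
    # One decode step: the new query attends over `context_len` cached
    # keys/values. Cost ~ 2*context_len*d_model for QK^T plus
    # 2*context_len*d_model for the attention-weighted sum over V.
    return 4 * context_len * d_model

def total_decode_flops(prompt_len: int, new_tokens: int,
                       d_model: int = 4096) -> int:
    # Sum the per-step cost as the attended context grows by one
    # token at each step.
    return sum(
        attention_flops_per_token(prompt_len + i, d_model)
        for i in range(new_tokens)
    )

if __name__ == "__main__":
    # Generating 512 tokens after a 512-token prompt costs roughly 3x
    # the attention FLOPs of generating 512 tokens from an empty prompt.
    print(total_decode_flops(512, 512))  # long attended context
    print(total_decode_flops(0, 512))    # short attended context
```

Under these assumptions, any method that lets the model attend to fewer tokens while generating, as the abstract proposes, reduces this quadratic term directly.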

