April 16, 2024, 4:51 a.m. | Tian Jin, Wanzin Yazar, Zifei Xu, Sayeh Sharify, Xin Wang

cs.CL updates on arXiv.org

arXiv:2404.09336v1 Announce Type: new
Abstract: Large language models (LLMs) can solve challenging tasks. However, their inference computation on modern GPUs is highly inefficient due to the increasing number of tokens they must attend to as they generate new ones. To address this inefficiency, we capitalize on LLMs' problem-solving capabilities to optimize their own inference-time efficiency. We demonstrate this with two specific tasks: (a) evaluating complex arithmetic expressions and (b) summarizing news articles. For both tasks, we create custom datasets to fine-tune …
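The inefficiency the abstract points to is that autoregressive decoding attends to an ever-growing context: token t must attend to all t previous positions, so total attention work scales quadratically with output length. Below is a minimal back-of-the-envelope sketch (not from the paper; the d_model and n_layers values and the FLOP constant are illustrative assumptions) showing that growth.

```python
# Illustrative sketch: per-token attention cost grows linearly with
# position, so total decoding cost grows quadratically with length.
# The model dimensions and the 4-multiply-add constant per cached
# position are rough assumptions, not figures from the paper.

def attention_flops_per_step(step: int, d_model: int = 4096, n_layers: int = 32) -> int:
    """Approximate attention FLOPs to generate one token at position `step`.

    Each layer scores the new query against `step` cached keys and takes
    a weighted sum over `step` cached values: ~4 * d_model multiply-adds
    per cached position.
    """
    return n_layers * 4 * d_model * step

for length in (512, 1024, 2048):
    total = sum(attention_flops_per_step(t) for t in range(1, length + 1))
    print(f"{length:5d} generated tokens -> ~{total:.2e} attention FLOPs")
# Doubling the number of generated tokens roughly quadruples total
# attention work, which is why shortening what the model must attend
# to improves inference efficiency.
```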
