April 11, 2024, 4:42 a.m. | Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi

cs.LG updates on arXiv.org

arXiv:2404.06910v1 Announce Type: cross
Abstract: Despite the successes of large language models (LLMs), they exhibit significant drawbacks, particularly when processing long contexts. Their inference cost scales quadratically with respect to sequence length, making it expensive for deployment in some real-world text processing applications, such as retrieval-augmented generation (RAG). Additionally, LLMs also exhibit the "distraction phenomenon," where irrelevant context in the prompt degrades output quality. To address these drawbacks, we propose a novel RAG prompting methodology, superposition prompting, which can be …

