Feb. 1, 2024, 12:41 p.m. | Parth Sarthi, Salman Abdullah, Aditi Tuli, Shubh Khanna, Anna Goldie, Christopher D. Manning

cs.CL updates on arXiv.org

Retrieval-augmented language models can better adapt to changes in world state and incorporate long-tail knowledge. However, most existing methods retrieve only short contiguous chunks from a retrieval corpus, limiting holistic understanding of the overall document context. We introduce the novel approach of recursively embedding, clustering, and summarizing chunks of text, constructing a tree with differing levels of summarization from the bottom up. At inference time, our RAPTOR model retrieves from this tree, integrating information across lengthy documents at different levels …
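
To make the bottom-up construction concrete, here is a minimal sketch of a RAPTOR-style tree, not the authors' released implementation: `embed` and `summarize` are hypothetical stand-ins for a real embedding model and a summarization LLM, and plain k-means is assumed in place of the paper's actual clustering procedure. Retrieval here simply scores the query against nodes from every level of the tree.

```python
"""Minimal sketch of a RAPTOR-style retrieval tree (illustration only)."""
from dataclasses import dataclass, field

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class Node:
    text: str
    embedding: np.ndarray
    children: list = field(default_factory=list)


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hypothetical embedder: a toy hash-seeded unit vector, for illustration only.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


def summarize(texts: list[str]) -> str:
    # Hypothetical summarizer: a real system would call an LLM here.
    return " ".join(t.split(".")[0] for t in texts)[:500]


def build_tree(chunks: list[str], branching: int = 4) -> list[Node]:
    """Recursively embed, cluster, and summarize chunks, bottom up."""
    layer = [Node(text=c, embedding=embed(c)) for c in chunks]
    all_nodes = list(layer)
    while len(layer) > 1:
        k = max(1, len(layer) // branching)
        X = np.stack([n.embedding for n in layer])
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        parents = []
        for cluster_id in range(k):
            members = [n for n, lab in zip(layer, labels) if lab == cluster_id]
            if not members:
                continue
            # Summarize each cluster and embed the summary as a parent node.
            summary = summarize([m.text for m in members])
            parents.append(Node(text=summary, embedding=embed(summary), children=members))
        all_nodes.extend(parents)
        layer = parents  # recurse upward on the new layer of summaries
    return all_nodes  # nodes from all levels, so retrieval can mix granularities


def retrieve(query: str, nodes: list[Node], top_k: int = 3) -> list[str]:
    # Score every node (leaf chunks and summaries alike) against the query.
    q = embed(query)
    scored = sorted(nodes, key=lambda n: float(n.embedding @ q), reverse=True)
    return [n.text for n in scored[:top_k]]


if __name__ == "__main__":
    chunks = [
        "Paragraph one about topic A.",
        "Paragraph two about topic A.",
        "Paragraph three about topic B.",
        "Paragraph four about topic B.",
    ]
    nodes = build_tree(chunks)
    print(retrieve("topic B", nodes))
```

Because retrieval ranges over leaves and summaries alike, an answer can be grounded in a single detailed chunk or in a higher-level summary that integrates information spread across a long document, which is the behavior the abstract describes.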
