April 15, 2024, 4:42 a.m. | William Fleshman, Aleem Khan, Marc Marone, Benjamin Van Durme

cs.LG updates on arXiv.org

arXiv:2404.08417v1 Announce Type: new
Abstract: Large language models (LLMs) are increasingly capable of completing knowledge-intensive tasks by recalling information from a static pretraining corpus. Here we are concerned with LLMs in the context of evolving data requirements, for instance: batches of new data introduced periodically; subsets of data with user-based access controls; or requirements for the dynamic removal of documents, with guarantees that the associated knowledge cannot be recalled. We wish to satisfy these requirements while at the same …
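The abstract is truncated before the paper's method is described, so the following is purely an illustrative sketch of the three requirements it lists (periodic data batches, user-based access control, and removal with no-recall guarantees), modeled as a registry of per-partition model adapters. `AdapterRegistry` and every identifier below are hypothetical assumptions for illustration, not the paper's API.

```python
# Hypothetical sketch, not the paper's method: each data partition gets its
# own adapter and access group, so access control and removal reduce to
# selecting or deleting adapters.

from dataclasses import dataclass, field


@dataclass
class AdapterRegistry:
    """Maps each data partition to its adapter weights and allowed users."""
    adapters: dict = field(default_factory=dict)  # partition_id -> adapter weights
    access: dict = field(default_factory=dict)    # partition_id -> set of allowed users

    def add_partition(self, partition_id, adapter_weights, allowed_users):
        # Batches of new data arrive periodically; each is trained into an
        # isolated adapter so its knowledge stays separable from the base model.
        self.adapters[partition_id] = adapter_weights
        self.access[partition_id] = set(allowed_users)

    def adapters_for(self, user):
        # User-based access control: compose only the adapters this user may see.
        return [w for pid, w in self.adapters.items() if user in self.access[pid]]

    def remove_partition(self, partition_id):
        # Dynamic removal: if the adapter is the only place the partition's
        # knowledge was stored, deleting it prevents recall.
        self.adapters.pop(partition_id, None)
        self.access.pop(partition_id, None)


registry = AdapterRegistry()
registry.add_partition("batch-2024-04", adapter_weights=..., allowed_users=["alice"])
active = registry.adapters_for("alice")      # composed with the base LLM at inference
registry.remove_partition("batch-2024-04")   # knowledge from this batch is no longer recallable
```

The design choice this toy illustrates is isolation: because knowledge from each batch lives only in its own adapter, access control and deletion become bookkeeping operations rather than retraining or unlearning problems.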

