Learn When (not) to Trust Language Models: A Privacy-Centric Adaptive Model-Aware Approach
April 5, 2024, 4:47 a.m. | Chengkai Huang, Rui Wang, Kaige Xie, Tong Yu, Lina Yao
cs.CL updates on arXiv.org
Abstract: Retrieval-augmented large language models (LLMs) have shown remarkable competence across various NLP tasks. Despite their great success, the knowledge supplied by the retrieval process is not always useful for improving the model's prediction: on some samples the LLM is already knowledgeable enough to answer the question correctly without retrieval. Aiming to save the cost of retrieval, previous work has proposed determining when to do/skip the retrieval in a data-aware …
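The truncated abstract does not show the paper's actual decision rule, but the general adaptive-retrieval idea it describes can be sketched as a simple confidence gate: let the model draft an answer, and only pay for retrieval when the model's own confidence in that draft is low. Everything below (the geometric-mean confidence score, the threshold value, the function names) is a hypothetical illustration, not the method from the paper.

```python
import math

def should_retrieve(token_probs, threshold=0.75):
    """Decide whether to fall back to retrieval.

    token_probs: per-token probabilities the model assigned to its own
    draft answer. If the geometric-mean confidence falls below
    `threshold`, we route the question to retrieval-augmented generation.
    (Threshold 0.75 is an arbitrary illustrative value.)
    """
    if not token_probs:
        return True  # no draft answer at all -> retrieve
    # Geometric mean keeps long answers from dominating via length.
    log_sum = sum(math.log(p) for p in token_probs)
    confidence = math.exp(log_sum / len(token_probs))
    return confidence < threshold

def answer(question, draft_fn, rag_fn, threshold=0.75):
    """Route a question to either the plain LLM draft or the RAG pipeline.

    draft_fn: returns (draft_answer, per_token_probabilities).
    rag_fn: runs the full retrieval-augmented pipeline.
    """
    draft, probs = draft_fn(question)
    if should_retrieve(probs, threshold):
        return rag_fn(question)  # model unsure: pay the retrieval cost
    return draft                 # model confident: skip retrieval
```

A model-aware approach like the one the title suggests would replace this generic confidence heuristic with a signal calibrated to the specific model's knowledge, but the routing skeleton stays the same.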