Feb. 13, 2024, 5:44 a.m. | Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia

cs.LG updates on arXiv.org

Large language models (LLMs) have achieved remarkable success due to their exceptional generative capabilities. Despite this success, they have inherent limitations, such as a lack of up-to-date knowledge and hallucinations. Retrieval-Augmented Generation (RAG) is a state-of-the-art technique for mitigating these limitations. In particular, given a question, RAG retrieves relevant knowledge from a knowledge database to augment the input of the LLM. For instance, the retrieved knowledge could be a set of top-k texts that are most semantically similar to …
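The retrieval step the abstract describes — embed the question, rank the knowledge database by semantic similarity, take the top-k texts, and prepend them to the LLM's input — can be sketched in minimal, self-contained form. This is an illustrative sketch, not the paper's implementation: `embed()` is a toy bag-of-words stand-in for a real embedding model, and all function names are hypothetical.

```python
import math

def embed(text: str) -> dict[str, float]:
    """Toy embedding: term-frequency vector over lowercase tokens.
    A real RAG system would use a learned dense embedding model instead."""
    vec: dict[str, float] = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(question: str, knowledge_db: list[str], k: int = 2) -> list[str]:
    """Return the k texts most semantically similar to the question."""
    q = embed(question)
    ranked = sorted(knowledge_db, key=lambda t: cosine(q, embed(t)), reverse=True)
    return ranked[:k]

def augment_prompt(question: str, knowledge_db: list[str], k: int = 2) -> str:
    """Prepend the retrieved texts to the question before calling the LLM."""
    context = "\n".join(retrieve_top_k(question, knowledge_db, k))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Because the LLM conditions on whatever `retrieve_top_k` returns, an attacker who can inject texts into the knowledge database can steer the ranking — the attack surface the paper studies.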

