May 6, 2024, 4:47 a.m. | Qingqing Cao, Sewon Min, Yizhong Wang, Hannaneh Hajishirzi

cs.CL updates on arXiv.org

arXiv:2310.01329v2 Announce Type: replace
Abstract: Retrieval augmentation addresses many critical problems in large language models such as hallucination, staleness, and privacy leaks. However, running retrieval-augmented language models (LMs) is slow and difficult to scale due to processing large amounts of retrieved text. We introduce binary token representations (BTR), which use 1-bit vectors to precompute every token in passages, significantly reducing computation during inference. Despite the potential loss of accuracy, our new calibration techniques and training objectives restore performance. Combined with …
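The core idea is to trade expensive on-the-fly encoding of retrieved passages for cheap lookups of precomputed 1-bit token vectors. Below is a minimal sketch of that idea, assuming a sign-based binarization with straight-through gradients and a simple bit-packing cache; the function names, tensor sizes, and packing scheme are illustrative assumptions, not the paper's exact calibration techniques or training objectives.

```python
# Sketch: binarize per-token encoder states and cache them at 1 bit/dim.
import numpy as np
import torch


def binarize(h: torch.Tensor) -> torch.Tensor:
    """Map real-valued token vectors to {-1, +1}.

    A straight-through estimator (an assumed training trick) lets gradients
    flow through the non-differentiable sign function during training.
    """
    b = torch.where(h >= 0, torch.ones_like(h), -torch.ones_like(h))
    return h + (b - h).detach()  # forward pass uses b; backward uses identity


def pack_to_bits(b: torch.Tensor) -> np.ndarray:
    """Pack a {-1, +1} matrix into uint8, storing 1 bit per dimension."""
    return np.packbits((b > 0).cpu().numpy().astype(np.uint8), axis=-1)


# Precompute and cache binary vectors for every token of a retrieved passage
# (sizes are illustrative: 128 tokens, 768-dim encoder states).
hidden = torch.randn(128, 768)
cached = pack_to_bits(binarize(hidden).detach())
print(cached.shape, cached.nbytes)  # (128, 96): 12,288 bytes vs. 393,216 for fp32
```

Packing at 1 bit per dimension shrinks the cached passage representations by roughly 32x relative to fp32, which is what makes precomputing every token of a large corpus feasible; at inference the model consumes the cached bits instead of re-encoding retrieved text.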

