Nov. 5, 2023, 6:47 a.m. | Sukmin Cho, Jeongyeon Seo, Soyeong Jeong, Jong C. Park

cs.CL updates on arXiv.org arxiv.org

Large language models (LLMs) enable zero-shot approaches to open-domain
question answering (ODQA), yet advances on the reader side remain limited
compared to the retriever. This study investigates the feasibility of a
zero-shot reader that addresses the challenges of computational cost and the
need for labeled data. We find that, when exploited as zero-shot readers, LLMs
are distracted by irrelevant documents in the retrieved set and are
overconfident in their generated answers. To tackle these problems, we …
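For context, the setup the abstract describes is the standard retrieve-then-read pipeline, with the LLM acting as a zero-shot reader over the retrieved passages. The sketch below is a hypothetical illustration of that pipeline, not the paper's method: `retrieve` and `llm_generate` are placeholder names for whatever retriever and LLM backend one plugs in, and the prompt wording is an assumption. It shows how irrelevant retrieved passages end up directly in the reader's prompt, which is where the distraction problem the authors describe arises.

```python
from typing import List


def retrieve(question: str, k: int = 5) -> List[str]:
    """Placeholder retriever (e.g., BM25 or a dense retriever).

    Returns the top-k passages; some of them may be irrelevant to the question.
    """
    raise NotImplementedError("plug in a retriever here")


def llm_generate(prompt: str) -> str:
    """Placeholder for a single LLM completion call (API or local model)."""
    raise NotImplementedError("plug in an LLM backend here")


def zero_shot_read(question: str, k: int = 5) -> str:
    """Zero-shot reader: answer directly from retrieved passages,
    with no labeled QA data and no reader fine-tuning."""
    passages = retrieve(question, k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the passages below. "
        "Ignore passages that are irrelevant.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return llm_generate(prompt)
```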

