Improving Zero-shot Reader by Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering. (arXiv:2310.17490v2 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
Large language models (LLMs) enable zero-shot approaches to open-domain
question answering (ODQA), yet advances in the reader have lagged behind those
in the retriever. This study examines the feasibility of a zero-shot reader
that addresses the challenges of computational cost and the need for labeled
data. We find that when LLMs are used as zero-shot readers, they are distracted
by irrelevant documents in the retrieved set and by overconfidence in their
generated answers. To tackle these problems, we …
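For context, below is a minimal sketch of the generic retrieve-then-read setup the abstract refers to, not the paper's proposed method (the abstract is truncated before the method is described). The `retrieve` and `generate` functions are hypothetical placeholders standing in for any retriever and any instruction-following LLM; the point is only to show how every retrieved document, relevant or not, ends up in the reader's prompt, which is the source of the "distraction" studied.

```python
# Illustrative sketch of a zero-shot reader over a retrieved document set.
# `retrieve` and `generate` are hypothetical stand-ins, not APIs from the paper.

from typing import List


def retrieve(question: str, k: int = 5) -> List[str]:
    """Placeholder retriever; a real system would use BM25 or dense retrieval."""
    return [f"(placeholder document {i + 1} for: {question})" for i in range(k)]


def generate(prompt: str) -> str:
    """Placeholder LLM call; a real system would query an instruction-tuned model."""
    return "(placeholder answer)"


def zero_shot_reader(question: str) -> str:
    # All retrieved documents are concatenated into the prompt. Irrelevant
    # documents in this set are what can distract a zero-shot LLM reader.
    docs = retrieve(question)
    context = "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    prompt = (
        "Answer the question using the documents below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)


if __name__ == "__main__":
    print(zero_shot_reader("Who wrote The Selfish Gene?"))
```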