Feb. 13, 2024, 5:48 a.m. | Kyungha Kim, Sangyun Lee, Kung-Hsiang Huang, Hou Pong Chan, Manling Li, Heng Ji

cs.CL updates on arXiv.org (arxiv.org)

Fact-checking research has extensively explored verification but has paid far less attention to generating natural-language explanations, which are crucial for user trust. While Large Language Models (LLMs) excel at text generation, their capability for producing faithful explanations in fact-checking remains underexamined. Our study investigates LLMs' ability to generate such explanations, finding that zero-shot prompts often result in unfaithfulness. To address these challenges, we propose the Multi-Agent Debate Refinement (MADR) framework, which leverages multiple LLMs as agents with diverse roles in an iterative refining process aimed …
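The abstract only sketches MADR at a high level: several LLM agents with different roles critique and refine a draft explanation over multiple rounds. The snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation; the critic role prompts, the `call_llm` callable, the "NO ISSUES" stopping rule, and `max_rounds` are all assumptions introduced here for illustration.

```python
# Sketch of a multi-agent debate-style refinement loop (assumptions noted above).
from typing import Callable, List

# Hypothetical critic roles; the actual MADR roles are not specified in the abstract.
CRITIC_ROLES = [
    "Check every claim in the explanation against the evidence and list any unsupported ones.",
    "Check the explanation for reasoning errors or contradictions and list them.",
]

def madr_style_refine(
    claim: str,
    evidence: str,
    call_llm: Callable[[str], str],  # any text-in/text-out LLM client supplied by the user
    max_rounds: int = 3,
) -> str:
    # 1) Draft an initial explanation (zero-shot style generation).
    explanation = call_llm(
        f"Claim: {claim}\nEvidence: {evidence}\n"
        "Explain, using only the evidence, why the claim is supported or refuted."
    )
    for _ in range(max_rounds):
        # 2) Each critic agent reviews the draft from its own perspective.
        critiques: List[str] = [
            call_llm(
                f"{role}\nClaim: {claim}\nEvidence: {evidence}\n"
                f"Explanation: {explanation}\n"
                "Reply 'NO ISSUES' if the explanation is fully faithful."
            )
            for role in CRITIC_ROLES
        ]
        # 3) Assumed convergence criterion: stop once every critic is satisfied.
        if all("NO ISSUES" in c.upper() for c in critiques):
            break
        # 4) Otherwise, a refiner agent revises the draft using the critiques.
        explanation = call_llm(
            f"Claim: {claim}\nEvidence: {evidence}\n"
            f"Draft explanation: {explanation}\n"
            f"Critiques: {' | '.join(critiques)}\n"
            "Rewrite the explanation so it is faithful to the evidence."
        )
    return explanation
```

In use, `call_llm` would wrap whatever model API is available; the loop simply alternates generation, role-specific critique, and revision until the critics raise no further faithfulness issues or the round budget runs out.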
