March 7, 2024, 5:47 a.m. | Jiahui Geng, Yova Kementchedjhieva, Preslav Nakov, Iryna Gurevych

cs.CL updates on arXiv.org

arXiv:2403.03627v1 Announce Type: new
Abstract: Multimodal large language models (MLLMs) carry the potential to support humans in processing vast amounts of information. While MLLMs are already being used as a fact-checking tool, their abilities and limitations in this regard are understudied. Here is aim to bridge this gap. In particular, we propose a framework for systematically assessing the capacity of current multimodal models to facilitate real-world fact-checking. Our methodology is evidence-free, leveraging only these models' intrinsic knowledge and reasoning capabilities. …
