Pitfalls of Conversational LLMs on News Debiasing
April 10, 2024, 4:47 a.m. | Ipek Baris Schlicht, Defne Altiok, Maryanne Taouk, Lucie Flek
cs.CL updates on arXiv.org
Abstract: This paper addresses debiasing in news editing and evaluates the effectiveness of conversational Large Language Models in this task. We designed an evaluation checklist tailored to news editors' perspectives, obtained generated texts from three popular conversational models using a subset of a publicly available media-bias dataset, and evaluated the texts against the designed checklist. Furthermore, we examined the models as evaluators for checking the quality of debiased model outputs. Our findings indicate …
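The pipeline the abstract describes — rewrite a biased sentence with a conversational model, then score the rewrite against a checklist of editorial criteria — can be sketched roughly as below. Everything here is hypothetical: the checklist items, the `debias_stub` placeholder (standing in for an actual LLM call), and the heuristic scoring are illustrative, not the paper's actual checklist, models, or metrics.

```python
# Hypothetical sketch of a checklist-based evaluation loop for debiased
# news text. Checklist items, the model stub, and the scoring heuristics
# are placeholders, not the paper's actual instruments.

CHECKLIST = [
    "preserves factual content",
    "removes loaded language",
    "keeps the original meaning",
]

def debias_stub(sentence: str) -> str:
    """Placeholder for a conversational LLM call that rewrites a sentence."""
    # A real system would prompt a model; here we just swap one loaded word.
    return sentence.replace("disastrous", "significant")

def evaluate(original: str, rewritten: str) -> dict:
    """Score a rewrite against each checklist item using trivial heuristics."""
    return {
        "preserves factual content": rewritten != "",
        "removes loaded language": "disastrous" not in rewritten,
        "keeps the original meaning":
            len(rewritten.split()) >= len(original.split()) // 2,
    }

biased = "The disastrous policy failed."
rewritten = debias_stub(biased)
scores = evaluate(biased, rewritten)
print(scores)
```

The paper's second experiment — using the models themselves as evaluators — would replace the heuristic `evaluate` with a second model call that is asked to judge each checklist item.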