Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors. (arXiv:2205.12854v1 [cs.CL])
May 26, 2022, 1:12 a.m. | Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett
cs.CL updates on arXiv.org
The propensity of abstractive summarization systems to make factual errors
has been the subject of significant study, including work on models to detect
factual errors and annotation of errors in current systems' outputs. However,
the ever-evolving nature of summarization systems, error detectors, and
annotated benchmarks makes factuality evaluation a moving target; it is hard to
get a clear picture of how techniques compare. In this work, we collect labeled
factuality errors from across nine datasets of annotated summary outputs and …