April 16, 2024, 4:51 a.m. | Nailia Mirzakhmedova, Marcel Gohsen, Chia Hao Chang, Benno Stein

cs.CL updates on arXiv.org

arXiv:2404.09696v1 Announce Type: new
Abstract: Evaluating the quality of arguments is a crucial aspect of any system leveraging argument mining. However, obtaining reliable and consistent annotations of argument quality is challenging, as it usually requires annotators with domain-specific expertise. Even among experts, the assessment of argument quality is often inconsistent due to the inherent subjectivity of this task. In this paper, we study the potential of using state-of-the-art large language models (LLMs) as proxies for argument …

