April 16, 2024, 4:51 a.m. | Nailia Mirzakhmedova, Marcel Gohsen, Chia Hao Chang, Benno Stein

cs.CL updates on arXiv.org

arXiv:2404.09696v1 Announce Type: new
Abstract: Evaluating the quality of arguments is a crucial aspect of any system leveraging argument mining. However, obtaining reliable and consistent annotations of argument quality is challenging, as it usually requires domain-specific expertise from annotators. Even among experts, assessments of argument quality are often inconsistent due to the inherent subjectivity of the task. In this paper, we study the potential of using state-of-the-art large language models (LLMs) as proxies for argument …
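The setup the abstract hints at, using an LLM as a proxy annotator of argument quality, might look roughly like the sketch below. Everything in it is an illustrative assumption rather than the paper's actual protocol: the model name, the prompt wording, and the 1-to-5 rating scale are not taken from the paper.

# Minimal sketch of an LLM as a proxy annotator for argument quality.
# NOTE: the model, prompt, and 1-5 scale are assumptions for illustration,
# not the evaluation protocol described in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_argument_quality(argument: str) -> int:
    """Ask the model for a single overall quality rating on a 1-5 scale."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the paper studies several LLMs
        messages=[
            {"role": "system",
             "content": "You are an expert annotator of argumentation quality."},
            {"role": "user",
             "content": ("Rate the overall quality of the following argument "
                         "on a scale from 1 (very low) to 5 (very high). "
                         "Reply with the number only.\n\nArgument: " + argument)},
        ],
        temperature=0,  # deterministic decoding for more consistent ratings
    )
    return int(response.choices[0].message.content.strip())

print(rate_argument_quality(
    "School uniforms reduce peer pressure because visible markers of "
    "income disparity are removed from the classroom."
))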
