April 25, 2022, 1:11 a.m. | Xin Zhang, Guangwei Xu, Yueheng Sun, Meishan Zhang, Xiaobin Wang, Min Zhang

cs.CL updates on arXiv.org

Recent work on opinion expression identification (OEI) relies heavily on the
quality and scale of manually constructed training corpora, which can be
extremely difficult to satisfy. Crowdsourcing is one practical solution to
this problem, aiming to create a large-scale but quality-unguaranteed corpus.
In this work, we investigate Chinese OEI with extremely noisy crowdsourcing
annotations, constructing a dataset at a very low cost. Following Zhang et al.
(2021), we train the annotator-adapter model by regarding all annotations as
gold-standard in terms of …
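
To make the annotator-adapter idea concrete, here is a minimal sketch of the general pattern: a shared sequence-labeling encoder plus one small adapter per crowd annotator, trained by treating each annotator's noisy labels as gold for that annotator. This is an illustration only; the paper's actual architecture (e.g. the pretrained encoder it builds on and the adapter placement from Zhang et al. (2021)) is not specified in the excerpt above, and all names and dimensions below are assumptions.

```python
# Illustrative annotator-adapter style tagger (not the paper's exact model).
import torch
import torch.nn as nn


class AnnotatorAdapterTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, num_annotators,
                 emb_dim=128, hidden_dim=256, adapter_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared sentence encoder (a BiLSTM here purely for illustration).
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # One small bottleneck adapter per annotator, so annotator-specific
        # labeling noise is absorbed by these parameters rather than by
        # the shared encoder.
        self.adapters = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, adapter_dim),
                          nn.ReLU(),
                          nn.Linear(adapter_dim, hidden_dim))
            for _ in range(num_annotators)
        ])
        self.classifier = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids, annotator_id):
        # token_ids: (batch, seq_len); annotator_id: index of the annotator
        h, _ = self.encoder(self.embed(token_ids))
        h = h + self.adapters[annotator_id](h)   # residual adapter
        return self.classifier(h)                # (batch, seq_len, num_tags)


# Training treats every crowd annotation as gold for its own annotator:
model = AnnotatorAdapterTagger(vocab_size=5000, num_tags=5, num_annotators=70)
loss_fn = nn.CrossEntropyLoss()
tokens = torch.randint(0, 5000, (2, 10))   # toy batch of token ids
labels = torch.randint(0, 5, (2, 10))      # noisy crowd labels (BIO-style tags)
logits = model(tokens, annotator_id=3)
loss = loss_fn(logits.reshape(-1, 5), labels.reshape(-1))
loss.backward()
```

At inference time, a setup like this would typically drop or average out the annotator-specific adapters and use only the shared parameters, since no single annotator is "gold".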

Tags: annotations, arxiv, crowdsourcing, opinion
