April 25, 2022, 1:11 a.m. | Xin Zhang, Guangwei Xu, Yueheng Sun, Meishan Zhang, Xiaobin Wang, Min Zhang

cs.CL updates on arXiv.org

Recent works on opinion expression identification (OEI) rely heavily on the quality and scale of the manually-constructed training corpus, which can be extremely difficult to satisfy. Crowdsourcing is one practical solution to this problem, aiming to create a large-scale but quality-unguaranteed corpus. In this work, we investigate Chinese OEI with extremely noisy crowdsourcing annotations, constructing a dataset at a very low cost. Following Zhang et al. (2021), we train the annotator-adapter model by regarding all annotations as gold-standard in terms of …
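The abstract describes the training setup only at a high level: following Zhang et al. (2021), every crowd annotation is treated as gold-standard with respect to the annotator who produced it, and an annotator-adapter model learns annotator-specific behaviour. The minimal PyTorch sketch below illustrates that general idea. The class name AnnotatorAdapterTagger, the BiLSTM encoder, the embedding sizes, and the choice to concatenate an annotator embedding to the word embeddings are assumptions made for illustration, not the authors' actual architecture.

# Hedged sketch of an annotator-aware sequence tagger (assumed design,
# not the architecture from the paper).
import torch
import torch.nn as nn

class AnnotatorAdapterTagger(nn.Module):
    def __init__(self, vocab_size, num_annotators, num_tags,
                 emb_dim=128, hidden_dim=256, ann_dim=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # One embedding per crowd annotator: each worker's labels are treated
        # as gold for that worker, and the model learns a per-annotator shift.
        self.annotator_emb = nn.Embedding(num_annotators, ann_dim)
        self.encoder = nn.LSTM(emb_dim + ann_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids, annotator_ids):
        # token_ids: (batch, seq_len); annotator_ids: (batch,)
        words = self.word_emb(token_ids)
        ann = self.annotator_emb(annotator_ids)            # (batch, ann_dim)
        ann = ann.unsqueeze(1).expand(-1, token_ids.size(1), -1)
        hidden, _ = self.encoder(torch.cat([words, ann], dim=-1))
        return self.classifier(hidden)                     # (batch, seq_len, num_tags)

# Training then treats every crowd label as gold for its own annotator, e.g.
# loss = nn.functional.cross_entropy(
#     model(tokens, annotator_ids).flatten(0, 1), crowd_labels.flatten())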

annotations arxiv crowdsourcing opinion
