June 20, 2022, 1:12 a.m. | David Alfter, Therese Lindström Tiedemann, Elena Volodina

cs.CL updates on arXiv.org

In this study we investigate to what degree experts and non-experts agree on
questions of difficulty in a crowdsourcing experiment. We ask non-experts
(second language learners of Swedish) and two groups of experts (teachers of
Swedish as a second/foreign language and CEFR experts) to rank multi-word
expressions by difficulty. We find that the resulting rankings from all three
groups correlate to a very high degree, which suggests that judgments produced
in a comparative setting are not …
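
The high agreement between groups is the kind of result usually quantified with a rank correlation coefficient such as Spearman's rho. As a minimal illustrative sketch (not taken from the paper; the expressions, rank values, and group sizes are invented placeholders), this is how one might compare the rankings of two annotator groups against the learners' ranking in Python:

from scipy.stats import spearmanr

# Hypothetical difficulty ranks (1 = easiest) assigned to the same five
# multi-word expressions by each group; the numbers are placeholders,
# not data from the study.
learner_ranks = [1, 2, 3, 4, 5]   # non-experts: L2 learners of Swedish
teacher_ranks = [1, 3, 2, 4, 5]   # teachers of Swedish as a second language
cefr_ranks    = [2, 1, 3, 4, 5]   # CEFR experts

for name, ranks in [("teachers", teacher_ranks), ("CEFR experts", cefr_ranks)]:
    rho, p = spearmanr(learner_ranks, ranks)
    print(f"learners vs {name}: Spearman rho = {rho:.2f} (p = {p:.3f})")

A rho close to 1 across all group pairs would correspond to the "very high degree" of correlation the abstract reports.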
