Web: http://arxiv.org/abs/2205.02129

May 5, 2022, 1:11 a.m. | Yang Xiao, Jinlan Fu, See-Kiong Ng, Pengfei Liu

cs.CL updates on arXiv.org arxiv.org

In this paper, we ask the research question of whether all the datasets in
the benchmark are necessary. We approach this by first characterizing the
distinguishability of datasets when comparing different systems. Experiments on
9 datasets and 36 systems show that several existing benchmark datasets
contribute little to discriminating top-scoring systems, while those less used
datasets exhibit impressive discriminative power. Further, taking the text
classification task as a case study, we investigate the possibility of predicting
dataset discrimination based on …
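To make the notion of a dataset's "discriminative power" concrete, here is a minimal, hypothetical sketch: given a systems-by-datasets score matrix, score each dataset by the fraction of top-k system pairs it separates by more than a margin. The margin criterion, the `top_k` and `margin` parameters, and the function name `discrimination_score` are illustrative assumptions, not the paper's actual metric.

```python
import itertools
import numpy as np


def discrimination_score(scores: np.ndarray, top_k: int = 5, margin: float = 0.01) -> np.ndarray:
    """scores: array of shape (n_systems, n_datasets) with accuracy-like values.

    Returns one value per dataset: the fraction of top-k system pairs whose
    scores the dataset separates by more than `margin`. This is only a toy
    proxy for discriminative power, not the measure used in the paper.
    """
    n_systems, n_datasets = scores.shape
    # Rank systems by their mean score across datasets and keep the top-k.
    top = np.argsort(scores.mean(axis=1))[::-1][:top_k]
    pairs = list(itertools.combinations(top, 2))
    out = np.zeros(n_datasets)
    for d in range(n_datasets):
        separated = sum(abs(scores[i, d] - scores[j, d]) > margin for i, j in pairs)
        out[d] = separated / len(pairs)
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example mirroring the paper's scale: 36 systems evaluated on 9 datasets.
    toy = rng.uniform(0.80, 0.95, size=(36, 9))
    print(discrimination_score(toy))
```

Under this toy definition, a dataset on which the top systems all score within a hair of each other contributes little to ranking them, while one that spreads the top systems apart is more informative.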

arxiv benchmark classification dataset datasets evaluation pilot study text text classification
