April 13, 2022, 1:12 a.m. | Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Mikhail Yurochkin, Moninder Singh

cs.LG updates on arXiv.org

The popularity of pretrained language models in natural language processing systems calls for a careful evaluation of such models in downstream tasks, which have a higher potential for societal impact. The evaluation of such systems usually focuses on accuracy measures. Our findings in this paper call for attention to be paid to fairness measures as well. Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that …
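
The abstract contrasts accuracy-only evaluation with fairness-aware evaluation of toxicity classifiers. As a minimal sketch of what that contrast means in practice (not the paper's protocol), the snippet below reports accuracy together with a group false-positive-rate gap for a binary toxicity classifier; the group labels, metric choice, and all names are illustrative assumptions.

```python
# Sketch: report a fairness measure alongside accuracy for a toxicity classifier.
# The groups, data, and the false-positive-rate gap metric are assumptions for
# illustration only, not the evaluation setup used in the paper.
from typing import Sequence


def accuracy(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def false_positive_rate(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Share of non-toxic (label 0) examples wrongly flagged as toxic."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p == 1 for p in negatives) / len(negatives)


def fpr_gap(y_true: Sequence[int], y_pred: Sequence[int],
            groups: Sequence[str]) -> float:
    """Largest difference in false-positive rate between any two groups,
    e.g. comments mentioning different identity terms."""
    rates = []
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates.append(false_positive_rate([y_true[i] for i in idx],
                                         [y_pred[i] for i in idx]))
    return max(rates) - min(rates)


# Toy example: two models with identical accuracy but very different fairness.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
pred_1 = [1, 0, 1, 1, 1, 0, 1, 1]   # errors spread evenly across groups
pred_2 = [1, 1, 1, 1, 0, 0, 1, 1]   # errors concentrated on group "a"

for name, pred in [("model 1", pred_1), ("model 2", pred_2)]:
    print(name, "accuracy =", accuracy(y_true, pred),
          "FPR gap =", fpr_gap(y_true, pred, groups))
```

Both toy models score 0.75 accuracy, but model 2 pushes all of its false positives onto one group (gap of 1.0 versus 0.0), which is the kind of disparity an accuracy-only evaluation would miss.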

Tags: arxiv, classification, fairness, language, language models, text, text classification
