Your fairness may vary: Group fairness of pretrained language models in toxic text classification. (arXiv:2108.01250v2 [cs.CL] UPDATED)
April 13, 2022, 1:12 a.m. | Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Mikhail Yurochkin, Moninder Singh
cs.LG updates on arXiv.org
The popularity of pretrained language models in natural language processing systems calls for a careful evaluation of such models in downstream tasks, which have a higher potential for societal impact. The evaluation of such systems usually focuses on accuracy measures. Our findings in this paper call for attention to be paid to fairness measures as well. Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that …
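The abstract contrasts accuracy measures with group fairness measures for toxicity classifiers. As a minimal sketch of that distinction (not the paper's actual protocol or metric choices; the data, group labels, and the false-positive-rate gap below are purely illustrative), one can compare overall accuracy against the gap in per-group false positive rates:

```python
# Illustrative only: a toy comparison of accuracy vs. a group fairness
# measure (false-positive-rate gap) for a binary toxicity classifier.
# The predictions, labels, and demographic groups are made-up data.

def group_fpr(preds, labels, groups):
    """Per-group false positive rate: P(pred = 1 | label = 0, group = g)."""
    out = {}
    for g in set(groups):
        negatives = [p for p, l, gr in zip(preds, labels, groups)
                     if gr == g and l == 0]
        out[g] = sum(negatives) / len(negatives) if negatives else 0.0
    return out

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # model predictions (1 = toxic)
labels = [1, 0, 0, 1, 0, 1, 0, 0]   # ground truth
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]  # demographic group

accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
fpr = group_fpr(preds, labels, groups)
gap = abs(fpr["A"] - fpr["B"])

# A model can look acceptable on accuracy while flagging non-toxic text
# from one group far more often than from another.
print(f"accuracy = {accuracy:.3f}")
print(f"FPR by group = {fpr}, gap = {gap:.3f}")
```

Here the classifier reaches 62.5% accuracy, yet group B's false positive rate (1/2) is well above group A's (1/3), which is exactly the kind of disparity an accuracy-only evaluation hides.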