Feb. 22, 2024, 5:48 a.m. | Panatchakorn Anantaprayoon, Masahiro Kaneko, Naoaki Okazaki

cs.CL updates on arXiv.org

arXiv:2309.09697v2 Announce Type: replace
Abstract: Discriminatory gender biases have been found in Pre-trained Language Models (PLMs) for multiple languages. In Natural Language Inference (NLI), existing bias evaluation methods have focused on the prediction results of a specific label out of three labels, such as neutral. However, such evaluation methods can be inaccurate since unique biased inferences are associated with unique prediction labels. Addressing this limitation, we propose a bias evaluation method for PLMs that considers all the three labels of …
