Investigating Markers and Drivers of Gender Bias in Machine Translations
March 19, 2024, 4:53 a.m. | Peter J Barclay (Edinburgh Napier University), Ashkan Sami (Edinburgh Napier University)
cs.CL updates on arXiv.org arxiv.org
Abstract: Implicit gender bias in Large Language Models (LLMs) is a well-documented problem, and gender introduced into automatic translations can perpetuate real-world biases. However, some LLMs use heuristics or post-processing to mask such bias, making investigation difficult. Here, we examine bias in LLMs via back-translation, using the DeepL translation API to investigate the bias evinced when repeatedly translating a set of 56 Software Engineering tasks used in a previous study. Each statement starts with …
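The back-translation probe described in the abstract can be sketched as follows. This is a minimal, self-contained illustration, not the paper's actual pipeline: the study used the DeepL API, whereas here `translate` is a pluggable stand-in (all function names are hypothetical). The idea is that a gender-neutral English sentence may acquire grammatical gender in a pivot language, and the gendered form often surfaces when translated back to English.

```python
def back_translate(sentence, translate, pivot="DE"):
    """Translate EN -> pivot -> EN and return the round-trip result."""
    forward = translate(sentence, source="EN", target=pivot)
    return translate(forward, source=pivot, target="EN")

def detect_gender(sentence):
    """Crudely classify the pronoun gender surfaced in a sentence."""
    words = sentence.lower().split()
    if "he" in words or "his" in words:
        return "male"
    if "she" in words or "her" in words:
        return "female"
    return "neutral"

# Toy translator mimicking a system that resolves the neutral
# "they" to a masculine pronoun for an engineering task
# (a simplified stand-in for a real translation API).
def toy_translate(sentence, source, target):
    if target == "DE":
        return sentence.replace("they", "er")  # neutral -> masculine
    return sentence.replace("er ", "he ")

task = "they debug the failing integration test"
round_trip = back_translate(task, toy_translate)
print(detect_gender(task), "->", detect_gender(round_trip))  # neutral -> male
```

Running each task statement through such a round trip, and tallying the gender classes that emerge, is one way to surface bias that direct prompting would mask.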