March 19, 2024, 4:53 a.m. | Peter J Barclay (Edinburgh Napier University), Ashkan Sami (Edinburgh Napier University)

cs.CL updates on arXiv.org

arXiv:2403.11896v1 Announce Type: new
Abstract: Implicit gender bias in Large Language Models (LLMs) is a well-documented problem, and implications of gender introduced into automatic translations can perpetuate real-world biases. However, some LLMs use heuristics or post-processing to mask such bias, making investigation difficult. Here, we examine bias in LLMss via back-translation, using the DeepL translation API to investigate the bias evinced when repeatedly translating a set of 56 Software Engineering tasks used in a previous study. Each statement starts with …
