Feb. 21, 2024, 5:49 a.m. | Daisuke Oba, Masahiro Kaneko, Danushka Bollegala

cs.CL updates on arXiv.org

arXiv:2309.07251v2 Announce Type: replace
Abstract: Despite their impressive performance on a wide range of NLP tasks, Large Language Models (LLMs) have been reported to encode worrying levels of gender bias. Prior work has proposed debiasing methods that require human-labelled examples, data augmentation, and fine-tuning of LLMs, all of which are computationally costly. Moreover, one might not even have access to the model parameters needed to perform debiasing, as in the case of closed LLMs such as GPT-4. To address this challenge, we …
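Because a closed model exposes only a text-in/text-out interface, any intervention has to travel inside the prompt rather than touch the weights. The following minimal Python sketch illustrates that constraint only; it is not the authors' method (which is truncated above), and llm_complete is a hypothetical stand-in for whatever completion API a closed model provides.

# Sketch: with no parameter access, debiasing can only modify the input text.

DEBIAS_PREAMBLE = (
    "Answer without relying on gender stereotypes; "
    "treat all genders as equally likely for any profession.\n\n"
)

def llm_complete(prompt: str) -> str:
    """Hypothetical closed-model API: we can only send text and read text back."""
    return f"<completion for: {prompt!r}>"  # placeholder response

def debiased_query(user_prompt: str) -> str:
    # The only available lever is the prompt: prepend an instruction
    # instead of fine-tuning, since the model parameters are inaccessible.
    return llm_complete(DEBIAS_PREAMBLE + user_prompt)

if __name__ == "__main__":
    print(debiased_query("The nurse said that"))

The design point is simply that the debiasing instruction must accompany every request, since nothing can be persisted in a closed model.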
