Feb. 26, 2024, 5:42 a.m. | Amit Haim, Alejandro Salinas, Julian Nyarko

cs.LG updates on arXiv.org

arXiv:2402.14875v1 Announce Type: cross
Abstract: We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we elicit prompt the models for advice regarding an individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across …

