Feb. 26, 2024, 5:42 a.m. | Amit Haim, Alejandro Salinas, Julian Nyarko

cs.LG updates on arXiv.org

arXiv:2402.14875v1 Announce Type: cross
Abstract: We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we elicit prompt the models for advice regarding an individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across …

