Feb. 6, 2024, 5:45 a.m. | Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Tong Yu, Hanieh Deilamsalehy, Ruiyi Zhang

cs.LG updates on arXiv.org

Large language models (LLMs) have shown remarkable advances in language generation and understanding but are also prone to exhibiting harmful social biases. While recognition of these behaviors has generated an abundance of bias mitigation techniques, most require modifications to the training data, model parameters, or decoding strategy, which may be infeasible without access to a trainable model. In this work, we leverage the zero-shot capabilities of LLMs to reduce stereotyping in a technique we introduce as zero-shot self-debiasing. With two …
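The idea of zero-shot self-debiasing is prompt-only: no training data, parameters, or decoding strategy are modified. A minimal sketch of a reprompting-style workflow is below; the prompt wording, the `build_debiasing_prompt` helper, and the `generate` callback are all illustrative assumptions, not the paper's exact method or prompts.

```python
from typing import Callable

def build_debiasing_prompt(question: str) -> str:
    """Append a debiasing instruction to a question for a second, zero-shot pass.

    The instruction text here is a hypothetical example, not the paper's prompt.
    """
    instruction = (
        "Answer the question again, avoiding stereotypes or assumptions "
        "about social groups."
    )
    return f"{question}\n\n{instruction}"

def self_debias(question: str, generate: Callable[[str], str]) -> str:
    """Two-pass self-debiasing: answer, then reprompt with a debiasing instruction.

    `generate` stands in for any LLM completion call (an assumption here);
    only the reprompted answer is returned.
    """
    _first_answer = generate(question)  # ordinary zero-shot answer (discarded here)
    return generate(build_debiasing_prompt(question))
```

Because the technique lives entirely in the prompt, it can be applied to any model exposed through a text-in/text-out API, including ones whose weights are inaccessible.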

