March 7, 2024, 5:43 a.m. | Jacob-Junqi Tian, Omkar Dige, David Emerson, Faiza Khan Khattak

cs.LG updates on arXiv.org

arXiv:2308.00071v2 Announce Type: replace-cross
Abstract: Given that language models are trained on vast datasets that may contain inherent biases, there is a potential danger of inadvertently perpetuating systemic discrimination. Consequently, it becomes essential to examine and address biases in language models, integrating fairness into their development to ensure these models are equitable and free from bias. In this work, we demonstrate the importance of reasoning in zero-shot stereotype identification based on Vicuna-13B-v1.3. While we do observe improved accuracy by scaling …
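Below is a minimal sketch of what zero-shot stereotype identification with a reasoning prompt could look like on Vicuna-13B-v1.3. The checkpoint name (lmsys/vicuna-13b-v1.3), the yes/no prompt wording, and the classify_stereotype helper are illustrative assumptions, not the paper's actual prompts or evaluation pipeline.

```python
# Sketch: zero-shot stereotype identification with a chain-of-thought style prompt.
# The prompt format and decoding settings are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "lmsys/vicuna-13b-v1.3"  # public Vicuna-13B-v1.3 weights on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

def classify_stereotype(sentence: str) -> str:
    # Ask the model to reason step by step before committing to a Yes/No label,
    # using Vicuna's USER/ASSISTANT conversation format.
    prompt = (
        "USER: Does the following sentence express or reinforce a stereotype "
        "about a social group? Think through your reasoning step by step, "
        f"then answer 'Yes' or 'No'.\nSentence: {sentence}\nASSISTANT:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Return only the newly generated tokens: the reasoning followed by the label.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

print(classify_stereotype("Nurses are always women."))
```

In this kind of setup, the final Yes/No answer would be parsed from the generated text and compared against labels from a stereotype benchmark; the exact datasets and scoring used in the paper are described in the full text.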

