April 5, 2024, 4:47 a.m. | Johann D. Gaebler, Sharad Goel, Aziz Huq, Prasanna Tambe

cs.CL updates on arXiv.org

arXiv:2404.03086v1 Announce Type: cross
Abstract: Regulatory efforts to protect against algorithmic bias have taken on increased urgency with rapid advances in large language models (LLMs), machine learning models that can rival human experts on a wide array of tasks. A key theme of these initiatives is algorithmic "auditing," but neither current regulations nor the scientific literature provides much guidance on how to conduct these assessments. Here we propose and investigate one approach for …
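
The abstract is cut off before the proposed approach is described, so the following is only a generic illustration of what an algorithmic audit of an LLM can look like in practice: a correspondence-style audit in the tradition of classic hiring discrimination studies, where the resume is held fixed, a demographic signal (here, the applicant's name) is varied, and the model's scores are compared across groups. This is a minimal sketch, not the authors' method; `score_resume`, the name lists, and the resume template are hypothetical placeholders.

```python
# Minimal sketch of a correspondence-style audit for an LLM used to score
# resumes. Vary only a demographic signal (the name) and compare outcomes.

import random
import statistics


def score_resume(resume_text: str) -> float:
    """Hypothetical stand-in for the LLM under audit, rating a resume 0-10.

    A real audit would call the model being studied here; this returns
    random noise so the sketch runs end to end.
    """
    return random.uniform(0.0, 10.0)


# Names chosen to signal different demographic groups, as in classic
# correspondence experiments (e.g., Bertrand & Mullainathan, 2004).
GROUP_A_NAMES = ["Emily Walsh", "Greg Baker"]
GROUP_B_NAMES = ["Lakisha Washington", "Jamal Jones"]

# Identical resume body; only the name changes between conditions.
RESUME_TEMPLATE = "Name: {name}\nExperience: 5 years as a data engineer."


def audit(names_a, names_b, n_trials=200):
    """Estimate the mean score gap between the two name groups."""
    scores_a = [
        score_resume(RESUME_TEMPLATE.format(name=random.choice(names_a)))
        for _ in range(n_trials)
    ]
    scores_b = [
        score_resume(RESUME_TEMPLATE.format(name=random.choice(names_b)))
        for _ in range(n_trials)
    ]
    return statistics.mean(scores_a) - statistics.mean(scores_b)


if __name__ == "__main__":
    gap = audit(GROUP_A_NAMES, GROUP_B_NAMES)
    print(f"Mean score gap (group A - group B): {gap:+.3f}")
```

A production audit would add many paired resumes, a larger and validated set of name signals, and a significance test on the gap rather than a single point estimate.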
