Auditing the Use of Language Models to Guide Hiring Decisions
April 5, 2024, 4:47 a.m. | Johann D. Gaebler, Sharad Goel, Aziz Huq, Prasanna Tambe
cs.CL updates on arXiv.org
Abstract: Regulatory efforts to protect against algorithmic bias have taken on increased urgency with rapid advances in large language models (LLMs), which are machine learning models that can achieve performance rivaling human experts on a wide array of tasks. A key theme of these initiatives is algorithmic "auditing," but current regulations -- as well as the scientific literature -- provide little guidance on how to conduct these assessments. Here we propose and investigate one approach for …