Feb. 26, 2024, 5:48 a.m. | Yiran Liu (equal contribution, Tsinghua University), Ke Yang (equal contribution, University of Illinois Urbana-Champaign), Zehan Qi (Tsinghua University), X

cs.CL updates on arXiv.org

arXiv:2402.15481v1 Announce Type: new
Abstract: The growing integration of large language models (LLMs) into social operations amplifies their impact on decisions in crucial areas such as economics, law, education, and healthcare, raising public concerns about these models' discrimination-related safety and reliability. However, prior discrimination-measuring frameworks assess only the average discriminatory behavior of LLMs, which often proves inadequate because it overlooks an additional factor that can lead to discrimination: the variation of LLMs' predictions across diverse contexts. In this work, we present the …
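To make the abstract's distinction concrete, here is a minimal sketch of the general idea (not the paper's actual framework, which the truncated abstract does not specify): a model can show a small *average* gap between demographic groups while its per-context gaps vary widely. The `predict` function, the context names, and the group labels below are all hypothetical stand-ins.

```python
import statistics

def predict(context: str, group: str) -> float:
    """Hypothetical stand-in for an LLM's favorable-outcome score."""
    scores = {
        ("loan application", "group_a"): 0.62,
        ("loan application", "group_b"): 0.55,
        ("job screening", "group_a"): 0.70,
        ("job screening", "group_b"): 0.48,
        ("school admission", "group_a"): 0.58,
        ("school admission", "group_b"): 0.57,
    }
    return scores[(context, group)]

contexts = ["loan application", "job screening", "school admission"]

# Per-context gap between the two groups (one simple notion of
# discriminatory behavior).
gaps = [predict(c, "group_a") - predict(c, "group_b") for c in contexts]

# Prior frameworks, per the abstract, report only the average gap ...
mean_gap = statistics.mean(gaps)

# ... whereas the abstract argues that variation across contexts also
# matters: a small average gap can hide sharp discrimination in
# particular contexts.
gap_variation = statistics.pstdev(gaps)

print(f"average gap: {mean_gap:.3f}, "
      f"variation across contexts: {gap_variation:.3f}")
```

On the toy numbers above, the average gap looks modest while the spread across contexts is large, which is exactly the failure mode the abstract says average-only measurement misses.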
