Feb. 26, 2024, 5:48 a.m. | Yiran Liu (equal contribution, Tsinghua University), Ke Yang (equal contribution, University of Illinois Urbana-Champaign), Zehan Qi (Tsinghua University), X

cs.CL updates on arXiv.org

arXiv:2402.15481v1 Announce Type: new
Abstract: The growing integration of large language models (LLMs) into social operations amplifies their impact on decisions in crucial areas such as economics, law, education, and healthcare, raising public concerns about these models' safety and reliability with respect to discrimination. However, prior frameworks for measuring discrimination assess only the average discriminatory behavior of LLMs, which often proves inadequate because it overlooks an additional factor that leads to discrimination: the variation of the LLMs' predictions across diverse contexts. In this work, we present the …
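To make the abstract's distinction concrete, here is a minimal sketch of the two quantities involved: the average discrimination that prior frameworks report, and its variation across contexts, which the abstract identifies as the overlooked factor. The scoring function, helper names, and example prompts below are hypothetical illustrations, not the paper's actual framework (the truncated abstract does not specify it):

import statistics

def discrimination_score(score_fn, prompt_a: str, prompt_b: str) -> float:
    """Per-context gap between the model's outputs for two
    demographically contrasted versions of the same prompt.
    `score_fn` is a hypothetical stand-in for an LLM scorer."""
    return abs(score_fn(prompt_a) - score_fn(prompt_b))

def measure_discrimination(score_fn, context_pairs):
    """Report both the average gap (what prior frameworks measure)
    and its spread across contexts (the prediction variation the
    abstract says those frameworks overlook)."""
    gaps = [discrimination_score(score_fn, a, b) for a, b in context_pairs]
    return {
        "mean_gap": statistics.mean(gaps),                    # average behavior
        "gap_std_across_contexts": statistics.pstdev(gaps),   # context variation
    }

if __name__ == "__main__":
    # Toy stand-in for an LLM scoring function, for demonstration only.
    toy_score = lambda prompt: float(len(prompt))
    pairs = [
        ("He applied for the loan.", "She applied for the loan."),
        ("He applied for the job.", "She applied for the job."),
    ]
    print(measure_discrimination(toy_score, pairs))

Two models with the same mean gap can differ sharply in the second number: a model that is fair on average but erratic across contexts would show a low mean_gap and a high gap_std_across_contexts.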

