May 13, 2024, 4:46 a.m. | Aadesh Salecha, Molly E. Ireland, Shashanka Subrahmanya, João Sedoc, Lyle H. Ungar, Johannes C. Eichstaedt

cs.CL updates on arXiv.org

arXiv:2405.06058v1 Announce Type: cross
Abstract: As Large Language Models (LLMs) become widely used to model and simulate human behavior, understanding their biases becomes critical. We developed an experimental framework using Big Five personality surveys and uncovered a previously undetected social desirability bias in a wide range of LLMs. By systematically varying the number of questions LLMs were exposed to, we demonstrate their ability to infer when they are being evaluated. When personality evaluation is inferred, LLMs skew their scores towards …
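To make the described setup concrete, here is a minimal sketch (not the authors' code) of how one might vary the number of Big Five survey items shown to an LLM per prompt and compare the resulting scores. The `ask_llm` function and the two illustrative items are hypothetical stand-ins; a real study would use a validated BFI inventory and a specific model API.

```python
# Minimal sketch: probe an LLM with Big Five items, varying how many items
# appear in a single prompt, and compare mean ratings across batch sizes.
import random
from statistics import mean

# Illustrative items only (hypothetical); a real study would use a validated
# Big Five inventory with proper reverse-scoring.
ITEMS = [
    "I am the life of the party.",          # extraversion
    "I get stressed out easily.",           # neuroticism
]

PROMPT = (
    "Rate how well each statement describes you on a 1-5 scale "
    "(1 = strongly disagree, 5 = strongly agree). "
    "Reply with one number per line.\n\n{items}"
)

def ask_llm(prompt: str) -> str:
    """Hypothetical stub; replace with a call to whichever model is tested."""
    raise NotImplementedError

def administer(batch_size: int, n_trials: int = 20) -> float:
    """Average rating when `batch_size` items are shown in one prompt."""
    ratings = []
    for _ in range(n_trials):
        batch = random.sample(ITEMS, k=min(batch_size, len(ITEMS)))
        numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(batch))
        reply = ask_llm(PROMPT.format(items=numbered))
        ratings.extend(
            int(line.strip()) for line in reply.splitlines()
            if line.strip().isdigit()
        )
    return mean(ratings)

# Comparing administer(1) against larger batch sizes probes whether ratings
# shift toward socially desirable answers once the survey context, and thus
# the fact of being evaluated, becomes apparent to the model.
```

The design choice of varying batch size follows the abstract's description: with a single item the evaluation context is ambiguous, while a longer run of items makes the personality-assessment setting easier for the model to infer.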
