all AI news
Evaluating Implicit Bias in Large Language Models by Attacking From a Psychometric Perspective
June 21, 2024, 4:42 a.m. | Yuchen Wen, Keping Bi, Wei Chen, Jiafeng Guo, Xueqi Cheng
cs.CL updates on arXiv.org arxiv.org
Abstract: As Large Language Models (LLMs) become an important way of information seeking, there have been increasing concerns about the unethical content LLMs may generate. In this paper, we conduct a rigorous evaluation of LLMs' implicit bias towards certain groups by attacking them with carefully crafted instructions to elicit biased responses. Our attack methodology is inspired by psychometric principles in cognitive and social psychology. We propose three attack approaches, i.e., Disguise, Deception, and Teaching, based on …
abstract arxiv become bias concerns cs.ai cs.cl evaluation generate important information language language models large language large language models llms paper perspective them type
More from arxiv.org / cs.CL updates on arXiv.org
Jobs in AI, ML, Big Data
AI Focused Biochemistry Postdoctoral Fellow
@ Lawrence Berkeley National Lab | Berkeley, CA
Senior Data Engineer
@ Displate | Warsaw
Hybrid Cloud Engineer
@ Vanguard | Wayne, PA
Senior Software Engineer
@ F5 | San Jose
Software Engineer, Backend, 3+ Years of Experience
@ Snap Inc. | Bellevue - 110 110th Ave NE
Global Head of Commercial Data Foundations
@ Sanofi | Cambridge