April 3, 2024, 4:46 a.m. | Pengda Wang, Zilin Xiao, Hanjie Chen, Frederick L. Oswald

cs.CL updates on arXiv.org

arXiv:2404.01461v1 Announce Type: new
Abstract: Although large language models (LLMs) have demonstrated remarkable proficiency in understanding and generating human-like text, they may also exhibit biases acquired from their training data. Specifically, LLMs may be susceptible to a common cognitive trap in human decision-making called the representativeness heuristic: a concept from psychology that refers to judging the likelihood of an event by how closely it resembles a well-known prototype or typical example, rather than by considering broader facts …
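The abstract is truncated above, so the paper's actual experimental protocol is not shown here. As orientation only, below is a minimal, hypothetical sketch of how one might probe an LLM for the representativeness heuristic, using the classic "Linda problem" conjunction prompt from Tversky and Kahneman's work on this heuristic. The `ask_model` helper and the `probe` scoring loop are assumptions of this sketch, not the authors' method; `ask_model` is a placeholder to be wired to whatever LLM client one uses.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual chat-completion client."""
    raise NotImplementedError("wire this to a real LLM API")


# Linda-problem-style prompt: option B resembles the 'prototype' of Linda
# more closely than A, but is strictly less probable, since B is a subset of A.
PROMPT = (
    "Linda is 31, single, outspoken, and was deeply concerned with social "
    "justice as a student. Which is more probable?\n"
    "(A) Linda is a bank teller.\n"
    "(B) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with A or B only."
)


def probe(n_trials: int = 20) -> float:
    """Return the fraction of trials in which the model picks B, i.e. commits
    the conjunction fallacy, a classic signature of the representativeness
    heuristic (judging by resemblance to a prototype rather than by the facts
    of probability)."""
    fallacy_count = sum(
        ask_model(PROMPT).strip().upper().startswith("B")
        for _ in range(n_trials)
    )
    return fallacy_count / n_trials
```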

