Nov. 5, 2023, 6:47 a.m. | Yuhan Zhang, Edward Gibson, Forrest Davis

cs.CL updates on arXiv.org

Language models (LMs) have been argued to overlap substantially with human
beings in grammaticality judgment tasks. But when humans systematically make
errors in language processing, should we expect LMs to behave like cognitive
models of language and mimic human behavior? We answer this question by
investigating LMs' more subtle judgments associated with "language illusions"
-- sentences that are vague in meaning, implausible, or ungrammatical but
receive unexpectedly high acceptability judgments by humans. We looked at three
illusions: the comparative illusion …

