Feb. 23, 2024, 5:48 a.m. | Ningyu Xu, Qi Zhang, Menghan Zhang, Peng Qian, Xuanjing Huang

cs.CL updates on arXiv.org

arXiv:2402.14404v1 Announce Type: new
Abstract: Probing and enhancing large language models' reasoning capacity remains a crucial open question. Here we re-purpose the reverse dictionary task as a case study to probe LLMs' capacity for conceptual inference. We use in-context learning to guide the models to generate the term for an object concept implied in a linguistic description. Models robustly achieve high accuracy in this task, and their representation space encodes information about object categories and fine-grained features. Further experiments suggest …
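The in-context setup described above can be sketched as follows. This is a hedged illustration, not the authors' exact prompt format: the demonstration pairs and the `build_reverse_dictionary_prompt` helper are assumptions chosen for clarity. Each demonstration maps a linguistic description to the single term it implies, and the model is then asked to complete the final, unanswered description.

```python
# Minimal sketch of a reverse-dictionary in-context-learning prompt.
# The example pairs and function name are illustrative assumptions,
# not the paper's actual demonstrations.

FEW_SHOT_EXAMPLES = [
    ("a young dog", "puppy"),
    ("a building where books are kept and lent out", "library"),
    ("a tool with a heavy head used for driving nails", "hammer"),
]

def build_reverse_dictionary_prompt(description, examples=FEW_SHOT_EXAMPLES):
    """Assemble the prompt: each demonstration is a description -> term pair;
    the query description is left with an empty 'Term:' slot for the model
    to fill in."""
    blocks = [f"Description: {desc}\nTerm: {term}" for desc, term in examples]
    blocks.append(f"Description: {description}\nTerm:")
    return "\n\n".join(blocks)

prompt = build_reverse_dictionary_prompt(
    "a frozen spike of water hanging from a roof edge"
)
print(prompt)
```

The resulting string would be sent to an LLM as-is; accuracy is then scored by whether the generated completion matches the target term (here, presumably "icicle").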

