April 24, 2024, 4:48 a.m. | Zhivar Sourati, Filip Ilievski, Pia Sommerauer, Yifan Jiang

cs.CL updates on arXiv.org

arXiv:2310.00996v3 Announce Type: replace
Abstract: As a core cognitive skill that enables the transfer of information across domains, analogical reasoning has been studied extensively in both humans and computational models. However, while cognitive theories of analogy often focus on narratives and distinguish between surface, relational, and system similarities, existing work in natural language processing has a narrower scope, addressing mainly relational analogies between word pairs. This gap raises a natural question: can state-of-the-art large language models (LLMs) …

