Evidence from counterfactual tasks supports emergent analogical reasoning in large language models
April 23, 2024, 4:49 a.m. | Taylor Webb, Keith J. Holyoak, Hongjing Lu
cs.CL updates on arXiv.org arxiv.org
Abstract: We recently reported evidence that large language models are capable of solving a wide range of text-based analogy problems in a zero-shot manner, indicating the presence of an emergent capacity for analogical reasoning. Two recent commentaries have challenged these results, citing evidence from so-called 'counterfactual' tasks in which the standard sequence of the alphabet is arbitrarily permuted so as to decrease similarity with materials that may have been present in the language model's training data. …
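The abstract's 'counterfactual' task construction can be sketched concretely: arbitrarily permute the alphabet, then pose a letter-string analogy over the permuted ordering so that surface similarity to likely training data is reduced. The function names and the specific successor-analogy format below are illustrative assumptions, not the authors' exact materials.

```python
import random

def permuted_alphabet(seed=0):
    """Return an arbitrarily permuted alphabet (a 'counterfactual' ordering)."""
    letters = list("abcdefghijklmnopqrstuvwxyz")
    rng = random.Random(seed)
    permuted = letters[:]
    rng.shuffle(permuted)
    return permuted

def successor_analogy(alphabet):
    """Build a simple successor analogy over the given ordering:
    [0 1 2] -> [0 1 3] :: [8 9 10] -> [8 9 11], by alphabet position."""
    src = alphabet[0:3]
    tgt = src[:2] + [alphabet[3]]
    probe = alphabet[8:11]
    answer = probe[:2] + [alphabet[11]]
    return src, tgt, probe, answer
```

With the standard alphabet this yields the familiar [a b c] -> [a b d] :: [i j k] -> [i j l]; with a permuted alphabet the same abstract relation holds, but the surface strings no longer match conventional alphabetical order.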