Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment
Feb. 22, 2024, 5:48 a.m. | William Merrill, Zhaofeng Wu, Norihito Naka, Yoon Kim, Tal Linzen
cs.CL updates on arXiv.org
Abstract: Do LMs infer the semantics of text from co-occurrence patterns in their training data? Merrill et al. (2022) argue that, in theory, probabilities predicted by an optimal LM encode semantic information about entailment relations, but it is unclear whether neural LMs trained on real corpora learn entailment in this way, given the strong idealizing assumptions Merrill et al. make. In this work, we investigate whether their theory can be used to decode entailment judgments from …
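The core idea in the abstract, reading entailment judgments off an LM's co-occurrence probabilities, can be illustrated with a rough sketch. The snippet below is only a generic proxy under assumed choices (GPT-2 loaded through Hugging Face transformers, and a simple score comparing log p(hypothesis | premise) to log p(hypothesis)); it is not the test statistic derived by Merrill et al. (2022) or the decoding procedure evaluated in this paper.

```python
# Illustrative sketch only: scores a premise/hypothesis pair with a causal LM
# by comparing log p(hypothesis | premise) to log p(hypothesis). This is a
# generic co-occurrence-based proxy for entailment, not the exact statistic
# from Merrill et al. (2022) or from this paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_prob(text: str, prefix: str = "") -> float:
    """Sum of token log-probabilities of `text`, optionally conditioned on `prefix`."""
    prefix_ids = tokenizer(prefix)["input_ids"] if prefix else []
    text_ids = tokenizer(text)["input_ids"]
    input_ids = torch.tensor([prefix_ids + text_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # the token at position i is predicted from the logits at position i - 1
    for i in range(len(prefix_ids), len(prefix_ids) + len(text_ids)):
        if i == 0:
            continue  # the very first token has no context to condition on
        total += log_probs[0, i - 1, input_ids[0, i]].item()
    return total

premise = "A dog is running in the park."
hypothesis = "An animal is outside."

# A higher score means the hypothesis becomes more probable once the premise
# is given, a rough co-occurrence signal that the premise supports it.
score = log_prob(hypothesis, prefix=premise + " ") - log_prob(hypothesis)
print(f"co-occurrence entailment score: {score:.2f}")
```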