Towards Logically Consistent Language Models via Probabilistic Reasoning
April 22, 2024, 4:42 a.m. | Diego Calanzone, Stefano Teso, Antonio Vergari
cs.LG updates on arXiv.org
Abstract: Large language models (LLMs) are a promising venue for natural language understanding and generation tasks. However, current LLMs are far from reliable: they are prone to generate non-factual information and, more crucially, to contradict themselves when prompted to reason about beliefs of the world. These problems are currently addressed with large scale fine-tuning or by delegating consistent reasoning to external tools. In this work, we strive for a middle ground and introduce a training objective …
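To make the idea concrete, below is a minimal, hypothetical sketch of what a probabilistic consistency objective of this kind can look like for a single logical implication constraint A -> B. The abstract above is truncated, so this is not the paper's exact formulation: the `prob_true` interface, the independence assumption between beliefs, and the semantic-loss-style penalty are all illustrative choices.

```python
# Sketch of a consistency loss for one implication constraint A -> B.
# Assumption: the LLM exposes a scalar belief p(statement is true) for each
# natural-language statement, e.g. via the normalized logit of a "True" token.

import torch

def implication_consistency_loss(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Semantic-loss-style penalty: -log P(A -> B), treating beliefs as independent.

    P(A -> B) = P(not A) + P(A) * P(B) = 1 - p_a + p_a * p_b
    The loss is ~0 when the constraint holds with probability close to 1,
    and grows when the model believes A but disbelieves B.
    """
    p_constraint = 1.0 - p_a + p_a * p_b
    return -torch.log(p_constraint.clamp_min(1e-12))

# Example: the model strongly believes statement A but doubts its logical
# consequence B; backpropagating this loss through the LLM's belief scores
# nudges the two beliefs toward logical consistency.
p_a = torch.tensor(0.95, requires_grad=True)  # belief in A
p_b = torch.tensor(0.30, requires_grad=True)  # belief in B
loss = implication_consistency_loss(p_a, p_b)
loss.backward()
print(float(loss), float(p_a.grad), float(p_b.grad))
```

In a fine-tuning setup, a penalty like this would be summed over a set of external facts and rules and added to (or used in place of) the standard language-modeling loss, so consistency is enforced during training rather than delegated to an external reasoning tool.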