April 22, 2024, 4:42 a.m. | Diego Calanzone, Stefano Teso, Antonio Vergari

cs.LG updates on arXiv.org

arXiv:2404.12843v1 Announce Type: new
Abstract: Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks. However, current LLMs are far from reliable: they are prone to generating non-factual information and, more crucially, to contradicting themselves when prompted to reason about beliefs of the world. These problems are currently addressed with large-scale fine-tuning or by delegating consistent reasoning to external tools. In this work, we strive for a middle ground and introduce a training objective …
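The abstract is truncated before it describes the training objective, so the following is only a hypothetical sketch of the general idea of a logical-consistency penalty, not the paper's actual method: a differentiable loss term that discourages a model from assigning high probability to both a statement and its negation. The function name, tensor shapes, and example probabilities are all illustrative assumptions.

```python
# Hypothetical illustration (NOT the objective from arXiv:2404.12843, which
# the truncated abstract does not specify): penalize probability mass that
# violates the negation axiom P(s) + P(not s) = 1.
import torch


def consistency_loss(p_statement: torch.Tensor,
                     p_negation: torch.Tensor) -> torch.Tensor:
    """Squared deviation from P(s) + P(not s) = 1, averaged over a batch.

    Args:
        p_statement: model's probability that each statement is true, shape (B,)
        p_negation:  model's probability that each negation is true, shape (B,)
    """
    # Zero exactly when the model is self-consistent on every
    # (statement, negation) pair; differentiable, so it can be added
    # to a standard language-modeling loss during fine-tuning.
    return ((p_statement + p_negation - 1.0) ** 2).mean()


# Toy usage: a model that rates "the sky is blue" 90% true but
# "the sky is not blue" 40% true carries 0.3 of excess belief mass.
p_s = torch.tensor([0.9])
p_not_s = torch.tensor([0.4])
print(consistency_loss(p_s, p_not_s))  # tensor(0.0900)
```

In a setup like this, the penalty would typically be weighted and summed with the usual next-token loss, so consistency is encouraged without delegating reasoning to an external solver.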

