Reasoning over Description Logic-based Contexts with Transformers
Feb. 27, 2024, 5:51 a.m. | Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis
cs.CL updates on arXiv.org
Abstract: One way that the current state of the art measures the reasoning ability of transformer-based models is by evaluating accuracy in downstream tasks like logical question answering or proof generation over synthetic contexts expressed in natural language. However, most of the contexts used are in practice very simple; in most cases, they are generated from short first-order logic sentences with only a few logical operators and quantifiers. In this work, we seek to answer the …
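The generation process the abstract describes, in which synthetic natural-language contexts are built from short first-order logic sentences with few operators and quantifiers, can be illustrated with a small sketch. The vocabulary, sentence templates, and function names below are hypothetical, not taken from the paper; this is only a minimal RuleTaker-style generator under those assumptions:

```python
import random

# Hypothetical sketch: generate a synthetic context from short
# first-order-logic-style facts and one-variable implication rules,
# then verbalise them into simple English sentences.

ENTITIES = ["Anne", "Bob", "Charlie"]
PREDICATES = ["red", "kind", "round"]

def make_fact(rng):
    # An atomic fact such as red(Anne), verbalised as "Anne is red."
    entity = rng.choice(ENTITIES)
    pred = rng.choice(PREDICATES)
    return (pred, entity), f"{entity} is {pred}."

def make_rule(rng):
    # A universally quantified implication, forall x. p(x) -> q(x),
    # verbalised as "If someone is p then they are q."
    p, q = rng.sample(PREDICATES, 2)
    return (p, q), f"If someone is {p} then they are {q}."

def generate_context(num_facts=3, num_rules=2, seed=0):
    rng = random.Random(seed)
    facts = [make_fact(rng) for _ in range(num_facts)]
    rules = [make_rule(rng) for _ in range(num_rules)]
    sentences = [s for _, s in facts] + [s for _, s in rules]
    return facts, rules, " ".join(sentences)

def entailed_facts(facts, rules):
    # Forward chaining to closure: the gold labels for a logical
    # question-answering task are read off this set.
    known = {atom for atom, _ in facts}
    changed = True
    while changed:
        changed = False
        for (p, q), _ in rules:
            for pred, ent in list(known):
                if pred == p and (q, ent) not in known:
                    known.add((q, ent))
                    changed = True
    return known

facts, rules, context = generate_context()
closure = entailed_facts(facts, rules)
```

A model is then asked whether a query such as "Anne is round." follows from the generated context, with the forward-chaining closure supplying the ground-truth answer. Description logic contexts, the paper's focus, would require a richer generator than this propositional-style sketch.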