Unsupervised Pretraining for Fact Verification by Language Model Distillation
March 8, 2024, 5:43 a.m. | Adrián Bazaga, Pietro Liò, Gos Micklem
cs.LG updates on arXiv.org arxiv.org
Abstract: Fact verification aims to verify a claim using evidence from a trustworthy knowledge base. To address this challenge, algorithms must produce features for every claim that are both semantically meaningful and compact enough to find a semantic alignment with the source information. In contrast to previous work, which tackled the alignment problem by learning over annotated corpora of claims and their corresponding labels, we propose SFAVEL (Self-supervised Fact Verification via Language Model Distillation), a novel …
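The alignment idea in the abstract — scoring a claim against knowledge-base evidence in a shared feature space — can be illustrated with a toy sketch. A bag-of-words vector and cosine similarity stand in for the learned language-model features; the function names and threshold below are hypothetical illustrations, not the SFAVEL method itself:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "feature" vector; a stand-in for a language
    # model's claim/evidence embedding (hypothetical, not SFAVEL).
    return Counter(text.lower().split())

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def verify(claim, evidence_passages, threshold=0.3):
    # A claim counts as "supported" if it aligns with at least one
    # knowledge-base passage above the similarity threshold.
    c = embed(claim)
    best = max((cosine(c, embed(e)) for e in evidence_passages), default=0.0)
    return best >= threshold, best

supported, score = verify(
    "the eiffel tower is in paris",
    ["the eiffel tower is a landmark in paris france",
     "mount fuji is the tallest mountain in japan"],
)
```

In the paper's setting the embeddings would come from a distilled language model and the alignment would be learned self-supervised, but the verification decision has the same shape: compare claim features against evidence features and threshold the best match.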