Unsupervised Pretraining for Fact Verification by Language Model Distillation
March 8, 2024, 5:43 a.m. | Adrián Bazaga, Pietro Liò, Gos Micklem
cs.LG updates on arXiv.org
Abstract: Fact verification aims to verify a claim using evidence from a trustworthy knowledge base. To address this challenge, algorithms must produce features for every claim that are both semantically meaningful and compact enough to find a semantic alignment with the source information. In contrast to previous work, which tackled the alignment problem by learning over annotated corpora of claims and their corresponding labels, we propose SFAVEL (Self-supervised Fact Verification via Language Model Distillation), a novel …
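The core idea the abstract describes — producing a compact feature for each claim and aligning it with facts in a knowledge base — can be illustrated with a minimal sketch. This is not SFAVEL itself: toy bag-of-words vectors stand in for distilled language-model features, and all function names (`embed`, `best_evidence`) and the example knowledge base are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words feature for a sentence; a placeholder for a
    # distilled language-model embedding (illustrative only).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_evidence(claim, knowledge_base):
    # Align the claim's feature with each fact in the knowledge base
    # and return the most semantically similar one.
    c = embed(claim)
    return max(knowledge_base, key=lambda fact: cosine(c, embed(fact)))

kb = [
    "The Eiffel Tower is located in Paris",
    "Mount Everest is the highest mountain on Earth",
]
print(best_evidence("The Eiffel Tower is in Paris", kb))
```

In the paper's setting, the embedding would instead come from self-supervised distillation of a pretrained language model, so that claim and evidence features align without annotated claim–label corpora.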