Incorporating Word Sense Disambiguation in Neural Language Models. (arXiv:2106.07967v2 [cs.CL] UPDATED)
Jan. 31, 2022, 2:10 a.m. | Jan Philip Wahle, Terry Ruas, Norman Meuschke, Bela Gipp
cs.CL updates on arXiv.org arxiv.org
We present two supervised (pre-)training methods to incorporate gloss
definitions from lexical resources into neural language models (LMs). The
training not only improves our models' performance for Word Sense Disambiguation
(WSD) but also benefits general language understanding tasks while adding almost no
parameters. We evaluate our techniques with seven different neural LMs and find
that XLNet is more suitable for WSD than BERT. Our best-performing methods
exceed state-of-the-art WSD techniques on the SemCor 3.0 dataset by 0.5% F1
and increase BERT's performance …
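The abstract does not spell out the training setup, but a common way to incorporate gloss definitions into a pretrained LM is context-gloss pairing (as in GlossBERT): each occurrence of an ambiguous word is paired with the WordNet gloss of every candidate sense, and a small scoring head on top of the encoder learns to prefer the correct gloss. The sketch below illustrates that general idea only; the model name, the WordNet lookup, and the scoring head are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of context-gloss pairing for WSD with a pretrained LM.
# Not the authors' implementation; it only shows how gloss definitions from a
# lexical resource (WordNet) could be fed to an encoder with almost no extra
# parameters (a single linear scoring head).
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

MODEL_NAME = "bert-base-uncased"  # any comparable encoder could be swapped in

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
scorer = nn.Linear(encoder.config.hidden_size, 1)  # tiny parameter overhead


def candidate_glosses(lemma, pos=None):
    """Return (sense_id, gloss definition) pairs for all WordNet senses of lemma."""
    return [(s.name(), s.definition()) for s in wn.synsets(lemma, pos=pos)]


def score_senses(context, lemma):
    """Score each (context, gloss) pair; the highest score is the predicted sense."""
    scores = {}
    with torch.no_grad():
        for sense_id, gloss in candidate_glosses(lemma):
            # Encode the sentence pair "[CLS] context [SEP] gloss [SEP]".
            inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
            cls = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] representation
            scores[sense_id] = scorer(cls).item()
    return scores


if __name__ == "__main__":
    # Before fine-tuning, scores are uninformative; during training, pairs with
    # the gold gloss would get label 1 and others 0 (e.g. BCEWithLogitsLoss).
    print(score_senses("He sat on the bank of the river.", "bank"))
```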