Incorporating Word Sense Disambiguation in Neural Language Models. (arXiv:2106.07967v2 [cs.CL] UPDATED)
Web: http://arxiv.org/abs/2106.07967
Jan. 31, 2022, 2:10 a.m. | Jan Philip Wahle, Terry Ruas, Norman Meuschke, Bela Gipp
cs.CL updates on arXiv.org arxiv.org
We present two supervised (pre-)training methods that incorporate gloss
definitions from lexical resources into neural language models (LMs). The
training not only improves our models' performance on Word Sense Disambiguation (WSD)
but also benefits general language understanding tasks, while adding almost no
parameters. We evaluate our techniques with seven different neural LMs and find
that XLNet is more suitable for WSD than BERT. Our best-performing method
exceeds state-of-the-art WSD techniques on the SemCor 3.0 dataset by 0.5% F1
and increases BERT's performance …
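The core recipe behind gloss-informed WSD can be framed as context-gloss pair classification: for each candidate sense of a target word, pair the context sentence with the sense's dictionary gloss and score the pair. A minimal, hedged sketch of that framing is below; the function names and the `[SEP]`-style pairing are illustrative, and a trivial word-overlap scorer (Lesk-style) stands in for the fine-tuned neural LM (e.g. BERT or XLNet) that the paper would actually use.

```python
# Sketch: WSD as context-gloss pair scoring (names are illustrative,
# not the paper's API). A real system replaces overlap_score with a
# fine-tuned LM that scores each (context, gloss) pair.

def make_pairs(context, target, glosses):
    """Build one (pair_text, sense_id) input per candidate sense."""
    return [(f"{context} [SEP] {target} : {gloss}", sense)
            for sense, gloss in glosses.items()]

def overlap_score(pair_text):
    """Toy stand-in for an LM score: word overlap between the two sides."""
    left, right = pair_text.split(" [SEP] ")
    return len(set(left.lower().split()) & set(right.lower().split()))

def disambiguate(context, target, glosses):
    """Return the sense whose gloss pairs best with the context."""
    pairs = make_pairs(context, target, glosses)
    return max(pairs, key=lambda p: overlap_score(p[0]))[1]

# Toy WordNet-style glosses for "bank".
glosses = {
    "bank.n.01": "a financial institution that accepts deposits",
    "bank.n.02": "sloping land beside a body of water",
}

print(disambiguate("she keeps her deposits at the bank", "bank", glosses))
# → bank.n.01
print(disambiguate("we fished from the grassy bank of the river", "bank", glosses))
# → bank.n.02
```

Swapping the overlap scorer for a neural LM that consumes the same paired inputs is what lets gloss knowledge flow into the model during (pre-)training without adding a sense inventory's worth of new parameters.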