Web: http://arxiv.org/abs/2206.08081

June 17, 2022, 1:10 a.m. | Nishtha Madaan, Prateek Chaudhury, Nishant Kumar, Srikanta Bedathur

cs.LG updates on arXiv.org

In modern NLP applications, word embeddings are a crucial backbone that can
be readily shared across a number of tasks. However, as text distributions
change and word semantics evolve over time, downstream applications that use
the embeddings can suffer if the word representations do not track the
data drift. Keeping word embeddings consistent with the underlying data
distribution is therefore a key problem. In this work, we tackle this
problem and propose TransDrift, a transformer-based prediction …
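The abstract is truncated before the method is described, but the core idea it states is using a transformer to predict how a word's embedding drifts over time. Below is a minimal, hypothetical sketch of that idea, not the paper's actual architecture: a single self-attention layer attends over a word's past embedding snapshots and reads out a prediction for the next one. All function names, weight matrices, and shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d) -- one word's embedding snapshots over time.
    # Wq, Wk, Wv: (d, d) projection matrices (illustrative, untrained).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # attention over past snapshots
    return A @ V

def predict_next_embedding(snapshots, Wq, Wk, Wv, Wo):
    # Attend over the embedding history and read the prediction
    # from the final time step, projected by an output matrix Wo.
    H = self_attention(snapshots, Wq, Wk, Wv)
    return H[-1] @ Wo
```

In a trained model the weights would be fit so that predictions match embeddings learned on later data; here they are random placeholders purely to show the data flow (history of snapshots in, one predicted embedding out).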

arxiv embedding modeling transformer
