Oct. 12, 2022, 1:17 a.m. | Stephanie Brandl, David Lassner, Anne Baillot, Shinichi Nakajima

cs.CL updates on arXiv.org arxiv.org

Complementary to finding good general word embeddings, an important question
for representation learning is to find dynamic word embeddings, e.g., across
time or domain. Current methods offer no way to use or predict information
about the structure between sub-corpora, time, or domain, and dynamic
embeddings can only be compared after post-alignment. We propose novel word
embedding methods that simultaneously provide general word representations for
the whole corpus, domain-specific representations for each sub-corpus, the
sub-corpus structure, and embedding alignment. We present …
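The post-alignment step the abstract refers to is commonly done with orthogonal Procrustes: embeddings trained separately on two sub-corpora live in different coordinate systems, so one space is rotated onto the other before comparison. The sketch below illustrates that baseline (not the paper's proposed method) with NumPy; the toy data and dimensions are assumptions for illustration.

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal matrix R minimising ||X @ R - Y||_F
    (orthogonal Procrustes), rotating embedding space X onto Y."""
    # Cross-covariance between the two embedding spaces.
    M = X.T @ Y
    U, _, Vt = np.linalg.svd(M)
    return X @ (U @ Vt)

# Toy check: if Y is an exact rotation of X, alignment recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                       # 100 "words", 50-dim vectors
rotation = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # random orthogonal matrix
Y = X @ rotation
X_aligned = procrustes_align(X, Y)
print(np.allclose(X_aligned, Y))  # True
```

In practice the two spaces differ by more than a rotation, so alignment is only approximate, and it must be redone for every pair of sub-corpora; this is the limitation the proposed joint method addresses.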

arxiv prediction word embeddings
