March 12, 2024, 4:44 a.m. | Sebastian Ruder, Ryan Cotterell, Yova Kementchedjhieva, Anders Søgaard

cs.LG updates on arXiv.org arxiv.org

arXiv:1808.09334v3 Announce Type: replace-cross
Abstract: We introduce a novel discriminative latent variable model for bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a representation-based approach (Artetxe et al., 2017). To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical results on six language pairs under two metrics and show that the prior improves the induced bilingual lexicons. We also demonstrate how previous work may be viewed as …
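The abstract describes alternating between a hard ("Viterbi") assignment step under a bipartite matching dictionary prior and a representation-fitting step. As a rough illustration only, and not the authors' implementation, the sketch below shows that style of procedure under assumed inputs: hard one-to-one matching via the Hungarian algorithm over embedding similarities, followed by an orthogonal Procrustes refit of the cross-lingual mapping (in the spirit of Artetxe et al., 2017). The names `src_emb`, `tgt_emb`, and `n_iters` are illustrative assumptions.

```python
# Illustrative sketch of hard-EM lexicon induction with a bipartite matching prior.
# NOT the paper's model; an assumed, simplified variant for intuition only.
import numpy as np
from scipy.optimize import linear_sum_assignment


def viterbi_em_lexicon(src_emb: np.ndarray, tgt_emb: np.ndarray, n_iters: int = 5):
    """Induce a one-to-one bilingual lexicon from monolingual embeddings.

    src_emb: (n, d) source-language embeddings, rows L2-normalised (assumed).
    tgt_emb: (m, d) target-language embeddings, rows L2-normalised (assumed).
    Returns a list of (source_index, target_index) dictionary pairs.
    """
    W = np.eye(src_emb.shape[1])  # linear map from source to target space
    rows = cols = None
    for _ in range(n_iters):
        # "Viterbi" E-step: hard one-to-one assignment (bipartite matching prior).
        sims = (src_emb @ W) @ tgt_emb.T           # cosine similarities
        rows, cols = linear_sum_assignment(-sims)  # maximise total similarity
        # M-step: refit the mapping on the induced dictionary (orthogonal Procrustes).
        X, Y = src_emb[rows], tgt_emb[cols]
        U, _, Vt = np.linalg.svd(X.T @ Y)
        W = U @ Vt
    return list(zip(rows, cols))
```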

