Nov. 3, 2022, 1:16 a.m. | Mingqi Li, Fei Ding, Dan Zhang, Long Cheng, Hongxin Hu, Feng Luo

cs.CL updates on arXiv.org arxiv.org

Pre-trained multilingual language models play an important role in
cross-lingual natural language understanding tasks. However, existing methods
do not focus on learning the semantic structure of representations and thus
cannot fully optimize their performance. In this paper, we propose Multi-level
Multilingual Knowledge Distillation (MMKD), a novel method for improving
multilingual language models. Specifically, we employ a teacher-student
framework to transfer the rich semantic representation knowledge in English BERT. We
propose token-, word-, sentence-, and structure-level alignment objectives to
encourage multiple levels …
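Below is a minimal PyTorch sketch of what combining multi-level alignment objectives in a teacher-student setup could look like. It is not the authors' implementation: the abstract is truncated, so the exact loss forms are unknown. The choices here (MSE for token-level alignment, cosine distance for sentence-level alignment, pairwise-similarity matching for structure-level alignment) and the omission of the word-level term are illustrative assumptions.

```python
# Hedged sketch of a multi-level distillation loss: teacher = English BERT
# representations, student = multilingual model representations of parallel
# inputs. Loss forms are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F


def multi_level_distillation_loss(
    teacher_tokens: torch.Tensor,  # [batch, seq_len, hidden] from the teacher
    student_tokens: torch.Tensor,  # [batch, seq_len, hidden] from the student
    teacher_sent: torch.Tensor,    # [batch, hidden] pooled sentence embeddings (teacher)
    student_sent: torch.Tensor,    # [batch, hidden] pooled sentence embeddings (student)
    weights=(1.0, 1.0, 1.0),
) -> torch.Tensor:
    """Combine token-, sentence-, and structure-level alignment objectives."""
    # Token-level: match per-token representations of aligned (parallel) inputs.
    token_loss = F.mse_loss(student_tokens, teacher_tokens)

    # Sentence-level: pull pooled sentence embeddings together (cosine distance).
    sent_loss = 1.0 - F.cosine_similarity(student_sent, teacher_sent, dim=-1).mean()

    # Structure-level: align the pairwise similarity matrices across the batch,
    # so the relative geometry of the teacher space is preserved in the student.
    t_norm = F.normalize(teacher_sent, dim=-1)
    s_norm = F.normalize(student_sent, dim=-1)
    struct_loss = F.mse_loss(s_norm @ s_norm.T, t_norm @ t_norm.T)

    w_tok, w_sent, w_struct = weights
    return w_tok * token_loss + w_sent * sent_loss + w_struct * struct_loss


if __name__ == "__main__":
    # Random tensors stand in for real teacher/student encoder outputs.
    batch, seq_len, hidden = 4, 16, 768
    t_tok = torch.randn(batch, seq_len, hidden)
    s_tok = torch.randn(batch, seq_len, hidden)
    t_sent, s_sent = t_tok.mean(dim=1), s_tok.mean(dim=1)
    print(multi_level_distillation_loss(t_tok, s_tok, t_sent, s_sent).item())
```

In practice the per-level weights would be tuned, and the token-level term would only be computed over word-aligned positions of parallel sentence pairs.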

arxiv, distillation, knowledge, language, language model, multilingual language model, pre-training, semantic, training
