Multi-level Distillation of Semantic Knowledge for Pre-training Multilingual Language Model. (arXiv:2211.01200v1 [cs.CL])
cs.CL updates on arXiv.org
Pre-trained multilingual language models play an important role in
cross-lingual natural language understanding tasks. However, existing methods
do not focus on learning the semantic structure of representations, and thus
cannot fully optimize their performance. In this paper, we propose Multi-level
Multilingual Knowledge Distillation (MMKD), a novel method for improving
multilingual language models. Specifically, we employ a teacher-student
framework to transfer the rich semantic representation knowledge of English
BERT. We propose token-, word-, sentence-, and structure-level alignment
objectives to encourage multiple levels …
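The abstract is truncated, so the exact form of MMKD's objectives is not given here. As a rough illustration of what combining alignment objectives at several granularities can look like, the sketch below mixes a token-level MSE with a sentence-level cosine alignment between teacher and student hidden states; the function names, pooling choice, and loss weights are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two arrays of hidden states.
    return float(np.mean((a - b) ** 2))

def cosine_distance(a, b):
    # 1 - cosine similarity between two vectors.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def multilevel_kd_loss(teacher_hidden, student_hidden, alpha=1.0, beta=1.0):
    """Illustrative multi-level distillation loss (hypothetical, not MMKD's exact form).

    teacher_hidden, student_hidden: (seq_len, dim) arrays of final-layer states.
    Combines a token-level term (align each position's state) with a
    sentence-level term (align mean-pooled sentence vectors).
    """
    token_loss = mse(teacher_hidden, student_hidden)
    t_sent = teacher_hidden.mean(axis=0)   # assumed mean pooling for sentence vectors
    s_sent = student_hidden.mean(axis=0)
    sent_loss = cosine_distance(t_sent, s_sent)
    return alpha * token_loss + beta * sent_loss

# Toy check: a student close to the teacher yields a small but nonzero loss.
rng = np.random.default_rng(0)
t = rng.normal(size=(8, 16))
s = t + 0.1 * rng.normal(size=(8, 16))
loss = multilevel_kd_loss(t, s)
```

In practice such terms would be computed on aligned teacher/student layers during training and summed with the task loss; word- and structure-level objectives (e.g. over aligned word pairs or pairwise representation geometry) would add further terms in the same style.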