Generalizing Multimodal Pre-training into Multilingual via Language Acquisition. (arXiv:2206.11091v1 [cs.CL])
Web: http://arxiv.org/abs/2206.11091
June 23, 2022, 1:12 a.m. | Liang Zhang, Anwen Hu, Qin Jin
cs.CL updates on arXiv.org
English-based Vision-Language Pre-training (VLP) has achieved great success
in various downstream tasks. Some efforts have been made to generalize this
success to non-English languages through Multilingual Vision-Language
Pre-training (M-VLP). However, due to the large number of languages, M-VLP
models often require huge computing resources and cannot be flexibly extended
to new languages. In this work, we propose a MultiLingual Acquisition (MLA)
framework that can easily generalize a monolingual Vision-Language
Pre-training model into a multilingual one. Specifically, we design a
lightweight language acquisition …
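The core idea the abstract describes, keeping a pre-trained monolingual VLP model frozen and training only a small language acquisition module that maps other languages into its embedding space, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the adapter shape, dimensions, and the cosine alignment objective on parallel sentences are all assumptions.

# Hedged sketch: a lightweight "language acquisition" adapter that maps
# multilingual text embeddings into a frozen monolingual VLP text space.
# All names, dimensions, and the loss are illustrative assumptions.
import torch
import torch.nn as nn

class LanguageAcquisitionAdapter(nn.Module):
    """Small bottleneck adapter: multilingual embedding -> VLP text space."""
    def __init__(self, in_dim: int = 768, vlp_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),   # down-project (keeps the module lightweight)
            nn.GELU(),
            nn.Linear(hidden, vlp_dim),  # up-project into the frozen VLP text space
        )

    def forward(self, multilingual_emb: torch.Tensor) -> torch.Tensor:
        return self.net(multilingual_emb)

def alignment_loss(adapted: torch.Tensor, english_emb: torch.Tensor) -> torch.Tensor:
    # Pull the adapted non-English embedding toward the frozen English
    # embedding of a parallel sentence (a simple stand-in objective).
    return 1.0 - nn.functional.cosine_similarity(adapted, english_emb, dim=-1).mean()

# Only the adapter's parameters receive gradients; the VLP model stays frozen.
adapter = LanguageAcquisitionAdapter()
multilingual_emb = torch.randn(4, 768)  # from a multilingual text encoder (assumed)
english_emb = torch.randn(4, 512)       # frozen VLP embeddings of parallel English text
loss = alignment_loss(adapter(multilingual_emb), english_emb)
loss.backward()

Under this reading, extending to a new language means training one more small adapter rather than re-running multilingual pre-training, which is what makes the approach cheap and flexibly extensible.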
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY