Nov. 24, 2022, 7:18 a.m. | Fahim Faisal, Antonios Anastasopoulos

cs.CL updates on arXiv.org

Large pretrained multilingual models, trained on dozens of languages, have
delivered promising results thanks to their cross-lingual learning
capabilities on a variety of language tasks. Further adapting these models to
specific languages, especially ones unseen during pre-training, is an
important goal toward expanding the coverage of language technologies. In this
study, we show how we can use language phylogenetic information to improve
cross-lingual transfer, leveraging closely related languages in a structured,
linguistically-informed manner. We perform adapter-based training on languages
from diverse language …
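The core recipe, adapter-based continued training on top of a frozen multilingual backbone, can be sketched roughly as follows. This is a minimal illustration only: it uses the PEFT library's LoRA adapters as a stand-in for the bottleneck-style language adapters typically used in this line of work, and the model name, target modules, and hyperparameters are assumptions rather than the authors' actual setup.

# Hedged sketch: lightweight adapter training for language adaptation.
# LoRA (via the PEFT library) stands in for the paper's adapter modules;
# backbone, target modules, and hyperparameters are illustrative guesses.
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "xlm-roberta-base"  # assumed multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Attach a small trainable adapter; the shared multilingual backbone
# stays frozen, so one adapter can be trained per (related) language group.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # adapters are a tiny fraction of params

# Continued masked-language-model training on text from the target language
# (or from phylogenetically related languages) would then proceed with a
# standard Trainer loop, updating only the adapter weights.

Because only the adapter parameters are updated, separate adapters for related languages can be trained cheaply and composed or swapped at inference time, which is what makes structured, phylogeny-guided transfer practical.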

