April 18, 2024, 4:47 a.m. | Shaomu Tan, Di Wu, Christof Monz

cs.CL updates on arXiv.org

arXiv:2404.11201v1 Announce Type: new
Abstract: Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference. Language-specific modeling methods show promise in reducing interference. However, they often rely on heuristics to distribute capacity and struggle to foster cross-lingual transfer via isolated modules. In this paper, we explore intrinsic task modularity within multilingual networks and leverage these observations to circumvent interference under multilingual translation. We show that neurons in the feed-forward layers tend to be activated in a language-specific …

