April 18, 2024, 4:47 a.m. | Shaomu Tan, Di Wu, Christof Monz

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.11201v1 Announce Type: new
Abstract: Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference. Language-specific modeling methods show promise in reducing interference. However, they often rely on heuristics to distribute capacity and struggle to foster cross-lingual transfer via isolated modules. In this paper, we explore intrinsic task modularity within multilingual networks and leverage these observations to circumvent interference under multilingual translation. We show that neurons in the feed-forward layers tend to be activated in a language-specific …
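The abstract describes probing feed-forward (FFN) neurons for language-specific activation patterns. Below is a minimal, hypothetical sketch of how such a probe could look in PyTorch; the toy FFN, the helper name `record_ffn_activation_rates`, and the synthetic "language" data are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch (assumption, not the paper's code): measure how often each FFN neuron
# fires for inputs from different languages, and rank neurons by how
# language-specific their activation pattern is.

torch.manual_seed(0)

d_model, d_ff = 16, 64  # toy dimensions

ffn = nn.Sequential(
    nn.Linear(d_model, d_ff),
    nn.ReLU(),  # activation frequency is measured after the non-linearity
    nn.Linear(d_ff, d_model),
)

def record_ffn_activation_rates(hidden_states: torch.Tensor) -> torch.Tensor:
    """Return, per FFN neuron, the fraction of tokens on which it fires (> 0)."""
    post_relu = ffn[1](ffn[0](hidden_states))   # (n_tokens, d_ff)
    return (post_relu > 0).float().mean(dim=0)  # (d_ff,) activation rate

# Stand-ins for real encoder states: token representations for two "languages"
# drawn from slightly shifted distributions.
tokens_lang_a = torch.randn(500, d_model) + 0.5
tokens_lang_b = torch.randn(500, d_model) - 0.5

rates_a = record_ffn_activation_rates(tokens_lang_a)
rates_b = record_ffn_activation_rates(tokens_lang_b)

# Neurons whose activation rate differs strongly between the two languages
# are candidates for "language-specific" neurons in this toy setup.
specialization = (rates_a - rates_b).abs()
top = specialization.topk(5)
print("Most language-specific neuron indices:", top.indices.tolist())
print("Activation-rate gaps:", [round(v, 3) for v in top.values.tolist()])
```

In the paper's setting, analogous per-language activation statistics would be collected from a trained multilingual translation model rather than a toy FFN; the sketch only illustrates the kind of measurement involved.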

