Oct. 25, 2022, 1:18 a.m. | Tyler A. Chang, Zhuowen Tu, Benjamin K. Bergen

cs.CL updates on arXiv.org

We assess how multilingual language models maintain a shared multilingual
representation space while still encoding language-sensitive information in
each language. Using XLM-R as a case study, we show that languages occupy
similar linear subspaces after mean-centering, evaluated based on causal
effects on language modeling performance and direct comparisons between
subspaces for 88 languages. The subspace means differ along language-sensitive
axes that are relatively stable throughout middle layers, and these axes encode
information such as token vocabularies. Shifting representations by language …

Tags: arxiv, geometry, language, language model, multilingual language model
