Feb. 29, 2024, 5:48 a.m. | Shaoyang Xu, Weilong Dong, Zishan Guo, Xinwei Wu, Deyi Xiong

cs.CL updates on arXiv.org

arXiv:2402.18120v1 Announce Type: new
Abstract: Prior research in representation engineering has revealed that LLMs encode concepts within their representation spaces, predominantly centered around English. In this study, we extend this philosophy to a multilingual scenario, delving into multilingual human value concepts in LLMs. Through our comprehensive exploration covering 7 types of human values, 16 languages and 3 LLM series with distinct multilinguality, we empirically substantiate the existence of multilingual human values in LLMs. Further cross-lingual analysis on these concepts discloses …
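The "concepts encoded in representation space" idea the abstract builds on is often operationalized with a difference-in-means probe: average the hidden states of examples that do and do not express a concept, and treat their normalized difference as the concept direction. The sketch below is illustrative only and not the paper's method; all names are placeholders and the "hidden states" are synthetic.

```python
import numpy as np

def concept_direction(pos_states, neg_states):
    """Estimate a concept direction as the (unit-norm) difference of mean
    hidden states. pos_states / neg_states: (n_examples, hidden_dim) arrays,
    e.g. states for prompts that do / do not express a given human value.
    Illustrative difference-in-means probe, not the paper's exact procedure."""
    direction = pos_states.mean(axis=0) - neg_states.mean(axis=0)
    return direction / np.linalg.norm(direction)

def concept_score(hidden_state, direction):
    """Project a hidden state onto the concept direction."""
    return float(hidden_state @ direction)

# Toy demo with synthetic "hidden states" (hidden_dim = 4).
rng = np.random.default_rng(0)
pos = rng.normal(loc=1.0, size=(32, 4))   # states expressing the concept
neg = rng.normal(loc=-1.0, size=(32, 4))  # states lacking the concept
d = concept_direction(pos, neg)

# Positive examples should score higher along the direction than negative ones.
print(concept_score(pos.mean(axis=0), d) > concept_score(neg.mean(axis=0), d))
```

In a cross-lingual setting like the one the abstract describes, the same probe could be fit per language and the resulting directions compared (e.g. by cosine similarity) to gauge how consistently a value concept is encoded across languages.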

