Feb. 1, 2024, 12:41 p.m. | Pinzhen Chen, Shaoxiong Ji, Nikolay Bogoychev, Andrey Kutuzov, Barry Haddow, Kenneth Heafield

cs.CL updates on arXiv.org

Foundational large language models (LLMs) can be instruction-tuned to perform open-domain question answering, facilitating applications like chat assistants. While such efforts are often carried out in a single language, we empirically analyze cost-efficient strategies for multilingual scenarios. Our study employs the Alpaca dataset and machine translations of it to form multilingual data, which is then used to tune LLMs through either low-rank adaptation or full-parameter training. Under a controlled computation budget, comparisons show that multilingual tuning is on par or …
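The recipe the abstract describes maps onto standard open-source tooling. Below is a minimal sketch of the low-rank adaptation (LoRA) path using Hugging Face transformers, peft, and datasets; the base checkpoint, the dataset file my_translated_alpaca.json, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal LoRA instruction-tuning sketch (illustrative assumptions, not the paper's setup).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-2-7b-hf"  # assumed base LLM; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections; rank and alpha are illustrative.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# "my_translated_alpaca.json" is a hypothetical file of Alpaca records
# machine-translated into the target languages, as the abstract describes.
data = load_dataset("json", data_files="my_translated_alpaca.json")["train"]

def tokenize(example):
    # Alpaca schema: instruction / input / output concatenated into one prompt.
    text = f"{example['instruction']}\n{example['input']}\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-multilingual-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=data,
    # mlm=False pads batches and sets labels for the standard causal-LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Full-parameter training is the same loop with the get_peft_model call removed; under a controlled computation budget, the two variants differ mainly in how many parameters receive gradients.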
