Bailong: Bilingual Transfer Learning based on QLoRA and Zip-tie Embedding
April 2, 2024, 7:51 p.m. | Lung-Chuan Chen, Zong-Ru Li
cs.CL updates on arXiv.org
Abstract: Large language models (LLMs) have demonstrated exceptional performance across a wide range of NLP applications. However, most existing open-source LLMs are pre-trained primarily on English data, with only a small fraction drawn from other languages. This scarcity of multilingual training data leads to suboptimal performance when these models are applied to languages with fewer available resources. Furthermore, improving LLM performance on low-resource languages through full-parameter fine-tuning with additional data demands substantial computational resources, posing computational barriers for research organizations and …
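QLoRA, named in the title, builds on LoRA-style parameter-efficient fine-tuning: the pre-trained weight matrix is frozen (and, in QLoRA, stored in 4-bit precision) while only a small low-rank correction is trained. The following is a minimal, dependency-free numeric sketch of that low-rank update; the matrix shapes, scaling factor, and function names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the LoRA update underlying QLoRA.
# Real QLoRA quantizes the frozen weights to 4 bits and trains
# A and B with an optimizer; here everything is plain Python.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Frozen weight W plus the trainable low-rank update B @ A, scaled by alpha/r."""
    base = matvec(W, x)                      # frozen pre-trained path
    delta = matvec(B, matvec(A, x))          # rank-r trainable correction
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy shapes: W is 3x3, A is 2x3 (down-projection), B is 3x2 (up-projection).
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
A = [[0.1, 0.0, 0.0],
     [0.0, 0.1, 0.0]]
B = [[0.1, 0.0],
     [0.0, 0.1],
     [0.0, 0.0]]
x = [1.0, 2.0, 3.0]

print(lora_forward(W, A, B, x))  # base output plus a small rank-2 correction
```

Because only A and B (2x3 and 3x2 here, rank r = 2) are trained, the number of updated parameters is far smaller than the full weight matrix, which is what makes this style of fine-tuning feasible on modest hardware.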