Vi-Mistral-X: Building a Vietnamese Language Model with Advanced Continual Pre-training
March 26, 2024, 4:50 a.m. | James Vo
cs.CL updates on arXiv.org
Abstract: The advancement of Large Language Models (LLMs) has significantly transformed the field of natural language processing, yet the focus on English-centric models has left a noticeable research gap for other languages, including Vietnamese. To address this gap, this paper presents vi-mistral-x, an innovative Large Language Model designed expressly for Vietnamese. It is built on the Mistral architecture, which incorporates grouped-query attention and sliding window attention, and is trained through a unique method of continual pre-training. This …
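The abstract names two attention techniques from the Mistral architecture: grouped-query attention (several query heads share each key/value head, shrinking the KV cache) and sliding window attention (each token attends only to a fixed-size window of recent positions). The paper's own implementation is not shown here; as a rough PyTorch sketch of how the two ideas combine, with dimensions (4096 hidden size, 32 query heads, 8 KV heads, 4096-token window) borrowed from Mistral-7B's published configuration:

```python
import torch
import torch.nn.functional as F
from torch import nn

class GroupedQueryAttention(nn.Module):
    """Illustrative grouped-query attention with a sliding-window causal mask.

    A sketch of the general technique, not vi-mistral-x's actual code:
    n_heads query heads share n_kv_heads key/value heads, and each token
    attends only to the last `sliding_window` positions.
    """

    def __init__(self, d_model=4096, n_heads=32, n_kv_heads=8):
        super().__init__()
        assert n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, n_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(n_heads * self.head_dim, d_model, bias=False)

    def forward(self, x, sliding_window=4096):
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(B, T, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(B, T, self.n_kv_heads, self.head_dim).transpose(1, 2)

        # Each KV head serves n_heads // n_kv_heads query heads:
        # expand K and V so every query head has a matching KV head.
        group = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(group, dim=1)
        v = v.repeat_interleave(group, dim=1)

        # Causal mask restricted to a sliding window: query position i
        # may attend only to key positions j with i - W < j <= i.
        pos = torch.arange(T, device=x.device)
        mask = (pos[None, :] <= pos[:, None]) & \
               (pos[:, None] - pos[None, :] < sliding_window)

        out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return self.o_proj(out.transpose(1, 2).reshape(B, T, -1))

# Usage sketch: a batch of 2 sequences of 16 tokens.
attn = GroupedQueryAttention()
y = attn(torch.randn(2, 16, 4096))  # -> shape (2, 16, 4096)
```

The motivation for both choices is inference cost: sharing KV heads cuts the KV cache by the n_heads/n_kv_heads ratio (4x with these numbers), and the sliding window caps per-token attention work at the window size rather than the full sequence length.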