Feb. 5, 2024, 6:48 a.m. | Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim

cs.CL updates on arXiv.org

Moderate-sized large language models (LLMs) -- those with 7B or 13B parameters -- exhibit promising machine translation (MT) performance. However, even the top-performing 13B LLM-based translation models, such as ALMA, do not match the performance of state-of-the-art conventional encoder-decoder translation models or larger-scale LLMs such as GPT-4. In this study, we bridge this performance gap. We first assess the shortcomings of supervised fine-tuning for LLMs in the MT task, emphasizing the quality issues present in the reference data even though it is human-generated. …
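For context (a minimal sketch in standard notation, not taken from the abstract above): supervised fine-tuning for MT trains the model to maximize the likelihood of the human reference translation, so the references themselves bound what fine-tuning can teach. With source sentence x, reference y, and fine-tuning set D, the usual objective is

L_SFT(θ) = − E_{(x, y) ∼ D} [ Σ_t log p_θ(y_t | y_<t, x) ]

Because this objective only rewards reproducing y token by token, any quality issues in the references are learned along with them, which is the limitation the abstract points to.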
