Feb. 12, 2024, 5:46 a.m. | Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, Luke Melas-Kyriazi

cs.CL updates on arXiv.org

Large language models (LLMs) can perform impressive feats with in-context learning or lightweight finetuning. It is natural to wonder how well these models adapt to genuinely new tasks, but how does one find tasks that are unseen in internet-scale training sets? We turn to a field that is explicitly motivated and bottlenecked by a scarcity of web data: low-resource languages. In this paper, we introduce MTOB (Machine Translation from One Book), a benchmark for learning to translate between English and …
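The abstract describes evaluating whether a model can learn to translate from reference material (a grammar book) supplied in context rather than seen during training. As a rough illustration only, and not the authors' actual evaluation harness, the sketch below shows how such an in-context-learning setup might look: grammar excerpts and a small wordlist are packed into the prompt, and a model is asked to translate a held-out sentence. The function names (`build_prompt`, `translate`) and the `llm_generate` callback are hypothetical stand-ins for whatever model API is used.

```python
from typing import Callable, List, Tuple


def build_prompt(grammar_excerpts: List[str],
                 wordlist_pairs: List[Tuple[str, str]],
                 source_sentence: str,
                 src_lang: str,
                 tgt_lang: str = "English") -> str:
    """Assemble an in-context-learning prompt from reference material only."""
    parts = [
        "Translate using only the reference material below.",
        "## Grammar notes\n" + "\n".join(grammar_excerpts),
        "## Wordlist\n" + "\n".join(f"{s} = {t}" for s, t in wordlist_pairs),
        f"## Task\nTranslate from {src_lang} to {tgt_lang}:\n"
        f"{source_sentence}\n{tgt_lang}:",
    ]
    return "\n\n".join(parts)


def translate(llm_generate: Callable[[str], str],
              grammar_excerpts: List[str],
              wordlist_pairs: List[Tuple[str, str]],
              source_sentence: str,
              src_lang: str) -> str:
    """Query the model once and return its translation string."""
    prompt = build_prompt(grammar_excerpts, wordlist_pairs,
                          source_sentence, src_lang)
    return llm_generate(prompt).strip()
```

In practice the benchmark compares settings such as in-context learning versus lightweight finetuning on the same book-derived material; the sketch above only illustrates the prompting side under those assumptions.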
