Feb. 20, 2024, 5:52 a.m. | Michela Lorandi, Anya Belz

cs.CL updates on arXiv.org

arXiv:2402.12267v1 Announce Type: new
Abstract: The performance of NLP methods for severely under-resourced languages cannot currently hope to match the state of the art in NLP methods for well-resourced languages. We explore the extent to which pretrained large language models (LLMs) can bridge this gap, via the example of data-to-text generation for Irish, Welsh, Breton and Maltese. We test LLMs on these under-resourced languages and on English, in a range of scenarios. We find that LLMs easily set the state …

