April 8, 2024, 4:46 a.m. | Tong Su, Xin Peng, Sarubi Thillainathan, David Guzmán, Surangika Ranathunga, En-Shiun Annie Lee

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.04212v1 Announce Type: new
Abstract: Parameter-efficient fine-tuning (PEFT) methods are increasingly vital for adapting large-scale pre-trained language models to diverse tasks, offering a balance between adaptability and computational efficiency. They are important in Low-Resource Language (LRL) Neural Machine Translation (NMT), where they can improve translation accuracy with minimal resources. However, their practical effectiveness varies significantly across languages. We conducted comprehensive empirical experiments across varying LRL domains and dataset sizes to evaluate the performance of 8 PEFT methods with a total of 15 …
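The PEFT methods evaluated in the paper adapt a frozen pre-trained translation model by training only a small number of added parameters. As a concrete illustration (not taken from the paper), the sketch below shows how one common PEFT method, LoRA, might be attached to a multilingual NMT checkpoint using the Hugging Face transformers and peft libraries; the checkpoint name, rank, and target modules are illustrative assumptions, not the paper's configuration.

```python
# Minimal LoRA-style PEFT sketch for an NMT model (illustrative, not the paper's setup).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

# Assumed multilingual NMT checkpoint; any seq2seq translation model could be used.
model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Low-rank adapters are injected into the attention projections;
# the base model's weights stay frozen, so only the adapter parameters are trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                # adapter rank (assumed value)
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction vs. full fine-tuning
```

The wrapped model can then be fine-tuned on a low-resource parallel corpus with a standard seq2seq training loop, which is what makes such methods attractive when data and compute are limited.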

