MAPLE: Multilingual Evaluation of Parameter Efficient Finetuning of Large Language Models
Feb. 21, 2024, 5:49 a.m. | Divyanshu Aggarwal, Ashutosh Sathe, Ishaan Watts, Sunayana Sitaram
cs.CL updates on arXiv.org
Abstract: Parameter Efficient Finetuning (PEFT) has emerged as a viable solution for improving the performance of Large Language Models (LLMs) without requiring massive resources and compute. Prior work on multilingual evaluation has shown a large gap between the performance of LLMs on English and on other languages, as well as a large gap between smaller open-source models and larger LLMs. Finetuning can be an effective way to bridge this gap …
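The abstract does not name a specific PEFT technique, but a minimal sketch of why PEFT needs so few resources can use LoRA (an assumption here, as one widely used PEFT method): the pretrained weight matrix W is frozen, and only a low-rank update B·A of rank r is trained.

```python
# Sketch: trainable-parameter count for LoRA, one common PEFT method.
# LoRA is an illustrative assumption; the abstract does not specify it.
# A frozen weight W of shape (d_out, d_in) gets a trained update
# B (d_out x r) @ A (r x d_in), with rank r << min(d_out, d_in).

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Parameters in the LoRA factors B (d_out x r) and A (r x d_in)."""
    return d_out * r + r * d_in

d = 4096  # hypothetical hidden size of a 7B-scale transformer layer
r = 8     # a commonly used LoRA rank

full = d * d  # full finetuning of one square projection matrix
lora = lora_trainable_params(d, d, r)

print(f"full finetune: {full:,} trainable params")
print(f"LoRA (r={r}):  {lora:,} trainable params "
      f"({100 * lora / full:.2f}% of full)")
```

For this single 4096×4096 matrix, full finetuning trains 16,777,216 parameters while LoRA with r=8 trains 65,536, about 0.39% — which is why PEFT avoids the massive compute of full finetuning.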