Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet?
March 5, 2024, 2:52 p.m. | Evgeniia Razumovskaia, Ivan Vulić, Anna Korhonen
cs.CL updates on arXiv.org
Abstract: Supervised fine-tuning (SFT), supervised instruction tuning (SIT) and in-context learning (ICL) are three alternative, de facto standard approaches to few-shot learning. ICL has gained popularity recently with the advent of LLMs due to its simplicity and sample efficiency. Prior research has conducted only limited investigation into how these approaches work for multilingual few-shot learning, and the focus so far has been mostly on their performance. In this work, we present an extensive and systematic comparison …
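For readers unfamiliar with the contrast the abstract draws, here is a minimal Python sketch of the ICL approach it describes: labeled demonstrations are packed directly into the prompt at inference time, with no gradient updates, which is what makes ICL simple and sample-efficient compared to SFT and SIT. The task, utterances, labels, and prompt format below are hypothetical illustrations, not taken from the paper.

```python
# A minimal sketch of few-shot in-context learning (ICL) for a
# multilingual intent-classification task. All examples and labels
# are hypothetical; the prompt template is one common convention,
# not necessarily the one used in the paper.

few_shot_examples = [
    ("Réserve une table pour deux ce soir", "book_restaurant"),  # French
    ("Wie wird das Wetter morgen in Berlin?", "get_weather"),    # German
    ("Pon una alarma a las siete", "set_alarm"),                 # Spanish
]

def build_icl_prompt(examples, query):
    """Assemble a prompt from labeled demonstrations plus an unlabeled query.

    Unlike SFT/SIT, no model weights are updated: the demonstrations
    are consumed at inference time.
    """
    lines = ["Classify the intent of each utterance."]
    for utterance, intent in examples:
        lines.append(f"Utterance: {utterance}\nIntent: {intent}")
    lines.append(f"Utterance: {query}\nIntent:")
    return "\n\n".join(lines)

prompt = build_icl_prompt(few_shot_examples, "Imposta una sveglia alle sei")  # Italian
print(prompt)  # this string would be sent to an LLM; its completion is the prediction
```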