Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet?
March 5, 2024, 2:52 p.m. | Evgeniia Razumovskaia, Ivan Vulić, Anna Korhonen
cs.CL updates on arXiv.org
Abstract: Supervised fine-tuning (SFT), supervised instruction tuning (SIT) and in-context learning (ICL) are three alternative, de facto standard approaches to few-shot learning. ICL has gained popularity recently with the advent of LLMs due to its simplicity and sample efficiency. Prior research has conducted only limited investigation into how these approaches work for multilingual few-shot learning, and the focus so far has been mostly on their performance. In this work, we present an extensive and systematic comparison …
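Of the three approaches the abstract contrasts, in-context learning (ICL) is the only one that adapts the model without any parameter updates: the few-shot examples are simply placed in the prompt. The sketch below illustrates this prompt construction for a multilingual intent-classification task; the task, labels, and utterances are illustrative assumptions, not examples from the paper.

```python
# Minimal sketch of ICL prompt construction for few-shot multilingual NLU.
# Unlike SFT or SIT, ICL performs no gradient updates: the frozen LLM is
# conditioned purely on labelled demonstrations placed in the prompt.

def build_icl_prompt(examples, query, instruction):
    """Concatenate an instruction, labelled demonstrations, and the query."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Utterance: {text}")
        lines.append(f"Intent: {label}")
        lines.append("")
    lines.append(f"Utterance: {query}")
    lines.append("Intent:")
    return "\n".join(lines)

# Hypothetical demonstrations in German and Spanish (assumed labels).
demos = [
    ("Wie wird das Wetter morgen?", "weather_query"),
    ("¿Puedes poner música?", "play_music"),
]
prompt = build_icl_prompt(
    demos,
    query="Stelle einen Wecker für 7 Uhr.",
    instruction="Classify the intent of each utterance.",
)
print(prompt)
```

The resulting string would be sent to an LLM as-is; the model is expected to continue after the final "Intent:" with a label, which is what makes the method so sample-efficient compared with fine-tuning.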