April 16, 2024, 4:51 a.m. | Flor Miriam Plaza-del-Arco, Debora Nozza, Dirk Hovy

cs.CL updates on arXiv.org arxiv.org

arXiv:2307.12973v2 Announce Type: replace
Abstract: Large Language Models (LLMs) exhibit remarkable text classification capabilities, excelling in zero- and few-shot learning (ZSL and FSL) scenarios. However, because they are trained on different datasets, their performance varies widely across tasks and models. Recent studies emphasize the importance of considering human label variation in data annotation, yet how this label variation also applies to LLMs remains unexplored. Given this likely model specialization, we ask: Do aggregate LLM labels improve over individual models …
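The question the abstract raises, whether aggregating labels from several LLMs beats any single model, comes down to an ensemble step over per-model predictions. As a minimal sketch (not the paper's method), the Python below majority-votes per-instance labels across models; the model names, labels, and the aggregate_labels helper are illustrative assumptions, not from the paper.

from collections import Counter

# Hypothetical zero-shot predictions from three instruction-tuned LLMs
# on the same four texts (model names and labels are illustrative only).
predictions = {
    "model_a": ["positive", "negative", "negative", "positive"],
    "model_b": ["positive", "neutral", "negative", "positive"],
    "model_c": ["negative", "negative", "negative", "positive"],
}

def aggregate_labels(per_model_labels):
    """Majority-vote each instance's label across models.

    Ties go to the first label encountered, a simplification; the paper
    may use a different aggregation or tie-breaking scheme.
    """
    n_items = len(next(iter(per_model_labels.values())))
    aggregated = []
    for i in range(n_items):
        votes = Counter(labels[i] for labels in per_model_labels.values())
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

print(aggregate_labels(predictions))
# -> ['positive', 'negative', 'negative', 'positive']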

