April 22, 2024, 4:42 a.m. | Gregory Yauney, David Mimno

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.13020v1 Announce Type: cross
Abstract: Evaluating the in-context learning classification performance of language models poses challenges due to small dataset sizes, extensive prompt-selection using the validation set, and intentionally difficult tasks that lead to near-random performance. The standard random baseline -- the expected accuracy of guessing labels uniformly at random -- is stable when the evaluation set is used only once or when the dataset is large. We account for the common practice of validation set reuse and existing small …
