Examining the robustness of LLM evaluation to the distributional assumptions of benchmarks
April 29, 2024, 4:47 a.m. | Melissa Ailem, Katerina Marazopoulou, Charlotte Siska, James Bono
cs.CL updates on arXiv.org
Abstract: Benchmarks have emerged as the central approach for evaluating Large Language Models (LLMs). The research community often relies on a model's average performance across the test prompts of a benchmark to evaluate the model. This is consistent with the assumption that the test prompts within a benchmark represent a random sample from a real-world distribution of interest. We note that this is generally not the case; instead, we hold that the distribution of interest …
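For illustration only (not from the paper): a minimal sketch of the contrast the abstract describes, between the standard unweighted benchmark average, which implicitly treats test prompts as a uniform sample from the distribution of interest, and a re-weighted average under an assumed real-world prompt mix. The categories, frequencies, and scores below are hypothetical.

```python
import numpy as np

# Per-prompt scores for one model on a benchmark (hypothetical values).
scores = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])

# Hypothetical category of each test prompt.
categories = np.array(["math", "math", "code", "code", "chat", "chat"])

# Standard practice: a uniform average over all test prompts,
# which implicitly assumes the benchmark mirrors the real-world distribution.
uniform_score = scores.mean()

# Assumed real-world frequency of each prompt category (hypothetical weights).
real_world_freq = {"math": 0.1, "code": 0.2, "chat": 0.7}

# Re-weight prompts so each category's total weight matches the assumed
# real-world frequency, spreading that mass evenly over its prompts.
per_prompt = np.array([real_world_freq[c] for c in categories])
per_prompt = per_prompt / np.array([(categories == c).sum() for c in categories])
weights = per_prompt / per_prompt.sum()
weighted_score = float(np.dot(weights, scores))

print(f"uniform average:  {uniform_score:.3f}")   # 0.667 in this toy example
print(f"weighted average: {weighted_score:.3f}")  # 0.600 in this toy example
```

The gap between the two numbers is the kind of sensitivity to distributional assumptions the paper examines: the same per-prompt results yield different aggregate scores depending on how the benchmark's prompt mix is weighted.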