April 3, 2024, 4:46 a.m. | Qixiang Fang, Daniel L. Oberski, Dong Nguyen

cs.CL updates on arXiv.org

arXiv:2404.01799v1 Announce Type: new
Abstract: Many existing benchmarks of large (multimodal) language models (LLMs) focus on measuring LLMs' academic proficiency, often also with an interest in comparing model performance with that of human test takers. While these benchmarks have proven key to the development of LLMs, they suffer from several limitations, including questionable measurement quality (e.g., Do they measure what they are supposed to in a reliable way?) and a lack of quality assessment at the item level (e.g., Are some items more important …

