April 3, 2024, 4:46 a.m. | Qixiang Fang, Daniel L. Oberski, Dong Nguyen

cs.CL updates on arXiv.org

arXiv:2404.01799v1 Announce Type: new
Abstract: Many existing benchmarks of large (multimodal) language models (LLMs) focus on measuring LLMs' academic proficiency, often also with an interest in comparing model performance with that of human test takers. While these benchmarks have proven key to the development of LLMs, they suffer from several limitations, including questionable measurement quality (e.g., Do they measure what they are supposed to in a reliable way?) and a lack of quality assessment at the item level (e.g., Are some items more important …
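As an illustration of what "measuring in a reliable way" can mean in psychometric terms, here is a minimal sketch that estimates internal-consistency reliability (Cronbach's alpha) from item-level benchmark scores. The function name `cronbach_alpha`, the `scores` matrix, and the toy numbers are assumptions for illustration only; they do not reproduce the paper's actual analyses.

```python
# A minimal sketch, assuming benchmark results are stored as a binary
# matrix `scores` (test takers x items, 1 = item answered correctly).
# Cronbach's alpha is one classic psychometric reliability estimate.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability of a benchmark.

    scores: 2D array of shape (n_test_takers, n_items).
    """
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 test takers (e.g., models), 4 benchmark items.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```

Higher alpha values suggest that items tend to rank test takers consistently; item-level statistics like these are the kind of quality evidence the abstract points to.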
