June 26, 2024, 4:42 a.m. | Kun Qian, Shunji Wan, Claudia Tang, Youzhi Wang, Xuanming Zhang, Maximillian Chen, Zhou Yu

cs.CL updates on arXiv.org

arXiv:2406.17681v1 Announce Type: new
Abstract: As large language models achieve impressive scores on traditional benchmarks, an increasing number of researchers are becoming concerned about benchmark data leakage during pre-training, commonly known as the data contamination problem. To ensure fair evaluation, recent benchmarks release only the training and validation sets, keeping the test set labels closed-source. They require anyone wishing to evaluate his language model to submit the model's predictions for centralized processing and then publish the model's result on their …
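To make the closed-label protocol the abstract describes more concrete, here is a minimal sketch, assuming a hypothetical organizer-side scoring script: test labels never leave the evaluation server, and participants submit only their model's predictions. The file names and JSON format are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of centralized scoring with closed test labels.
# Participants see only the training/validation sets; the organizer
# holds hidden_labels.json and runs this script on submitted predictions.
import json


def score_submission(predictions_path: str, hidden_labels_path: str) -> float:
    """Compare submitted predictions against held-out test labels.

    Both files are assumed to be JSON objects mapping example IDs to
    label strings; the names and schema are illustrative.
    """
    with open(predictions_path) as f:
        predictions = json.load(f)
    with open(hidden_labels_path) as f:
        labels = json.load(f)

    correct = sum(predictions.get(ex_id) == gold for ex_id, gold in labels.items())
    return correct / len(labels)


# Organizer-side usage; participants never see hidden_labels.json.
# print(f"accuracy: {score_submission('submission.json', 'hidden_labels.json'):.3f}")
```

Because only predictions cross the boundary, the test labels cannot leak into any model's pre-training data, which is the contamination risk the benchmark design aims to avoid.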

