RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models
Feb. 19, 2024, 5:48 a.m. | Tianhao Shen, Sun Li, Quan Tu, Deyi Xiong
cs.CL updates on arXiv.org
Abstract: The rapid evolution of large language models necessitates effective benchmarks for evaluating their role knowledge, which is essential for establishing connections with the real world and providing more immersive interactions. This paper introduces RoleEval, a bilingual benchmark designed to assess the memorization, utilization, and reasoning capabilities of role knowledge. RoleEval comprises RoleEval-Global (including internationally recognized characters) and RoleEval-Chinese (including characters popular in China), with 6,000 Chinese-English parallel multiple-choice questions focusing on 300 influential people and …
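The abstract describes an evaluation over Chinese-English parallel multiple-choice questions. RoleEval's actual data schema and evaluation harness are not given here, but a scoring loop over such bilingual items might look like the minimal Python sketch below; the MCQuestion fields, the answer_question stub (a random guesser standing in for a real model call), and the sample items are illustrative assumptions, not the benchmark's format.

import random
from dataclasses import dataclass

@dataclass
class MCQuestion:
    """One multiple-choice item; field names are illustrative, not RoleEval's schema."""
    question: str
    choices: list[str]   # e.g., four answer options
    answer: int          # index of the correct choice
    language: str        # "en" or "zh" for the parallel versions

def answer_question(item: MCQuestion) -> int:
    """Placeholder for a real model call; here we guess uniformly at random."""
    return random.randrange(len(item.choices))

def accuracy_by_language(items: list[MCQuestion]) -> dict[str, float]:
    """Score each item and aggregate accuracy per language."""
    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for item in items:
        total[item.language] = total.get(item.language, 0) + 1
        if answer_question(item) == item.answer:
            correct[item.language] = correct.get(item.language, 0) + 1
    return {lang: correct.get(lang, 0) / n for lang, n in total.items()}

if __name__ == "__main__":
    sample = [
        MCQuestion("Which novel introduces Harry Potter?",
                   ["Dune", "Harry Potter and the Philosopher's Stone",
                    "Neuromancer", "Foundation"], 1, "en"),
        MCQuestion("《三体》的作者是谁？",  # "Who is the author of The Three-Body Problem?"
                   ["刘慈欣", "莫言", "余华", "王小波"], 0, "zh"),
    ]
    print(accuracy_by_language(sample))

Reporting accuracy per language, as in this sketch, is one natural way to compare a model's role knowledge across the parallel English and Chinese versions of each question.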