April 22, 2024, 4:47 a.m. | Hangfeng He, Hongming Zhang, Dan Roth

cs.CL updates on arXiv.org

arXiv:2310.00074v2 Announce Type: replace
Abstract: To comprehensively gauge the capacity of current models for complex reasoning, it is crucial to assess their step-by-step reasoning in a scalable manner. Established reference-based evaluation metrics rely on human-annotated reasoning chains as references to assess model-derived chains. However, such "gold-standard" human-written reasoning chains may not be unique, and their acquisition is often labor-intensive. Existing reference-free reasoning evaluation metrics, while eliminating the need for human-crafted reasoning chains as references, often require fine-tuning with human-derived …
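The abstract contrasts reference-based scoring, which matches a model's reasoning chain against human-annotated references, with reference-free scoring. As a rough illustration of the reference-based side only, here is a minimal sketch that aligns each model-generated step to its best-matching human reference step by token-overlap F1 and averages the alignment scores. The function names and the scoring rule are illustrative assumptions, not the metric proposed in the paper.

```python
def token_f1(pred: str, ref: str) -> float:
    """Token-overlap F1 between a predicted step and a reference step."""
    pred_tokens, ref_tokens = pred.lower().split(), ref.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = len(set(pred_tokens) & set(ref_tokens))
    if overlap == 0:
        return 0.0
    precision = overlap / len(set(pred_tokens))
    recall = overlap / len(set(ref_tokens))
    return 2 * precision * recall / (precision + recall)

def reference_based_score(model_chain: list[str], reference_chain: list[str]) -> float:
    """Score a model-derived chain against one human-annotated reference chain:
    each model step is credited with its best-matching reference step."""
    return sum(
        max(token_f1(step, ref) for ref in reference_chain)
        for step in model_chain
    ) / len(model_chain)

# Toy example: a two-step arithmetic chain scored against one reference chain.
model_chain = ["Add 3 and 4 to get 7", "Multiply 7 by 2 to get 14"]
reference_chain = ["3 + 4 = 7", "7 * 2 = 14"]
print(f"chain score: {reference_based_score(model_chain, reference_chain):.3f}")
```

Note the limitation the abstract points to: a score like this penalizes any valid chain that is worded or structured differently from the single reference, which is why reference chains "may not be unique" is a real problem for this family of metrics.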
