April 22, 2024, 4:47 a.m. | Hangfeng He, Hongming Zhang, Dan Roth

cs.CL updates on arXiv.org

arXiv:2310.00074v2 Announce Type: replace
Abstract: To comprehensively gauge the capacity of current models for complex reasoning, it is crucial to assess their step-by-step reasoning in a scalable manner. Established reference-based evaluation metrics rely on human-annotated reasoning chains as references to assess the model-derived chains. However, such "gold-standard" human-written reasoning chains may not be unique and their acquisition is often labor-intensive. Existing reference-free reasoning evaluation metrics, while eliminating the need for human-crafted reasoning chains as references, often require fine-tuning with human-derived …
