Aug. 26, 2022, 1:14 a.m. | Cyril Chhun, Pierre Colombo, Chloé Clavel, Fabian M. Suchanek

cs.CL updates on arXiv.org arxiv.org

Research on Automatic Story Generation (ASG) relies heavily on human and
automatic evaluation. However, there is no consensus on which human evaluation
criteria to use, and no analysis of how well automatic criteria correlate with
them. In this paper, we propose to re-evaluate ASG evaluation. We introduce a
set of 6 orthogonal and comprehensive human criteria, carefully motivated by
the social sciences literature. We also present HANNA, an annotated dataset of
1,056 stories produced by 10 different ASG systems. HANNA …
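The correlation analysis the abstract alludes to (how well automatic metrics agree with human criteria) can be illustrated with a minimal, hypothetical sketch. This is not the authors' code, and the scores below are invented placeholders; in practice, per-story human ratings would come from HANNA's annotations and the automatic scores from running a metric such as BLEU or BERTScore.

```python
# Hypothetical sketch: correlating an automatic metric with one human criterion.
from scipy.stats import kendalltau, pearsonr

# Placeholder per-story scores (not real HANNA data).
human_coherence = [3.2, 4.1, 2.5, 4.8, 3.9, 2.2]          # mean human rating per story
metric_scores   = [0.41, 0.55, 0.30, 0.62, 0.47, 0.35]    # automatic metric per story

# Rank and linear correlation between the metric and the human criterion.
tau, tau_p = kendalltau(human_coherence, metric_scores)
r, r_p = pearsonr(human_coherence, metric_scores)
print(f"Kendall tau = {tau:.2f} (p = {tau_p:.3f}), Pearson r = {r:.2f} (p = {r_p:.3f})")
```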

arxiv benchmark evaluation generation human metrics
