Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations. (arXiv:2305.03117v1 [cs.CL])
cs.CL updates on arXiv.org
Human-annotated labels and explanations are critical for training explainable
NLP models. However, unlike human-annotated labels whose quality is easier to
calibrate (e.g., with a majority vote), human-crafted free-form explanations
can be quite subjective, as some recent works have discussed. Before blindly
using them as ground truth to train ML models, a vital question needs to be
asked: How do we evaluate a human-annotated explanation's quality? In this
paper, we build on the view that the quality of a human-annotated explanation …
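Since the abstract is cut off here, the paper's actual evaluation protocol is not shown; what follows is only a minimal sketch of one natural objective proxy consistent with the abstract's framing: treat an explanation as higher quality if appending it to the input improves a downstream classifier. The toy dataset, the `accuracy_with` helper, and the `[EXPL]` separator token are all illustrative assumptions, not anything specified by the paper.

```python
# Sketch (NOT the paper's method): score explanation quality by whether
# appending human explanations to inputs helps a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical (input, human explanation, label) triples for a toy
# sentiment task; real experiments would use an annotated NLP dataset.
examples = [
    ("the movie was dull and slow",       "complains about pacing",          0),
    ("a delightful surprise of a film",   "praises the plot twist",          1),
    ("i wanted my two hours back",        "expresses regret over the time",  0),
    ("brilliant acting throughout",       "compliments the performances",    1),
    ("the plot made no sense at all",     "criticizes the incoherent story", 0),
    ("warm, funny, and beautifully shot", "praises tone and cinematography", 1),
    ("flat characters and a weak script", "criticizes the writing quality",  0),
    ("an instant favourite for me",       "states strong personal approval", 1),
]

def accuracy_with(texts, labels):
    """Train a bag-of-words classifier and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, labels, test_size=0.25, random_state=0, stratify=labels)
    vec = TfidfVectorizer().fit(X_tr)           # fit vocabulary on train only
    clf = LogisticRegression().fit(vec.transform(X_tr), y_tr)
    return accuracy_score(y_te, clf.predict(vec.transform(X_te)))

inputs = [t for t, _, _ in examples]
labels = [y for _, _, y in examples]
# "[EXPL]" is an arbitrary separator chosen for this sketch.
augmented = [f"{t} [EXPL] {e}" for t, e, _ in examples]

base = accuracy_with(inputs, labels)
aug = accuracy_with(augmented, labels)
print(f"accuracy without explanations: {base:.2f}")
print(f"accuracy with explanations:    {aug:.2f} (gain: {aug - base:+.2f})")
```

On data this small the numbers are meaningless; the point is only the shape of the comparison: a positive gain would suggest the explanations carry usable signal for the task, while a negative gain would flag explanations that mislead the model.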