What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception
April 3, 2024, 4:47 a.m. | Chaitanya Malaviya, Subin Lee, Dan Roth, Mark Yatskar
cs.CL updates on arXiv.org arxiv.org
Abstract: Eliciting feedback from end users of NLP models can be beneficial for improving those models. However, how should we present model responses to users so that they are most amenable to correction via user feedback? Further, what properties do users value for understanding and trusting responses? We answer these questions by analyzing the effect of rationales (or explanations) generated by QA models to support their answers. We specifically consider decomposed QA models that first extract an …