Jan. 14, 2022, 2:10 a.m. | Yuyan Chen, Yanghua Xiao, Bang Liu

cs.CL updates on arXiv.org arxiv.org

Interpreting the predictions of existing Question Answering (QA) models is
critical to many real-world intelligent applications, such as QA systems for
healthcare, education, and finance. However, existing QA models lack
interpretability and provide no feedback or explanation to help end-users
understand why a specific prediction is the answer to a question. In this
research, we argue that the evidence for an answer is critical to enhancing the
interpretability of QA models. Unlike previous research that simply extracts
several sentence(s) …
