Faithfulness Measurable Masked Language Models
May 13, 2024, 4:43 a.m. | Andreas Madsen, Siva Reddy, Sarath Chandar
cs.LG updates on arXiv.org
Abstract: A common approach to explaining NLP models is to use importance measures, which express which tokens are important for a prediction. Unfortunately, such explanations are often wrong despite being persuasive, so it is essential to measure their faithfulness. One such metric is: if tokens are truly important, then masking them should degrade model performance. However, token masking introduces out-of-distribution issues, and existing solutions that address this are computationally expensive and employ proxy models. …
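The masking-based faithfulness metric the abstract describes can be illustrated with a minimal sketch. This is not the paper's method or models; the toy "classifier", the word lists, and the `importance` scorer below are all hypothetical stand-ins used only to show the mask-and-measure idea.

```python
# Toy sketch of a masking-based faithfulness check.
# Idea: mask the tokens an importance measure ranks highest; if the
# measure is faithful, the model's score should drop noticeably.

POSITIVE = {"great", "good", "excellent"}  # hypothetical sentiment cues

def model_score(tokens):
    """Dummy 'classifier': fraction of tokens that are positive words."""
    return sum(t in POSITIVE for t in tokens) / len(tokens)

def importance(tokens):
    """Hypothetical importance measure: 1.0 for positive words, else 0."""
    return [1.0 if t in POSITIVE else 0.0 for t in tokens]

def faithfulness_drop(tokens, mask_token="[MASK]", top_k=2):
    """Mask the top-k most important tokens and return the score drop.

    A larger drop suggests the importance measure is more faithful;
    note that masking creates out-of-distribution inputs, the issue
    the paper addresses.
    """
    scores = importance(tokens)
    ranked = sorted(range(len(tokens)), key=lambda i: -scores[i])[:top_k]
    masked = [mask_token if i in ranked else t
              for i, t in enumerate(tokens)]
    return model_score(tokens) - model_score(masked)

sentence = ["the", "movie", "was", "great", "and", "excellent"]
drop = faithfulness_drop(sentence)
```

Masking both cue words drives the toy score from 2/6 to 0, so the drop is 1/3; an unfaithful measure that ranked filler words highest would produce a drop of zero.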