Local Interpretations for Explainable Natural Language Processing: A Survey
March 19, 2024, 4:54 a.m. | Siwen Luo, Hamish Ivison, Caren Han, Josiah Poon
cs.CL updates on arXiv.org
Abstract: As the use of deep learning techniques has grown across various fields over the past decade, concerns about the opaqueness of black-box models have mounted, prompting a greater focus on transparency in deep learning models. This work investigates various methods to improve the interpretability of deep neural networks for Natural Language Processing (NLP) tasks, including machine translation and sentiment analysis. We provide a comprehensive discussion on the definition of the term interpretability and …