Feb. 29, 2024, 5:48 a.m. | Zhenxiao Cheng, Jie Zhou, Wen Wu, Qin Chen, Liang He

cs.CL updates on arXiv.org

arXiv:2402.18145v1 Announce Type: new
Abstract: Gradient-based explanation methods are increasingly used to interpret neural models in natural language processing (NLP) due to their high fidelity. Such methods determine word-level importance by aggregating dimension-level gradient values through a norm function, often presuming equal significance for all gradient dimensions. However, in the context of Aspect-based Sentiment Analysis (ABSA), our preliminary research suggests that only specific dimensions are pertinent. To address this, we propose the Information Bottleneck-based Gradient (IBG) explanation framework for ABSA. This …
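The setup the abstract describes, word scores obtained by collapsing dimension-level gradients with a norm, is easy to make concrete. Below is a minimal PyTorch sketch of that baseline attribution, not the paper's IBG method; the toy model, tensor shapes, and the function name gradient_norm_importance are illustrative assumptions.

```python
import torch

def gradient_norm_importance(model, embeds, target_class):
    """Word-level importance from the norm of dimension-level gradients.

    embeds: (seq_len, emb_dim) embeddings for one sentence (hypothetical shape).
    Returns a (seq_len,) tensor of per-word importance scores.
    """
    embeds = embeds.clone().detach().requires_grad_(True)
    logits = model(embeds.unsqueeze(0))    # model maps (1, L, D) -> (1, C)
    logits[0, target_class].backward()     # gradient of the target-class logit
    grads = embeds.grad                    # (seq_len, emb_dim) gradient values
    # Collapsing with an L2 norm weights every gradient dimension equally --
    # exactly the presumption the abstract calls into question for ABSA.
    return grads.norm(p=2, dim=-1)

# Toy usage: a random 3-class classifier over 6 tokens of 16-dim embeddings.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(6 * 16, 3))
embeds = torch.randn(6, 16)
print(gradient_norm_importance(model, embeds, target_class=1))
```

The paper's contribution, as far as the truncated abstract states, is to replace this equal-weighting assumption with an information-bottleneck criterion that identifies which dimensions actually matter.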
