April 4, 2024, 4:47 a.m. | Federico Ruggeri, Marco Lippi, Paolo Torroni

cs.CL updates on arXiv.org

arXiv:2110.00125v3 Announce Type: replace
Abstract: Many NLP applications require models to be interpretable. However, many successful neural architectures, including transformers, still lack effective interpretation methods. A possible solution is to build explanations from domain knowledge, which is often available as plain natural language text. We therefore propose an extension to transformer models that uses external memories to store natural language explanations and uses them to explain classification outputs. We conduct an experimental evaluation on two domains, legal …
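The abstract describes a transformer extension with an external memory of natural language explanations, where memory lookups double as explanations for the classifier's output. As a minimal sketch only, the code below shows one way such a memory-augmented classifier could look: the class `ExplanationMemoryClassifier` and all variable names are illustrative assumptions, not the paper's actual architecture, and it assumes input texts and explanations have already been encoded into fixed-size vectors by some sentence encoder (e.g. a frozen transformer).

```python
# Hedged sketch of a memory-augmented classifier: attend over pre-encoded
# natural-language explanations and expose the attention weights as an
# explanation signal. Names and design choices here are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplanationMemoryClassifier(nn.Module):
    """Classify an input by attending over an external explanation memory."""

    def __init__(self, d_model: int, n_classes: int):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.key_proj = nn.Linear(d_model, d_model)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x_emb: torch.Tensor, mem_embs: torch.Tensor):
        # x_emb:    (batch, d_model)  encoded input text
        # mem_embs: (n_mem, d_model)  encoded explanations (the memory)
        q = self.query_proj(x_emb)                   # (batch, d_model)
        k = self.key_proj(mem_embs)                  # (n_mem, d_model)
        scores = q @ k.t() / k.size(-1) ** 0.5       # scaled dot-product
        attn = F.softmax(scores, dim=-1)             # (batch, n_mem)
        mem_read = attn @ mem_embs                   # weighted memory read
        logits = self.classifier(torch.cat([x_emb, mem_read], dim=-1))
        # Returning attn lets a user inspect which stored explanations
        # contributed most to the prediction.
        return logits, attn
```

In a setup like this, the per-memory attention weights provide the interpretability the abstract calls for: for each prediction, one can rank the stored explanations by attention mass and surface the top ones alongside the classification output.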

