Web: http://arxiv.org/abs/2209.10876

Sept. 23, 2022, 1:11 a.m. | Nikolaos Mylonas, Ioannis Mollas, Grigorios Tsoumakas

cs.LG updates on arXiv.org

Transformers are widely used in NLP, where they consistently achieve
state-of-the-art performance. This is due to their attention-based
architecture, which allows them to model rich linguistic relations between
words. However, transformers are difficult to interpret. The ability to
provide reasoning for its decisions is an important property for a model in
domains where human lives are affected, such as hate speech detection and
biomedicine. With transformers finding wide use in these fields, the need for
interpretability techniques tailored to them …
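
As a rough illustration of the attention-based interpretation the abstract alludes to, here is a minimal sketch that reads attention weights out of a fine-tuned text classifier and ranks input tokens by how strongly the [CLS] position attends to them. It assumes the Hugging Face transformers library; the model name and the pooling choices (last layer, mean over heads, [CLS] row) are illustrative assumptions, not the method proposed in the paper.

```python
# A minimal sketch of a naive attention-based explanation for text
# classification. Assumes the Hugging Face `transformers` library; the
# checkpoint and pooling choices below are illustrative, not the paper's method.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, output_attentions=True  # ask the model to return attention maps
)
model.eval()

text = "The movie was surprisingly good."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, num_heads, seq_len, seq_len).
last_layer = outputs.attentions[-1][0]   # (num_heads, seq_len, seq_len)
head_avg = last_layer.mean(dim=0)        # average over heads -> (seq_len, seq_len)
cls_row = head_avg[0]                    # attention from [CLS] to every token

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in sorted(zip(tokens, cls_row.tolist()),
                           key=lambda pair: -pair[1]):
    print(f"{token:>12s}  {score:.3f}")
```

Raw attention scores like these are exactly the kind of signal whose faithfulness as an explanation is contested, which is part of what motivates interpretability techniques tailored specifically to transformers.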

arxiv attention classification interpretability text text classification transformers
