Oct. 14, 2022, 1:18 a.m. | Mor Geva, Avi Caciularu, Guy Dar, Paul Roit, Shoval Sadde, Micah Shlain, Bar Tamir, Yoav Goldberg

cs.CL updates on arXiv.org arxiv.org

The opaque nature and unexplained behavior of transformer-based language
models (LMs) have spurred wide interest in interpreting their predictions.
However, current interpretation methods mostly probe models from the
outside, executing behavioral tests and analyzing the salience of input
features, while the internal prediction-construction process remains largely not understood.
In this work, we introduce LM-Debugger, an interactive debugger tool for
transformer-based LMs, which provides a fine-grained interpretation of the
model's internal prediction process, as well as a powerful framework for …
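The "fine-grained interpretation of the model's internal prediction process" mentioned in the abstract can be illustrated in spirit by projecting an intermediate hidden state onto the output vocabulary and reading off the top-ranked tokens (the "logit lens" idea). The sketch below is a toy illustration with random weights, not LM-Debugger's actual API; all names and shapes are assumptions for demonstration only.

```python
import numpy as np

# Toy sketch of vocabulary-space projection: interpret an intermediate
# hidden state by ranking vocabulary tokens via the output embedding.
# E, vocab, and the hidden state are synthetic placeholders.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat", "dog"]
d_model = 8

# Toy output embedding matrix E of shape (vocab_size, d_model).
E = rng.normal(size=(len(vocab), d_model))

def project_to_vocab(hidden_state, E, vocab, k=3):
    """Rank vocabulary tokens by their dot product with a hidden state."""
    logits = E @ hidden_state
    top = np.argsort(logits)[::-1][:k]
    return [(vocab[i], float(logits[i])) for i in top]

# A hypothetical intermediate hidden state, e.g. after one FFN update.
h = rng.normal(size=d_model)
print(project_to_vocab(h, E, vocab))
```

In a real LM the same projection would use the model's own unembedding matrix and the hidden states captured between layers, which is the kind of layer-by-layer view an interactive debugger can expose.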

