Oct. 28, 2022, 1:16 a.m. | Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar

cs.CL updates on arXiv.org arxiv.org

The ability to identify influential training examples enables us to debug
training data and explain model behavior. Existing techniques for doing so are
based on the flow of training-data influence through the model parameters. For
large models in NLP applications, it is often computationally infeasible to
study this flow through all model parameters, so techniques usually restrict
attention to the last layer of weights. However, we observe that since the
activation connected to the last layer of weights contains "shared logic", …
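The last-layer restriction the abstract describes can be illustrated with a simple gradient-dot-product influence score (in the style of TracIn at a single checkpoint), computed only over the final weight matrix. This is a minimal sketch under assumed shapes and a softmax cross-entropy loss, not the paper's method; all function names here are illustrative.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits z.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def last_layer_grad(W, a, y):
    # Gradient of cross-entropy loss w.r.t. the last-layer weights W (C x D),
    # given the penultimate activation a (D,) and integer label y.
    p = softmax(W @ a)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return np.outer(p - onehot, a)

def influence(W, a_train, y_train, a_test, y_test):
    # Influence of a training example on a test example, approximated as the
    # dot product of their last-layer loss gradients.
    g_tr = last_layer_grad(W, a_train, y_train)
    g_te = last_layer_grad(W, a_test, y_test)
    return float((g_tr * g_te).sum())
```

Restricting to the last layer keeps the gradient dimension small (C x D rather than the full parameter count), which is what makes this tractable for large NLP models; the abstract's observation is that this layer's "shared logic" can limit how discriminative such scores are.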

