March 15, 2024, 4:42 a.m. | Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, Ryan Cotterell

cs.LG updates on arXiv.org

arXiv:2201.12191v5 Announce Type: replace
Abstract: The representation space of neural models for textual data emerges in an unsupervised manner during training. Understanding how those representations encode human-interpretable concepts is a fundamental problem. One prominent approach for the identification of concepts in neural representations is searching for a linear subspace whose erasure prevents the prediction of the concept from the representations. However, while many linear erasure algorithms are tractable and interpretable, neural networks do not necessarily represent concepts in a linear …
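For readers unfamiliar with the linear erasure approach the abstract refers to, below is a minimal Python sketch of iterative nullspace projection in the spirit of methods such as INLP: fit a linear concept classifier, project out the direction it relies on, and repeat. This illustrates linear concept erasure in general, not this paper's method (which the truncated abstract suggests goes beyond the linear case); the function and variable names here are invented for the example, and numpy and scikit-learn are assumed.

import numpy as np
from sklearn.linear_model import LogisticRegression

def erase_linear_concept(X, z, n_iters=10):
    # Hypothetical helper, not from the paper: iteratively remove the
    # direction a linear probe uses to predict the concept labels z
    # from the representations X (an INLP-style linear erasure sketch).
    # X: (n_samples, dim) representation matrix
    # z: (n_samples,) binary concept labels
    X = X.copy()
    dim = X.shape[1]
    P = np.eye(dim)  # accumulated erasure projection
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X, z)
        w = clf.coef_[0]
        w = w / np.linalg.norm(w)
        # rank-1 projection that zeroes out the probe's direction
        P_step = np.eye(dim) - np.outer(w, w)
        P = P_step @ P
        X = X @ P_step  # P_step is symmetric, so this projects each row
    return X, P

# Toy usage: random representations with a planted linear concept direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
z = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)
X_clean, P = erase_linear_concept(X, z, n_iters=5)
acc = LogisticRegression(max_iter=1000).fit(X_clean, z).score(X_clean, z)
print(f"post-erasure linear probe accuracy: {acc:.2f}")  # near chance (~0.5)

After erasure, a fresh linear probe should predict the concept at roughly chance level, which is exactly the criterion the abstract describes; the paper's point is that a concept encoded non-linearly can survive such linear erasure.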
