April 2, 2024, 7:52 p.m. | David Chanin, Anthony Hunter, Oana-Maria Camburu

cs.CL updates on arXiv.org

arXiv:2311.08968v2 Announce Type: replace
Abstract: Transformer language models (LMs) have been shown to represent concepts as directions in the latent space of hidden activations. However, for any human-interpretable concept, how can we find its direction in the latent space? We present a technique called linear relational concepts (LRC) for finding concept directions corresponding to human-interpretable concepts by first modeling the relation between subject and object as a linear relational embedding (LRE). We find that inverting the LRE and using earlier …
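As a rough illustration of the idea described in the abstract, the sketch below fits a linear relational embedding (LRE) mapping subject activations to object activations, then inverts it to recover a concept direction in subject space. All activations here are synthetic stand-ins for LM hidden states, and the full pseudoinverse is a simplification (the paper uses a low-rank inverse); variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 100  # hidden size and number of (subject, object) pairs -- toy values

# Synthetic stand-ins for LM hidden states. The relation is modeled as a
# linear relational embedding: object ≈ W @ subject + b
W_true = rng.normal(size=(d, d)) / np.sqrt(d)
b_true = rng.normal(size=d)
S = rng.normal(size=(n, d))        # subject activations
O = S @ W_true.T + b_true          # object activations under the LRE

# Fit the LRE (W, b) by least squares on the (subject, object) pairs
A = np.hstack([S, np.ones((n, 1))])          # append a bias column
coef, *_ = np.linalg.lstsq(A, O, rcond=None)
W, b = coef[:d].T, coef[d]

# Invert the LRE: map a target object's activation back into subject
# space to obtain a concept direction (full pseudoinverse here for
# simplicity; the paper's inversion is low-rank)
o_target = O[0]
concept_dir = np.linalg.pinv(W) @ (o_target - b)
concept_dir /= np.linalg.norm(concept_dir)
```

In this noiseless toy setting the least-squares fit recovers the LRE exactly, and the inverted direction aligns with the original subject activation; with real LM activations both steps would be approximate.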

