Jan. 31, 2024, 4:41 p.m. | Shrayani Mondal, Rishabh Garodia, Arbaaz Qureshi, Taesung Lee, Youngja Park

cs.CL updates on arXiv.org

Recent developments in transformer-based language models have enabled them to
capture a wide variety of world knowledge that can be adapted to downstream
tasks with limited resources. However, it remains unclear which pieces of
information these models understand, and the neuron-level contributions
involved in identifying them are largely unknown. Conventional approaches to
neuron explainability either depend on a finite set of pre-defined descriptors
or require manual annotations to train a secondary model that can then explain
the neurons of the primary …
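To make "neuron-level contributions" concrete, here is a minimal sketch of how one might inspect per-neuron activations in a pretrained transformer. It assumes the HuggingFace transformers library, GPT-2 small, the first block's MLP layer, and a crude max-activation ranking; these are illustrative choices, not the paper's method.

```python
# Sketch: capture post-activation MLP units ("neurons") of one GPT-2 block
# and rank which units respond most strongly to an input sentence.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

activations = {}

def hook(module, inputs, output):
    # Post-GELU hidden units of the MLP: shape (batch, seq_len, 3072
    # for GPT-2 small); each of the 3072 columns is one "neuron".
    activations["mlp"] = output.detach()

# Hook the activation module of the first transformer block's MLP
# (an arbitrary layer choice for illustration).
handle = model.h[0].mlp.act.register_forward_hook(hook)

inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

# Rank neurons by their maximum activation over the input tokens — a
# crude proxy for which units respond most strongly to this text.
acts = activations["mlp"][0]            # (seq_len, n_neurons)
per_neuron = acts.max(dim=0).values     # max over tokens, per neuron
top_vals, top_idx = per_neuron.topk(5)
print("Top neurons:", top_idx.tolist())
print("Activations:", top_vals.tolist())
```

Explaining *why* those top neurons fire, i.e. attaching a textual description to each unit, is exactly where the pre-defined-descriptor and secondary-explainer approaches mentioned above come in.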
