Jan. 31, 2024, 4:41 p.m. | Zeping Yu, Sophia Ananiadou

cs.CL updates on arXiv.org

We locate factual knowledge in large language models by exploring the residual stream and analyzing subvalues in vocabulary space. We also explain why subvalues correspond to human-interpretable concepts when projected into vocabulary space: the before-softmax values (logits) of subvalues are combined additively in the residual stream, so the probability of the top tokens in vocabulary space increases. Based on this, we find that using the log probability increase to compute the significance of layers and subvalues works better than the probability increase, …
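As a rough illustration of the measurement being compared, here is a minimal sketch, assuming a toy PyTorch-style unembedding matrix and placeholder names (`unembed`, `resid`, `subvalue` are hypothetical, not the paper's code), of adding a subvalue into the residual stream and contrasting the probability increase with the log probability increase of a token it promotes:

```python
import torch

torch.manual_seed(0)

d_model, vocab = 16, 100                 # toy dimensions (assumption)
unembed = torch.randn(vocab, d_model)    # hypothetical unembedding matrix W_U

resid = torch.randn(d_model)             # residual stream before the subvalue is added
subvalue = torch.randn(d_model)          # a single subvalue written into the stream

def token_stats(hidden, token_id):
    """Project a hidden state into vocabulary space and return the
    probability and log probability of a given token."""
    logits = unembed @ hidden                    # before-softmax values
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[token_id].exp().item(), log_probs[token_id].item()

# Residual-stream addition is linear before the softmax:
# W_U @ (resid + subvalue) = W_U @ resid + W_U @ subvalue,
# so the subvalue's logits add onto the stream's logits.
token_id = (unembed @ subvalue).argmax().item()  # a token this subvalue promotes

p_before, logp_before = token_stats(resid, token_id)
p_after, logp_after = token_stats(resid + subvalue, token_id)

print(f"probability increase:     {p_after - p_before:.4f}")
print(f"log probability increase: {logp_after - logp_before:.4f}")
```

In this sketch the log probability increase reflects the logit contribution of the subvalue even when the token's absolute probability stays small, which is the kind of behavior the abstract's comparison points to.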
