June 17, 2024, 4:40 a.m. | Jack Merullo, Carsten Eickhoff, Ellie Pavlick

cs.CL updates on arXiv.org

arXiv:2406.09519v1 Announce Type: new
Abstract: Although it is known that transformer language models (LMs) pass features from early layers to later layers, it is not well understood how this information is represented and routed by the model. By analyzing a particular mechanism LMs use to accomplish this, we find that it is also used to recall items from a list, and show that this mechanism can explain an otherwise arbitrary-seeming sensitivity of the model to the order of items in the …

