March 19, 2024, 4:43 a.m. | Haozhe Chen, Carl Vondrick, Chengzhi Mao

cs.LG updates on arXiv.org

arXiv:2403.10949v1 Announce Type: cross
Abstract: How do large language models (LLMs) obtain their answers? The ability to explain and control an LLM's reasoning process is key for reliability, transparency, and future model development. We propose SelfIE (Self-Interpretation of Embeddings), a framework that enables LLMs to interpret their own embeddings in natural language by leveraging their ability to respond to inquiries about a given passage. Capable of interpreting open-world concepts in the hidden embeddings, SelfIE reveals LLM internal reasoning in cases such …
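To make the mechanism concrete: the abstract describes having the model answer questions about its own hidden states, which can be approximated by extracting a hidden embedding from one forward pass and splicing it into a placeholder slot of an "interpretation prompt" on a second pass. Below is a minimal sketch of that idea, not the authors' released code; the model name, layer indices, placeholder position, and prompt wording are all illustrative assumptions.

```python
# Sketch of a SelfIE-style interpretation loop (illustrative assumptions, not
# the paper's implementation): grab a hidden embedding from a source pass, then
# overwrite a placeholder token's residual-stream state in an interpretation
# prompt so the same LLM describes the embedding in natural language.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-chat-hf"   # assumed; any decoder-only LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

SRC_LAYER = 15     # layer whose hidden state we want interpreted (assumption)
INJECT_LAYER = 3   # early layer of the interpretation pass to patch (assumption)

# 1) Source pass: take the last token's hidden embedding at SRC_LAYER.
src = tok("The Eiffel Tower is located in the city of",
          return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden = model(**src, output_hidden_states=True).hidden_states[SRC_LAYER][0, -1]

# 2) Interpretation pass: a prompt with a throwaway token whose hidden state
#    we replace with the extracted embedding.
prompt = "_ \nThe passage above means:"
interp = tok(prompt, return_tensors="pt").to(model.device)
PH_POS = 1  # placeholder token index (here: right after BOS; an assumption)

def inject(module, inputs, output):
    hs = output[0] if isinstance(output, tuple) else output
    # Patch only the full-prompt pass; cached decoding steps have length 1.
    if hs.shape[1] > PH_POS:
        hs[0, PH_POS] = hidden.to(hs.dtype)  # in-place residual-stream overwrite
    return output

handle = model.model.layers[INJECT_LAYER].register_forward_hook(inject)
try:
    with torch.no_grad():
        gen = model.generate(**interp, max_new_tokens=30, do_sample=False)
finally:
    handle.remove()

print(tok.decode(gen[0, interp["input_ids"].shape[1]:], skip_special_tokens=True))
```

Injecting at an early layer is a deliberate choice in this sketch: the remaining layers can then read and verbalize the patched state, which is what lets the model "respond to an inquiry" about its own embedding.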
