March 6, 2024, 5:48 a.m. | Stefan Hackmann, Haniyeh Mahmoudian, Mark Steadman, Michael Schmidt

cs.CL updates on arXiv.org

arXiv:2403.03028v1 Announce Type: cross
Abstract: The emergence of large language models (LLMs) has revolutionized numerous applications across industries. However, their "black box" nature often hinders the understanding of how they make specific decisions, raising concerns about their transparency, reliability, and ethical use. This study presents a method to improve the explainability of LLMs by varying individual words in prompts to uncover their statistical impact on the model outputs. This approach, inspired by permutation importance for tabular data, masks each word …

