Word Importance Explains How Prompts Affect Language Model Outputs
March 6, 2024, 5:48 a.m. | Stefan Hackmann, Haniyeh Mahmoudian, Mark Steadman, Michael Schmidt
cs.CL updates on arXiv.org
Abstract: The emergence of large language models (LLMs) has revolutionized numerous applications across industries. However, their "black box" nature often hinders the understanding of how they make specific decisions, raising concerns about their transparency, reliability, and ethical use. This study presents a method to improve the explainability of LLMs by varying individual words in prompts to uncover their statistical impact on the model outputs. This approach, inspired by permutation importance for tabular data, masks each word …
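The masking procedure described in the abstract can be sketched in a few lines: mask each word in the prompt in turn, re-score the model's output, and treat the drop from the unmasked baseline as that word's importance. The sketch below is illustrative only; the function names, the `[MASK]` token, and the toy keyword-counting scorer standing in for an LLM are all assumptions, not the authors' implementation.

```python
# Sketch of permutation-style word importance for prompts.
# `score_fn` stands in for any function that scores a model's
# response to a prompt; here a toy keyword counter is used.

def word_importance(prompt, score_fn, mask_token="[MASK]"):
    """Mask each word in turn and record the drop in score vs. the baseline."""
    words = prompt.split()
    baseline = score_fn(" ".join(words))
    importances = []
    for i, word in enumerate(words):
        masked = words.copy()
        masked[i] = mask_token
        importances.append((word, baseline - score_fn(" ".join(masked))))
    return importances

# Toy stand-in scorer: counts occurrences of two "positive" keywords.
def toy_score(prompt):
    return sum(w in {"great", "excellent"} for w in prompt.split())

scores = word_importance("this is a great and excellent answer", toy_score)
```

With the toy scorer, masking "great" or "excellent" lowers the score by one while masking filler words changes nothing, mirroring how the paper's method surfaces which prompt words drive the output.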