March 13, 2024, 4:43 a.m. | Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady

cs.LG updates on arXiv.org

arXiv:2403.07627v1 Announce Type: cross
Abstract: Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation. However, the considered output candidates of the underlying search algorithm are under-explored and under-explained. We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs. To support these tasks, we present generAItor, a visual analytics technique, augmenting …
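
To make concrete what "output candidates of the underlying search algorithm" means, the following is a minimal, self-contained Python sketch of beam search that records every expanded hypothesis as a tree edge. It is not the authors' generAItor tool; the toy `next_token_probs` function is a hypothetical stand-in for an LLM's next-token distribution.

```python
# Minimal beam-search sketch (illustrative only, not the generAItor implementation).
# Every expansion is logged as an edge, so the explored candidates form the kind of
# beam search tree that a tree-in-the-loop visualization could display.

import math
from typing import Dict, List, Tuple

def next_token_probs(prefix: Tuple[str, ...]) -> Dict[str, float]:
    """Hypothetical stand-in for an LLM: returns a tiny next-token distribution."""
    return {"the": 0.5, "cat": 0.3, "<eos>": 0.2}

def beam_search(beam_width: int = 2, max_len: int = 3):
    """Expand hypotheses step by step, keeping only the top `beam_width` of them."""
    beams: List[Tuple[Tuple[str, ...], float]] = [((), 0.0)]          # (tokens, log-prob)
    tree_edges: List[Tuple[Tuple[str, ...], str, float]] = []          # (parent prefix, token, log-prob)

    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == "<eos>":
                candidates.append((prefix, score))                     # finished hypothesis
                continue
            for token, p in next_token_probs(prefix).items():
                new_score = score + math.log(p)
                tree_edges.append((prefix, token, new_score))          # record the tree edge
                candidates.append((prefix + (token,), new_score))
        # Pruned branches are exactly the under-explored candidates a beam search
        # tree visualization makes visible for analysis and adaptation.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]

    return beams, tree_edges

if __name__ == "__main__":
    kept, edges = beam_search()
    print("kept hypotheses:", kept)
    print("explored tree edges:", len(edges))
```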

