generAItor: Tree-in-the-Loop Text Generation for Language Model Explainability and Adaptation
March 13, 2024, 4:43 a.m. | Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady
cs.LG updates on arXiv.org
Abstract: Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation. However, the considered output candidates of the underlying search algorithm are under-explored and under-explained. We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs. To support these tasks, we present generAItor, a visual analytics technique, augmenting …
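The tree the abstract refers to is the branching structure beam search produces while decoding: each node is a token, each root-to-leaf path a candidate continuation, and pruned branches are the "under-explored" alternatives the paper makes visible. The sketch below illustrates that data structure only; the `next_token_probs` toy distribution and the `Node` layout are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a beam search tree, the artifact generAItor visualizes.
# `next_token_probs` is a hypothetical stand-in for an LLM's next-token
# distribution; a real system would query the model here.
from dataclasses import dataclass, field
from typing import Dict, List
import math


@dataclass
class Node:
    token: str                          # token added at this step
    logprob: float                      # cumulative log-probability of the path
    children: List["Node"] = field(default_factory=list)


def next_token_probs(prefix: List[str]) -> Dict[str, float]:
    """Toy next-token distribution; assumed for the example."""
    return {"the": 0.4, "cat": 0.3, "sat": 0.2, "<eos>": 0.1}


def beam_search_tree(prompt: List[str], beam_width: int = 2, max_len: int = 3) -> Node:
    root = Node(token=" ".join(prompt), logprob=0.0)
    beams = [(root, prompt)]
    for _ in range(max_len):
        candidates = []
        for node, prefix in beams:
            for tok, p in next_token_probs(prefix).items():
                child = Node(token=tok, logprob=node.logprob + math.log(p))
                node.children.append(child)          # every expansion stays in the tree
                candidates.append((child, prefix + [tok]))
        # Only the top-k paths continue as beams, but the pruned branches
        # remain attached to the tree, which is what makes them explorable.
        beams = sorted(candidates, key=lambda c: c[0].logprob, reverse=True)[:beam_width]
    return root


tree = beam_search_tree(["Once", "upon"])  # root of the explorable search tree
```

A visual-analytics front end like the one described in the paper would render this tree and let the user inspect, compare, and steer the retained and pruned continuations; the sketch stops at constructing the structure itself.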