April 16, 2024, 4:49 a.m. | Xingyu Fu, Ben Zhou, Sihao Chen, Mark Yatskar, Dan Roth

cs.CV updates on arXiv.org

arXiv:2305.14882v2 Announce Type: replace-cross
Abstract: Recent advances in multimodal large language models (LLMs) have shown remarkable effectiveness in visual question answering (VQA). However, the end-to-end design of these models prevents them from being interpretable to humans, undermining trust and applicability in critical domains. While post-hoc rationales offer some insight into model behavior, these explanations are not guaranteed to be faithful to the model. In this paper, we address these shortcomings by introducing an interpretable-by-design model that …
