April 4, 2024, 4:46 a.m. | Jinge Wu, Yunsoo Kim, Honghan Wu

cs.CV updates on arXiv.org

arXiv:2401.05827v2 Announce Type: replace-cross
Abstract: The recent success of large language and vision models (LLVMs) on visual question answering (VQA), particularly their applications in medicine (Med-VQA), has shown great potential for realizing effective visual assistants for healthcare. However, these models have not been extensively tested for hallucination in clinical settings. Here, we created a hallucination benchmark of medical images paired with question-answer sets and conducted a comprehensive evaluation of the state-of-the-art models. The study provides an in-depth analysis …
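To make the evaluation setup concrete, below is a minimal sketch of how a model could be scored against a benchmark of (image, question, answer) triples like the one described in the abstract. The benchmark file layout, the `query_model` stub, and the exact-match metric are illustrative assumptions for this sketch, not the paper's actual protocol or code.

```python
# Sketch: score a Med-VQA model on a hallucination benchmark of
# (image, question, reference answer) triples. The JSON schema, the
# query_model() stub, and exact-match scoring are assumptions made
# for illustration only.
import json


def query_model(image_path: str, question: str) -> str:
    """Placeholder for a call to the LLVM under evaluation."""
    raise NotImplementedError("wire up the model under evaluation here")


def evaluate(benchmark_path: str) -> float:
    """Return the fraction of predictions that exactly match the reference answer."""
    with open(benchmark_path) as f:
        # Assumed format: [{"image": "...", "question": "...", "answer": "..."}, ...]
        samples = json.load(f)

    correct = 0
    for s in samples:
        prediction = query_model(s["image"], s["question"])
        correct += prediction.strip().lower() == s["answer"].strip().lower()
    return correct / len(samples)


if __name__ == "__main__":
    # Hypothetical benchmark filename used for illustration.
    print(f"exact-match accuracy: {evaluate('hallucination_benchmark.json'):.3f}")
```

In practice, a simple exact-match score would likely be supplemented with clinician review or semantic-similarity metrics, since a hallucinated answer can be fluent yet factually wrong.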

