Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding
March 28, 2024, 4:45 a.m. | Xintong Wang, Jingheng Pan, Liang Ding, Chris Biemann
cs.CV updates on arXiv.org
Abstract: Large Vision-Language Models (LVLMs) are increasingly adept at generating contextually detailed and coherent responses from visual inputs. However, their application in multimodal decision-making and open-ended generation is hindered by a notable rate of hallucinations, where generated text inaccurately represents the visual contents. To address this issue, this paper introduces the Instruction Contrastive Decoding (ICD) method, a novel approach designed to reduce hallucinations during LVLM inference. Our method is inspired by our observation that what we …
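The abstract is truncated, so the exact perturbation ICD uses is not shown here; as a rough illustration only, the sketch below shows a generic instruction-contrastive decoding step in the usual contrastive-decoding form, where logits from the original instruction are contrasted against logits from a deliberately disturbed instruction. The function name, the `alpha` parameter, and the perturbation described in the comments are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_decode_step(logits_standard: torch.Tensor,
                            logits_disturbed: torch.Tensor,
                            alpha: float = 1.0) -> torch.Tensor:
    """One hypothetical instruction-contrastive decoding step.

    logits_standard:  next-token logits given the image and the original instruction
    logits_disturbed: next-token logits given the image and a perturbed instruction
                      (e.g. with a distracting prefix appended) -- assumed setup
    alpha:            contrast strength; default chosen arbitrarily for the sketch
    """
    # Boost tokens supported by the standard instruction and penalize tokens whose
    # probability mass comes mainly from the disturbed, hallucination-prone branch.
    contrastive_logits = (1 + alpha) * logits_standard - alpha * logits_disturbed
    return F.log_softmax(contrastive_logits, dim=-1)

# Toy usage with random logits over a small vocabulary.
vocab_size = 10
std = torch.randn(vocab_size)
dis = torch.randn(vocab_size)
next_token = contrastive_decode_step(std, dis).argmax(dim=-1)
print(next_token.item())
```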