Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding
March 28, 2024, 4:45 a.m. | Xintong Wang, Jingheng Pan, Liang Ding, Chris Biemann
cs.CV updates on arXiv.org arxiv.org
Abstract: Large Vision-Language Models (LVLMs) are increasingly adept at generating contextually detailed and coherent responses from visual inputs. However, their application in multimodal decision-making and open-ended generation is hindered by a notable rate of hallucinations, where generated text inaccurately represents the visual contents. To address this issue, this paper introduces the Instruction Contrastive Decoding (ICD) method, a novel approach designed to reduce hallucinations during LVLM inference. Our method is inspired by our observation that what we …
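The excerpt cuts off before the method's details, but the name suggests a contrastive-decoding scheme: the next-token distribution under the standard instruction is contrasted against the distribution obtained under a disturbed instruction that makes hallucination more likely. A minimal sketch of that generic idea (the weighting `alpha` and the function name are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def contrastive_decode_step(logits_standard, logits_disturbed, alpha=1.0):
    """Pick the next token by contrasting two instruction conditions.

    logits_standard  : model logits given the original instruction
    logits_disturbed : model logits given a disturbed instruction that
                       amplifies hallucination tendencies
    alpha            : contrast strength (hypothetical hyperparameter)

    Tokens whose scores rise under the disturbed instruction are
    penalized, steering generation away from hallucinated content.
    """
    adjusted = (1.0 + alpha) * logits_standard - alpha * logits_disturbed
    return int(np.argmax(adjusted))
```

In a toy case where the standard model slightly prefers a token that the disturbed model strongly boosts, the contrast flips the choice to the visually grounded alternative.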