April 17, 2024, 4:42 a.m. | Ke Zhu, Liang Zhao, Zheng Ge, Xiangyu Zhang

cs.LG updates on arXiv.org

arXiv:2404.10501v1 Announce Type: cross
Abstract: This paper makes the first attempt towards unsupervised preference alignment in Vision-Language Models (VLMs). We generate chosen and rejected responses from original and augmented image pairs, respectively, and conduct preference alignment with direct preference optimization. It rests on a core idea: a properly designed augmentation of the image input induces the VLM to generate false but hard negative responses, which the model can learn from to produce more robust and powerful answers. …
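As a rough illustration of the training objective described above, here is a minimal sketch of the direct preference optimization (DPO) loss (Rafailov et al., 2023) applied to such chosen/rejected pairs. This is not the authors' code: the tensors below are toy stand-ins for the summed token log-probabilities each response would receive, with the chosen response scored against the original image and the rejected response against the augmented one, under the trainable policy VLM and a frozen reference VLM.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective: push the policy's implicit reward for the
    chosen response above that of the rejected response."""
    # Implicit rewards are the scaled log-ratios of policy to reference.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the chosen-minus-rejected margin via a logistic loss.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage: random log-probs stand in for real VLM scores. Only the
# policy tensors carry gradients; the reference model stays frozen.
batch = 4
policy_chosen = torch.randn(batch, requires_grad=True)
policy_rejected = torch.randn(batch, requires_grad=True)
ref_chosen, ref_rejected = torch.randn(batch), torch.randn(batch)

loss = dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected)
loss.backward()  # gradients flow to the policy only
```

In the paper's unsupervised setup, no human labels are needed: the augmentation itself (e.g., a distortion strong enough to corrupt the image content) is what makes the response to the augmented image a plausible but false, hard negative.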
