Feb. 13, 2024, 5:48 a.m. | Changdae Oh, Hyesu Lim, Mijoo Kim, Jaegul Choo, Alexander Hauptmann, Zhi-Qi Cheng, Kyungwoo Song

cs.CV updates on arXiv.org

Robust fine-tuning aims to ensure performance on out-of-distribution (OOD) samples, which is sometimes compromised in the pursuit of adaptation on in-distribution (ID) samples. However, another criterion for reliable machine learning -- confidence calibration -- has been overlooked despite increasing demand in real-world high-stakes applications, e.g., autonomous driving. We raise concerns about the calibration of fine-tuned vision-language models (VLMs) under distribution shift by showing that naive fine-tuning, and even state-of-the-art robust fine-tuning, hurts the calibration of pre-trained VLMs, especially on OOD datasets. We …
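The excerpt does not define confidence calibration; a standard way to quantify it is the Expected Calibration Error (ECE), the confidence-weighted gap between a model's reported confidence and its actual accuracy. Below is a minimal sketch of measuring ECE, assuming softmax outputs from a fine-tuned VLM evaluated on an OOD test set; the function name and placeholder data are illustrative, not taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: average |accuracy - confidence| over equal-width
    confidence bins, weighted by the fraction of samples per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # bin weight * calibration gap
    return ece

# Hypothetical usage: `probs` stands in for softmax predictions of a
# fine-tuned VLM on an OOD test set; `labels` are ground-truth classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=1000)   # placeholder predictions
labels = rng.integers(0, 10, size=1000)         # placeholder labels
conf = probs.max(axis=1)
correct = (probs.argmax(axis=1) == labels).astype(float)
print(f"ECE: {expected_calibration_error(conf, correct):.4f}")
```

A well-calibrated model yields ECE near zero; the abstract's claim is that fine-tuning inflates this gap on OOD data even when accuracy is preserved.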
