Feb. 13, 2024, 5:48 a.m. | Changdae Oh, Hyesu Lim, Mijoo Kim, Jaegul Choo, Alexander Hauptmann, Zhi-Qi Cheng, Kyungwoo Song

cs.CV updates on arXiv.org

Robust fine-tuning aims to preserve performance on out-of-distribution (OOD) samples, which is sometimes compromised when pursuing adaptation to in-distribution (ID) samples. However, another criterion for reliable machine learning -- confidence calibration -- has been overlooked, despite increasing demand for it in real-world high-stakes applications, e.g., autonomous driving. We raise concerns about the calibration of fine-tuned vision-language models (VLMs) under distribution shift by showing that naive fine-tuning, and even state-of-the-art robust fine-tuning, hurts the calibration of pre-trained VLMs, especially on OOD datasets. We …
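
The abstract does not specify a calibration metric; the standard choice for this kind of study is Expected Calibration Error (ECE), which bins predictions by confidence and averages the gap between per-bin accuracy and mean confidence. A minimal NumPy sketch of ECE follows (the function name and equal-width binning are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE = sum_m (|B_m| / n) * |acc(B_m) - conf(B_m)|,
    using equal-width confidence bins B_1..B_M on (0, 1]."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()       # accuracy within the bin
            conf = confidences[in_bin].mean()  # mean confidence within the bin
            ece += in_bin.mean() * abs(acc - conf)  # weight by bin fraction
    return ece
```

A well-calibrated model drives this gap toward zero; the paper's claim is that fine-tuning widens it on OOD data even when accuracy is maintained.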
