Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study
March 18, 2024, 4:41 a.m. | Chenguang Wang, Ruoxi Jia, Xin Liu, Dawn Song
cs.LG updates on arXiv.org
Abstract: Pre-training image representations from raw text about images enables zero-shot vision transfer to downstream tasks. Through pre-training on millions of samples collected from the internet, multimodal foundation models such as CLIP produce state-of-the-art zero-shot results that are often competitive with fully supervised methods, without the need for task-specific training. Beyond the encouraging classification accuracy, it is reported that these models close the robustness gap by matching the performance of supervised models trained …
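The zero-shot transfer the abstract describes works by scoring an image embedding against text embeddings of class prompts. Below is a minimal sketch of that scoring step only, with toy fixed vectors standing in for the CLIP image and text encoders (the encoders, prompts, and temperature value here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot scoring: L2-normalize embeddings, take
    cosine similarities, scale by a temperature, and softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy embeddings standing in for encoder outputs
image_emb = np.array([0.9, 0.1, 0.0])
text_embs = np.array([
    [1.0, 0.0, 0.0],   # e.g. prompt "a photo of a dog"
    [0.0, 1.0, 0.0],   # e.g. prompt "a photo of a cat"
])
probs = zero_shot_probs(image_emb, text_embs)
print(probs.argmax())
```

The predicted class is simply the prompt whose embedding is most similar to the image embedding; no task-specific training is involved, which is what makes the transfer "zero-shot".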