Web: http://arxiv.org/abs/2205.02456

May 6, 2022, 1:10 a.m. | Yuhang Liu, Wei Wei, Daowan Peng, Feida Zhu

cs.CV updates on arXiv.org

In recent years, the pre-training-then-fine-tuning paradigm has yielded
immense success on a wide spectrum of cross-modal tasks, such as visual
question answering (VQA), in which a vision-language (VL) model is first
optimized via self-supervised pre-training objectives, e.g., masked language
modeling (MLM) and image-text matching (ITM), and then fine-tuned to adapt to
the downstream task (e.g., VQA) via a brand-new objective function, e.g.,
answer prediction. This inconsistency between objective forms not only
severely limits the generalization of pre-trained VL models to …
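The objective-form mismatch the abstract describes can be made concrete: MLM is a cross-entropy loss over the full token vocabulary at masked positions, while VQA answer prediction is a cross-entropy loss over a separate, fixed answer set computed by a newly initialized head. A minimal sketch in plain Python (all sizes and logit values here are hypothetical, for illustration only):

```python
import math

def cross_entropy(logits, target):
    # Numerically stable softmax cross-entropy for one prediction.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

# Pre-training: MLM scores every token in the vocabulary
# (vocab_size is hypothetical; real VL models use tens of thousands).
vocab_size = 8
mlm_logits = [0.1 * i for i in range(vocab_size)]
mlm_loss = cross_entropy(mlm_logits, target=3)

# Fine-tuning: answer prediction classifies over a much smaller,
# task-specific answer set with a brand-new output head.
num_answers = 4
vqa_logits = [0.2, 1.5, -0.3, 0.8]
vqa_loss = cross_entropy(vqa_logits, target=1)
```

Both losses share the cross-entropy form, but they are computed over different label spaces by different heads, so the fine-tuned head starts from scratch rather than reusing what pre-training learned.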

