Web: http://arxiv.org/abs/2209.07526

Sept. 16, 2022, 1:15 a.m. | Junke Wang, Dongdong Chen, Zuxuan Wu, Chong Luo, Luowei Zhou, Yucheng Zhao, Yujia Xie, Ce Liu, Yu-Gang Jiang, Lu Yuan

cs.CV updates on arXiv.org arxiv.org

This paper presents OmniVL, a new foundation model that supports both
image-language and video-language tasks with one universal architecture. It
adopts a unified transformer-based visual encoder for both image and video
inputs, and can therefore perform joint image-language and video-language
pretraining. We demonstrate, for the first time, that such a paradigm benefits
both image and video tasks, as opposed to the conventional one-directional
transfer (e.g., using image-language to help video-language). To this end, we
propose a decoupled joint pretraining of image-language …
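The core idea behind a unified visual encoder is that an image can be treated as a single-frame video, so both modalities reduce to one shared token sequence consumed by the same transformer. The sketch below illustrates only that tokenization step; `tokenize_visual_input` and the 16-pixel patch size are illustrative assumptions, not OmniVL's actual implementation.

```python
import numpy as np

def tokenize_visual_input(x, patch=16):
    """Convert an image or a video clip into one shared token sequence.

    x: (H, W, C) image or (T, H, W, C) video clip.
    Hypothetical sketch: an image becomes a 1-frame video, so images
    and videos flow through the same tokenizer and encoder.
    """
    if x.ndim == 3:              # image -> single-frame "video"
        x = x[None, ...]
    t, h, w, c = x.shape
    # Split each frame into non-overlapping patch x patch tiles,
    # then flatten every tile into one token vector.
    tiles = x.reshape(t, h // patch, patch, w // patch, patch, c)
    tiles = tiles.transpose(0, 1, 3, 2, 4, 5)
    n_tokens = t * (h // patch) * (w // patch)
    return tiles.reshape(n_tokens, patch * patch * c)

image = np.zeros((224, 224, 3))
video = np.zeros((8, 224, 224, 3))
print(tokenize_visual_input(image).shape)  # (196, 768)
print(tokenize_visual_input(video).shape)  # (1568, 768)
```

Because both inputs end up as token sequences of the same width (here 768), a single transformer encoder can be pretrained jointly on image-text and video-text pairs.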

arxiv foundation model image language video
