Web: http://arxiv.org/abs/2205.01818

May 5, 2022, 1:10 a.m. | Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie, Robert G

cs.CV updates on arXiv.org

Human intelligence is multimodal; we integrate visual, linguistic, and
acoustic signals to maintain a holistic worldview. Most current pretraining
methods, however, are limited to one or two modalities. We present i-Code, a
self-supervised pretraining framework where users may flexibly combine the
modalities of vision, speech, and language into unified and general-purpose
vector representations. In this framework, data from each modality are first
given to pretrained single-modality encoders. The encoder outputs are then
integrated with a multimodal fusion network, which uses …
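
For intuition, here is a minimal PyTorch sketch of an i-Code-style pipeline: outputs from pretrained single-modality encoders are tagged by modality and integrated by an attention-based fusion network into a single general-purpose vector. This is not the authors' implementation; the fusion design, dimensions, and the stand-in encoder outputs below are illustrative assumptions based only on the abstract.

# Sketch of an i-Code-style pipeline (illustrative assumptions only):
# pretrained single-modality encoders feed a shared multimodal fusion network.
import torch
import torch.nn as nn

class FusionNetwork(nn.Module):
    """Transformer encoder attending jointly over vision/speech/language
    token sequences (assumed design, not the paper's exact architecture)."""
    def __init__(self, dim=768, heads=12, layers=6, n_modalities=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        # Learned embedding telling the fusion network which modality
        # each token came from.
        self.modality_emb = nn.Embedding(n_modalities, dim)

    def forward(self, streams):
        # streams: list of (tokens [B, T_i, dim], modality_id) pairs.
        # Any subset of modalities may be present, mirroring the
        # framework's flexible combination of modalities.
        tagged = [t + self.modality_emb(
                      torch.full(t.shape[:2], m, dtype=torch.long,
                                 device=t.device))
                  for t, m in streams]
        fused = self.encoder(torch.cat(tagged, dim=1))
        return fused.mean(dim=1)  # one unified vector per example

# Usage with stand-in encoder outputs; in practice these would come from
# pretrained vision, speech, and language encoders projected to `dim`.
fusion = FusionNetwork()
vision = torch.randn(2, 50, 768)   # e.g. patch tokens from a vision encoder
speech = torch.randn(2, 120, 768)  # e.g. frames from a speech encoder
text = torch.randn(2, 32, 768)     # e.g. subword tokens from a language model
rep = fusion([(vision, 0), (speech, 1), (text, 2)])
print(rep.shape)  # torch.Size([2, 768])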

Tags: arxiv, code, framework, learning, multimodal, multimodal learning
