Web: http://arxiv.org/abs/2205.06126

May 13, 2022, 1:10 a.m. | Yong Dai, Duyu Tang, Liangxin Liu, Minghuan Tan, Cong Zhou, Jingquan Wang, Zhangyin Feng, Fan Zhang, Xueyu Hu, Shuming Shi

cs.CV updates on arXiv.org

People perceive the world with multiple senses (e.g., by hearing sounds,
reading words, and seeing objects). However, most existing AI systems process
only a single modality. This paper presents an approach that excels at
handling multiple modalities of information with a single model. In our
"SkillNet" model, different parts of the parameters are specialized for
processing different modalities. Unlike traditional dense models, which always
activate all model parameters, our model sparsely activates the parts of the
parameters whose skills are …
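
To make the sparse-activation idea concrete, here is a minimal sketch of a layer with per-modality "skill" modules, where only the module matching the input's modality runs in a given forward pass. This is an illustration of the concept described in the abstract, not the paper's actual architecture; the class name, module layout, and routing rule are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a shared layer with per-modality "skill" modules.
# Only the skill matching the input's modality is activated, so only a
# subset of the layer's parameters is used per forward pass.
class SparseSkillLayer(nn.Module):
    def __init__(self, d_model: int,
                 modalities=("text", "image", "sound", "video", "code")):
        super().__init__()
        # Shared attention, always active regardless of modality.
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        # One feed-forward "skill" per modality (names are illustrative).
        self.skills = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d_model, 4 * d_model),
                             nn.GELU(),
                             nn.Linear(4 * d_model, d_model))
            for m in modalities
        })
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        # Sparse activation: route through the skill for this modality only.
        return x + self.skills[modality](self.norm2(x))

layer = SparseSkillLayer(d_model=512)
tokens = torch.randn(2, 16, 512)        # a batch of already-embedded inputs
out = layer(tokens, modality="image")   # only the "image" skill is activated
```

In a dense model, every feed-forward block would run for every input; here the parameters of the unused skills are never touched, which is the contrast the abstract draws.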

Tags: arxiv, code, image, model, sound, text, video
