Feb. 27, 2024, 5:44 a.m. | Md Kaykobad Reza, Ashley Prater-Bennette, M. Salman Asif

cs.LG updates on arXiv.org

arXiv:2310.03986v3 Announce Type: replace-cross
Abstract: Multimodal learning seeks to utilize data from multiple sources to improve the overall performance of downstream tasks. It is desirable for redundancies in the data to make multimodal systems robust to missing or corrupted observations in some correlated modalities. However, we observe that the performance of several existing multimodal networks significantly deteriorates if one or multiple modalities are absent at test time. To enable robustness to missing modalities, we propose a simple and parameter-efficient adaptation …

